Such an interesting project. A non-trivial amount of my consulting work has been translating statutory requirements for privacy and security into system architecture features, often with cryptography. Along the way, you see how laws are not meant to run on machines or be subject to mere logic; they are not abstract state machines that yield "legal" under the right conditions.
I'm not a lawyer at all - but I am a technologist who builds the systems that people and institutions punt thorny legal and ethical decisions to. They use the tech as a black box of risk, hoping they can't be held accountable for understanding the consequences of the implementation.
The point of most cryptography and security features is to use math to represent and symbolically transfer liability and risk between transaction counterparties - much as law determines a balance or flow of symbolic liability and risk between parties.
The idea of modelling laws as concrete logic is appealing, but it presumes that a law's quality lies in its resistance to individual interests, judgment, and interpretation. To me, that sets up a conflict over what the best possible law is: one that obviates the need for a judge or litigation, versus one that is the most refined and featureful tool for people (and judges) to align things like morals, principles, ethics, and interests.
The author's example of that inconsistent common law definition, where you didn't actually have to live together in a relationship, is arguably an artifact of the Canadian charter and British common law, where individuals do not have inalienable rights because ultimate and total authority rests with the Crown. It doesn't have to be logical or consistent - but this is just my layman's observation.
If the written law is illogical, a judge interprets it and gives it back to the government to refine. The government often just ignores it until there is a new patron to appease by changing it, or, if it doesn't give them the result they want, they just update the law's regulations without any requirement for debate. Automating legal analysis in Canada seems more like a way to find arbitrages and loopholes than to provide any general efficiency or benefit.
Is that what this formal methods project is doing, or do they think people will accept being subject to automated legal decisions?
"The point of most cryptography and security features is to use math to represent and symbolically transfer liability and risk between transaction counterparties"
What is the liability here? For example, a company holds different levels of liability in a data breach depending on whether it implemented the security correctly - is that what you mean?
One example: whose keys encrypt and decrypt the data realizes legal concepts like custody and control, and the responsibilities that come with them.
One area that has recently changed is "de-identification," which used to essentially be a cryptographic problem. Now it's a legal definition: you assert a policy about how the data is encoded, make that assertion subject to a risk assessment, and the de-identified data (PII/PHI) then comes with its own obligations.
Another example was chip-and-PIN payment cards several years ago, which transferred liability for fraud and chargebacks to the merchant, whereas previously, with magstripe, the liability for chargebacks largely stayed with the issuer.
If an online banking account gets hacked, banks have less liability than they did previously. Even though it was they who were robbed and not you, and they still owe you the money you deposited with them, enhanced authN/MFA has allowed them to imply it's somehow your fault that someone stole the money you trusted them to hold for you.
The security of each tech divides the liability between the parties in these cases.
This is doubly unpersuasive in any common law jurisdiction. The whole point of such a system is the idea that correct outcomes are easier to arrive at in the specific than in the general, and that codification happens after precedents.
Even in civil law systems the fact is that case law is a coexistent partner to the interpretation of texts.
The fact that this guy is still learning law should actually be a reason not to take this very seriously. I would be more interested in a practitioner making the case. (I am not one!) In fact, this applies strongly to the example he gives: I would bet no judge in Canada (outside of Quebec, maybe?) would take his argument seriously. The way he has approached this question honestly smacks of ivory tower academia.
If you made me steelman the argument, it would be that the usual methods predate the era of formal methods, and that the overall system could be made better. Then one would have to explain the benefit in an operationally important way for practitioners and workers in the system, usually with a convincing argument about time saved. Remember that arguing over definitions is a very small part of the time the job takes!
Exactly this. The Platonic ideal of statutory interpretation is the “no vehicles in the park” statute. It’s simple enough on the surface but you can think of an infinite number of edge cases that demand nuance (e.g. Scooters? A war memorial featuring a tank? A wheelchair?).
Enumerating and handling these cases in a formal mathematical sense is at best futile and at worst leads to absurd outcomes (a toy sketch below shows how quickly). It isn't that law is informal because lawyers don't know how to formalise it; it's that intelligently applied common sense is an essential part of a sane legal system. [0]
[0] For what it’s worth, it’s not as if the formalism vs. common sense debate is actually settled in the legal sphere. See, e.g., the debate about textualism vs. legislative intent in American statutory interpretation. Even the textualist position lies much, much further along the spectrum than formal specification though, and it has its own problems.
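To make the futility concrete, here is a toy Python sketch (all names and rules are hypothetical, not taken from any real statute or from the article): a naive formalisation of "no vehicles in the park" immediately yields outcomes no judge would accept.

    from dataclasses import dataclass

    @dataclass
    class Thing:
        name: str
        motorized: bool
        wheeled: bool

    def is_vehicle(t: Thing) -> bool:
        # Naive formalisation: a vehicle is anything wheeled and motorized.
        return t.wheeled and t.motorized

    def violates_statute(t: Thing, in_park: bool) -> bool:
        # "No vehicles in the park."
        return in_park and is_vehicle(t)

    print(violates_statute(Thing("powered wheelchair", True, True), True))        # True  - absurd
    print(violates_statute(Thing("memorial tank on a plinth", True, True), True)) # True  - absurd
    print(violates_statute(Thing("kick scooter", False, True), True))             # False - debatable

Every patch to is_vehicle() just moves the absurdity somewhere else, which is the point above: the judgment lives outside the predicate.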
Not too long ago in computing history there was a fascinating confluence of hackers and lawyers who came together to co-examine their respective formalisms. It was called Groklaw, run by an energetic blogger named Pamela Jones [1]. For many years it served to translate law into code and code into law, as well as being an experiment in the "many-eyes" idea of picking through legal prose in a participatory, public way.
Powerful interests did not like this. Jones eventually tired of defending what can only be described as a first-generation optimist's understanding of the Internet as a tool for social good.
[1] possibly a nom de plume of several law and technology researchers
The website [1] is actually still up, so you can read it and make up your own mind. Alternatively, there's a quite comprehensive Wikipedia entry [2]. What do you think happened?
A great read. I had the same experience proving (toy) software correct: the act of formally specifying what you want to prove and proving it makes you understand the problem and the solution much better. It’s like your brain gains super powers.
Why does the author assume we _want_ the law to be well-defined? Lawmakers frequently don't actually know what they mean, and it would simply take too long (and would be too fraught) to try to actually spell it out.
This, however, is not a bug - it's a strong advantage! Not only do they avoid painting themselves into a corner and needing to define what a chair is, but it allows the law to be flexible and react to both extraordinary circumstances and slow-moving social shifts in the meaning of various terms. It does this by intentionally being vague and allowing issues (which happen rarely compared to non-issues!) to be raised and discussed in their actual, real context.
I understand the desire of software people to try and frame everything through a computational model. It's familiar to us, it's our domain after all! But not everything is a problem best solved computationally.
For most applications, developing quickly is more valuable than correctness. That is why formal methods will likely never have widespread adoption.
As for law, a judge always makes the final decision and they are not a formal method. It doesn't matter how tightly the law is worded if a judge is in the loop evaluating intangibles like intent.
As someone who hacks on code but doesn't have a true computer science or math background, I have a hard time even wrapping my head around what the hell "formal methods" are. How can you write proofs that a whole piece of software works? Especially since software is often composed of tons of different libraries and frameworks etc.
I could understand proving a certain piece, like a specific algorithm, but how are formal methods better than unit tests?
I think my brain is just too small to understand. It feels to me, as a relative layman, like pablum.
"Formal methods" are anything a computer is able to do. (Things were originally defined the other way around, but if you have experience programing, that direction is more intuitive.)
You can prove that a piece of code is "correct" as long as you have a formal description of what it must do. That idea is mostly useless for code that people interact with, because a formal description ends up being about as large as a full program. But there are plenty of niches, like optimization, where the correctness statement is simply "this program must do the same as this other program under all conditions" - a simple formal description that a computer can take and analyze for you.
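A rough sketch of that "same as this other program" idea in Python (toy functions of my own, and using random sampling where a real verification tool would do an actual proof over all inputs):

    import random

    def reference_sum_of_squares(xs):
        # Slow but obviously-right version: this *is* the spec.
        total = 0
        for x in xs:
            total += x * x
        return total

    def optimized_sum_of_squares(xs):
        # The version we actually want to ship.
        return sum(x * x for x in xs)

    # Correctness statement: both programs agree on every input.
    # A formal-methods tool would prove this for all inputs; here we
    # only sample inputs, which is the testing approximation of it.
    for _ in range(10_000):
        xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        assert optimized_sum_of_squares(xs) == reference_sum_of_squares(xs)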
I worked on a blockchain that used formal methods as part of the development workflow. The development process was essentially writing a paper that formalised an algorithm or a protocol, an implementation in Agda that would generate Haskell (or written straight in Haskell), property based tests against the paper, and then a whole business analysis/QA side where there'd be a requirements matrix against the paper with unit/integration/end-to-end tests on the features too. So you'd end up with formally checked code, property based tests, unit tests, integration tests, end to end tests, manual checks, and a few other tools too. This was a pretty slow process but on the two major product releases I worked on, I can only remember one bug slipping through and that was patched before people realised (and not a major issue either). So I think this style of development can work if you can 1) afford it, 2) afford the time to do it, and 3) formulate the requirements in such detail upfront.
And there are a couple of companies out there that do this style of development as a consultancy - it's probably the closest to 'engineering' that software development can get (in my eyes): https://galois.com
Formal methods, in a nutshell, are using code to statically analyze and prove things about code. The reason this has value despite sounding circular is that often the code we use to write the statements we wish to prove is easier to understand than the code we are proving things about.
It is fundamentally the reason why tests are useful, despite also being code. Even though tests themselves may contain bugs, they are still useful because they are simpler and easier to understand than the code being tested.
> how are formal methods better than unit tests?
Strictly speaking, unit tests are a kind of formal method. They are, after all, code being used to test code. Other formal methods tend to concern themselves with universal statements rather than the one-off examples of unit tests. E.g. proving that a program will never crash no matter what input is given to it (whereas a unit test would prove that a program will not crash for a single given input).
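For a concrete contrast, here's a toy example using the Hypothesis property-testing library (the function is made up; note that Hypothesis samples inputs rather than proving the property for all of them, so it's a step toward the universal statement, not a proof):

    from hypothesis import given, strategies as st

    def parse_age(s: str) -> int:
        # Toy function under test: parse a non-negative integer age.
        s = s.strip()
        if not s.isdigit():
            raise ValueError("not a number")
        return int(s)

    def test_one_example():
        # Unit test: one concrete input, one concrete expectation.
        assert parse_age(" 42 ") == 42

    @given(st.text())
    def test_no_unexpected_crash(s):
        # Closer to the universal claim: for *any* string, parse_age
        # either returns an int or raises ValueError - nothing else.
        try:
            assert isinstance(parse_age(s), int)
        except ValueError:
            pass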
You correctly understand the difficulty. AFAICT formal methods only really work in relatively small, self-contained scenarios. You wouldn't want to attempt a formal proof of a Python web app. The seL4 microkernel is one of the biggest verified projects I know of. Maybe reading about that would give you an idea of how things get put together in the field.
Formal methods have been a holy grail for the decades I've been programming. Back in the day, they were utterly useless. If you define the goal as producing software that is guaranteed to fulfil its requirements, they are still hopelessly distant from achieving that goal. It's just too hard to formally prove everything.
However, we are making progress. Strongly typed systems are formal proofs that your code adheres to some invariants. Compared to traditional formal proofs (which largely consist of long, complex mathematical statements about preconditions, postconditions, and invariants) they are delightfully simple and low overhead. It used to be that a strong type system guaranteed little more than that the arguments you passed matched what the method was expecting. But now Rust's type system gives you strong memory guarantees - there shall be no dangling pointers in Rust, and no two threads shall stomp over each other's working memory. To me, this is an amazing advance. I never expected to see it in my lifetime.
Strong static type systems are the formal methods programmers actually use today. They are getting stronger, proving more and more. They are so good now that they are invading areas that used to dismiss them as useless overhead - like JavaScript with TypeScript, and Python's and PHP's type annotations.
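A small illustration of the Python case (a made-up function; the point is that a checker like mypy rejects the unguarded use before anything runs):

    from typing import Optional

    def find_user(user_id: int) -> Optional[str]:
        # Returns the username, or None if the id is unknown.
        users = {1: "alice", 2: "bob"}
        return users.get(user_id)

    def greeting(user_id: int) -> str:
        name = find_user(user_id)
        # mypy flags a bare `name.upper()` here, because `name` may be
        # None - the annotations force us to handle the missing case.
        if name is None:
            return "hello, stranger"
        return "hello, " + name.upper()

It proves nothing about requirements, only that a whole class of None-related crashes can't happen - but it proves it everywhere, automatically.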
I'd be stunned (and almost certainly dead by then) if formal systems ever got to the point of proving a program exactly meets its requirements. Strong type systems don't try to prove that. But they do go a long way toward proving a program is internally consistent, by which I mean you don't add a float to a pointer, you don't access memory in an undefined state, and you don't have resource leaks. The end result is, as any Rust programmer will tell you, that once you get the thing to compile (which can be a major hurdle) there is a good chance it will work.
For all the criticism crypto attracts here on HN, it looks to be the area that will lead this drive. You see headlines about the costs of crypto bugs every week at least - because they exceed $10M. For every $10M bug, there must be hundreds that "only" lose a few $100k that we never hear about. The pressure to get it right on the first release is immense.
Interestingly, and it seems completely lost on most HN commentators, crypto is at its heart about building a world where all laws are written in software, where the assertions and promises the laws operate on aren't on paper and in memories but can only be recorded on a blockchain, where the judges are VMs executing code, and where jail has been replaced by loss of cryptocurrency. The original article was about introducing formal methods to our existing law. I'd lay long odds that this other mob, who are trying to replace our existing law with something else entirely, will beat them at delivering formally proved contracts between parties.
> I'd be stunned (and almost certainly dead by then) if formal systems ever got to the point of proving a program exactly meets its requirements.
seL4 (a verified microkernel) and CompCert (a verified C compiler) are two such examples. [0][1]
They're also both the result of a tremendous amount of work and have won international academic recognition, but still, it's not impossible for 'real world' code to be fully verified.
> You see headlines about the costs of crypto bugs every week at least
Nitpick: 'crypto' is an accepted abbreviation of 'cryptography', it's confusing to use it in reference to cryptocurrency. (Incidentally formal methods have been successfully applied to cryptographic code, but for whatever reason the most popular cryptographic libraries are unverified ones. [2])
> it seems completely lost on most HN commentators, crypto is at its heart about building a world where all laws are written in software, where the assertions and promises the laws operate on aren't on paper and in memories but can only be recorded on a blockchain, where the judges are VMs executing code, and where jail has been replaced by loss of cryptocurrency
You're referring to smart contracts here, not to cryptocurrency in general.
I don't see that it's lost on HN commentators, it's precisely the reason most of us dismiss 'smart contracts'. We have good reason to think that what they're aiming for cannot be done in practice (as you say, millions of dollars are lost due to small code defects), and that ordinary/real contracts are here to stay.
I do agree though that if people are going to go with smart contracts, formal verification would be a useful tool.
One purpose of hiring a lawyer is to find loopholes. One business model for lawyers is selling the exploitation of loopholes. Sounds like formal methods can be very valuable in legal training.
But unlike computer security, laws are not determined by hardware that never complains about exploitation. Law is operated by humans, and people may reject exploitation by arguing the contract differently. Lawyers are trained to dispute with humans rather than with rules.
The law isn't perfect, and in that sense it is like software: it is constantly being changed in anger. With the law, however, you are also in the situation of having many different interpreters. Given that the law is imperfect and requires refinement and adaptation in anger, and given the imperfect knowledge of lawgivers, it is expected that there would be multiple interpretations.
Law does not determine justice by virtue of its codification; morality does. Law is just the determination of general moral principles. A rule-based system blindly followed by a machine would be a disaster outside very narrow, niche applications.