There are examples of big wins with formal methods in industry. I linked to the Amazon paper in my original comment. There are others from Microsoft [0][1] and in high-assurance safety-critical systems such as the Meteor Line 14 in the Paris Metro [2].
I found the conclusion odd because it claims, on the basis of the study, that formal verification methods do not work in practice.
One common complaint I hear, from opponents of formal methods and from developers unfamiliar with them, is that we cannot possibly specify the entire software system: it's too complicated.
We don't have to. We can specify the important parts formally and get big wins: critical components that are correct by design.
> And how do you "prove" that the specification is correct?
It depends on the level of assurance required by the project. Often a model and a model checker are good enough over a large enough state space. Sometimes you might require a proof.
Regardless of which approach is taken, these are far better guarantees than a pile of unit tests and integration tests. Tests can only show the presence of errors, not their absence.
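To illustrate the point about tests (a hypothetical example, not from the thread; `in_range` and its bug are invented for illustration): a hand-picked unit test can pass while a boundary bug survives, whereas exhaustively checking the intended property over even a small domain catches it.

```python
def in_range(x, lo, hi):
    """Intended: True iff lo <= x <= hi (inclusive).
    Buggy: strict comparisons exclude the boundaries."""
    return lo < x < hi

# Typical unit tests probe interior points and pass happily:
assert in_range(5, 0, 10)
assert not in_range(11, 0, 10)

# Checking the specification over a small universe of inputs
# surfaces the boundary bug the unit tests missed:
spec = lambda x, lo, hi: (lo <= x <= hi) == in_range(x, lo, hi)
counterexamples = [x for x in range(-1, 12) if not spec(x, 0, 10)]
# counterexamples == [0, 10]: the boundaries are wrongly excluded
```

The tests "prove" nothing about the inputs they never exercise; the property check, by quantifying over a whole domain, does.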
And for highly distributed, concurrent systems, I've spent enough time trying to grok hand-waving diagrams, execution traces, and log files to realize I've been wasting my time. These systems are too complex to reason about from the source code alone. Whether the prover itself can be proved is beside the point: the most important part of using formal methods is that they force you to think in precise terms, with no hand-waving.
> There are examples of big wins with formal methods in industry.
No one is disputing that, not the paper, not I. Even from the abstract: "results are encouraging".
> it claims from the study that formal verification methods do not work in practice.
Again, since when is "results are encouraging" the same as "this doesn't work in practice"? Where do they make that claim?
I think they rightfully put a limit on what "works" means in this context, and show that formally verifying your system (or parts of it) doesn't mean the whole is bug-free. Remember Knuth? "Beware of bugs in the above code; I have only proved it correct, not tried it."
>> And how do you "prove" that the specification is correct?
> It depends on the level of assurance required by the project.
Sorry, and again: how do you prove that the specification is correct? (See also the sibling comment with the quote from Alan Perlis). Correct with regard to what? Itself? How would that work? Another specification?
> these are far better guarantees than a pile of unit tests and integration tests
In some cases that is certainly true. Distributed systems in particular are hard to test.
> They only prove the presence of errors: not their absence.
And as the paper shows, verification doesn't prove their absence either, for various reasons; some of these, such as boundary conditions, can be and were caught by tests. No one is disputing that verification can be useful; it is the absolutism that's the issue.
The paper is fine, but I took the quote that stood out in the article as its premise, and that made the conclusion seem odd to me.
> Sorry, and again: how do you prove that the specification is correct?
How do you prove that the proof is correct? By writing another proof of the proof?
In practice it's not terribly useful to wring our hands over this issue; I'll leave that to the mathematicians. If you want to know whether your specification is correct, you either use a model checker to automatically check that certain invariants hold over a universe of states, or you use a theorem prover whose kernel you trust.
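The model-checking half of that can be sketched in a few lines of Python: an explicit-state breadth-first search over every reachable state, checking an invariant at each one. This is a toy illustration of the idea behind tools like TLC, not any real tool's API; the lock model and state encoding are invented for the example.

```python
from collections import deque

# Toy model: two processes acquire a lock by first *testing* it and
# then *setting* it in a separate step (a classic race). Per-process
# program counter: 0 = idle, 1 = saw the lock free, 2 = in the
# critical section. Global state is (pc0, pc1, lock).
INIT = (0, 0, 0)

def successors(state):
    pc0, pc1, lock = state
    pcs = [pc0, pc1]
    for i in (0, 1):
        nxt = pcs.copy()
        if pcs[i] == 0 and lock == 0:   # test: lock looks free
            nxt[i] = 1
            yield (nxt[0], nxt[1], lock)
        elif pcs[i] == 1:               # set: grab the lock (too late!)
            nxt[i] = 2
            yield (nxt[0], nxt[1], 1)
        elif pcs[i] == 2:               # release and go idle
            nxt[i] = 0
            yield (nxt[0], nxt[1], 0)

def mutual_exclusion(state):
    """Invariant: both processes are never in the critical section."""
    return not (state[0] == 2 and state[1] == 2)

def check(init, successors, invariant):
    """BFS over all reachable states; return a violating state,
    or None if the invariant holds everywhere."""
    seen, queue = {init}, deque([init])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

violation = check(INIT, successors, mutual_exclusion)
# The naive test-then-set lock admits a race: both processes can see
# the lock free and then both enter, so check returns (2, 2, 1).
```

Real checkers add counterexample traces, fairness, and temporal properties, but the core guarantee is the same: the invariant was checked in every reachable state, not just the ones a test happened to visit.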
If you want to know whether you specified the right properties, invariants, etc., you'll have to have someone review your work. With an interactive theorem prover at least some of this can be done by the assistant, but at the end of the day it's about thinking and testing your assumptions.
> Absolutism is always wrong ;)
Ha! Cute.
I had thought the idea that formal verification will produce software free of all errors was well debunked by now. Maybe I'm wrong. I've only ever used it to prove that certain properties hold. Proving an entire system free of errors would be quite difficult and expensive indeed!
Links:
[0] https://www.microsoft.com/en-us/research/wp-content/uploads/...
[1] https://techcrunch.com/2017/05/10/with-cosmos-db-microsoft-w...
[2] http://rodin.cs.ncl.ac.uk/Publications/fm_sc_rs_v2.pdf