The first thing I want to see when I hear about a new programming language is an example. The best examples of Omega that I've found so far are part of the test suite for the interpreter:
Reading code like that makes me want to puke. So, he wants to increase productivity by making us think about and write theorems in addition to our normal code (don't underestimate the time you invest in this), making our code even more static and less reusable, in the hope that the reduced time spent on bug hunting and testing will make up for the time invested in thinking about and writing more complex code? Hard to believe. Do those language designers create this stuff to show off how intelligent they are, just for the fun of it, or are they seriously trying to make development easier? I can't imagine building a website with this. But then again, maybe those languages are designed for satellites, space shuttles, and simulations (i.e., domains where correctness is far more important than productivity)? Well, I think most of us have to make a compromise between productivity and correctness. It's the same as having 100% code coverage: nice in theory, but it makes refactoring and development too costly and still doesn't guarantee that your software is 100% correct. Here's a nice post if you still believe 100% code coverage is the ultimate #1 goal of your project:
http://blogs.msdn.com/b/cashto/archive/2009/03/31/it-s-ok-no...
A web site with no bugs (I mean, no errors) is a web site with no vulnerabilities. Wouldn't that be interesting when your customers trust you with their credit card numbers?
You're implying all vulnerabilities are bugs due to coding error.
To take a high-profile recent case: this would do nothing to prevent padding oracle attacks, or future crypto attacks using techniques unknown to us now. Why? Because the encryption is working as it should. As advertised. As proven. Or imagine you used ROT13 for encryption.
Your DB could hold people's credit cards in clear-text; how will this language prevent that from happening?
Your website has no authorization tools, so you can access others' data by simply going to their URL; this is still a provably-correct logic structure.
>To take a high-profile recent case: this would do nothing to prevent padding oracle attacks. Or future crypto attacks using techniques unknown to us now
I don't think that's a good argument against preventing known errors.
>Your DB could hold people's credit cards in clear-text; how will this language prevent that from happening?
You should specify that credit card information must be encrypted. For example, you define a type "data CText value = ..." and two functions: "encrypt :: Key -> value -> CText value" and "decrypt :: Key -> CText value -> value". By requiring that a certain table contain a column of type "CText CreditCardInfo", you can be certain that you used "encrypt" and that you won't mix your data with "CText Password".
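A minimal Haskell sketch of that scheme. The Secret class and the XOR placeholder "cipher" are my assumptions, added only to make the example self-contained; only the type discipline is the point:

```haskell
import Data.Bits (xor)
import Data.Char (chr, ord)

newtype Key = Key Int

-- Phantom-typed ciphertext: 'value' records what the bytes decrypt
-- to, so a "CText Password" can never be passed where a
-- "CText CreditCardInfo" is expected.
newtype CText value = CText String

newtype CreditCardInfo = CreditCardInfo String deriving (Eq, Show)
newtype Password       = Password String       deriving (Eq, Show)

-- Serialization for values we are allowed to encrypt.
class Secret value where
  toPlain   :: value -> String
  fromPlain :: String -> value

instance Secret CreditCardInfo where
  toPlain (CreditCardInfo s) = s
  fromPlain = CreditCardInfo

-- Placeholder "encryption" (XOR with the key), not real crypto.
encrypt :: Secret value => Key -> value -> CText value
encrypt (Key k) v = CText (map (chr . xor k . ord) (toPlain v))

decrypt :: Secret value => Key -> CText value -> value
decrypt (Key k) (CText s) = fromPlain (map (chr . xor k . ord) s)
```

Any function that stores the credit card column takes a `CText CreditCardInfo`, and the compiler rejects an attempt to hand it plaintext or a `CText Password`.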
Actually, the TDD acronym can also stand for Type-Driven Design. And just as you design your code around tests, you can design it around types. The main difference is that logic errors show up earlier, and these typed invariants are proven, not tested.
So if you add another parameter to your ciphertext data type, making it "CText algorithm value", and change encrypt and decrypt to "encrypt :: Key algorithm -> value -> CText algorithm value" and "decrypt :: Key algorithm -> CText algorithm value -> value", you can require that the credit card info column be "CText AES CreditCardInfo". You or your fellow programmer will be forced to provide exactly that. And you won't get away with ROT13 here.
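Sticking with Haskell notation, a sketch of the algorithm-tagged version. The Show/Read round-trip and the `storeCard` function are assumptions for illustration, standing in for real crypto and a real database layer:

```haskell
-- Tag types for the algorithm; they have no runtime values.
data AES
data ROT13

newtype Key algorithm = Key Int

newtype CText algorithm value = CText String

newtype CreditCardInfo = CreditCardInfo String
  deriving (Eq, Show, Read)

-- Placeholder round-trip via Show/Read; real crypto is beside the point.
encrypt :: Show value => Key alg -> value -> CText alg value
encrypt _ v = CText (show v)

decrypt :: Read value => Key alg -> CText alg value -> value
decrypt _ (CText s) = read s

-- The schema can now demand both the payload type and the algorithm:
storeCard :: CText AES CreditCardInfo -> IO ()
storeCard _ = putStrLn "stored"

-- storeCard (encrypt (Key 13 :: Key ROT13) card)
--   -- rejected at compile time: CText ROT13 ... is not CText AES ...
```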
How is any language going to fix the problem of bad programmers programming badly?
And, more relevantly: how does the fact that a language can't fix those errors make preventing other errors a useless endeavor? It's true that there's no such thing as a programming language that can only express good/safe/acceptable programs, and a lot of people (myself included) who advertise programming languages as a solution to problems tend to overstate the relevance of type systems, correctness proofs, pure functions, etc. But that doesn't mean you can't eliminate a certain variety of error through good language design or by enforcing programmer rigor.
It's not, which is the point. loup-vaillant's comment implied that it was possible to write a "perfect" program in a proof-oriented language, as it would have no vulnerabilities, and people could trust your code.
I wasn't intending to imply that, because language X does not solve likely-intractable problem Y, language X is worthless. Just that likely-intractable problem Y is likely intractable in language Z as well as X, as it's an inherent quality of the problem.
edit: I will, however, explicitly imply that more syntactically complicated languages are probably more likely to hide many kinds of logic errors. Brainpower spent decoding your code is brainpower not spent analyzing your logic. To each their own, clearly; I most definitely side with highly readable code, but I know several people who would likely feel right at home with Omega (e.g., my brother is a math + philosophy double major).
Regarding "to each their own": programmers aren't necessarily the best source for an objective view of productivity. Here's a nice example of CS students subjectively claiming which programming style they found easier, contrasted with a more objective count of the actual number of bugs in the code written in each style:
http://lambda-the-ultimate.org/node/4070
Studies like this tell me never to trust a developer's opinion about programming languages and programming styles. In some cases, not even my own. ;) Of course, the developers in this case were not very experienced, but looking at what expert developers often claim and how widely their opinions differ, I often get the impression that expert developers' fundamental opinions about programming languages aren't much better than a student's.
Did you mean to link to something else? 4070 brings me to "Is Transactional Programming Actually Easier?" which deals with concurrency. I'd be interested in what I think you intended to send me to, but I can't find it.
"On average, subjective evaluation showed that students found transactions harder to use than coarse-grain locks, but slightly easier to use than fine-grained locks. Detailed examination of synchronization errors in the students’ code tells a rather different story. Overwhelmingly, the number and types of programming errors the students made was much lower for transactions than for locks. On a similar programming problem, over 70% of students made errors with fine-grained locking, while less than 10% made errors with transactions."
Transactions are perceived as being of average difficulty, yet in practice they produce far fewer errors. I bet we could find the same kind of misjudgment with "map"-style higher-order functions vs. "for" loops.
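In Haskell terms that contrast looks like this; `doubleAllIndexed`/`doubleAllMapped` are hypothetical names, and the point is how much of the first version is hand-written iteration machinery:

```haskell
-- Index-based traversal: the bound check, the start index, and the
-- increment are each an opportunity for an off-by-one error.
doubleAllIndexed :: [Int] -> [Int]
doubleAllIndexed xs = go 0
  where
    n = length xs
    go i
      | i >= n    = []
      | otherwise = 2 * (xs !! i) : go (i + 1)

-- With map, the iteration logic is library code; the only thing
-- left to get wrong is the per-element function itself.
doubleAllMapped :: [Int] -> [Int]
doubleAllMapped = map (* 2)
```

Both feel about equally easy to write, but only one of them has three places to plant a bug.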
Of course I want to have secure code. However, do I have to write 100% of my code in such a verbose language in order to guarantee security? I don't think that any security expert would claim that. It's sufficient if the relevant parts of the system and the communication protocols between your components provide a secure foundation. Writing all of your code in a verbose language would be total overkill and a huge waste of effort. So, in the end a combination of both systems might be the solution. The issue here is that this news post is about "the language of the future". It's definitely far from the language of the future. If at all, it might be a small component of the language(s) of the future (and even for that it should better improve on readability). There's just too much research going into verifiable systems and language theory and by far not enough research into making the majority of our code more expressive and reusable from a very practical point of view.
I'd be surprised if a language with a Haskell-like syntax is verbose.
The verbosity most likely didn't come from the language itself, but from the proofs.
Once you've built the foundations and proven a host of invariants about them, you don't need to prove much more when you write the rest of the program. I bet Omega supports the very programming model you advocate: invulnerable foundations, and a quickly written "rest of the program". If the code you saw was verbose, that's probably because it was foundation code. (After all, standard data structures are part of the foundations.)
First of all, you're assuming that the person who writes the theorems is always correct and able to describe their code accurately. This is not the case. Second, it's much easier to take a bunch of common-sense measures than to deal with this.
No, I'm not. When I say "no error", that includes the specs. For instance, an interface that correctly follows a spec where the green button sets off the emergency shutdown is too astonishing. That's a bug.
> Second, it's much easier to take a bunch of common-sense measures than to deal with this.
Of course. We never tried it seriously, and what we haven't tried yet always looks more difficult. But if we do, it may very well become easier. I don't know.
Anyway, that wasn't the point. My point was that the set of domains where we need bug-free code is broader than we might think at first glance. Not being able to trust that the code you run is error-free has costs that could very well outweigh the time it takes to prove the absence of errors.
Of course human error applies here too, but it's much simpler to get one type right than to use that type correctly everywhere. For example: the red-black tree implementation mentioned in the examples. If we define a tree that cannot be unbalanced, we can skip all explicit checks of whether the tree is balanced, eliminating a source of errors (forgetting or bungling the balance checks) and also increasing productivity (since we don't need to write the balance checks in the first place).
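The Omega test-suite code differs, but the idea of "a tree that cannot be unbalanced" can be sketched in GHC Haskell with a depth-indexed GADT. I'm using a perfect binary tree here as an assumed stand-in, simpler than a red-black tree (which would index on black height instead), but it makes the same point:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- Both subtrees of a Node are forced by the type to have the same
-- depth 'd', so an unbalanced tree isn't merely rejected by a
-- runtime check -- it cannot be constructed at all.
data Tree (d :: Nat) a where
  Leaf :: Tree 'Z a
  Node :: Tree d a -> a -> Tree d a -> Tree ('S d) a

-- Node (Node Leaf 1 Leaf) 2 Leaf
--   -- rejected at compile time: depths 'S 'Z and 'Z differ

toList :: Tree d a -> [a]
toList Leaf         = []
toList (Node l x r) = toList l ++ [x] ++ toList r
```

Note that `toList` needs no balance check and no error case: the invariant is carried by the type, not re-verified at every call site.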
I think most of the programming advantages of static typing come from the basics (when done well), rather than from high-end research-project type systems. A language with inferred (Hindley-Milner-ish) static types and a good module system gives you reality checks from the compiler, spares you from writing frigging null checks everywhere, and avoids the visual clutter of rewriting tedious annotations as code changes. It's probably not as exciting as research, though.
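A small illustration of the "no null checks" point, in Haskell (`lookupAge` and `greeting` are hypothetical names, and the association list stands in for a real data store):

```haskell
-- The possibility of absence lives in the type: Maybe Int, not Int.
-- Code that uses the result while forgetting the Nothing case
-- simply doesn't compile -- no scattered null checks, no NPEs.
lookupAge :: String -> [(String, Int)] -> Maybe Int
lookupAge = lookup

greeting :: String -> [(String, Int)] -> String
greeting name ages =
  case lookupAge name ages of
    Just age -> name ++ " is " ++ show age
    Nothing  -> "unknown user: " ++ name
```

And per the inference point: both type signatures could be deleted and the compiler would infer them, so the safety doesn't cost annotation clutter.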
The whole static / dynamic typing thing is its own flamewar waiting to happen, so I'm going to leave it at that.
As a matter of finesse, I think the correctness-vs-productivity argument is a specious one when it comes to language design. If correctness is a business requirement, then language support for it is a productivity booster for the team (when you count the verification process as part of the development time). You alluded to different problem domains perhaps requiring formal proof; I just want to suggest that it may decrease iteration time for the team as a whole.
There is also the matter that the whole "correctness vs. productivity" debate is a false dichotomy. There is nothing mutually contradictory about correctness and productivity, so there is no reason why you can't have both.
Correctness stems from expressing not only the algorithm but also the results you expect from running it (as types and/or test cases) so they can be verified. But if you somehow convince yourself that you haven't made any mistakes, it's always less work to just write the algorithm (and omit the expected results).
http://code.google.com/p/omega/source/browse/#svn/trunk/test...