Edsger W. Dijkstra - How do we tell truths that might hurt? (virginia.edu)
76 points by dhotson on March 2, 2011 | 74 comments



The first time I read this, I laughed.

The second time, I cried.

Sure, there are some pretty funny (and clever) thoughts here. The quote about COBOL is priceless.

But this...

It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.

FWIW, I have written over 1 million lines of BASIC for over 100 customers, most of it still in production, and all of it doing important work producing goods, services, and jobs in so many uncool industries we couldn't live without.

Maybe I'm an outlier, but I have gone on to learn algorithms, dynamic programming, database theory, client/server, and web development. I believe the elegant simplicity of BASIC and database theory, although limited in application, has provided an excellent base upon which to build.

I know that ewd is a giant to be respected, but I think it's a red flag when a teacher mutters "practically impossible to teach", even in jest. IMHO, that says more about the teacher than the student.

Posts like this are great for a laugh, but when you stop to think about it, all they really do is further amplify the perception of a huge gulf between theory and practice. Academics whine while those of us in the trenches are too busy to notice because our sleeves are rolled up while we build that which must be built.


Totally agree, I think we'd all be in major trouble if the mere exposure to Basic corrupted us. There is a whole generation that grew up on QBasic, GWBasic and of course Apple II's Basic.

As an aside, there's definitely a part of me that envies the days of building command line applications.

You really got to focus on the meat of the problem, instead of all the frilly interface issues which occupy us now. I think very few software engineers don't dream of the days when showing a text menu with a prompt for what action they wanted to take was about as far down the UI path as you had to go.

I'm of course not arguing that the web isn't far better in myriad ways, but the expectation to build beautiful interfaces, even if those expectations are purely our own, certainly has changed where we spend our time.


I'm still writing command-line applications (compilers), but they don't even do menus; everything comes from the command line or from input files.

But I don't think the server-side of the web is much different. Web servers are not a lot more than a function from the input stream to the output stream - compilers are much the same. A lot of fun can be had if you're in the deep end of the server side, writing framework functionality.


Ya, no for sure, I got to work on the backend of big web systems for a while and it was fun. (AMZN actually, the mother of all web backends)

But once you are in the teeny startup space then you become the jack of all trades and then you have to deal with the whole stack again. Unless you figure out how to make money off of jsonip.com that is. :)


The BASIC language has evolved a lot.

I think he's referring to very old BASIC which used numbered lines, a whole lot of GOTO's, did not support subroutines (or did it? Correct me...), etc.


Correct. Many of the concrete arguments from "Goto Considered Harmful" fall apart when you even have named labels rather than line numbers. BASIC can be used to make a mess, but BASIC can also be used to write highly structured programs - you must be diligent. I haven't written BASIC in years, but personally I think BASIC taught me why structured programming, careful naming and indentation are all vital to creating comprehensible code. Staring down a few thousand lines of your own spaghetti code from months before drives the point home in a very useful way.


BASIC has always supported subroutines, albeit in about the most primitive way possible: the GOSUB command jumps to a particular line number, then the RETURN command jumps back to the line after the GOSUB.
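
For anyone who never saw it in the flesh, here's a minimal sketch of what that looks like, in a generic line-numbered BASIC (the exact dialect doesn't matter much):

  10 PRINT "BEFORE THE CALL"
  20 GOSUB 100
  30 PRINT "BACK AGAIN"
  40 END
  100 REM THE 'SUBROUTINE': NO NAME, NO PARAMETERS, NO LOCAL VARIABLES
  110 PRINT "INSIDE THE SUBROUTINE"
  120 RETURN

RETURN sends control back to line 30, the line after the GOSUB.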


ANSI BASIC has supported SUBs and FUNCTIONs going quite a ways back. Microsoft just didn't bother to support that standard except in its commercial compilers, and later beginning with QBasic.

Some non-Microsoft microcomputer BASICs (for example Extended BASIC on TI-99/4A) support them to some extent.
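
To show roughly what the structured dialects buy you, here's a sketch in QBasic-style syntax (written from memory, with made-up names like Greet and Area, so treat it as illustrative rather than gospel):

  DECLARE SUB Greet (Who$)
  DECLARE FUNCTION Area (W, H)

  Greet "HN"            ' a named subroutine taking a real argument
  PRINT Area(3, 4)      ' a named function returning a value
  END

  SUB Greet (Who$)
      PRINT "Hello, "; Who$
  END SUB

  FUNCTION Area (W, H)
      Area = W * H      ' return by assigning to the function's own name
  END FUNCTION

Named procedures, parameters and return values - none of which the old GOSUB style gives you.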


That may be the case, but the oldest ANSI BASIC standard that Wikipedia mentions is the one for Minimal BASIC, from 1978. This text is from 1975, and I doubt that standard had anything like what we nowadays call functions.

Typical Basics of 1975 had gosub/return, but no functions (= no arguments, no return values). Apart from loop variables, all variables were global.

Commodore Basic had a "def fn" command, but it only supported single-line (= single-statement) functions.

Add in the effect that one had to 'name' subroutines by line number, and compare this with the Lisps and Fortrans of ten years earlier, and Dijkstra's position becomes quite defensible.
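
To make that concrete, here's a small sketch in the Commodore style (from memory, so details may be off for any particular machine; FNSQ, N and R are just made-up names):

  10 DEF FNSQ(X) = X * X : REM A DEF FN MAY CONTAIN ONLY ONE EXPRESSION
  20 PRINT FNSQ(5)
  30 N = 7 : GOSUB 100 : REM 'ARGUMENTS' ARE GLOBALS SET BEFORE THE CALL
  40 PRINT R : REM ... AND 'RETURN VALUES' ARE GLOBALS SET INSIDE IT
  50 END
  100 R = N * N + 1
  110 RETURN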


Commodore Basic had a "def fn" command, but it only supported single-line (= single-statement) functions.

Sounds vaguely familiar!


FWIW, I have written over 1 million lines of BASIC for over 100 customers, most of it still in production, and all of it doing important work producing goods, services, and jobs in so many uncool industries we couldn't live without.

No offense, but being in production for a long time is not an indicator of design quality. Some code is kept because it works well; other code gets stuck in production precisely because it's a tangled, fragile and undocumented mess that people are afraid to touch.

As I see it, the real measure of code quality is how easy it is for others to modify your program.


Easily modified code does not need to be scrapped whole-hog and can be incrementally updated, so I agree with you that being in production for a long time is not an indicator of design quality with either valence.


I started programming on TI-BASIC (that's right, on TI's TI-80 http://en.wikipedia.org/wiki/TI-80). I don't know how mutilated that makes me, but if people such as me were put in an asylum, well... you'd need a pretty big asylum.

IMHO, people who start with Basic are doing it as a first exposure to something they will love to do, so not such a bad thing. Hell, learning TI-BASIC didn't stop me from acing all the Lisp classes I took afterwards.


I believe it is because of his support for formal methods.

http://www.cs.utexas.edu/users/EWD/ewd11xx/EWD1130.PDF


I believe he earned the right to say whatever he wanted about Basic (or any other language) through his important contributions to the field. And, as others pointed out, it's probably related to his work in formal methods and structured programming.


Sure, he earned his right to say it. But pretty much everyone has a right to speak their mind. That doesn't change the fact that this is an opinion, albeit one from a creator of the field.

Stroustrup says things all the time that I do not agree with. It's important to remember that appeal to authority is a logical fallacy -- just because he's well known doesn't mean he is necessarily correct.

Also, as you mention, he is coming at things from a totally different perspective. He probably couldn't care less if everyone is unable to be productive in his favorite language, as long as it met his standards (which do not seem to include many practical or productivity concerns).


On the other hand, you also have to consider what it looks like to be "mutilated beyond hope of regeneration" from the inside.

If what you perceive is the same as what someone "mutilated beyond hope of regeneration" would perceive, it may be time to worry.


The problem with discussions about this hateful list is that discussion of the interesting, timeless comments is drowned out by arguments about whether BASIC still warps minds (or whether it ever did), whether physicists should share FORTRAN, what we could possibly learn from APL's mistakes, what made PL/I so bad, and what company might constitute a modern-day IBM.

Many of the comments on the list are out of date. Many of the rest are such unravellable apothegms, incomprehensible without a context that most of us don't have, that trying to apply them to any modern-day scenario in the way that Dijkstra intended is an exercise more in creativity and personal bias than in interpretation.

Most of the discussion in this comment section is either about the comments which now appear trollish (such as the BASIC one) or about Dijkstra's software design philosophy, which is only extremely tangentially related to this list.


With all due respect to EWD, and the wonderful things he did, I believe that he let himself get carried away with his gift for the glib aphorism. I don't think that those who post them do Dijkstra's reputation any favors. It is as if one were to fill a biography of Churchill with his clever quips and omit 1940.


The difference being that, once you set aside Dijkstra's glib aphorisms, there may not BE a 1940 to talk about.


Dijkstra's algorithm? Postfix notation? A slew of other algorithmic things I don't even understand?


You're talking about the inventor of the shortest-path algorithm and the semaphore.


Obviously, mutual exclusion is widely used. I'm not so sure that semaphores as such are used as widely (and certainly not with functions named P() and V()).


So those of us who read Dijkstra when it was fresh may be more mindful than others about some of the other things that he did. Using the methods that he espoused, he and a small team wrote a compiler which, during its lifetime, had a total of four errors.

There are practices that he taught that most of us, even after all our million lines of code, still have not mastered. This is particularly visible from the vantage point of the security industry.


The thing that always strikes me when I read Dijkstra's writings on languages is that in the context of present-day discourse, he comes across as a massive troll. He rarely gives any in-place justification for his outrageous use of insulting and attacking language.

I realise some of this is explained by the fact that a lot of this stuff was written for consumption by people who were already aware of him and his ideas, other people at various institutions with him. However, this raises the question: if your target audience is already so aware of your unusual ideas that you do not feel the need to provide any kind of context or justification, why moan so loudly at all? I'd love to see some of his colleagues' reactions of the time to some of these tracts.

I guess when you're that far into the right-hand tail of the intelligence bell curve, you're going to fall into the outer quartiles for quite a lot of other psychometrics as well.


He was a troll back then, too. Alan Kay used to joke that arrogance in computer science is measured in nanodijkstras. EWD could back it up because he was a genius, but it didn't win him friends.

Either way, some of the aphorisms are less defensible now. 35 years later, if something so broadly useful as programming remains "one of the most difficult branches of applied mathematics", then I think we have failed our responsibility to move the science forward.


Ignore a common misbelief, or actively fight against it?

I would say: Get successful ignoring it, and then openly "admit" that you don't share that conventional wisdom.

It can be very tedious, impossible or unfavorable to convince people who just parrot common misconceptions. Or, as Proverbs 27, 22 pithily says: Though thou shouldest bray a fool in a mortar among wheat with a pestle, yet will not his foolishness depart from him.

It's much easier when you have success on your side. I would guess that's the main reason why nobody talks about the majority of Dijkstra's list anymore: there is an overwhelming number of examples of people who succeeded by ignoring or opposing them. Waterfall software development isn't on the decline because smart people won an argument, but because even big companies realize that the most successful software today wasn't made that way.


Many companies that have made themselves dependent on IBM-equipment (and in doing so have sold their soul to the devil) will collapse under the sheer weight of the unmastered complexity of their data processing systems.

Interestingly, this still applies except s/IBM-equipment/Microsoft software/. Further, I have not seen any companies collapse due to the unmastered complexity (IBM or Microsoft), but I do see a lot of people who have no clue what they are doing; they just do the same steps over and over because someone in the dim dark past discovered that those steps formed a path through the maze of complexity. The Office equivalent of "Up Up Down Down Left Right Left Right A B Select Start".


I'm pretty sure entire companies don't collapse, but projects are canceled and companies are limited in what they can do by relying on overly-complex solutions. I know that from experience.

Of course, that's one thing that can give a startup an advantage against a giant, provided the startup doesn't fall into the same trap.


I don't find anything in this list relevant (except maybe the natural language programming bit).

I think the most "uncomfortable" truth is the following:

* Bug-free/fault-free software is impossible

Although continuous/iterative development is very effective at minimizing the count and impact of bugs, this is only relevant for web applications and applications with some decent update system (e.g. Chrome, Apple's "App store", Ubuntu's PPAs, etc.).


  "Simplicity is prerequisite for reliability" 
is a great line from the article that's still accurate. Small (simple) changes reduce the risk of using continuous deployment. Code should be modular and decoupled.

There's a great section on simplicity in code in Stuart Halloway's talk from QCon SF last year: http://qconsf.com/dl/qcon-sanfran-2010/slides/StuartHalloway...


> Bug-free/fault-free software is impossible

This is an absolute if you don't have the luxury of formal standards that stop evolving at a defined point in time.

It also means that you get to define 'bug-free' as 'implements the standard to the letter', not 'does what every single user expects all of the time, even when those expectations contradict and are insane.'


> It also means that you get to define 'bug-free' as 'implements the standard to the letter',

That doesn't make it any better. The standard itself cannot be bug-free.

Actually here's another one of them uncomfortable truths:

* You can't guarantee the correctness or reliability of a program by specifying on paper what it should do.

It's a bad approach anyway, and results in stupid programs; stupid in the sense that they will be hard to use, hard to maintain, hard to fix, hard to evolve, etc.

This truth may be uncomfortable to many people, including bosses and managers and authority figures.

But, I personally actually find this truth very comfortable. Being the lazy slacker that I am, it allows me to justify my approach to programming.


This is an absolute if you

...exclude certain conditions as unrealistic, or take the particular common case as universal.

It also means that you get to define 'bug-free' as 'implements the standard to the letter',

When the specs are engineering specs, this can sometimes be usefully close to true.

not 'does what every single user expects all of the time, even when those expectations contradict and are insane.'

Formal systems will usually tell you when specs are outright contradictory.


> Formal systems will usually tell you when specs are outright contradictory.

Right, and I'm sure they're useful when designing avionics software. But how many web browsers have specifications, let alone specifications that can be subjected to that kind of analysis? Doesn't every user have a potentially self-contradictory and possibly insane 'specification' for 'web browser' in their head, and don't they think it's a bug when that 'specification' is not adhered to?

Maybe I should have just said there are two kinds of software: Software that is an implementation of a mostly-stable specification, like a POSIX-compliant OS or the aforementioned avionics software, and software for which no specification, or at least no stable specification, exists, such as web browsers and text editors.


"It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration."

BASIC was the first programming language I was exposed to when I started learning to program 5-7 years ago... I would hope today this truth is false.

It did, however, take me 2 years to learn something other than BASIC, plus another year of programming BASIC in Java...


Have you tried learning Scheme? My first language was C, which I learned very much as if it were BASIC (plus pointer arithmetic), and I related very much to Dijkstra's observation when a couple of math major friends and I started learning about continuations in college.

Either my friends were unequivocally smarter than me, or my knowledge and experience with C/Java/Python really got in the way of my understanding. The difference in how quickly we grasped the material was really clear: my mental model was that of saving and recalling the call stack, while their conceptual model somehow became intuitive almost immediately, in a way I still don't fully grasp.


I've heard that introductory courses that use Scheme go over pretty well, but most people's heads explode when they take Programming Languages I (in Scheme) at my university. It seems that after 3 or 4 years of Java, C and C++, their minds have formed certain expectations of what a programming language is supposed to be like (which is helpful when learning a new language that shares those expectations).


I first learned BASIC when I was a youngin' too. There's a lot of hyperbole in these "Truths"; I think he's just decrying the poor state of programming languages in those days. I wouldn't take it too literally.

(but who knows? maybe we really are rubbish programmers. Better get started on learning lisp just in case.)

In fact, I'm not sure how many of these are true in academic computer science today. The first one "Programming is one of the most difficult branches of applied mathematics; the poorer mathematicians had better remain pure mathematicians." just sounds like bragging between academic camps.


>The first one "Programming is one of the most difficult branches of applied mathematics; the poorer mathematicians had better remain pure mathematicians." just sounds like bragging between academic camps.

From a mathematical/proofs point of view, computer science is much harder than many forms of applied math. Sure, basic programming isn't all that difficult, but algorithm design and some of the more complex fields, like computer vision? Proofs of correctness are harder for algorithms (you need to hold more state in your head than you do for a simple inductive proof), and even basic edge detection (let's say Canny, because that's a particular algorithm) requires a deep, intuitive understanding of multivariable calculus.


Interesting that you mention Java... I think there was a similar argument for it. Here it is => http://www.joelonsoftware.com/articles/ThePerilsofJavaSchool... .


Here's another take on Java, well, really it's more about Python:

"In a recent talk I said something that upset a lot of people: that you could get smarter programmers to work on a Python project than you could to work on a Java project.

I didn't mean by this that Java programmers are dumb. I meant that Python programmers are smart."

http://www.paulgraham.com/pypar.html


In "Coders at Work", there was an interesting exchange between Seibel and Knuth, where Seibel asks Knuth about Dijkstra's ideas on how computer science should be taught, and Knuth pointed out that Dijkstra, like the rest of us, learned about computers by tinkering with them, not by spending several years working out formal proofs without access to hardware.


My first programming language was C, so I became a business analyst. Some day I will figure why they keep me here working as one.


I find it hard to believe in the way Dijkstra seems to want to develop software. To me he seems to believe in the existence of faultless, consistent and complete specifications. I don't believe that the specifications and requirements of sufficiently complex software can have any of these qualities in real life, in part because of the humans that interface with the systems.


I agree. There's plenty of literature supporting almost the opposite of what Dijkstra preaches. While it's good to aim for formal specification, in practice you're likely to overdesign, deliver late and potentially find yourself unable to react quickly to changing requirements. The key is to deliver a working piece of software because handing your customer a page of beautiful mathematical notation is not going to solve their problem.

(I'm aware of the danger of referencing Worse is Better... but I'll do it anyway :-) ) http://www.dreamsongs.com/WorseIsBetter.html

http://ravimohan.blogspot.com/2007/04/learning-from-sudoku-s...

In addition, the books 'Hackers' by Steven Levy and 'Coders at Work' by Peter Seibel demonstrate various scenarios where a group of hackers had to get something done in a short space of time. Almost invariably, they aim for the final product.


> The use of anthropomorphic terminology when dealing with computing systems is a symptom of professional immaturity.

He makes it sound as if this was a bad thing. See http://www.catb.org/jargon/html/anthropomorphization.html


Plenty of Dijkstra's points remain relevant today. My favourite (and I completely agree):

"Besides a mathematical inclination, an exceptionally good mastery of one's native tongue is the most vital asset of a competent programmer."


But even this one seems to be disproven by example. Some of the best programmers I know are exceptionally slovenly in their written language. I strongly suspect that this list of "uncomfortable truths" is intended to illustrate that a competent programmer looks, for all intents and purposes, like ewd himself.


I disagree. I seldom meet a talented programmer who isn't able to articulate themselves very well.


There's a difference between "able to articulate well" and "exceptional mastery of one's native tongue".


Indeed. But I can't help but think that is the point he was making.


OK. Can we build a modern day list of 'truths that might hurt'?


> Can we build a modern day list of 'truths that might hurt'?

Sure, if you can do it yourself.

Any collaborative process would just be opening the floodgates to trolling beyond all reason.


By claiming that they can contribute to software engineering, the soft scientists make themselves even more ridiculous.

I might get voted down for taking him (and the post) dead-on, but I think Dijkstra is just plain wrong here. Granted, this was written in 1975, but I think a lot of thinking about human interface design has tremendously benefited various computer fields. That said, there is a subset of "soft scientists" for which this may be true.


His opening thesis starts out strong. But his language bashing is too myopic. It's a shame to read some of these points from such a well respected computer scientist. I guess it just shows that we're all capable of being human.


Besides a mathematical inclination, an exceptionally good mastery of one's native tongue is the most vital asset of a competent programmer.

Wow, prior art for what 37signals has been crowing for the last year or so.


I wonder what he would have said about C++ !!


Dijkstra (in 1975) strongly dislikes FORTRAN, BASIC, APL, PL/I, and COBOL. What languages did he like? Pascal?


Lisp. Dijkstra was the original Smug Lisp Weenie.

> Lisp has jokingly been called "the most intelligent way to misuse a computer". I think that description is a great compliment because it transmits the full flavor of liberation: it has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts.


LISP's syntax is so atrocious that I never understood its popularity. LISP's possibility to introduce higher-order functions was mentioned several times in its defence, but now I come to think of it, that could be done in ALGOL60 as well. My current guess is that LISP's popularity in the USA is related to FORTRAN's shortcomings.

Source: http://www.cs.utexas.edu/users/EWD/transcriptions/EWD07xx/EW...

The memo you're quoting[1] is a lot milder about Lisp, but doesn't come across as the ramblings of a "Smug Lisp Weenie" (disclaimer: I'm currently having a lot of fun playing around with Clojure, so I might not be entirely objective).

[1] http://www.cs.utexas.edu/users/EWD/transcriptions/EWD12xx/EW...


I see he favored ALGOL-60, so maybe Pascal, Modula, Ada ...


Pretty sure he was on record against Ada. From his point of view, I don't think Pascal and Modula(-2) had much to recommend them over Algol-60.


Please people, put the year in the title when you submit older stuff! (1975 in this case)


It's Dijkstra. I can understand complaining about not knowing the date of a post by Yegge, but did you really expect anything written by Dijkstra to be dated newer than 2002?


No, but there's a big difference between 1975 and 2002. The fact that the author is famous or dead is irrelevant here.

When I read an article submitted on HN, I can safely assume it was written within the past few months. If it isn't the case, I can't make assumptions about its year of publication (OK, I can assume it was written within the author's lifetime but that doesn't really help).


I agree in general-- putting the year in the title is much desired-- but did you really think that Dijkstra was making new pronouncements from the grave?


How many readers do you really think know where he currently resides? I've been surprised at how many old-school software geniuses are still around. Ex.: Knuth just published Volume 4.

Heck, how many readers of HN know who Dijkstra is?

Yes, including the date on very old articles is sensible.


Maybe I'm just a greybeard, but I assumed that most of the folks around here know who Dijkstra is, and remember when he died about ten years back.

Now, you pesky kids, get off of my lawn.


Dijkstra died the year I was entering 8th grade, a few years before I began experimenting with HTML. I'm now a 4th-year university student.

I'm so sorry, sir. I'll get off immediately.


I started programming in basic the year he died, and then Java the year after. I recall a survey that would put me around the middle, maybe (hopefully) the lower end, of the age group of people that read this site (24).


I'll be blunt - if you don't know who Dijkstra is, you're in the wrong industry. Yes. Seriously.

You might disagree with him, but a programmer NOT KNOWING Dijkstra is the same as an English major not knowing who Shakespeare is. (Or maybe Chaucer. He's probably the better analog)


You forget the new computer scientists and the people just learning about arrays that find this place sufficiently interesting to read around it.

I'm not one of them, but not everyone is far enough into their CS career to have come across the numerous contributions he has made.

In fact, with the web-development world we're living in, I imagine it's pretty easy to go very far in {php, python, <insert language here that isn't often formally taught>}, and write a ton of code (or at least stitch a great number of libraries together), even ones that use semaphores or some other construct/algorithm he's helped develop, without ever having heard his name.

I would say: "If you don't know who Dijkstra is, you need to read more algorithm books", or "you need to study more history", or "you need to read more academic papers", or "you need to work on harder problems."


It is by Dijkstra. Did you really think that it was going to be new? If it was by Dijkstra, did that really matter?



