Hacker News

I’m not sure why you start in the 90s; plenty of people were using C and Pascal in the 80s.

Are you sure there was a big rise in dynamic languages at any particular point? I’m not so sure there really was (but I’m interested in documentary evidence if you know of any).

I always figured dynamic languages became more popular as computers got bigger and faster, so efficiency became less important in most scenarios. But that’s not the only trend -- plenty of people programmed in BASIC in the 80s, on tiny 8-bit computers! It was slow as hell, but P-code can be very memory efficient, so even on a tiny slow computer dynamic languages can make sense.

(I count BASIC as dynamic even though it lacks useful abstraction mechanisms because most dialects manage memory for you automatically.)

Edit to add: maybe Forth is a better example (and a much better language!) although I don’t think it was ever as popular as BASIC.




Lisp is from the 1950s, dynamically typed, and one of the most influential languages of all time. Lisp introduced GC, the single biggest advance in programming languages (though it took 30 years to make it fast). I am not trying to argue that dynamically typed languages became popular in the 1990s.

If anything, I see ML as the (spectacularly) successful attempt at typing Lisp to increase reliability. Remember, Milner was at Stanford in the 1960s; surely he would have conversed with McCarthy. Milner's early theorem provers were in Lisp. He invented ML to reduce the pain of using Lisp (remember, ML stands for "meta language") and to make theorem proving less error prone. More precisely, to reduce the TCB (trusted computing base) in theorem provers through types (viz. the famous Thm abstraction).
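The Thm idea is simple enough to sketch in a few lines. This is a loose illustration in Python, not Milner's actual LCF code (and Python's privacy is only a convention, unlike ML's abstract types): values of type Thm can only be minted by the trusted inference rules, so soundness depends only on that small kernel.

```python
# Loose sketch of the LCF "Thm" abstraction (hypothetical rules, not LCF's).
# Only the kernel below holds the private token, so every Thm that exists
# was produced by an inference rule: the kernel IS the trusted computing base.
class Thm:
    _key = object()  # private capability; outsiders cannot forge theorems

    def __init__(self, key, prop):
        if key is not Thm._key:
            raise ValueError("theorems may only be built by inference rules")
        self.prop = prop

# --- trusted kernel: the only ways to obtain a Thm -----------------------
def axiom_refl(t):
    """|- t = t"""
    return Thm(Thm._key, ("eq", t, t))

def rule_sym(thm):
    """From |- a = b, derive |- b = a."""
    _, a, b = thm.prop
    return Thm(Thm._key, ("eq", b, a))

# Untrusted code can combine rules freely but never fabricate a theorem.
th = rule_sym(axiom_refl("x"))
assert th.prop == ("eq", "x", "x")
```

In real LCF-style provers (HOL, Isabelle) the same trick is done with an ML abstract type, where the type checker, rather than a runtime check, enforces that only the kernel constructs theorems.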

ML was essentially finished in the late 1980s [1]. Every responsible programming language designer could have taken on board ML's lessons since then, and it's an interesting historical question why this did not happen.

[1] R. Milner, M. Tofte, R. Harper, The Definition of Standard ML. http://www.lfcs.inf.ed.ac.uk/reports/88/ECS-LFCS-88-62/


Thanks for that, I wasn’t fully aware of the history of ML! Very interesting (and very impressive that this is just one of Milner’s contributions).

I started with Haskell, where the main influence besides ML was Miranda. (If I remember correctly, Haskell was only created because Miranda was proprietary, much as Git was created because of BitKeeper.) I guess Miranda’s main innovation was lazy evaluation. That has certainly been influential, but outside Haskell I don’t think it’s ever had widespread adoption in the way ML-style typing and type inference have.

*Every responsible programming language designer could have taken on board ML's lessons since then, and it's an interesting historical question why this did not happen.*

Agreed! But maybe it is happening, it’s just that it’s taken 30 years instead of the 5 or 10 one might have expected?


Yes, Miranda was very expensive, had an onerous license, and only ran on Unix; that's why researchers felt the need to create a free and open alternative.

Lazy evaluation was invented multiple times: first in theoretical studies of the lambda calculus by Wadsworth in 1971. Later, Friedman & Wise, and independently Henderson & Morris, proposed a lazy Lisp, and Turner (who later did Miranda) gave SASL a lazy semantics (SASL was eager initially). All three were from 1976, so it was clearly an idea in the air. SASL later evolved into Miranda. Another of Miranda's innovations was to use combinator graph reduction to implement the Miranda run-time.
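The core of call-by-need is small enough to sketch. As a loose illustration (not Miranda's actual combinator machinery), here is a memoizing thunk in Python: the computation is delayed until forced, and forcing overwrites the node with its value, so shared subterms are evaluated at most once, which is exactly what graph reduction achieves by updating the shared graph node in place.

```python
# Minimal sketch of call-by-need ("lazy") evaluation: a Thunk delays a
# computation and memoizes its result on first force.
class Thunk:
    def __init__(self, compute):
        self.compute = compute
        self.evaluated = False
        self.value = None

    def force(self):
        if not self.evaluated:
            self.value = self.compute()
            self.evaluated = True
            self.compute = None  # drop the closure, like overwriting a graph node
        return self.value

calls = []

def expensive():
    calls.append(1)  # record each real evaluation
    return 6 * 7

t = Thunk(expensive)
assert calls == []        # nothing computed until the thunk is forced...
assert t.force() == 42
assert t.force() == 42
assert len(calls) == 1    # ...and forcing twice evaluates the body only once
```

The difference from plain call-by-name (which re-evaluates on every use) is that memoization step; Wadsworth's graph reduction gets the same sharing for free by reducing a graph rather than a tree.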



