"influence becomes more apparent as impact accumulates over time."
It's interesting to try to re-prioritize the papers given a decade of impact (or lack thereof) since the list was published. I have no useful results, but it was a good mental exercise.
I'm not sure anything interesting has happened in the PL world since 2004 besides PEGs. Please feel free to correct me if I'm wrong; I always prefer to be among the first to jump on a new bandwagon in language design trends.
Gradual and optional typing have become a pretty big idea over the last 6-7 years, as language designers have tried to retrofit type systems onto languages that weren't designed with them.
This idea has been showing up in industry (whether the designers call it "gradual typing" or not) with at least Hack, Flow, TypeScript, and Dart (which has optional type annotations).
You can find lots of references for useful underlying theory and research applications (the first paper that mentions "gradual typing" was Siek and Taha in 2006) at https://github.com/samth/gradual-typing-bib.
Typed Racket (http://docs.racket-lang.org/ts-guide/) is probably the best-engineered exemplar of many of these ideas, especially 1. installing run-time checks at boundaries between typed and untyped code, and 2. tackling problems of typing previously untyped code that uses rich class and interface patterns.
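To make the first idea concrete, here's a minimal sketch in Haskell, using Data.Dynamic to stand in for values coming from "untyped" code. This is not how Typed Racket actually implements it (it uses contracts and chaperone wrappers); the fromUntyped helper and its blame label are invented for illustration:

    import Data.Dynamic (Dynamic, toDyn, fromDynamic)
    import Data.Typeable (Typeable)
    import Data.Maybe (fromMaybe)

    -- The boundary installs a run-time check: if the "untyped" value
    -- doesn't have the shape the typed side expects, fail loudly and
    -- blame the party that crossed the boundary.
    fromUntyped :: Typeable a => String -> Dynamic -> a
    fromUntyped blame d =
      fromMaybe (error ("contract violation, blame: " ++ blame)) (fromDynamic d)

    main :: IO ()
    main = do
      let v = toDyn (42 :: Int)                   -- produced by the "untyped" side
      print (fromUntyped "caller" v + 1 :: Int)   -- check passes, prints 43

The hard engineering in Typed Racket is doing this lazily for higher-order values (functions, objects), where the check can't happen all at once at the boundary.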
Thanks! Interesting. I'll try to dig out the recent papers, although I have a feeling that most of this is not that new. LLVM is just a fairly basic SSA representation, which dates to the early '90s. Agda builds on similar foundations to things like Coq, Minlog, even ACL2 and Isabelle, which are very old. And there's been a lot of buzz in concurrent code verification ever since the early pi-calculus papers by R. Milner.
But liquid type inference seems to be a genuinely new thing, and apparently I somehow missed it entirely. It even seems like I unknowingly reinvented something similar by using arbitrary Prolog code as type attributes on top of a basic Hindley-Milner system.
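For anyone else who missed it, the flavor is roughly this: refine base types with logical predicates and let an SMT solver discharge the resulting obligations. A LiquidHaskell-style sketch (the annotations live in {-@ ... @-} comments, so GHC still sees plain Haskell; the safeDiv example is mine, not from the papers):

    -- The refinement says the divisor must be provably non-zero.
    {-@ safeDiv :: Int -> {v:Int | v /= 0} -> Int @-}
    safeDiv :: Int -> Int -> Int
    safeDiv n d = n `div` d

    good :: Int
    good = safeDiv 10 2     -- accepted: 2 /= 0 is trivially provable

    -- bad = safeDiv 10 0   -- the refinement checker would reject this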
What are good examples of things in the bucket of type inference algorithms simpler than MLF? You must still be looking for ones that infer sophisticated logics; otherwise that bucket will always be dominated by HM, no?
Boxy types, HMF, and bidirectional type checking (older, but more recently applied to parametric polymorphism) all leverage partial annotations to handle more flexible parametric polymorphism than HM itself allows.
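A concrete example of why the partial annotations matter: HM infers only rank-1 (prenex) polymorphism, so a function argument that must itself be polymorphic needs an annotation, which a bidirectional-style checker then propagates inward. A minimal Haskell sketch (GHC's RankNTypes; the function name is my own):

    {-# LANGUAGE RankNTypes #-}

    -- Without the rank-2 annotation, f would be monomorphised and its
    -- uses at Int and Bool would clash; the annotation is exactly the
    -- partial information the checker pushes inward.
    applyToPair :: (forall a. a -> a) -> (Int, Bool)
    applyToPair f = (f 1, f True)

    main :: IO ()
    main = print (applyToPair id)   -- prints (1,True)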
Another brilliant recent discovery (2001, so still more than a decade ago, I'm afraid) is the algebraic connection between differentiation and zippers. See Conor McBride's paper "The Derivative of a Regular Type is its Type of One-Hole Contexts".
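The punchline, concretely: differentiating the list functor gives the type of one-hole contexts for lists, which is a pair of lists (the elements before the hole and the elements after it), i.e. exactly the classic list zipper. A small Haskell sketch:

    -- A one-hole context plus the focused element: the (reversed)
    -- prefix before the hole, the focus, and the suffix after it.
    data ListZipper a = ListZipper [a] a [a]

    left :: ListZipper a -> Maybe (ListZipper a)
    left (ListZipper (p:ps) x s) = Just (ListZipper ps p (x:s))
    left _                       = Nothing

    right :: ListZipper a -> Maybe (ListZipper a)
    right (ListZipper p x (s:ss)) = Just (ListZipper (x:p) s ss)
    right _                       = Nothing

Moving the focus left or right is O(1), which is what makes zippers so useful for functional editors and traversals.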
Even in the parsing corner, the parser combinator / graph-structured stack / GLL line of work is both new and exciting. GLL was published in 2010: http://dotat.at/tmp/gll.pdf
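The connection is easiest to see with classic list-of-successes combinators: <|> keeps every alternative, so ambiguous grammars yield multiple parses, but a naively left-recursive rule still loops forever; GLL's graph-structured stack is, roughly, a memoized, cycle-tolerant version of these same recursive calls. A minimal Haskell sketch (the names are mine):

    import Control.Applicative (Alternative (..))

    -- A parser returns every way of consuming a prefix of the input.
    newtype Parser a = Parser { runParser :: String -> [(a, String)] }

    instance Functor Parser where
      fmap f (Parser p) = Parser $ \s -> [ (f a, r) | (a, r) <- p s ]

    instance Applicative Parser where
      pure a = Parser $ \s -> [(a, s)]
      Parser pf <*> Parser pa =
        Parser $ \s -> [ (f a, s'') | (f, s') <- pf s, (a, s'') <- pa s' ]

    instance Alternative Parser where
      empty = Parser (const [])
      Parser p <|> Parser q = Parser $ \s -> p s ++ q s

    char :: Char -> Parser Char
    char c = Parser $ \s -> case s of
      (x:xs) | x == c -> [(x, xs)]
      _               -> []

    -- Ambiguity: both alternatives survive.
    -- runParser ab "ab" == [("ab", ""), ("a", "b")]
    ab :: Parser String
    ab = (\a b -> [a, b]) <$> char 'a' <*> char 'b'
      <|> (\a -> [a])    <$> char 'a'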
A clearly practical and influential advance from the last 10 years is the application of tracing JITs to high-level languages (as in TraceMonkey and LuaJIT 2).
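The core idea, stripped of all the real machinery: once a loop gets hot, record the operations actually executed into a straight-line trace, with guards wherever the recording made assumptions; running the trace either finishes fast or bails back to the interpreter at the failing guard. A toy Haskell sketch of just that shape (nothing like TraceMonkey's or LuaJIT's actual implementation):

    -- A recorded trace is straight-line code: steps plus guards.
    data Step s = Prim (s -> s)      -- unconditional work
                | Guard (s -> Bool)  -- assumption made while recording

    -- Left = a guard failed, bail out to the interpreter with the
    -- state at the point of failure; Right = the whole trace ran.
    runTrace :: [Step s] -> s -> Either s s
    runTrace []             s = Right s
    runTrace (Prim f  : ts) s = runTrace ts (f s)
    runTrace (Guard g : ts) s
      | g s       = runTrace ts s
      | otherwise = Left s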
Interesting that a lot of this is about "What stands the test of time?" Papers from 1994 to 2004 perhaps didn't have enough time to work their way through the system.
What I find interesting is the time gap between papers for the authors with multiple listings. A casual look suggests no more than 12 years between papers. This could be because it's a young field, but I found it worth noting.
p.s. To be pedantic, we should put 2004 in the title.
This is akin to the Waterfall method being touted as the dumb way people used to write software, when the entire point of the paper that introduced it by that name was to set it up as a straw man.
Also, to be fair: even if "Knuth won" the argument, he still ultimately condemned goto statements. To quote: "I personally wouldn't mind having goto in the highest level, just in case I really need it; but I probably would never use it." Further, that paper is on the list.
And I do highly recommend reading the paper. It is, as usual for Knuth, a very thorough exploration of the topic.
Yes, Knuth's articles are always insightful. The problem with the goto article is not its content but how it has been used, ever since its publication, to hurt the whole profession. In that sense the article has done more harm than the occasional presence of a goto.
SICP isn't a research publication—it's a pedagogical work. It's been very valuable as a textbook for CS students, but I don't think it contributed any significant ideas to the field of PL.
Most of the papers listed on the page are PL theory papers, but PL is a much bigger discipline than just theory. Some of the papers toward the end are more design and/or implementation papers (e.g., the Self paper).
I was hoping this would be about the equivalent of the great works of the ancient and modern worlds: the pyramids, the Manhattan Project... Disappointed...
Is it simply that it takes a long time for us to decide what constitutes a significant advancement, or is something else going on?