Extremist Programming (ezyang.com)
167 points by achille on Nov 21, 2012 | hide | past | favorite | 66 comments



I agree with the blog's premise: "extremist" languages are great for learning and for research. So this whole rant is not directly related to the post's central thesis. Instead, it's about the assumptions most people have whenever this topic comes up.

What I'm a little annoyed with can be summed up with a single banal and overused phrase: "the right tool for the right job".

For one, this phrase really doesn't say all that much--it's basically a tautology. Yes, using the right tool would be good, but with programming languages, it's rarely obvious what the right tool is! It's just a specialized version of the advice to "make the right choices", which is not much advice at all.

Another problem is that people inevitably ignore how much programming languages overlap. Virtually any languages worth comparing are going to be general-purpose languages. Choosing between a functional language and an OO language is not like choosing between a hammer and a screwdriver to pound in a nail; it's more like choosing between different types of hammer, in a world where hammers can do anything. (I don't know enough about carpentry to extend the analogy properly.) There are very few applications where one language clearly fits and another is clearly unsuited--and if you're in a vertical like that, the question just won't come up in the first place!

Another thing that comes up is people assuming that a multi-paradigm language has the benefits of all the paradigms it supports. I've found this is never the case. Even very multi-paradigm languages tend to favor one paradigm over the others. And even if they didn't, there are benefits to being consistent. You can do much more by being functional everywhere than you can by merely supporting functional programming in some places. Any mix of paradigms is necessarily going to be a compromise, and the advantages of prioritizing one main paradigm can outweigh the flexibility of supporting more than one to any large extent. Doing one thing, and doing it well, is a powerful idea that doesn't stop applying when designing programming languages.

Now, I'm not leaning one way or the other here in any comparison of languages (I'm sure my biases show through; they're just not germane to this comment); I just think that summarily dismissing a language for being too focused or too "extremist" or not multi-paradigm is rather short-sighted. Also, unless you've tried doing something in a language yourself, don't assume it's more difficult than it would be in what you already know. There is much "common wisdom" around (like "functional programming is bad for GUIs") which is often more "common" than "wisdom".


>Choosing between a functional language and an OO language is not like choosing between a hammer and a screwdriver to pound in a nail, it's more like choosing between different types of hammer

Take it one step further and ditch the terrible "tool" analogy altogether. It isn't like choosing between two hammers. Nothing we do is like driving a nail. We can't mix and match the way a carpenter can, taking hammer A for pounding these big nails, hammer B for the small ones, 4 different saws, a lathe, planer, etc. You can't use java's classes and perl's regexes and haskell's laziness. You have to pick the whole toolbox together, and take it as it is.


perl does give you a lot of that, since it has a vaguely lisp-ish facility for modifying language syntax, and is in general very flexible, so it supports things like inheritance, deferred evaluation, etc.

http://search.cpan.org/search?query=acme&mode=all


It's like choosing between two different Swiss army knives.


It's like having a goal of writing a poem and choosing between a sonnet and a haiku.


While on analogies, I like to think of a general purpose programming language as paper and myself - the programmer - as an origami artist. Papers come in different sizes, materials, thicknesses, elasticities, etc., with each better suited for a certain class of origami. When considering very similar papers, the outcome is more dependent on the artist than on the paper.

This also allows for the possibility of doing great art with scrap paper, or of crumpling up a beautiful sheet of handmade paper. OOP/FP/etc are like various folding techniques.


This is true to a degree, but there are ways you can use multiple tools mixed and matched.

For a simple example, I might write my prototype in Python. After profiling I might decide to rewrite a couple of inner loops in C (or Cython). I might tie it to SQL Server as a datastore and use T-SQL in the interface.
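
To make the first half of that concrete, here's a rough sketch (plain Python plus cProfile from the standard library; the function names are made up purely for illustration) of the profiling step that identifies the inner loop worth porting to C or Cython:

    import cProfile

    def inner_loop(data):
        # The hot path the profiler points at -- in the scenario above, this
        # is the piece you would port to C or Cython, keeping the same
        # signature so the rest of the Python prototype stays untouched.
        total = 0
        for x in data:
            total += x * x
        return total

    def prototype(n=100000):
        return inner_loop(range(n))

    if __name__ == "__main__":
        cProfile.run("prototype()")  # shows where the time actually goes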


Also, SOA allows you to "use different toolboxes".


You are actually saying that we should rethink what a proper software engineering toolbox really ought to be.


Not really. I am just saying the tool analogy is really poor, and thinking of languages as hammers doesn't help us reason about them in any way. The analogy to toolboxes is less bad, but I don't think it is really very helpful either. Programming languages are like any other languages, they are used to communicate ideas.


Well, programming languages are rather specialized languages, compared to languages in general. This specialization is what this thread is about. What I meant was something like: maybe programmers should have a toolbox, call it programming environment, where several specialized languages could be used together in the same application with considerable ease. I have no specific ideas, just a vision.


The Microsoft .NET framework makes some claims of being like that. There is a grain of truth in the claims, but also some big gaps in practice. You can mix different .NET languages that all compile to the CLR virtual machine. For example, F# code can call C# code and vice versa.

But if you're looking to do cross-language development like that, you have to know a lot about both languages and place some constraints on yourself.

I don't really see .NET as a viable implementation of your vision, but I wanted to at least point out that it does make those claims in case you were unaware.


The "extremist programming" approach to enable language compositionality would begin by asking what do the languages have in common: are there any fundamental principles from which the specific principles in each language derive? If we were considering just F# and C# for now, and we asked what they have in common - I think the obvious approach is that we should model everything as functions: after all - everything in F# is a function, objects can be seen as tuples of sets of functions and sets of variables, and variables can be seen as functions returning a value.

Another approach might be to look at things as objects, and to try to model functional programming in terms of objects. I'm sure you're familiar with how F# is implemented, and you'll probably agree that it's a disaster (although the language itself isn't terrible). The idea of building functions on top of an OOP system which is itself defined in terms of functions clearly misses the point somewhere. The end result is a very specific compatibility layer between two different languages, but very far from solving the more general problem of language compatibility.

.NET isn't the only platform, or even the first, to attempt this approach, though, and it's hard to really blame them for not solving this problem, which they didn't set out to solve, but merely added as an afterthought. The author's article is about solving problems on principle, rather than making practical solutions and then trying to adjust them to solve other problems.

Nemerle is an interesting language on the .NET platform. The idea behind it is to have a syntax which can be extended from the language's own macro system - using a type-safe AST which is targeted with a PEG grammar. The eventual idea is that you should be able to use any syntax to target the same semantics in the AST, essentially merging the language and compiler into the same tool. The 2.0 version is an attempt to define all of Nemerle's own syntax as its own macros - making it a good example of the "extremist programming" the article is talking about (and certainly a bit closer to jonsen's vision than .NET in general).


> You can't use java's classes and perl's regexes and haskell's laziness. You have to pick the whole toolbox together, and take it as it is.

Which is (i) a pity, and (ii) not entirely true. Sure, many modern compilers represent huge piles of code, but others are small enough that you can realistically craft your tool like a jedi would her lightsaber.

Off the top of my head, I have 3 examples of language implementations small enough to be customized, ported, or otherwise scavenged for ideas: Lua, OMeta, and Maru.

http://www.lua.org/

http://www.tinlizzie.org/ometa/

http://piumarta.com/software/maru/


Wouldn't using a popular VM (e.g. the JVM) give you quite a few of those choices, though?


It helps with things like library, tool or framework selection because you end up evaluating the language itself more so than the ecosystem. I don't think many people are actually writing their applications in more than two different general purpose languages at the same time and place. Maybe you inherit some crusty code you just wrap up and re-use but you probably aren't doing new development in that old code-base at the same time.


In the Unix environment, you may often write Java, Python, and Shell, all in a day's work on one project.


And that would be two general purpose programming languages :)


Shell is general purpose (perhaps even more so than Java or Python), when you look at it from the perspective that programs are simply functions you call to get a result, or to perform additional computation.


Sure.


It would if you wrote a language with those choices in it for the VM in question, but that is also an option without a VM. Sticking to the toolbox analogy, the only way to pick and choose the tools to go into the toolbox is to make your own toolbox.


Another extreme direction to try out is the direction toward the machine. Trying out assembler programming may not have similarly direct benefits, but I personally find it a great general advantage to have detailed knowledge of the practical conditions under which your program must run - to know that, whatever fancy high level constructs you are making use of, you are always building a giant state machine where space is traded for time.
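
A trivial sketch of that space-for-time trade, in Python rather than assembler (the names are made up for illustration): precompute a table once, spend the memory, and every later "computation" collapses into a single table lookup.

    def popcount_slow(byte):
        # time-heavy, space-free: recount the bits on every call
        return bin(byte).count("1")

    # space-heavy, time-light: 256 entries kept around for the program's lifetime
    POPCOUNT_TABLE = [popcount_slow(b) for b in range(256)]

    def popcount_fast(byte):
        return POPCOUNT_TABLE[byte]

    assert all(popcount_slow(b) == popcount_fast(b) for b in range(256))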


To expand on your suggestion a little, I would suggest going even further - learn how to design your own processors and figure out how the low-level details really work. With hardware description languages such as Verilog, programmers can apply a lot of their knowledge to hardware engineering. I've found that a lot of things carry over from conventional programming languages to HDLs, and that it's incredibly easy to get started. The computer engineering mindset is pretty similar to low-level programming, and really helps you understand how your code runs, even more so than assembly.


One surprise you discover when you do this is that languages like C don't map efficiently to hardware at all. Hardware is inherently massively parallel, whereas C is completely serial. What modern hardware does to be fast is try to recover as much fine-grained parallelism from a sequential C program as possible, using pipelines and out-of-order execution. We are now at a point where that has been mostly milked out, so explicit parallelism is necessary to gain performance, like SIMD and multiple threads.

It's interesting to consider how you can exploit parallelism more easily, for example by going from a sequential instruction set and language to an inherently parallel instruction set and language. Nobody has found the ultimate answer to that yet. GPUs execute thousands of sequential threads in parallel, and while that works for problems with massive and regular parallelism, it does not work for irregular parallelism, parallelism that requires fine-grained communication, short-lived parallelism, or not-so-massive parallelism. FPGAs do work well for those types of parallelism, but they have other problems for general purpose computing. With hardware trends, it's inevitable that we'll see more and more parallelism and eventually a paradigm shift to inherently parallel architectures. Interesting times ahead.
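
To show the "explicit parallelism" point in miniature (a Python sketch; numpy is assumed here purely as a stand-in for SIMD-style vectorized execution, not something the discussion above prescribes): the serial version forces the machine to rediscover parallelism one iteration at a time, while the vectorized version states the data parallelism up front.

    import numpy as np

    def dot_serial(xs, ys):
        # One multiply-add per iteration; the CPU has to rediscover any
        # parallelism on its own (pipelining, out-of-order execution).
        total = 0.0
        for x, y in zip(xs, ys):
            total += x * y
        return total

    def dot_vectorized(xs, ys):
        # The data parallelism is stated up front, so the implementation is
        # free to use SIMD units (or whatever the hardware offers).
        return float(np.dot(np.asarray(xs), np.asarray(ys)))

    xs = [1.0, 2.0, 3.0]
    ys = [4.0, 5.0, 6.0]
    assert dot_serial(xs, ys) == dot_vectorized(xs, ys)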


The point you make is really valuable. A few days ago, I found myself explaining to someone why custom hardware and GPUs could so easily outperform processors, and I realized that most programmers have no concept of how much overhead the general nature of a processor entails. (Although I don't think most programmers really need to know this.)

For instance, let's take the problem of multiplying ten numbers. In a normal processor, you have a loop of instructions, and each instruction has to go through a "fetch" stage (to load it from memory), a "decode" stage, to figure out what the instruction is, an "issue" stage, to figure out which processor pipeline can best execute this instruction, an "execute" stage, to finally execute the instruction, and maybe a "commit" stage to write the outputs back to memory. (The exact number of stages and amount of parallelism depends on the microarchitecture and pipeline depth, of course.) What if we wanted to just build a chip that did this? We could put ten multipliers on the chip, and then do the exact same operations in just a few clock cycles, since we would have no instruction fetch or decode, no commit, no loops, and so on. This is a contrived example, but my point is that general-purpose processors are incredibly slow compared to dedicated hardware, precisely because the extra transistors necessary to make processors general purpose also take a large portion of the computing time.

I find the idea of FPGAs reconfigured per-application to be really interesting. Celoxica (http://www.celoxica.com) seems to do some sort of FPGA-based software acceleration for trading software, for instance. I wonder if it's possible to do something like this for a more general market...


Completely agree. I was about to write something like that. I've designed processor-like hardware, but haven't touched it since the '80s, so I'm not really confident about the state of the art of hardware programming.


Chuck Moore (of Forth fame) might be one example of actually trying to turn the full-stack, down-to-the-hardware idea into an extremist programming paradigm: http://www.yosefk.com/blog/my-history-with-forth-stack-machi...


Incidentally, this is a formula for coming up with PhD projects: take a common comp sci primitive and then remove it or shift it to someplace else in the life cycle or stack.

http://chester.id.au/2009/10/21/upsetting-the-natural-order/


This is why I use Clojure. I can do functional, OOP, logic, or any of the dozens of other programming styles in one language. And since it's a lisp, I don't have to wait for the language writers to add extra syntax to get what I want.

Pragmatic languages FTW!


I interpreted the OP as suggesting that you not use a single multiparadigm language, because then you won't be forced to follow the new principle everywhere. Of course, it's possible to do (for example) OOP in a lisp-like language, but you'll really be forced to work with it in a language like Java, which may give you a deeper understanding.


Well, avoid Java, which is just C++--, and think Smalltalk instead.


Therefore Java == C?


I don’t think programming languages have inverses, so in general λ + 1 − 1 ≠ λ.


Well, only shittier.


Your comment gives the impression that Clojure is a big multiparadigm mess. But it is not. Clojure's design is clean and simple. Clojure is mostly functional. Sure, since it's a Lisp one can kind of incorporate other paradigms with macros. But out of the box Clojure does not do Java-style OOP (except for interop) or Prolog-style logic.


Perhaps it does place-oriented OOP poorly, but it actually has an extremely nice logic engine (see core.logic).


Yeah, I know about it, but it's an additional dependency, right? [org.clojure/core.logic "0.7.5"] Whatever, Clojure is clean and simple is what I wanted to say. :)


>what if we made an OS where everything was a file?

Shameless Plan 9 plug.

http://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs#Design_co...


I think the author knows of this and avoids naming names for stylistic reasons. I mean, everything he lists has an archetype well-known to anyone with a clue; why be blunt?


Or, you know, Unix.

Edit: Which came from Multics, I know.


Plan 9 is the spiritual successor of Unix, because in Unix: Everything Is A File (except for the many, many things which are not files after all).


So, could we start a listing of things learned through the application of extremist programming?

Or better yet, a listing that shows where a given principle hasn't been extremified, so we can go try it out and see what happens?


Sure, you could try to treat everything like an object and then find out if an integer was an object (the hard way) or you could just RTFM (the easy way).
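
(The "hard way" in miniature, sketched in Python purely for concreteness rather than any particular language mentioned in the thread:)

    n = 42
    assert isinstance(n, object)           # in Python, everything is an object
    assert n.bit_length() == 6             # even plain integers carry methods
    assert (255).to_bytes(1, "big") == b"\xff"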

Don't get me wrong, experimentation is great for learning about limitations and capabilities; but I personally wouldn't use it as my primary means for learning about the design of something (unless it was very poorly documented, in which case I would try to avoid using it at all).


Awesome article. I would argue this goes for practices as well. Automated Testing vs TDD and the like.


Nice article and interesting viewpoint, taking principles to the extreme for learning purposes seems very useful.

The thing that impressed me most isn't the article though, but the amazingly beautiful clean look of that blog. Really a pleasure to look at and read on the iPad :-)


He underrates the power of principle.

"Mass is awesome. What if every object in the Universe had mass?"

"Liberty is awesome. What if there should be no such thing as slavery and every human being should be free?"

If you pick the wrong principle and take it to an extreme, then yes, it'll lead to undesirable results, but that means you should throw bad principles out, not all principles.


I have no idea what you are trying to say. You take a point that is explicitly about programming languages, and as far as I can make out you think you're saying something by applying it to... physics? and then political philosophy? What? This is not a sensible line of thought on any level I can find, not metaphorically or literally. The post is about trying out programming languages, not slavery.


The post may be about programming rather than politics, but the post is essentially recommending exploring the consequences of extremes, and that operation is valuable in (heh - taking it to the extreme!) all places. For example: 1984 - extremes of medication, Player Piano - extremes of automation.

You can also notice this technique in a debate, someone takes the emotional stakes to an extreme - while it's usually used as an attempt to convince you, it's actually a useful practice because it can magnify otherwise overlooked elements of the positions, opinions, and principles.


> 1984 - extremes of medication

The only medication I remember from Nineteen Eighty-Four was Victory Gin. I think you're thinking of Brave New World with soma.


Ah! I am, sorry, and thanks.


Huh? He was advocating for finding a principle and taking it to the extreme. He wasn't discouraging it.

Yes, he said that sometimes it won't work; but the whole point of the article is that you can discover and learn new things when you try taking one principle to its logical conclusion.

Where did you get the idea that he was arguing against that?


Yes, he said to try new principles, but he also said not to expect them to really work, which undercuts the motivation for finding principles that are really difficult to discern.


> he also said not to expect them to really work, which undercuts the motivation for finding really difficult to discern principles.

That would be an argument against science, which labors under the same constraint (i.e. with rare exception, things don't work), but as it turns out it really isn't an argument against science at all, because the rare "thing that works" tends to change the world.


"That would be an argument against science"

No. Great science is done by people who are more like Newton and Tesla, not people like you and Edison.


> Great science is done by people who are more like Newton and Tesla, not people like you and Edison.

Apart from the fact that "great science" isn't the topic of discussion, on the contrary: in terms of maximum effect on the human race, the "greatest" science is the man in the street who can't be bamboozled by an idea that has no supporting evidence.


"in terms of maximum effect on the human race, the "greatest" science is the man of the street who can't be bamboozled by an idea that has no supporting evidence."

That's a rather grandiose claim. Where's your supporting evidence?


> Where's your supporting evidence?

Modern times is my evidence. The reason the Church didn't burn Galileo at the stake in 1633, as they did Bruno in 1600, is not that they didn't want to, but that public awareness of science, of the value of comparing ideas to reality, had changed enough in that interval to make it a practical impossibility. And that was just 33 years.

The single most important manifestation of science is the general increase in everyday scientific reasoning, not the specialized form of science practiced by professionals, as important as the latter surely is.

Professional, high-level science gives us vaccines and modern medicine, certainly very valuable. Everyday science gives us the notion that an unfamiliar idea or claim must be accompanied by evidence. The second trumps the first.


Sure, the mob refraining from killing and attacking scientists is good. But the credit still goes to Galileo for his insight, not the common man for refraining from abusing Galileo.

IOW, who you have decided to credit is morally perverse.


> Sure, the mob refraining from killing and attacking scientists is good.

What? Are you reading what I'm posting? There was no mob; there was an all-powerful Church. It was the Church, not the public, who objected to what Galileo had to say and who put him on trial.

> But the credit still goes to Galileo for his insight, not the common man ...

You completely missed the point of this history lesson. The reason Galileo didn't share Bruno's fate wasn't Galileo, it was the common man. Times and perceptions had changed. Galileo and the common man both benefited. Everyone moved ahead except the Church.

The Church wasn't reluctant to burn Galileo at the stake because of his discoveries. Quite the contrary -- he was prosecuted for precisely those insights and the degree to which they contradicted Church dogma. But they could only go so far, because of public relations and changing times.

> IOW, who you have decided to credit is morally perverse.

Since I did no such thing, I don't have to defend it.

It's clear from what you're posting that you have no clue about this historical period, so ...

http://law2.umkc.edu/faculty/projects/ftrials/galileo/galile...


You were the one who made a wildly unsubstantiated claim that the common man's attitude about science has more effect on the human race than the insights of actual scientists. None of what you've said since then has justified this wild claim of yours and you can't because it's just patently absurd.

What's clear is that you haven't actually provided enough evidence to support your claim, that your original attack on me is pure hypocrisy, and that you now wish to cover your tracks by pretending that I just don't understand you.


> You were the one who made a wildly unsubstantiated claim that the common man's attitude about science has more effect on the human race than the insights of actual scientists.

"Wildly unsubstantiated"? Do you go outdoors much? It is a fact on the ground -- a dramatic reduction in religious persecution, indeed of religion altogether, a decline in the acceptance of superstitious beliefs, an increase in intellectual freedom demanded by the common man, and a thousand other examples. And the assumption that an idea with no supporting evidence is assumed to be false, by itself the most important evidence for public acceptance of the scientific outlook.

The true revolution in science is not the existence of scientific specialists (as important as that is), it is the fact that the public is willing to pay for the science, because they know it works, both as public policy and as an intellectual model for everyman.

Science is not about "actual scientists" as you put it. Science is about reality testing instead of blind belief. That's what distinguishes modern times from 500 years ago, a time when superstition ruled. Science cannot survive without public support, and public support requires public comprehension. The existence of working scientists is an effect, not a cause.

Look, I'm not going to bring you up to date on the last 500 years of human history; you're responsible for your own ignorance. Solve the problem at its source -- start now:

http://en.wikipedia.org/wiki/History_of_science

> What's clear is that you haven't actually provided enough evidence to support your claim

It's not my claim, as you would know if you knew anything at all.


Look, I get it. If you don't know why you believe it, it's a "fact on the ground." If you don't know why I believe it, then I have a burden of substantiating it for you.

I know it went over your head, but the dispute here isn't about the basic facts of history, it's about your grandiose misinterpretation of those facts and your inability to distinguish a "fact" from an extremely broad generalization.


I read it more like, to use your examples:

"Mass is awesome. What if every object in the Universe had mass [and did not have liberty, only mass]?"

"Liberty is awesome. What if there should be no such thing as slavery and every human being should be free [and had no mass, only liberty]?"

Pure mass without liberty is wrong. Pure liberty without mass is nonsensical.


> Pure liberty without mass is nonsensical.

That's a photon.


That's the right idea for a hobby or research project, but please don't do this in production code. Think about people who have to maintain it after you. I've seen my colleagues wade through a swamp of completely unnecessary C++ metaprogramming madness left by someone who apparently learned about templates yesterday, and it wasn't very nice.


I have to disagree with your first sentence, but with the rest I do agree. I think maintenance of the software should be categorized under "the right tool for the right job". If your company has dozens of skilled "extremists", then why not use it in production? On the other hand, if not...



