Systems Past: The software innovations we actually use (davidad.github.io)
135 points by vilhelm_s on March 14, 2014 | 96 comments



> FORTRAN’s conflation of functions (an algebraic concept) and subroutines (a programming construct) persists to this day in nearly every piece of software, and causes no end of problems.

This is exactly what Haskell solves. Not by eliminating subroutines, but by separating the two concepts out again. In particular, functions are functions in the algebraic sense, and subroutines just become values in the IO type.
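Roughly, in code (an illustrative sketch with made-up names, not from the article):

    -- A function in the algebraic sense: nothing but a mapping from
    -- inputs to outputs.
    double :: Int -> Int
    double x = x * 2

    -- A subroutine: an ordinary value of type IO (), describing an action
    -- that only happens when the runtime actually executes it.
    greet :: String -> IO ()
    greet name = putStrLn ("Hello, " ++ name)

    main :: IO ()
    main = do
      greet "world"       -- subroutines are sequenced explicitly
      print (double 21)   -- the pure function can be applied anywhere

The type system keeps the two apart: double can never perform I/O, and greet "world" is inert until it's wired into main.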

> Tracing compilers scratch the surface of reversing this mistake, but so far I know of no programming languages that are specifically designed around such a mechanism.

I'm not sure why he thinks tracing compilers help rectify the issue. Perhaps they can claw back some performance gains from knowing a function is pure, but static compilers can do that too.
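To make that concrete, a small illustrative sketch (my example, not the author's): because f below is pure, a compiler may evaluate it once and share the result, a rewrite that would be unsound if f did I/O or mutated state.

    f :: Int -> Int
    f n = n * n + 1

    g :: Int -> Int
    g x = f x + f x    -- may be compiled as: let y = f x in y + y

A static compiler can do this from the types and definitions alone; no runtime tracing is needed.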

And I think it's fair to say that Haskell was basically designed around "such a mechanism"; it's a shame the author doesn't know about it.

> ISWIM (which some programming language histories identify as the “root” of the ML family) is based on ALGOL 60 (source), which of course is based on FORTRAN

While ISWIM was influenced by ALGOL, it was based on the λ-calculus. And the λ-calculus, of course, precedes Fortran—and, in fact, computers in general—by quite a margin! It was originally developed in the 1930s.

Many modern functional languages are just thin layers over a typed λ-calculus: Haskell's Core intermediate representation is exactly that, extended with some low-level primitives. This means that they are far closer to formal logic than they are to Fortran!


> functions are functions in the algebraic sense

This is a minor nit, but there are effects in pure Haskell functions, namely partiality and non-termination. (In other words, the sense in which "functions are functions" is actually a deep question)
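An illustrative sketch (made-up names):

    -- Both typecheck as ordinary functions Int -> Int, yet neither ever
    -- produces a result: one fails to terminate, the other is partial.
    spin :: Int -> Int
    spin x = spin x                -- non-termination

    boom :: Int -> Int
    boom _ = error "no value here" -- partiality (crashes at runtime)

Denotationally, both map every argument to bottom (_|_).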

There's plenty of academic discussions on how to solve this problem. See stuff like this: http://lambda-the-ultimate.org/node/2003


The Haskell language is described in The Haskell Report via an informal presentation of its denotational semantics. Its types are all "lifted" from Sets to something like Scott Domains to account for partiality and non-termination, which are denoted by the value called "bottom" or _|_.

So, they are not strictly functions in the normal set-theoretic sense, but they are (mostly?) mathematically accurate continuous functions between Scott Domains. As the semantics are not formally defined, there is a limit to what you can say about them, but there is an interesting paper entitled "Fast and Loose Reasoning is Morally Correct" that shows that treating Haskell as if it worked in terms of total set-theoretic functions is a reasonable thing to do in some practical circumstances in the use of Haskell.

If you want really pure, total set-theoretic functions in a programming language, you will have to go to a total functional language such as Coq or Agda. You lose Turing-completeness when you stick to total functions, though, and most people only type-check their functions rather than actually running them (this is not as crazy as it sounds--these languages are primarily used to help formulate formal proofs, which are finished when they pass the type-checker).

In any case, the bit in the blog about FORTRAN and everything conflating procedures and algebraic functions strikes me as nonsense, at least without further explanation to clarify/justify it.


Your link is a good example of partiality:

> user error: Table './ltu/cache' is marked as crashed and should be repaired query: SELECT data, created, headers FROM cache WHERE cid = 'filter:4:37963a22e3cdd6b501519c657a75ceeb' in /home/vhost/ltu/www/includes/database.mysql.inc on line 66.


I always thought that algebraic functions are not guaranteed to be defined given certain parameters. It's just that perfectly algebraic functions don't throw errors, they silently return +-infinity. Like the asymptotes in `tan x`.


To be pedantic, tan doesn't have a value at tau/4 (or pi/2, if you swing that way). Also, algebraic functions don't return, they are - cos(0) is 1, it doesn't return 1, it doesn't compute 1, it is not a kind of computer, nor a kind of program, nor any kind of thing that consumes resources and time and returns a value; it really literally is 1.

Algebraic functions are just syntactic notation. You can sit down and convert from one notation to another, like how cos(5) is a number quite close to 0.28366218546322625, and deriving one representation from the other does take resources and time, because it's a physical process performed by a person or computer.

But sin, cos, tan, cotan, log and all their friends by themselves don't compute, they are just a different kind of notation for numbers.

Which is why I find the desire to make functions in programming like algebraic functions silly - by definition they are two completely different things. One is a specification for a process that produces binary-encoded numbers, the other is a syntactic notation for real numbers.


I upvoted you because I think this is an interesting line of reasoning, but I disagree that it's a very useful one.

You seem to be implicitly defining "computation" as that which a physical computer does (machines, biological brains...). Something that literally consumes physical resources. By that logic, a Turing machine is not a computer and lambda calculus is not about computation. As I said, you could spin the semantics that way but is it useful? Does that give us useful insights?

Functions are not just syntactic notation. Functions are, by definition, mappings from set A to set B. They don't have anything to do with notation. "cos(x)" is merely a notation, yes, but not of a number but of a function. This is an important distinction. "cos(5)" evaluates to a certain number, yes, but it's not just syntactic sugar for that number. Not to mention that functions don't need to operate on sets of numbers.


I agree with your point about functions versus algorithms (or computation if you prefer that term), but I disagree that your definition of functions isn't useful for computing.

It's precisely that sense of function that things like Haskell (or Erlang, or Prolog) try to introduce - a different syntax for writing the same value.

A pure function is just a statement that a particular value is other functions applied to other particular (stand-in) values (plus a way to introduce names for values). These are only useful for explaining which computation you'd like to happen when the program actually goes to carry one out: I'd like the number which is the same as the cosine function applied to 5, which tells it precisely how to compute that number (assuming it can build an algorithm equivalent to cosine). Of course, we could use another pure definition (that the cosine of x is the same as a series expansion about x) to allow the compiler to replace the desired cosine function with something that has intrinsics on the system: evaluating a polynomial at the point 5.
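A purely illustrative sketch of that cosine example (the names and the truncation are mine):

    -- Two pure definitions that denote (approximately) the same number.
    -- Neither says anything about when or how it gets computed; the
    -- compiler is free to pick whichever evaluation strategy it likes.
    cosSeries :: Double -> Double
    cosSeries x = sum [ (-1)^n * x^(2*n) / fromIntegral (product [1..2*n])
                      | n <- [0 .. 10 :: Integer] ]  -- truncated Taylor series

    answer :: Double
    answer = cos 5    -- or cosSeries 5; both name roughly the same value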

So in essence, we can use functions as a tool to describe what value we want from the program (or various structural properties), in such a way that the compiler can correctly infer how to build an algorithm to do so, and tie it correctly in to our (actually) algorithmic code.

Minor aside: functions (in the math world) apply in contexts besides the real numbers, including being syntactic notation for binary numbers (or fields of order 2^n).


To be really pedantic, "algebraic functions" are functions that can be defined as the roots of polynomial equations, i.e. solutions of equations of the form f(x_1, x_2, ..., x_n) = 0. There are other, more general notions of "function" in mathematics, though. I'm not sure how pedantic the author of the original post meant to be!
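For a concrete example of the distinction: y = sqrt(x) is algebraic in this strict sense, since it satisfies y^2 - x = 0, whereas sin(x) is not (it's transcendental), even though both are perfectly good functions in the broader set-theoretic sense.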

Most generally, a function is not a number, it is a relation between two sets. One can define such things in a number of ways, but one way is to describe the relation via a formal language such as the lambda calculus. The lambda calculus is not technically a "programming language", it is a formal system of meaningless symbols and a set of rules for forming, manipulating, and eliminating them.

Although there is no fixed relation between reduction rules in the lambda calculus and physical resources in a computer, one can still compare the number of reduction steps required to determine which form in the co-domain of a function defined in the lambda calculus corresponds to a specific form in its domain, and this will provide a reasonable stand-in for comparisons between resource usage of similar computer programs.

So, really, computation is not completely foreign to mathematics, and mathematical functions and "computing functions" are not completely different animals, just... distantly related? Some languages are more mathematical in their approach than others.


> But sin, cos, tan, cotan, log and all their friends by themselves don't compute, they are just a different kind of notation for numbers.

No, they are a notation for ideas. The fact that they have numerical values associated with them is true but misses the point that trigonometric functions are primarily defined with respect to each other and with respect to certain geometric concepts.

In other words, cos(0) equals 1, but 1 has many more meanings than just cos(0). I would much rather have a student tell me that tan(x) = sin(x) / cos(x) than to say tan(x) is just a number.


Actually, to be pedantic, functions are notations for relations between sets of numbers: sin and cos are relations between the real numbers and the closed interval [-1, +1], and so on.


In a purely mathematical sense, if tan(pi/2) doesn't denote a number, what is its type as a value?



Non-termination makes perfect sense, but would you mind sharing a layman's explanation of what you mean by the effects of partiality?


Roughly speaking, partial functions are functions that might crash -- or that might throw exceptions, if you prefer. So even if such a function always terminates, it might do so at the cost of not returning a value of the expected type.

A bit more precisely, a partial function is a function that is not defined for some values of its domain (its input). A well-known instance is the division operator, which is not defined when the divisor is zero. Other common examples are head and last:

    -- this promises to receive a list of elements of some
    -- type t and return the first element
    head :: [t] -> t
    head (first:rest) = first 
    head [] = error "that list has no head, yo!"
Where error is analogous to throwing an unchecked Throwable (eg RuntimeException) in Java.

One solution is to use a safer version that returns Maybe a instead of a. This is analogous to using checked exceptions.

    safeHead :: [a] -> Maybe a 
    safeHead (first:rest) = Just first
    safeHead [] = Nothing
The solution above is common and idiomatic.

Another option is to accept only arguments of a different list type that's guaranteed to be non-empty.
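A minimal sketch of that last idea (essentially the NonEmpty type that Haskell libraries provide; the function name here is made up):

    -- A list that cannot be empty: one guaranteed element plus a
    -- possibly empty tail.
    data NonEmpty a = a :| [a]

    neHead :: NonEmpty a -> a
    neHead (x :| _) = x   -- total: there is always a first element

The partiality disappears because the empty case simply cannot be constructed.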


Something that is kind of neat is that Fortran 95 has pure procedures. Their definition of "pure" looks very similar to Haskell's.


Most of the comments in this discussion are missing the point of this article in almost its entirety. I am seeing everything from "Haskell is the most mathy of the languages and resolves an approach to algebraic concepts!" to "This article is bullshit!" ... Sad, really.

An easy way to summarize what this article is trying to convey can be derived from the title of the article: Systems Past. The next chapter would simply be: Systems Future. And this is what the author is trying to get across.

There is nothing wrong with languages or OSes. What's wrong is a seemingly pervasive attitude throughout the hacker community to never want to improve on foundational concepts. This is usually argued as: 'if it ain't broke don't fix it'.

One critique of this article I will give is: these software innovations are dependent on the hardware architectures used. And we have been using the same basic computer architecture for decades. So maybe it is not fair to assume revolutionary systems innovations should happen before we have revolutionary hardware systems to program?


I am very happy that you get the point! :-)

To address your criticism: Internetworking fundamentally required new hardware, Interactivity and Hypermedia depended on advances in display technology, and Virtualization and Transactions benefit substantially from hardware acceleration. However, the OS, the PL, and the GC were all independent of any new developments in hardware. Our display technologies are already way ahead of the computational features they should be able to support. Same goes for telecommunications. And the hardware acceleration that powers virtual memory and memory locking is versatile enough to be applied to more advanced abstractions as well (although the advanced abstractions might later benefit from more advanced acceleration).

I spent a few years at MIT trying to design revolutionary hardware systems and left with a deep respect for Intel. Much as I'd like to have a PC based on the Lisp Machine or the Connection Machine, I've come to believe it's we software folks who really aren't keeping up, rather than any kind of stagnation in the hardware world.

In fact, Intel comes out with a whole pile of new machine instructions every other year, and they probably never get invoked once on most PCs: most binaries are effectively compiled for AMD Opteron (the first x86_64 processor, released in 2003) so that they'll run seamlessly on anything since then.


This is an outstanding essay -- more fields need this kind of thinking. Plus it's doubly ironic having it in computation, a field that has always seemed determined to ignore history, and reinvent it.

A nano nit: Yes, GC came from Lisp, but its first mention was in AI Memo #1 (the first MIT AI Lab working paper) by Minsky. I have a copy someplace -- it was only a few pages long.


I would love to read that if you could post it.




The memo was not #1 but #58 (apologies -- my hardcopy is buried someplace, and I made it years ago when rummaging through Marvin's file cabinets for something interesting and ran across it).

But thanks to the miracle of modern science, it's online here: ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-058.pdf

#1 was JMC's memo on Lisp, but (tying back to this thread) you had to manage memory with CONS (malloc()) and erase (free()).

(I note the date was around the time the first PDP-6 was delivered to the AI Lab, so the memo was probably timely. Boy, the PDP-10 (basically the 'real version' of the PDP-6) was such a pleasure to write for, with an instruction set that was basically Lisp primitives.)



micro nit: McCarthy, not Minsky


Agreed, this essay was incredible.


See David Wheeler's page, The Most Important Software Innovations <http://www.dwheeler.com/innovation/innovation.html>.

The page has been online and refined for 12 years. It lists things such as the Stack, Packet-Switching Networks, Spelling Checker, Relational Model and Algebra (SQL), and quite a few other useful and important software innovations.



Considering the idea of a programming language to be a fundamental innovation derived from FORTRAN disregards earlier concepts, like Gödel numbering from 1931, which already exhibit language interpretation. I guess what I'm trying to say is that every good idea is closely related to countless others; pigeon-holing them into the Only 8 categories and naming a "first" doesn't do justice to all of the interesting ideas of computation.


> Virtual memory should have been extended to network resources, but this has not really happened.

I get that we are already operating in heterogeneous virtual memory worlds, but network transactions are so slow. I can't see it being useful to have them as virtual addresses if random reads and writes to network space take literal seconds of round trip. That is so much worse than even disk, there is a reason networking is at most virtual filesystem bound and at least just its own thing above that via URIs.

It really pokes holes in Von Neumann computer models around memory when the memory has heavily disparate access times. You can have networked devices via drivers (like printers) that do have those huge round trip times, or you can have cache hits that give you single digit cycle retrieval. It is NUMA before you even get to the hardware version.

> Reject the notion that one program talking to another should have to invoke some “input/output” API. You’re the human, and you own this machine. You get to say who talks to what when, why, and how if you please. All this software stuff we’re expected to deal with — files, sockets, function calls — was just invented by other mortal people, like you and I, without using any tools we don’t have the equivalent of fifty thousand of. Let’s do some old-school hacking on our new-school hardware — like the original TX-0 hackers, in assembly, from the ground up — and work towards a harmonious world where there is something new in software systems for the first time since 1969.

So if you had a machine executing code - without an operating system - it would need to pull in functions from disk or something whenever they get invoked by another program? And it would need some means to deduce which function that is, via some mechanism to scan the filesystem and find it. Because you need to discretize out "programs", since each one is inherently insular in its world view. So you just execute dynamic code that invokes other dynamic code.

That sounds like a real big performance hit, though, to have an indirection on every function call to see if it's properly resident in memory or just fake - at least a conditional every time saying "is this function at 0x00? then it needs to be looked up!".


Virtualization and network resources have a long and glorious research history, although most, if not all, of the approaches are not currently in fashion. (And maybe there's a good reason for that?[1])

"Distributed memory" is one example. Once upon a time it was a big deal. I suppose that things like iSCSI could be regarded as an application, but that is the only use I know about that's at all recent.

"Remote procedure calls", "distributed objects" and stuff like that could be seen as "virtualization", if you like. I suspect all of the major advocates of these have either recanted or died off---at least I hope so. When I'm wearing my network-protocol-guy hat, I hate these things with the fiery passion of a thousands suns.

[1] See http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.7..., "A Note on Distributed Computing" by Jim Waldo, Geoff Wyant, Ann Wollrath, and Sam Kendall, from Sun ca. 1994, for some early, clear, and correct reasons why making remote things look like local things doesn't work as well as you might think.


> I can't see it being useful to have them as virtual addresses if random reads and writes to network space take literal seconds of round trip.

It's just another stage of the storage hierarchy, isn't it? Registers → CPU cache → DRAM → Flash storage → spinning-rust storage → network storage → archive storage.

It's not practical to cover all of those with the one abstraction (having a CPU read-byte instruction pause while a robot navigates a warehouse to find the right backup tape is a bit ridiculous), but what we have at the moment is mostly a reflection of the way our technology ecosystem has developed, rather than the best way to organise and cache information.


Local networking can be significantly faster than accessing a hard drive. It's very common to have only network filesystems. Look at EC2, for example. Toss a cache on it and everything's good.


This indirection already happens in present systems. Check out my previous blog post about it: http://davidad.github.io/blog/2014/02/19/relocatable-vs-posi... Modern hardware is optimized to do this kind of indirection reasonably fast. What I propose can exploit the same hardware acceleration.


Perhaps it was meant in a different sense: I thought he meant that a "socket" resource shouldn't have the programmer specifying the physical hardware and port number while other programs have the ability to monopolise that resource.


Haven't people been communicating information to other people as text for thousands of years? I'm not so convinced that there's a universally better mechanism for communicating with machines.


Text isn't as important as language itself, but there's the rub: no one has yet devised a better way to program computers than language. Even the so-called "visual environments" we see today ultimately boil down to recognizable linguistic concepts: they do it by arranging shapes in space rather than writing text, but you can pick out the same nouns, verbs, and other parts of speech. And that's the problem. Visual environments are trying to represent language, and they're just not as good at it as text, which is the closest thing to a native format yet devised for the stuff. Eventually the inefficiencies start to get annoying, people go for the greater efficiency of text, and the visual environment languishes.

The bottom line is that visual environments will never be as good as text for representing programming languages. Rather than trying to do language better than text, they need to find a paradigm that's better than language, and implement that. I don't have any clue what something like that would even look like; to be honest, I'm not sure it can even be done. But the greatest victories against the most hopeless-seeming odds are won by changing the whole nature of the fight, and that's what visual programming is going to have to do.


What, precisely, do you mean by language? Machine language doesn't exactly have grammar the same way that human language does, and I can't interpret a concatenative language that way either (which is much higher level). We're not going to get away from the idea of a linear sequence of bits/symbols, but beyond that almost anything goes, and indeed probably has gone.


Finding a paradigm better than language, i.e. an abstract symbolic system, would be such a revolutionary turning point in the ... um, everything that I think programming would at that point seem just silly. What would that even mean? We would become some sort of gods. Mathematics as we know it would certainly become irrelevant. I'm not sure this is even science fiction.


Yes. This is what I always think when people criticize text-based programming languages.

People also point and gesture and use body language to communicate. Both can be done at once, and you can communicate without using a single spoken or written word.


This is a fair point which deserves a response.

Before text was invented, people communicated using spoken language. Text was a major breakthrough because once some communication was written, it became a physical object which could be stored in libraries, carried by sea or horseback, and read by more than one person. However, it was a compromise: you could no longer interact with the reader. As a result, most people prefer to communicate interactively. Even people who cannot speak use sign language instead of resorting to text.

What we are doing right now is a hybrid of textual and interactive communication; we're taking turns writing chunks of text. In the programming world, this is roughly equivalent to a REPL. But actually developing software using a REPL is still quite uncommon; I think it's reserved mostly for Emacs Lisp hackers.

We're doing this instead of communicating by audio partly to create a public indexable record of our correspondence, partly to protect our privacy, partly for synchronization reasons, and partly due to arbitrary norms and the information systems that co-evolved with them; but mostly to save ourselves from mentally keeping track of the edits we make to our expressions before committing them ("scratch that...what I meant to say is.."). When the target of one's communication is a machine with a visual display, that latter concern completely evaporates.

Naturally, the screen editor (introduced in 1961 as Expensive Typewriter for the PDP-1 and pretty well refined by the NLS era) implements this suggestion thoroughly. Yet, generally, the screen editor is used only to edit pieces of text, which are then separately turned into programs. What I'm proposing is nothing more outrageous than a screen editor which edits programs directly instead of textual representations thereof.


Or it could be that it will take another thousand years for us to develop a good language for the new medium. Text took a long time to figure out.


I was just reading https://en.wikipedia.org/wiki/Mail#History earlier. Interesting how they recycled resources (a pool of fresh horses) on a network of relays.


> Every programming language used today is descended from FORTRAN

As a matter of fact, not everything descended from FORTRAN. COBOL was heavily influenced by FLOW-MATIC, which descended from the A-0 System. All three of these were created mostly due to the work of Grace Hopper, commonly called "the mother of COBOL."


APL did not descend from Fortran either - it wasn't even designed as a programming language, but rather as a standard notation for algorithms.


Lisp is also a language isolate. The first EVAL was written by hand in assembly.


For a truly innovative operating system and programming language, I recommend that everybody go learn about Urbit.

http://www.urbit.org/


I am very glad that Urbit exists and is being funded. I have corresponded with the author and differ with him on some design decisions that I consider significant. Nonetheless, it is an excellent example of the "rethink everything" type of ambition that I would like to see expressed more.


Quote from their tutorial: "Whipping or clamming on a %weed simply ignores the sample and reproduces the example - as does bunting, of course. However, %weed remains quite useful." Clearly free-thinking people.


Urbit made my head hurt. Interesting, but in an "I don't want to go there and do real work with it at all ever" way.



I'm still scared after watching their videos, too psychedelic.


Reminds me of the "Future of Programming" presentation given by Bret Victor [0]. Bret's talk is much more focused on the concepts that were created during the early period of CS, but abandoned(more or less) over the years rather than the major concepts that have persisted.

[0]: http://vimeo.com/71278954


"ARPAnet is the quintessential computer network. It was originally called “the Intergalactic Computer Network” and ultimately became known as simply “the Internet”."

Awesome, I did not know that.


It was something J.C.R. Licklider envisioned in the early 1960s when he was the first director of the Information Processing Techniques Office at ARPA.

https://en.wikipedia.org/wiki/Intergalactic_Computer_Network

M. Mitchell Waldrop delves into the history in his wonderful book about Licklider titled "The Dream Machine". Here's chapter one:

http://www.nytimes.com/2001/10/07/books/chapters/07-1st-wald...

Here's an example of a memo written by Licklider discussing the Intergalactic Computer Network:

http://worrydream.com/refs/Licklider-IntergalacticNetwork.pd...


Shared Memory seems dangerous because you assume that both actors are behaving correctly. The reason programs are isolated, I always thought, wasn't because programmers were lazy but to ensure that a malicious or badly written program can be contained.


I think that shared memory is bad because it doesn't scale: once you've saturated your memory bus because you have too many cores using it, you need several memory buses, and 'shared memory' doesn't seem so nice anymore.


Well, if that's the goal, it failed completely. If you run a malicious program once, it can spread to all your data.

Isolation was a protection against badly written programs, but it was mainly about simplifying things and increasing the (virtual) memory available.


Is it just me, or is #2 (Operating System – running separate programs concurrently independent of one another) effectively the same as #6 (Virtualization)? It is the same idea – the programmer can pretend that the program has a machine all to itself.


A modern OS is definitely a virtual machine, where each process perceives that it is running on a single CPU with its own single contiguous bank of memory. Threads are a bit of a leaky abstraction but whatever.

What is interesting is that the operating system virtualizes a machine that doesn't actually exist: fake "hardware" that can execute syscalls like read/write/exit. A VM in the contemporary sense has the exact same functionality, with a different interface. Rather than read/write as syscalls, you have to send SATA commands to disk, or commands to a network card, or whatever. Instead of an exit system call as an interface you work with a hardware interface that powers down the physical machine.

Containerization is actually a logical next step from this. Why virtualize a REAL hardware interface only to virtualize a fake one on top of it? The only reason to do that is if you want multiple fake interfaces, eg Linux and Windows. When virtualizing a bunch of Linux machines, mostly you really just want isolation of your processes. Virtualizing real hardware is a hack because Linux was not capable of isolating processes on its own, so you had to run multiple copies of Linux! Now with cgroups and other resource namespacing in the kernel, it can isolate resources by itself.


The fact that an OS supplies system calls is mostly irrelevant – it is a separate concept (not listed in the original article) which we usually call “Software Libraries”. But innovation #2 did not list the standard libraries as a point of an Operating System – the process isolation is the point. Libraries had been in use long before.

I definitely agree that hardware virtualization is going the long way around, and that more refined process isolation is the way to go. The Operating System was made for this, and it should continue to do this; there is no architectural need for an additional level of isolation.


Didn't know what to expect going into this with a title like that, but was pleasantly surprised. A legitimate list.


I wholeheartedly agree with the sentiment expressed by the introduction to this article. We really do seem to have got stuck in a deep rut, where we can make progress laterally but can't seem to come up with anything truly novel to move the state of the art dramatically forward.

I have some issues with the style of the rest of the article, though. It consists of a lot of very interesting and thesis-supporting facts, but they are couched among a lot of arbitrary statements ("only 8 software innovations...") and dubious claims that don't seem very well supported on their own.

I mean, yes, you say there are eight and then list eight, but I am not left convinced that those are the ONLY eight. You say that all languages (aside from a bit of backpedaling in the footnotes) are descended from FORTRAN, which is a pretty bold claim to make, but the justification you provide seems to reduce "descended from" to a mostly meaningless "probably borrowed some ideas from" that is hard to base any value judgement on. Surely not all ideas in FORTRAN were misguided!

The whole rest of the article continues in this pattern, distracting from basically good points with brash and sometimes bizarre (from my perspective, at least) statements that seem to betray a wide but very spotty understanding of computing history. Granted, it's been chaotic and not terribly well-documented, but that ought to give one second thoughts about writing with such a definitive and dismissive tone.

I want to repeat that I agree with the general premise, and I think that it's unfortunate that I came away from the article feeling like I disagreed with it due to the problems noted above. I had to re-read the intro to remember the intent. Hopefully this criticism is accepted in the constructive sense in which I offer it, as I think that there's some great insight there that could be more effectively conveyed.


On the one hand, the article challenges us to question established ways of doing things. On the other, the first footnote correctly points out some projects that were economic failures because they were technology for its own sake rather than providing something of value to people.

Some of us may have the freedom and the desire to hack on things that are destined to be economic failures. But for the rest of us, I think it's more important to err on the side of technologically conservative but economically successful projects. So, most of us, myself included, will continue to work within the context of established programming languages, operating systems, and other groundwork that has already been laid for us.


A very well-written article that promotes thinking, or better yet, re-thinking. The main point to take away is this: don't take current, commonly used constructs and architectures for granted; they're the result of decades of tradeoffs designed to tiptoe around technology's constraints. Today most of those constraints are long gone and the assumptions don't hold anymore. If we could just forget the bad parts instead of accepting them as gospel, and use these five decades of experience to build something new and better, maybe we could finally stop our current methodologies from curbing our progress.

Kudos to the author.


Interesting article, but I think that the basic premise, that it's "bizarre" that we're still using concepts developed 50+ years ago, is a bit naive. To give an analogy from a different engineering field, the basics of rocketry and space travel were conceptually almost completely developed nearly 100 years ago. Multistage rockets, orbital stations, etc. Should it be considered bizarre that we still use those same concepts and mechanisms to fly into space? I don't think so. People had figured out an optimal (sometimes the best or only possible) method to do something and we're using it. Sometimes a better idea isn't possible because a better method can't exist. Some ideas are simply timeless.

One of those ideas, I believe, is expression of programs as text. It wasn't even a distinct idea, it's just the most efficient, natural way to express algorithms. You can't get around the fundamental mathematical fact that you need a formal symbolic system to express algorithms, i.e. you need a language. Until we gain the ability to directly interface our brains with computers we'll need to express language in written symbols, and even then I doubt we could get away without text for cognitively expensive activities such as programming (because of limitations of our working memory, etc.). "Lines of text" are anything but limiting.

The things the article presents as drawbacks of operating systems I don't think are drawbacks at all. Having the OS lie to programs so that they don't have to know irrelevant details of the machine is a really good thing.

> But when it comes to what the machine is actually doing, why not just run one ordinary program and teach it new functions over time?

What?! You mean like one monolithic piece of code doing everything ranging from memory management to email and multimedia? I must be missing something, am I stupid and just don't understand the proposal?

> Why persist for 50 years the fiction that every distinct function performed by a computer executes independently in its own little barren environment?

Because it's a good idea, it reduces complexity for the function (program) in question.

> A righteous operating system should be a programming language.

Like we had with some early PCs where you essentially had a BASIC interpreter for an OS? That concept got replaced because it was a horrible way for humans to do actual work instead of dicking around all day with toy programs.

> Let’s do some old-school hacking on our new-school hardware — like the original TX-0 hackers, in assembly, from the ground up — and work towards a harmonious world where there is something new in software systems for the first time since 1969.

While I have nothing against assembly (to quote Michael Abrash: "I happen to like heroic coding."), first, I find the idea of regressing to old methods of producing programs to yield new ways of computing a little strange, and second, there's a good reason assembly isn't used unless necessary--it's a horribly unproductive way to solve problems. Unless the assembly in question is Lisp. ;) Or Haskell. So if we're dreaming, let's dream all the way--we need pure functional computing machines, not just "better mouse traps".


>> A righteous operating system should be a programming language.

>Like we had with some early PCs where you essentially had a BASIC interpreter for an OS?

I think that line has some relation to the graphical programming language at the beginning. If so, no, it's nothing like old BASIC interpreters, and more like making GUIs out of interoperating modules.


Am I the only one impressed with how long some of these technologies have been around? I know most of the common ones such as the internet and FORTRAN, but I did not realize how long markup languages have existed. I'm also impressed at the rate technology improved in such a short amount of time. That must have been an exciting time to work in the field.


I find that in the pursuit of the latest and greatest, a pattern we so often see in society, it is easy to forget to let our historical experiences of the past inform our current and future thinking. There is a place for both - pure forward thinking creation as well as innovation inspired by the past.


Only? No mention of viewport clipping? Compression? Encryption? Bittorrent?


would the way we use computers be completely unrecognisable without those things?


What would the internet look like without encryption?


I included a footnote to address this criticism. http://davidad.github.io/blog/2014/03/12/the-operating-syste...


so wait. are you saying encryption… encryption was invented after 1970?


Asymmetric encryption (public-key cryptography) was developed in the 1970s. The internet would be very different without it.


This reminds me of golang vs brand-x http://cowlark.com/2009-11-15-go/


If this writer is so convinced that we need a new way of interacting with computers, why isn't he building it, instead of just writing about it?


Because software actually is kind of hard, and to get anywhere you need to convince a somewhat larger group than one to all work towards the same goal.

And how do you know he is not?


Yes. I am working on it, but it is "kind of hard" to do by oneself. Also, writing about it is a good step, regardless of who might be convinced or not, simply because it forces me to get my ideas more straightened out.


I'm quite curious where you intend to go with the "mesh" project on Github. "An operating system with the heart of a database" sounds like some of my ideas, as does your doc/index.md, but it seems to stop there.


> I am working on it

I don't see that mentioned in the article; a mention or a least a link to your own work would be helpful.


> to get anywhere you need to convince a somewhat larger group than one to all work towards the same goal

I don't think this is true. For example, Linus Torvalds didn't convince a larger group to work on Linux first, and then build it; he built Linux first and then the larger group came to it because they saw that it was worth working on.

Granted, Linux was not as ambitious as the kind of thing the article is talking about; but that doesn't mean the method has to be different. Working code, even if it's for a very small subset of what you eventually want to build, is a great convincer.


You want to throw out the OS and programming languages? You think that text is a poor interface for specifying machine behavior? Then show me. Show me something real. Perhaps not text (since text is a dirty word, right?) but something I can install on my machine.

If you're going to pull an Emperor's New Clothes, then it's not enough to loudly (and snarkily) proclaim that the emperor has no clothes on - you need to produce a naked emperor.

Heck, it should be easy, right? If you think that the conventions of the last 50 years are all shit, then how hard could it be to come up with some new ones? If you want to shake the pillars of computer programming, you need to be able to do more than say "everything is crap," and if you can't, then you sound like nothing more than a surly, precocious, ignorant teenager.


These arguments probably trace back to the pitfalls of Von Neumann architecture, and the hardcoded expectations of memory and such. It would require experts in very disparate fields (mechanical / electrical engineering, computer engineering, software development, theoretical mathematics, and others) to develop a viable replacement.

I do postulate that we are already kind of warping that model, though. SMP and NUMA force you to question the assumptions, and that has led to a lot of the modern changes of tone (functional programming becoming big because asyncs and lambdas are really useful, for example).


I'm sad that you got downvoted, though I have a feeling it's simply your choice of language, as your points are spot on. This is a wonderfully curated list of where computers have come from. But to go further and claim that we're all somehow fools for not exploring new ideas without offering any overview of what a new idea might look like is disingenuous.

Plenty of artists can dream up interfaces like in Minority Report, Tron, or Hackers, but to actually build something that achieves a human goal is much, much more difficult.

For what it's worth, I think the Self object environment, Squeak smalltalk and Flow programming are beginning to approach the paradigm shift the author hopes for. But in the meantime, I'm communicating with all of you in words, so I might as well communicate with my computer in them as well.


Honestly I feel like this type of viewpoint is one that almost every programmer gets to after about 10 years of doing real work. The difference is tact. Some hack out experiments in new programming paradigms, some post detailed ideas or concepts of how things can be improved, and others just try to gain street cred by saying everything is obviously shit and we are all fools for not "fixing" things.


Well said, sir. Worth noting that this overview of where we've come from and how not far we've come was posted by a fellow who is both clearly a genius and who has his most popular GitHub project written in x64 assembly. So I'd argue that even he sees the merit in the past when attempting to blaze a trail into the future.

For one thing, as long as our computers are binary, they are going to require instructions in a very specific way and anything we put over the top of them will expose that architecture to some degree or another.


My entire article is about things that happened between 1955 and 1969. The part where I state my thesis literally concludes "With a solid historical perspective we can dare to do better." I'm not sure how much more obvious I could make it that I "see the merit in the past when attempting to blaze a trail into the future".

It's the _present_ whose merit I find lacking.


Or it could be that the prevalence of this attitude prevents nascent alternatives from receiving adequate support.

Text took thousands of years to develop.


The author forgot node.js! It has... modules!


Quite a load of shit. Those innovations are now taken for granted, and the "this was then", "this is now" comments are where you see the leakage in the arguments.

I'm not going to quote anything from this to dismiss it. It's simply wrong, a bit extreme, and out of context. You would have to be an idiot to buy the entire premise.


This comment is a load of shit, and so since you're happy to dismiss an entire essay without any backup, evidence, or argument, I can just as easily dismiss your comment. Hitchens's razor.


> [Transactions] enabled the development of systems called databases, which can reliably maintain the state of complex data structures across incessant read and write operations as well as some level of hardware failures.

This is interesting. It suggests to me that either the author is using a nonstandard or outdated definition of "database", or that a fundamental tenet of database implementation, transactions, has been left for dead by the side of the road by many modern "database" projects.

Regardless, this is a great and informative piece.


How is transactional integrity an outdated concept? And you are actually telling me that there are "databases" that don't have transactional integrity?


MySQL with MyISAM tables.


Ah. Thanks.



