Toward a better programming (chris-granger.com)
297 points by ibdknox on March 28, 2014 | 189 comments



I alternate between thinking that programming has improved tremendously in the past 30 years, and thinking that programming has gone nowhere.

On the positive side, things that were cutting-edge hard problems in the 80s are now homework assignments or fun side projects. For instance, write a ray tracer, a spreadsheet, Tetris, or an interactive GUI.

On the negative side, there seems to be a huge amount of stagnation in programming languages and environments. People are still typing the same Unix commands into 25x80 terminal windows. People are still using vi to edit programs as sequential lines of text in files using languages from the 80s (C++) or 90s (Java). If you look at programming the ENIAC with patch cords, we're obviously a huge leap beyond that. But if you look at programming in Fortran, what we do now isn't much more advanced. You'd think that, given the insane increases in hardware performance from Moore's law, programming would be a lot more advanced.

Thinking of Paul Graham's essay "What you can't say", if someone came from the future I expect they would find our current programming practices ridiculous. That essay focuses on things people don't say because of conformity and moral forces. But I think just as big an issue is things people don't say because they literally can't say them - the vocabulary and ideas don't exist. That's my problem - I can see something is very wrong with programming, but I don't know how to explain it.


I tend to think of it as evolution: new ideas that succeed and improve performance multiply, new ideas that fail disappear, and old ideas that are still efficient live on.

People still typing the same Unix commands into terminals do it because it's still the most efficient way to accomplish those things. Maybe it's always going to be the optimal solution for some tasks.

Just because an idea is old, doesn't mean it's bad or wrong.


Text as code, for instance, is just an extension of a concept which has been around in one form or another since the scribes of ancient Babylon, or thereabouts. I can imagine more complex ways of representing code but I can't really imagine anything more efficient for translating human concepts into machine language. Except maybe direct, augmented telepathy, but even then people would probably think to the computer in glyphs and symbols. Complexity doesn't always correlate with advancement.


I once had the pleasure of asking Marvin Minsky if he thought we'd ever use something other than text to create computer programs. Without missing a beat he said, "If it's good enough for Plato and Aristotle, it's good enough for me."


The problem with code as text is not the visual representation but the manipulation. From the point of view of tooling for live-coding, it's incredibly painful to deal with an unstructured pile of text that at any given point may be in an invalid state (ie partially edited). Structured editing (http://en.wikipedia.org/wiki/Structure_editor) allows the tooling to deal with consistent, structured data whilst still taking advantage of the expressive power of language.


It's actually not that hard to deal with text, even in partially edited states. It's just most people don't know how to build a decent incremental parser with fairly good error recovery, but some of us do.


But the hard part isn't dealing with invalid parses and error recovery. The hard part is dealing with completely valid bits of code that aren't yet finished:

    (doseq [x (range|cursor|)]
      )
You can wait until the end of time, but that won't finish ;) Whereas I probably wanted to get out (range 10) before it went off and looped forever.

Text also doesn't provide you with stable IDs. If I previously had function "foo" in a file and now that's gone, but there's a function "bar" in the same place with roughly the same code, does that mean it was renamed? Or did foo get deleted, so that we need to surface that anything using foo is an error? How would you know? The only thing you have to go on is position, and position can change for all sorts of other reasons. I ran into this problem when trying to put functions in individual editors: when the file can change in any way, reconciling that file with your previous view of the world is tremendously difficult.


I don't have a problem with this. If you accidentally encode an infinite loop, you just break it on the next code change. You can memoize your token stream in an incremental lexer to use tokens as stable IDs: you use the same symbol for bar as you did for foo because the token/tree start position never changed; only the contents of the existing token happened to change! This is what I do in YinYang, and it works well. Of course, to get here you have to be 100% incremental and not use existing batch tools (or adapt them first to be incremental).

I'd be happy to share these techniques sometime.


I'd enjoy reading.


Ok, I'll make an effort to really write this up sometime, maybe as an SLE paper.


Please do even if only as an informal blog post or something. :)


I agree that the problem is not error recovery, but is more generally described by Peter Naur's "Programming as Theory Building"[1]. With that view, programming is hard because creating a theory, collectively, and teaching other people about it, is hard. Making it easier is therefore solving the problem Douglas Engelbart envisioned solving: Augmenting the Human Intellect

[1] http://catenary.wordpress.com/2011/04/19/naurs-programming-a...


I apologize for the fanboy post in advance, but Peter Naur was such a badass.


Any chance you could point me at some resources on incremental parsing? It's something I've been interested in for a while.


I wrote a workshop paper last year, but it wasn't very deep. Actually, incremental parsing, at least the way I do it, doesn't really involve any special algorithms. Here are the three points:

* Make your lexer incremental first: memoize tokens between edits so you have something to attach trees to that will still be there after the next edit! (There's a toy sketch of this after the list.)

* Match braces (or determine indented blocks) separately before you even bother parsing. This makes incremental parsing more robust, and it is really easy to do without parsing information (unless your language overloads brace lexemes, like Java generics do! Scala didn't have this problem however). Also, autocomplete braces even if your users really hate that, because otherwise you won't have anything nice to reason about 99% of the time (indented languages are really better here for obvious reasons). Tell them to just use emacs if they don't like it.

* The last thing to do is just memoize parse trees (by attaching them to your memoized tokens) and then replay parsing on trees whenever their parses could have changed. So if there is an edit, invalidate the innermost tree, or multiple trees if the edit occurs at a boundary. If a child tree's parse changes, invalidate its parents. Log your symbol table additions so you can undo them when they are no longer made by a parse (or the parse tree is deleted); trace your symbol table references so you can replay them if the symbol binding for the name being looked up changes, and don't worry about type checking in a separate pass, because you can just replay the parse multiple times if necessary.
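
To make the first point concrete, here's a toy sketch of the token memoization (grossly simplified - whitespace tokens, a crude edit model - and nothing like the real YinYang code):

    import re

    class Token:
        def __init__(self, start, text):
            self.start, self.text = start, text
            self.tree = None  # parse trees get memoized by attaching them here

    def lex(src, offset=0):
        # trivially tokenize on whitespace, just for the sketch
        return [Token(offset + m.start(), m.group())
                for m in re.finditer(r"\S+", src)]

    def apply_edit(tokens, src, at, removed, inserted):
        # Re-lex only the damaged region; every untouched Token keeps its
        # identity, so anything hanging off it (trees, symbols) survives.
        new_src = src[:at] + inserted + src[at + removed:]
        before = [t for t in tokens if t.start + len(t.text) < at]
        after = [t for t in tokens if t.start > at + removed]
        for t in after:                     # shift positions, don't recreate
            t.start += len(inserted) - removed
        lo = before[-1].start + len(before[-1].text) if before else 0
        hi = after[0].start if after else len(new_src)
        return new_src, before + lex(new_src[lo:hi], lo) + after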

The last point is quite fascinating and something I'm working on right now to turn into a more general programming paradigm. Check out:

http://research.microsoft.com/apps/pubs/default.aspx?id=2112...


Yeah, all those Structure-Editors and Node-Editors and Meta-Node-Editors, and after all those years simple text editors still keep winning. My personal impression is that developers are actually moving away from those uber-tools, even IDEs, and toward vim plus shell tooling...


Fair enough.


Whoa now, if it's evolution, what's the fitness function? It's not fully endogenous. It's market-driven. Network effects & cooperation mean plan9 just ain't gonna see much screen time anymore.


The main problem is that Bell Labs had more success making their research mainstream than Xerox PARC did.

So here we are in 2014, with all those nice live GUI development environments (Interlisp, Smalltalk, Mesa/Cedar) lost to people who prefer a UNIX System V clone.


That doesn't sound convincing. The Xerox PARC GUI is everywhere. The results of their work are about as mainstream as they could possibly be.

Likewise, Smalltalk and Mesa were both hugely influential in everything from C, C++, Java and beyond.

Given that, it seems more likely that graphical development hasn't taken off because it isn't good enough yet. All developers have prejudices, but they typically have to yield to superior methodologies.


> The Xerox PARC GUI is everywhere

Quoting Alan Kay

<quote> Now, the abortion that happened after PARC was the misunderstanding of the user interface that we did for children, which was the overlapping window interface which we made as naive as absolutely we possibly could to the point of not having any workflow ideas in it, and that was taken over uncritically out into the outside world. </quote>

> Given that, it seems more likely that graphical development hasn't taken off because it isn't good enough yet. All developers have prejudices, but they typically have to yield to superior methodologies.

The only existing mainstream environment that can replicate some of the live coding experience of the said systems is Mathematica.

That is why, for old guys like myself, it is so interesting to see all the live coding discussions that happen on HN, from people who never used those systems, or even saw them being used.


> <quote> Now, the abortion that happened after PARC was the misunderstanding of the user interface that we did for children, which was the overlapping window interface which we made as naive as absolutely we possibly could to the point of not having any workflow ideas in it, and that was taken over uncritically out into the outside world. </quote>

And then tiling window managers were invented, and users rejoiced.


> Now, the abortion that happened after PARC was the misunderstanding of the user interface that we did for children, which was the overlapping window interface which we made as naive as absolutely we possibly could to the point of not having any workflow ideas in it, and that was taken over uncritically out into the outside world.

He also goes on to praise NLS, which failed precisely because of its steep learning curve and unsuitability for large swaths of the personal computing market.

Incidentally, that's why a GUI with overlapping windows succeeded: because it was familiar, intuitive and simple.

IMO, it's rather telling that none of the major DEs or WMs involve automatic tiling. This, despite the fact that there are a myriad of solutions available [1]. It's not about "better" or "worse", but suitability for a given market.

[1] If anyone's interested, I highly recommend i3. As powerful as xmonad with a fraction of the configuration hassle.


Mathematica may have pioneered the notebook[X], but there are imitations[1] which are arguably just as good (or better given the seeming ad-hoc nature of the Mathematica language).

[X] I don't presume to know -- it's just the first system I came into contact with which has this concept.

[1] http://ipython.org/

(edit: apparently there's some magic markup I don't yet understand.)


> I don't presume to know -- it's just the first system I came into contact with which has this concept.

It was already present on the Xerox PARC systems.

You can see a little bit of it here about Symbolics, http://www.loper-os.org/?p=932 at minute 30.

Or here for Smalltalk-80, https://www.youtube.com/watch?v=JLPiMl8XUKU


That's pretty much the thesis of Bret Victor's "The Future of Programming" talk. It's here, for all who have missed it: http://worrydream.com/dbx/


Good talk.

I had all this stuff in the first chapter (the history) of my HCI course at university. So, luckily, younger programmers do learn about the stuff that was already there, back in the day.

But I was baffled by what bad stuff must have happened to the world, that we ended up where we are now and not where we should have been.


> But I was baffled by what bad stuff must have happened to the world, that we ended up where we are now and not where we should have been.

UNIX spread into the enterprise world.


What is so bad about Unix, and what would be an alternative?


A world of command line interfaces and TTY text editors, instead of the powerful interactive GUI world Xerox PARC systems had.


The shell is worlds more powerful than the GUI (or CUI), because things in the shell trivially compose.


Except Xerox PARC systems also had a shell in the form of REPL and live coding.

The UNIX shell is quite primitive by comparison.


That's all stuff we have now.

"The UNIX shell is quite primitive by comparison."

I don't believe that's still the case (and hasn't been for some while). If you believe it is, rather than just making assertions please explain what you think is missing from bash in screen in urxvt.


- bash is just one of many possibilities, so bash != UNIX shell. It is not available by default in all UNIX systems

- bash is not a REPL with support for live coding.

- as consequence of the last point, bash cannot provide graphical output and respective manipulation of system data structures

Writing data to /proc to alter the OS behavior is pretty basic compared with changing OS behaviour with a "doit" message on an expression block.

EDIT: For better understanding, imagine using something like Mathematica as an OS shell.


You're being argumentative instead of informative or useful.

"bash is just one of many possibilities, so bash != UNIX shell. It is not available by default in all UNIX systems"

Bash is one example of a modern UNIX shell. It is not the only example - there are some that are more advanced in some ways, and some that are less advanced in some ways. I wasn't saying "BASH IS ALL UNIX", I was picking a specific setup as a point of comparison.

"bash is not a REPL with support for live coding."

What are attributes of a REPL which you see bash as lacking? Clearly, it Reads, Evaluates, and Prints. If you just object to the particular language, fine, there are certainly things to object to there - though I think it's pretty great as UI. It's certainly atrocious when you try building anything large out of it.

"as consequence of the last point, bash cannot provide graphical output and respective manipulation of system data structures"

That's mostly false, which you touch on immediately following.

'Writing data to /proc to alter the OS behavior, is pretty basic'

It can get arbitrarily sophisticated, but...

'compared with changing OS behaviour with a "doit" message on a expression block.'

I don't know what this means, please elaborate.

"For better understanding, imagine using something like Mathematica as an OS shell."

There's too many things that can mean for it to aid my understanding much. Do you mostly refer to the ability to embed graphics in the interactive session while retaining a scrolling log? or are there other relevant attributes I'm not following?


In Interlisp, Smalltalk and Mesa environments the shell interacts with the operating system.

All those demos you see from Bret Victor are based on the experience to do live coding originally developed in these environments.

The operating system and applications are blended.

The UNIX shell is based on the principle that you write little programs in whatever language you feel like, and you just have pipes, command line arguments and exit codes as communication mechanisms.

In Xerox PARC environments, you have a REPL experience where you can make use of any function or object in the whole OS and applications. You can change them dynamically (Interlisp, Smalltalk) or via module reload (Mesa).

Since the language of the REPL is the one of the system, you can do tasks, like select something with the mouse and then apply a REPL script to the currently selected graphical objects.

Just a few dummy examples, do a map over all selected windows to change their size for automatic tiling.

Or go over the paragraphs on the text editor and right align them.

The shell blends application macros, operating system scripts and live coding all in the same way.

For graphical editing in the shell you can pretty print data structures so that when you type a variable that represents a graphical structure, it gets drawn graphically in the command line.

You get to change running applications state from the shell, without requiring some kind of network protocol or shared memory API that they need to implement. Any public function or object can be directly accessed and manipulated.

Think of the whole experience as something like DrRacket or Mathematica being the complete OS.
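
As a flavour of it, in pseudo-Python (the Window objects and the selection list here are completely made up, just to show the shape of the interaction, not any real system's API):

    # A hypothetical live-object shell: the shell sees the same objects the
    # window system does, so scripting them is just an ordinary map.
    class Window:
        def __init__(self, title, w, h):
            self.title, self.w, self.h = title, w, h
        def resize(self, w, h):
            self.w, self.h = w, h

    selected = [Window("editor", 1200, 800), Window("repl", 600, 800)]

    # "do a map over all selected windows to change their size for tiling"
    for win in selected:
        win.resize(1920 // len(selected), 1080)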


Much more informative - thanks!

That said, little of this is really dependent on another model. It seems your complaints boil down to 1) applications may do a poor job defining interfaces (which is either also true in the PARC setup, or they ignore the need to define interfaces in which case things are unlikely to be stable), 2) everything is too stringly typed (which I mostly agree with, though there are advantages to it), and 3) these (plus cultural attitudes) lead to insufficient/inappropriate infrastructure (which I also agree with - "the application" is typically harmful in the UNIX model).

Having said that, there doesn't seem a theoretical difference in capabilities except for the choice of a more sophisticated terminal (which is something I'd like fixed on modern systems).

As an amusing aside, "Just a few dummy examples, do a map over all selected windows to change their size for automatic tiling." I did literally this just the other day with bash + ratpoison.


There's no doubt Bret Victor has many interesting ideas, but I thought this post[1] had some interesting counterpoints to his general thrust.

[1] http://www.evanmiller.org/dont-kill-math.html


Memory is linear and so is execution (at a basic level), yet our programming and ideas are not. Imagine programming as a needle with one continuous thread stringing boxed functions and values together, sometimes threading something that has been threaded before, and ultimately creating a crisscross of wire that's hard to understand. We cannot organize something that is linear with a non-linear representation. We try to make the crossed wire simpler to understand with programming languages, but the problem is inherently unsolvable. There will never be an authoritative programming language.


> People are still typing the same Unix commands into 25x80 terminal windows. People are still using vi to edit programs as sequential lines of text in files using languages from the 80s (C++) or 90s (Java).

This seems like an almost willful misunderstanding of what programming is. The power of programs does NOT come from their syntactic structure. The power of programs comes from the non-linearity of programming language semantics -- I can trivially write a program that (theoretically) calculates a number too large to represent in the number of atoms in the universe.

The command line amplifies (non-linearly) what you can conventionally do with simple UIs. As a trivial example:

    $ mkdir -p src/{main,test}/{scala,java,resources}
    $ touch src/{main,test}/{scala,java,resources}/.keep
I've just created 2*3 directories and files with two commands which were very quick to type out. I haven't done actual tests, but I'd argue that that was pretty quick compared to doing it in a GUI (for instance).


One of my favorite CS professors back in the 80s used to joke that we had gone from cave paintings to written language, and now we wanted to back with our computers. (my paraphrase of whatever he actually said back then)

Pictographs work for the illiterate, but that doesn't make them the most efficient interchange mechanism.


A professor I knew had a good way to put it: "With the shell, you have a language. With pointy-clicky, you're reduced to pointing at things and grunting."

While I enjoy the shallow dig, I think this is a pretty great metaphor generally, in a deep (and less condescending) way.

If you drop me in France, and I don't speak French, I'll probably do a lot of communication with pointing and grunting, and I'll probably be able to accomplish simple tasks (particularly with an ideographic picture book on hand). Expressing more complicated things that way is pretty intractable, however. Learning to speak French is the solution, but it's a lot of work.

Moreover, there are certainly contexts where pointing makes more sense ("I'll take that one, that one, and that one" versus trying to pick out differentiating features or count).


Most technical academic textbooks are full of pictures, diagrams, and graphs.


"People are still typing the same Unix commands into 25x80 terminal windows."

Well, I for one am typing the same Unix commands and some new Unix commands into 58x170 terminal windows (on my laptop - bigger at work), and at work my builds run in 10s of seconds if I run tests. In many ways a much better experience, and one that it's easier to build still more great stuff into/atop without breaking the basic model.


The problem isn't in my opinion that people still use languages and environments from the 1980's. It's that the modern languages and environments aren't really any better. If anything, they're worse.


> People are still typing the same Unix commands into 25x80 terminal windows. People are still using vi to edit programs as sequential lines of text in files using languages from the 80s (C++) or 90s (Java).

Well yes, there are always people who stick to the older methods, or sometimes the situation calls for programming through a terminal window, but there have been big advancements since those methods, like the great environment of Visual Studio and such. Why do you say programming has gone nowhere because there are people who don't use the newer advancements?


> Why do you say programming has gone nowhere because there are people who don't use the newer advancements?

Speaking for myself, it's because Visual Studio and its ilk don't feel like advancements. They feel like bandaids. They do their damnedest to reduce the pain of programming by automatically producing boilerplate, auto completing words, providing documentation, and providing dozens of ways to jump around text files.

Personally, I don't feel that the bandaid does enough to justify using it (granted, my main language doesn't work well with Visual Studio, so there's that too).

The main source of the pain, to me, is that we're still working strictly with textual representations of non-textual concepts and logic, no matter how those concepts might better be rendered. We're still writing `if` and `while` and `for` and `int` and `char` while discussing pointers and garbage collection and optimizing heap allocation... Instead of solving the problem, we're stuck describing actions and formulas to the machinery. No IDE does anything to actually address that problem.

Sorry, rant, but this problem certainly resonates with me.


>The main source of the pain, to me, is that we're still working strictly with textual representations of non-textual concepts and logic, no matter how those concepts might better be rendered.

I can't see any issue with representing logic abstractly with symbols. It's the same for calculus. Of course the ideas we're representing aren't actually the things we use to represent them, the same as written communication.

Non-textual programming has been explored to some degree, such as Scratch, but it's not seen as much of a useful thing.

>Instead of solving the problem, we're stuck describing actions and formulas to the machinery. No IDE does anything to actually address that problem.

Describing actions and formulas to a machine in order to make it do something useful is pretty much the definition of programming. IDEs make it a more convenient process.

Unless you want to directly transplant the ideas out of your neural paths into the computer, maybe some AI computer in the future based on a human brain, this is how it's going to be.


> I can't see any issue with representing logic abstractly with symbols.

That's the problem: text isn't abstract enough. So we put some of the text into little blobs that have names (other methods), and use those names instead, and we call that "abstraction," but black-box abstraction doesn't help us see. The symbols in calculus, by contrast, are symbols that help you see. The OA is calling for abstractions over operating a computer that help us see.


Agreed. There must be a more abstract way to present ideas than text. That way, programs would be easier to understand and modify, and would have fewer errors and bugs.


I am suspicious. I think it would certainly be easier in some ways for rank beginners- it would make spelling errors and certain classes of syntax errors impossible- but those aren't really the bugs that cause experienced programmers grief. It's generally subtly bad logic, which is more about how people are terrible. Plus, we already know how to create computer languages that largely avoid those problems.

Written language is wonderful in many respects, and I sometimes think people discount these things out of familiarity. Keyboards too- you can do things very quickly and very precisely with keyboards. Those things matter for your sense of productivity and satisfaction.


> ...while discussing pointers and garbage collection and optimizing heap allocation

People still do that?

We still use 'if' and 'then', but higher level languages like Ruby and Python have eliminated pointers and their ilk from day-to-day discussion, relegating heap allocation discussion to the halls of specialized conferences, while many programmers go about their day-to-day activities skipping over the pain of garbage collection.

They may not win language shootout speed tests, but they remove a lot of programmer pain and help with how long it takes to write code.


> People still do that?

Yeah, we do. Somebody has to manage all that memory that you throw around in your ruby/python scripts.


Of course, but I don't see how that "problem" will ever be solved. As long as a process has been abstracted, there will always be someone who has to look after that abstraction.

Somebody has to design and engineer the hardware and every level of abstraction between that hardware and whatever abstraction layer the "average" developer uses.


Our ancestors used symbols to convey messages and logic. Then came the Phoenicians. Its their fault!


That's kind of the thing... there are other ways of programming (IDE instead of text editor), but for interesting languages, there's no clear-cut proof that IDEs are actually better.

For languages specifically such as Java, which have very verbose syntax, and huge amounts of boilerplate for large projects, then sure, an IDE is an improvement.

But even IDEs, really, haven't changed much. And the fact that we can even still have this discussion sort of proves that it's not that much of an improvement.

The core of his argument was the sequential text in 20+ year-old languages.

A radical change would be similar to how 3d materials & render chains have improved in the last 20 years. From editing Renderman text files, to the nodes system: https://www.google.co.uk/search?q=blender+render+nodes That's a radical shift.


How are you defining an "interesting" language? Just your opinion? I'd say C# is a pretty interesting and powerful language that's extremely well-complemented by VS.

There's a lot more to an IDE than code generation and autocomplete (even if those are very useful and save me a ton of time). Finding all references to a variable or function (simple text searching does not do this nearly as well - the IDE actually analyzes the language and knows what points to what). Immediately letting you know if the syntax is wrong. Very visual and intuitive debugging (it goes to the line # and lets you hover over a variable to see its contents). Jumping to a variable's definition.

There have been experiments with non-textual programming, such as Scratch, but it hasn't really proven to be very useful. If there's going to be some radical shift, it might come from AI and brain scanners maybe?


Furthermore, Visual Studio lets you define custom debug visualizations quite easily:

http://www.codeproject.com/Articles/13127/Create-a-Debugger-...

http://visualstudiogallery.msdn.microsoft.com/eedc48e7-5169-...

(but of course one has to think outside of the "evil MS" box)


What I meant was 'very different from what was done 20/30 years ago, but is actually practical, as powerful as languages from 20/30 years ago, and not just a tech demo...'.

Yes, IDEs help a lot with some of those tasks. But we're really still improving programming in a very evolutionary way - which isn't bad. For sure, C# is more powerful and interesting than 1990-era C++. But ultimately, it's not that different, really.

To me, that's the interesting thing about Light Table, and all that stuff. We're able to explore some of the concepts of programming from the old Smalltalk systems in a modern way. And maybe we'll find ways of programming which do actually look different from the old ways. Maybe not.

Visualising functions as things which we can glue together, transform, pass data through, etc. I'm very interested to see where it all goes.


Is that really better, though?

Not today, but in 5, 10, 20 years from now?

Recent experiences have shown me that flowchart systems are really really specialized tools that require a lot of training to be able to use with a modicum of success.

There are very few problem domains where I believe they provide a more accessible and expressive solution than plaintext.


Given that Chris Granger was a program manager on the Visual Studio team, it seems likely he is aware of what it provides and does not consider it an adequate solution. It may not even be a step on the path to an adequate solution.


> I alternate between thinking that programming has improved tremendously in the past 30 years, and thinking that programming has gone nowhere.

Anyone who thinks it hasn't improved should take a look at the description of the program used as an example in David Parnas's classic 1972 paper 'On the Criteria to be Used in Decomposing Systems into Modules' ( https://www.cs.umd.edu/class/spring2003/cmsc838p/Design/crit... )

It's a KWIC (keyword in context) program and he says a competent programmer should be able to code it in a week. These days it's about 5-10 minutes in Ruby or Perl.
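
For example, here's a quick Python take on the classic circular-shift KWIC index (simplified - no stop words, no alignment - just to show how little code it takes today):

    def kwic(lines):
        # every circular shift of every line, alphabetized
        shifts = []
        for line in lines:
            words = line.split()
            for i in range(len(words)):
                shifts.append(" ".join(words[i:] + words[:i]))
        return sorted(shifts, key=str.lower)

    for s in kwic(["The quick brown fox", "KWIC is keyword in context"]):
        print(s)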


I don't say this to counter your views, but I think there is a threshold limit in everyday life.

Think about boiling water. You boil it to 100 degrees Celsius. You keep heating it, but it won't get any hotter without turning into steam. Now think about it: we do not need steam, we need liquid water for survival.


> People are still typing the same Unix commands into 25x80 terminal windows.

I'm using 120x40, thank you very much


This strikes me as armchair philosophizing about the nature of programming language design. Programming languages are not intentionally complex in most cases; they're complex because the problems they solve are genuinely hard, not because we've artificially made them that way.

There is always a need for two types of languages, higher level domain languages and general purpose languages. Building general purpose languages is a process of trying to build abstractions that always have a well-defined translation into something the machine understands. It's all about the cold hard facts of logic, hardware and constraints. Domain languages on the other hand do exactly what he describes, "a way of encoding thought such that the computer can help us", such as Excel or Matlab, etc. If you're free from the constraint of having to compile arbitrary programs to physical machines and can instead focus on translating a small set of programs to an abstract machine then the way you approach the language design is entirely different and the problems you encounter are much different and often more shallow.

What I strongly disagree with is claiming that the complexities that plague general purpose languages are somehow mitigated by building more domain specific languages. Let's not forget that "programming" runs the whole gamut from embedded systems programming in assembly all the way to very high level theorem proving in Coq and understanding anything about the nature of that entire spectrum is difficult indeed.


> There is always a need for two types of languages, higher level domain languages and general purpose languages.

I never suggested otherwise, just that when you're in a domain you should be in that domain. That solution requires something more general purpose to glue domains together, which is the crux of the problem. What does such a language look like? How do you ensure you don't lose all the good properties you gain from the domain specific languages/editors when passing between them?

I think you present a false dichotomy though. General purpose languages are just as much about encoding a process. The distinction between compiling to the machine vs some abstract machine also isn't really relevant: this is about semantics, not implementation. And if you let implementation dictate the semantics you won't get very far from where we are now.

> What I strongly disagree with is claiming that the complexities that plague general purpose languages are somehow mitigated by building more domain specific languages.

I never said that :) I said that programming would be greatly improved by being observable, direct, and incidentally simple. And again those have nothing to do with what "level" you're programming at, they're just principles to apply. I do think there is a general solution that can encompass most of the levels (though I'm not interested in trying to do that any time soon), but there is a common case here and it certainly isn't high level theorem proving or embedded systems. It's stupidly simple automation tasks, or forms over data apps, or business workflows. The world works on poorly written excel spreadsheets and balls of Java mud. You don't have to fix everything to make a huge impact and the things we learn in doing so can help us push everything else forward too.


> How do you ensure you don't lose all the good properties you gain from the domain specific languages/editors when passing between them?

That's a very interesting (and I'm betting hard to solve) problem! However, it's very hard for me to see how Aurora would help with that. From the demo, it looks like yet another visual programming system; such systems don't seem particularly interoperable.


The clever part of the strategy I was using in that demo was that all domains are expressed declaratively as datastructures. This meant that the glue language only needed to be very good at manipulating vectors and maps. You built up a structure that represented music, html, or whatever and then just labeled it as such. Interop between domains then becomes pretty simplistic data transformation from one domain's format to another. And given how constrained the glue language could be, you could build incredibly powerful tools that make that easy. You could literally template out the structure you want and just drag/drop things in, fix a few cases that we maybe get wrong and you're done - you've translated tweets into music.
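
Very roughly, the flavour of it in Python-ish data (made-up shapes, not Aurora's actual representation):

    # Two "domains", each just tagged data the glue language can manipulate.
    tweets = [{"tag": "tweet", "user": "a", "text": "hello world"},
              {"tag": "tweet", "user": "b", "text": "toward a better programming"}]

    # Interop = an ordinary transformation from one domain's shape to another's.
    music = [{"tag": "note", "pitch": 40 + len(t["text"]) % 40, "dur": 0.25}
             for t in tweets]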

We ended up abandoning that path for now as there are some aspects of functional programming that prove pretty hard to teach people about and seem largely incidental.


Here's some meat. So how does FP fall down?


Explicitly managing hierarchical data structures leads to a lot of code that isn't directly related to the problem at hand. A lot of attention is dedicated to finding the correct place to put your data. Compared to eg relational or graph data models, where that kind of denormalisation is understood to be an optimisation made at the expense of program clarity / flexibility.
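
A toy contrast in Python (made-up data) of what I mean:

    # Hierarchical: the code has to know *where* the fact lives.
    app = {"projects": {"p1": {"tasks": {"t1": {"done": False}}}}}
    app["projects"]["p1"]["tasks"]["t1"]["done"] = True   # navigation, not logic

    # Relational-ish: facts are flat rows, and "where" becomes a query.
    tasks = [("p1", "t1", True), ("p1", "t2", False)]
    done = [(proj, task) for (proj, task, d) in tasks if d]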

The pervasive use of ordering in functional programming inhibits composition. (Even in lazy languages the order of function application is important). Compare to eg Glitch or Bloom where different pieces of functionality can be combined without regard for order of execution/application. This better enables the ideals that BOT was reaching for - programming via composition of behaviour. In a BOT plugin you can not only add behaviour but remove/override other behaviours which turns out to be very valuable for flexible modification of Light Table.

A more concrete problem is displaying and explaining nested scope, closures and shadowing. As a programmer I have internalised those ideas to the point that they seem obvious and natural but when we showed our prototypes to people it was an endless source of confusion.

Functional programming is certainly a good model for expressing computation but for a glue language the hard problems are coordination and state management. We're now leaning more towards the ideals in functional-relational programming where reactivity, coordination and state management are handled by a data-centric glue language while computation is handed off to some other partner language.


> A more concrete problem is displaying and explaining nested scope, closures and shadowing.

That's what really killed it for me and also one of the things that I found pretty surprising. Tracking references is apparently way harder than I realized. And while I thought we could come up with a decent way to do it, it really did just confuse people. I tried a bunch of different strategies, from names, to boxes that follow you around, to dataflow graphs. None of them seemed to be good enough.


OK, you don't need to convince me! I've been saying for years that the central problem is coordinating state updates, and have been proposing that languages should automatically manage time for us like the way they automatically manage memory. Sean McDirmid and I have submitted a paper on it to Onward.

But as to graphs vs trees I don't think it is so clear-cut. Good arguments on both sides of that issue.


> Let's not forget that "programming" runs the whole gamut from embedded systems programming in assembly all the way to very high level theorem proving in Coq and understanding anything about the nature of that entire spectrum is difficult indeed.

True, but one of the problems which Bret Victor and Chris Granger set out to solve with LightTable (and is mentioned here in §"Programming is Unobservable" and §"Programming is Indirect") is that the tooling for using current programming languages hasn't meaningfully changed since the 70s or 80s.

I agree that generalising over all programming languages is near-impossible, but even the most 'bare-metal' languages only manipulate models of the computer hardware.


> they're complex because the problems they solve are genuinely hard and not because we've artificially made them that way.

I have a ServiceControllerServiceProvider which disagrees.


There's a reason the game Pictionary is hard, despite the "a picture is worth a thousand words" saying. And that is that images, while evocative, are not very precise. Try to draw how you feel.

If you are using card[0][12] to refer to Card::AceSpades, well, time to learn enums or named constants. If, on the other hand, the array can be sorted, shuffled, and so on, what value is it to show an image of a specific state in my code?
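
In Python terms, say, the named-constant fix is a couple of lines:

    from enum import Enum

    Card = Enum("Card", ["AceSpades", "KingSpades", "QueenSpades"])  # ...and so on

    hand = [Card.AceSpades, Card.QueenSpades]  # reads fine as plain text, no picture needed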

There's a reason we don't use symbolic representation of equations, and it has nothing to do with ASCII. It's because this is implemented on a processor that simulates a continuous value with a discrete value, which introduces all kinds of trade-offs. We have a live thread on that now: why is a*a*a*a*a*a not (a*a*a)*(a*a*a). I need to be able to represent exactly how the computation is done. If I don't care, there is Mathematica and the like, to be sure.
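
The reassociation point is easy to see in any REPL; a quick Python illustration:

    # Floating point isn't associative, so regrouping changes answers:
    print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))   # False
    print((1e200 * 1e200) * 1e-200)                 # inf, it overflowed part-way
    print(1e200 * (1e200 * 1e-200))                 # ~1e200, finite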

If you disagree with me, please post your response in the form of an image. And then we will have a discussion with how powerful textual representation actually is. I'll use words, you use pictures. Be specific.


It's not about choosing one or the other, it's about allowing both. I can use symbols (though not sentences or other usefully descriptive language), but do I have an opportunity to represent those symbols at all? no.

I'm not saying we should forsake language, if you look at the now very out of date Aurora demo, all the operations have sentence descriptions. This certainly isn't an all or nothing thing. If it makes sense to visualize some aspect of the system in a way that is meaningful to me, I should be able to do so - that is after all how people often solve hard problems.


In your example you used an ace of spades. Your picture took up half my screen. I can't imagine trying to actually manipulate logic when each element is taking up half my screen - can you?

Instead, I can just create a variable called AceSpades. It's not as... philosophical? but it's a million times more practical. Instead of needing an artist to come and draw up a new symbol for a concept I've created, I can just write it in text as a variable name. A lot of graphically based languages have been tried but they just don't scale to general problems. They work extremely well as limited languages for very constrained problems, but as soon as you need a new concept the complexity goes out the window compared to a traditional text based language. Why? Same reason as before - defining a concept in text is easier than designing a new symbol.

You touch on this in your blog too in how most of what programmers do is glue things together. You didn't really define what "glue things together" means though. I think it means "define new concepts using existing concepts". Eg, we take a MouseInput and a Sphere and create a new SphereSlice. With a text language we're just glueing the input library together with our geometry library. With a symbol based language, we have to actually define the new symbols and concepts of what a slice of a sphere is.


"[Graphical languages] work extremely well as limited languages for very constrained problems"

Even that might need a citation...


[I'm curious why I got downvoted above - did I miss something?]

I don't think that needs a citation though. Any GUI is a graphical language for a constrained problem. I could use a generic text language to post this comment to HN, or I can use a specialized graphical language that provides me with this resizable text input box and reply button. It's extremely constrained in this case. If it had GUI elements for doing formatting, or the ability to post comments to other websites, it would be a less constrained graphical language. As you reduce constraints the graphical language would either need the ability to create new concepts or have many additional concepts predefined. So to answer your question about a citation: this very comment box is my citation. It works better than a purely text based input for submitting this comment.

Obviously for a general language you can't predefine all the required concepts which means they need to be user defined. User defining concepts in a graphical language is a difficult task as it requires creating uniquely human recognizable symbols for the new concepts.

You have two ways to get those new human recognizable symbols - you can generate them with an AI, or a human must generate them. AI is nowhere near able to generate symbols for concepts it doesn't understand as it would need to be a true AI with the ability to learn and understand new concepts. Having your graphical language's users define new concepts in effect makes those users into language designers. This is a bigger problem than it sounds as language design is an extremely difficult problem, and I personally don't want to be designing a language when I'm trying to solve a problem as I'll no doubt get the language design wrong if I'm focused more on the problem than the language design.


"Any GUI is a graphical language for a constrained problem."

With that broad a definition (and I don't think it's horribly unreasonable), I agree it doesn't need a citation (even if I think GUIs are overapplied).


A few examples of graphical programming languages are LabView, Scratch, and Lego Mindstorms (NXT). (I'm not advocating graphical programming, just providing examples.)

Edit: maybe graphical GUI editors such as in Xcode and Visual Studio could be considered tools for programming languages that are partially graphical and partially text based.


Would you include PLC programming? It's safe to say that's been pretty successful in industrial settings.


Yeah, I'm most familiar with graphical programming by way of hearing people complain about LabView.


I would be interested in seeing you take a larger chunk of code and convert it to a more symbol-rich representation. Showing a single card, especially a very large one, isn't a good representation of the idea. I would also appreciate a description of how exactly I inserted these symbols into the editor.

I will spot you that I won't natively know the language in question. In turn, I warn you that the most likely criticism I will make is that you've greatly increased the cognitive load of what I have to understand to understand your code without a corresponding payoff, even accounting for fluency in the vocabulary. (I say this not to be a jerk, but precisely to issue fair warning so you can head it off at the pass.) I will also spot fluency in your paradigm of choice... while I hope that the result is not a superficial syntax gloss on top of fold & map, I am happy to accept that I would need to know what those things are.

(I've come to start issuing the same challenge to anyone who thinks a visual programming language is the answer to our programming complexity problems, for instance. Don't draw me three boxes and two lines showing a simple map transform. Draw me something not huge, but nontrivial, say, the A-star algorithm. Then tell me it's better. Maybe it is, if you work on it enough, but don't scribble out the equivalent of "map (+1) [1, 2, 3]" and tell me you've "fixed" programming. Trivial's trivial in any representation.)


Sure, there are plenty of cases where visualization is helpful. But I see so many blog posts about it, and not much in the way of actual progress.

Take the card again. It's your example, after all. I cannot think of any way to use that to, say, write a small AI to play poker. I suppose I could see a use in a debugging situation for my 'hand' variable to display a little 5@ symbol (where @ is the suit symbol). But okay, let's think about that. What does it take to get that into the system?

No system 'knows' about cards. So I need a graphics designer to make a symbol for a card. I surely don't want an entire image of a card, because I have 20 other variables I am potentially interested in, which is why in this context a 5@ makes sense (like you would see in a bridge column in a newspaper). So somebody has to craft the art, we have to plug it into my dev system, we need to coordinate it with the entire team, and so on. Then, it is still a very custom, one-off solution. I use enums, you use ints, the python team is just using strings like "5H" - it goes on and on. I don't see a scalable solution here.

Well, I do see one scalable solution. It is called text. My debugger shows a textual depiction of my variable, and my wetware translates that. I'm a good reader, and I can quickly learn to read 54, "5H", FiveHearts as being the representation of that card. Will I visually "see" the value of a particular hand as quickly? Probably not, unless I'm working this code a lot. But I'll take that over firing up a graphics team and so on.

I do plenty of visualizations. It is a big reason for me using Python. If I want to write a Kalman filter, first thing I'm doing is firing up matplotlib to look at the results. But again, this is a custom process. I want to look at the noise, I want to look at the size of the kalman gain, I want to plot the filter output vs the covariance matrices, I want to.... program. Which I do textually, just fine, to generate the graphics I need.

I've dealt with data flow type things before. They are a royal pain. Oh, to start, it's great. Plop a few rectangles on the screen, connect with a few lines, and wow, you've designed a nand gate, or maybe a filter in matlab, or is it a video processing toolchain? Easy peasy. But when I need to start manipulating things programmatically it is suddenly a huge pain.

I am taking time out of writing an AI to categorize people based on what they are doing in a video (computer vision problem) to post this message. At a rudimentary level graphical display is great. It is certainly much easier for me to see my results displayed overlaid on the video, as opposed to trying to eyeball a JSON file or something. But to actually program this highly visual thing? I have never, ever heard anything but hand waving as to how I would do that in anything other than a textual way. I really don't think I would want to.

Anyway, scale things up in a way that I don't have to write so many matplotlib calls and you will have my attention. But I just haven't seen it. I've been programming since the early 80s, and graphical programming of some form or another has been touted as 'almost here'. Still haven't seen it, except in highly specialized disciplines, and I don't want to see it. "Pictures are worth a thousand words" because of compression. It's a PCA - distill a bunch of data down to a few dimensions. Sometimes I really want that, but not when programming, where all the data matters. I don't want a low order representation of my program.


> So I need a graphics designer to make a symbol for a card.

I think this is the crux of the debate. The point isn't high quality visualizations, it's about bringing the simple little pictures you'd draw to solve your problem directly into the environment. Can you draw a box and put some text in it? Tada! Your own little representation of a card.

I'm not suggesting that you hire people out to build your representations :) This is about providing tools for understanding. Maybe you don't see value in that, and there's no reason you can't just keep seeing things as plain raw text (that's just a representation itself).

> Anyway, scale things up in a way that I don't have to write so many matplotlib calls and you will have my attention.

Give us a bit and I think we can provide a whole lot more than just that. But we'll see!


I enjoyed watching the demo and reading the post. I hope you continue to think about this and innovate.

Something that I feel like is missing is the abstraction quality of programming. That is, the idea that I typically have very little use for a particular graphic when writing a program. I'm trying to express "whenever the user hits this button, flip over the top card in this set, move it over here, and then make the next card the top card" or whatever.

Some of Bret's demos look to me like he's thinking directly about this, and trying to discover where the abstraction fits in, and how direct manipulation can help to basically "see" that the abstraction is working. Perhaps that's a good guide to where direct manipulation could really help -- for anything relative complex, it's a big pain to see that code works. A direct manipulation system to basically flip through possibilities, especially into edge cases, and make sure they work as intended would definitely help out. I don't know whether that's the final way you want to express the system -- language is really powerful, even a million years later! -- but a way to see what the language does would be really awesome.


I'm optimistic that your team is making real progress behind the scenes, but please remember that when you say 'do some math' some of us think 'discontinuous Galerkin' instead of 'add one'. Not that everyone needs to, but one reason the early pioneers made such great progress is that they were building tools to solve truly challenging problems. The fact that we can build TODO lists in 40 seconds today is incidental.


Just use Unicode, and a programming language that uses the full power of Unicode symbology in its syntax. E.g.

♠♣♥♦ × A23456789TJQK


Please don't. People are already terrible at naming things, I for one am not going to try the entire Unicode table to find out which symbol you chose for "MetadataService". Plain text is fine, it's searchable, readable, and somewhat portable (minus the line ending debacle).

If you need something more, vim has the "conceal" feature which can be used to replace (on the lines the cursor is not on) a given text with another (eg show ⟹ instead of =>). Would you be better off if there was an option to do this for variable/class/method names? I'm not sure.


> vim can be used to replace a given text with another (eg show ⟹ instead of =>)

If you use the short ⇒ to substitute for => (rather than long ⟹ as in your example), as well as many other Unicode symbols, then the overall code can be much shorter and thus more understandable.

The spec for the Fortress programming language made a point of not distinguishing between Unicode tokens in the program text and the ASCII keys used to enter them. Perhaps that's the best way to go?


Why do you think that "much shorter" implies "more understandable"?

I think we have a lot of experience to suggest otherwise.

Anyone who has had to maintain old Fortran or C code will likely know what I mean. With some early implementations limiting variable and function identifiers to 8 characters or less, we'd see a proliferation of short identifiers used. Such code is by far some of the hardest to work with due to variable and function names that are short to the point of being almost meaningless.

Then there are languages like APL and Perl, which make extensive use of symbols. APL has seen very limited use, and Perl code is well-known for suffering from maintenance issues unless extreme care is taken when initially creating the code.

Balance is probably best. We don't want excessively long identifiers like is often the case in Java, but we surely don't want excessively short ones, either.


As somebody who spent some years writing Perl code, I don't feel that having a few well-defined ASCII symbols was such an issue. The problems with Perl are that symbols change depending on the context (eg, an array @items needs to be accessed via $items[$i] to get the item at position $i, to tell Perl it is a scalar context), and weak typing. Even with changing symbols, it makes it easier to distinguish between scalars, arrays and hashes, especially with syntax highlighting. As opposed to languages like Haskell or Scala, in which library designers are free to display their creativity with such immediately obvious operators as '$$+-'.

Edited to add that I agree with your overall point. Shorter is not always clearer. It can be a benefit to have a few Unicode symbols displayed via 'conceal' but it's not (at least in my experience) a major productivity gain. And the number needs to be kept small. If I want Unicode symbol soup, I'll play a roguelike.


If you're using Unicode: 🂡🂾🃍🃛🂠

https://en.wikipedia.org/wiki/Unicode_Playing_Card_Block


I think the problem is the card example is a bad one. 5H is already acceptable for nearly every case, since there is so little data in the image.

Also, it is probably good to remember that most of the good examples of doing this have probably already been done; debug visualizations in physics engines are a great example - a perfect way of showing incredibly complex data.

The only way to expand on that would be to add time and make it easier to isolate a piece of data.


Try writing a sudoku program with constraint-based programming.

You 'teach' the computer the rules of the game and the computer works to figure out allowed values.

https://en.wikipedia.org/wiki/Constraint_programming
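
A minimal sketch of the flavour in Python (plain backtracking over the stated rules, nothing as clever as a real constraint solver):

    def solve(grid):
        # grid: 9x9 list of lists, 0 = empty; filled in place
        def ok(r, c, v):
            # the "rules of the game", stated once
            if any(grid[r][j] == v for j in range(9)): return False
            if any(grid[i][c] == v for i in range(9)): return False
            br, bc = 3 * (r // 3), 3 * (c // 3)
            return all(grid[br + i][bc + j] != v
                       for i in range(3) for j in range(3))

        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    for v in range(1, 10):
                        if ok(r, c, v):   # the allowed values fall out of the rules
                            grid[r][c] = v
                            if solve(grid):
                                return True
                            grid[r][c] = 0
                    return False
        return True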


"There's a reason we don't use symbolic representation of equations, and it has nothing to do with ASCII. It's because this is implemented on a processor that simulates a continuous value with a discrete value, which introduces all kinds of trade offs."

I think that's wrong. I think the reason we don't use symbolic representations is entirely due to a preference for ASCII. Whether relying on just the symbolic representation would be sufficient depends on domain - sometimes I don't care about the precise FP value (-ffast-math exists for a reason), maybe I'm working with bigints instead of floats, maybe ... But having the ability to specify and talk about the symbolic version seems only helpful - maybe some combination of static analysis and testing can make sure my implementation matches it?

Edited to add: Of course, whether it is helpful enough for the added difficulty specifying it with typical UI is another question.


> maybe some combination of static analysis and testing can make sure my implementation matches it?

I'm becoming a fan of the ideas espoused in eg http://www.vpri.org/pdf/m2009001_prog_as.pdf and http://shaffner.us/cs/papers/tarpit.pdf where rather than trying to verify that arbitrary code matches your specification, you make the specification itself executable and then supply hints and heuristics on the most efficient way to execute it. It's a similar idea to first writing an sql query and then hinting which execution plan should be used. Compared to writing arbitrary code this approach limits expressiveness but it removes the verification problem entirely. So far we've only seen this approach used heavily in querying and in constraint solving. Luckily, it turns out that large chunks of the programs we write day to day can be expressed in terms of either database-style queries or constrained search (see eg http://boom.cs.berkeley.edu/).
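
As a rough illustration of the query-plus-hints idea, here's a sketch using Python's built-in sqlite3 with made-up table and column names: the query is the executable specification, and the index is purely a hint about execution.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (customer TEXT, total REAL)")
    db.executemany("INSERT INTO orders VALUES (?, ?)",
                   [("alice", 10.0), ("bob", 25.0), ("alice", 5.0)])

    # the query is the specification - it says *what*, not *how*
    spec = "SELECT customer, SUM(total) FROM orders GROUP BY customer"
    print(db.execute(spec).fetchall())

    # the index is a hint about *how* - the answer stays the same either way
    db.execute("CREATE INDEX idx_customer ON orders (customer)")
    print(db.execute("EXPLAIN QUERY PLAN " + spec).fetchall())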


I think that in the medium term, this is a good place to go. Specify denotation, and operational constraints, and then fill in details whenever the compiler can't figure out the rest.

Actually, the above is how I've been thinking about it for a while, but I'm not sure there's any reason we can't say "denotational constraints" as well - where exact denotation is just a particularly tight constraint.


Okay, how does

    \integral{\integral{nasty expression dx} dy}
get implemented? (excuse the faux LaTeX, I never remember the syntax)


Something like that (typing from memory):

  (defun do-integrate (tree var)
      ;; -- do some crazy symbolic manipulation on the source tree,
      ;; returning code that computes the integral
      )

  (defmacro integral (expr var)
     ;; do-integrate runs at macro-expansion time on the unevaluated form
     (do-integrate expr var))


I don't think I expressed my concern very well. The point is that you often cannot just depend on some built-in to do your mathematics for you - stability, speed, and convergence depend heavily on the implementation that you use. There is a huge gulf between symbolic expressions and how we solve them on a computer. For example, all we really know how to solve is Ax=b - linear equations. Oh, we have special tricks for some nonlinear cases, but a huge part of computer math is getting your system approximated as Ax=b, and then solving for that. You can't just hand-wave that away by throwing symbols in your code.

People will bring up various symbolic and other specialized math packages. Sure enough, they exist. And, they are dog slow. They are used to explore problem spaces, and to get the solutions to small problems. But when it comes to it, we reach for C++, for Fortran (and now, maybe Julia) to get performant code. If I want to type equations and get results, why wouldn't I already be using one of those systems, heavily optimized for math computations of that sort? If I am in a general purpose language, it is because I need to specify the system in great detail, and obscuring it with symbols like sum and integrate just doesn't work.

And, in case the larger point is lost, this is an example to illustrate the larger case - that we can't really obscure large parts of the computational model. In cases where we can, we already do so. For example, I don't write my own red-black trees; I count on the STL to do it for me. v.insert (make_pair("name", "Roger")) is a great abstraction (albeit verbose). But even when it comes to using something like BLAS & LAPACK, I have to choose the representation and so much more. Dense vs sparse, etc. So please don't retort with some integral or other math equation that can be solved efficiently with default methods - that misses the point. CS since the 50s has been a study of how to abstract concepts away, and has done a fine job of it. A few pictures don't magically make the remaining problems disappear.
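
To make the representation point concrete, here's a small Python sketch (NumPy/SciPy, with a made-up 3x3 system): the math says "solve Ax = b" either way, but dense and sparse storage go through very different machinery, and that choice is on me, not the notation.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import spsolve

    # a made-up 3x3 system; the math is "solve Ax = b" either way
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    b = np.array([1.0, 2.0, 3.0])

    x_dense  = np.linalg.solve(A, b)        # LAPACK-backed dense solve
    x_sparse = spsolve(csr_matrix(A), b)    # sparse LU; pays off for large sparse A

    assert np.allclose(x_dense, x_sparse)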

edit:

I somewhat know what I am talking about here. In my Ph.D. program (left it, didn't graduate, not claiming a PhD) in the 80s, my advisor was advocating an expert AI system to resolve these problems. Any guess as to the result of that research? The hint is that no expert system is built into any of our heavy math systems today. And just this week I sat through a session where a researcher was presenting early work he was doing on a DARPA grant to, wait for it, do the same thing, basically.

It's an outrageous, tremendously hard problem, and we keep getting people announcing that they have solved it, they just don't quite have the code running yet. Okay. Show me, don't tell me. A point-and-click interface to inject a sqrt symbol into my code, or an image of a playing card, is so far from the mark, both in its triviality and in the distance between it and the hard problem.


My points were 1) that these kinds of concerns are entirely orthogonal to whether we limit our expression to ASCII symbols, and separately 2) that there can be useful things done with analytic forms even when we aren't able to turn them directly into machine code.


It's like you didn't read what I said.


I agree with you. I just want to say, the live-code facet of the demo is great, and I think it supports some of what Bret Victor has been advocating about providing immediate feedback. However, a large part of it looked like just another programming language where the environment required a lot of mouse-clicks to do simple things, such as getting some basic math into the equations. Personally, I don't want to program using pictographs.

Part of programming is communicating thoughts. Plain text is great for that - we all read and write text. I also think it's easier to type "sqrt" or "sum", than to right-click, scroll through a pop-up menu and select the symbol I want. I suppose symbols can help one verify they've written what they intended if the environment supports translating "sqrt" and other functions into symbols that stretch over and around variables and other equations, but that's not what was demonstrated.

Finally, as others have pointed out, the mathematical expression of an algorithm doesn't necessarily lead to an efficient one.


The card example is a bit contrived. Imagine instead being able to write linearizationtable = [insert interactive graph here] instead of linearizationtable[20][2] = {{0,5}, {4, 20}, .... }

And even if you have a graph it can still be stored in text format for version control reasons.


I'm concerned about Chris's desire to express mathematical formulas directly in an editing environment.

Speaking as a mathematician with more than enough programming experience under his belt: programming is far more rigorous than mathematics. The reason nobody writes math in code is not because of ASCII, and it's not even because of the low-level hardware as someone else mentioned. It's because math is so jam-packed with overloaded operators and ad hoc notation that it would be an impossible feat to standardize any nontrivial subset of it. This is largely because mathematical notation is designed for compactness, so that mathematicians don't have to write down so much crap when trying to express their ideas. Your vision is about accessibility and transparency and focusing on problem solving. Making people pack and unpack mathematical notation to understand what their program is doing goes against all three of those!

So where is this coming from?

PS. I suppose you could do something like have overlays/mouseovers on the typeset math that give a description of the variables, or something like that, but still sum(L) / len(L) is so much simpler and more descriptive than \sigma x_i / n


I agree with you, and incidentally so does Gerald Sussman (co-inventor of Scheme). He helped write an entire book on Lagrangian mechanics that uses Scheme because he believes the math notation is too fuzzy and confusing for people.

https://mitpress.mit.edu/sites/default/files/titles/content/...


Both this and the subsequent text on differential geometry are very good, but they are written against an enormous undocumented Scheme library (scmutils) that is, in my opinion, very difficult to debug, and it's hard to figure out how its macros expand.


The mathematical notation also lacks a lot of type information. Looking at his example formula: first of all, both x and N are magic variable names - where did they come from? Second, what happens if x is a string and + is considered string concatenation? Mathematical notation has all the problems of duck typing and even more. In a way that's also the beauty of math: you can multiply matrices with the same rules you learned for scalar multiplication in 2nd grade. For quick problems where you can keep all the variables in your head this is nice to save on typing, but for larger applications you need clearly defined interfaces and a way to encode them.
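
A quick Python sketch of that last point (a hypothetical mean function, nothing more): '+' does whatever the operands decide, and the notation gives no warning until something downstream breaks.

    def mean(xs):
        total = xs[0]
        for x in xs[1:]:
            total = total + x        # '+' means whatever the operands decide it means
        return total / len(xs)

    print(mean([1, 2, 3]))           # 2.0
    print(mean(["a", "b", "c"]))     # TypeError - the notation never said x had to be a number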


This. Types are the hardest thing about translating mathematics into programming. And it's really the fault of mathematics (or a feature of it, depending on your viewpoint).


Excellent point - reminds me of a quote about Prolog: that stating problems in a solvable form is as hard as solving the problem. Creating a graphical environment can't sidestep the difficulty in rigorously defining a problem or figuring out how to solve it.


Natural languages (like English or Spanish) show why this kind of thinking leads nowhere, and why a programming language is more like English than like glyphs.

Something the post doesn't say: we want to be able to write programs about anything. To make that possible, we need a way to express anything that might need to be communicated. Words and alphabets provide the best way to do that.

In a natural language, when a culture discovers something new (say, the internet) for which no words yet exist, the words to describe it simply "pop" into existence. Written language has this ability to a greater degree than glyphs.

In programming, if we need a way to express looping over things, then "FOR x IN Y" will likewise "pop" into existence.

Words are more flexible. They are cheap to write, faster to communicate, and they cross boundaries.

Of course, having an editor helper so that a hex value can be shown as a color is neat - but what if a hex value is NOT a color? Then you need a very strong type system, and I don't see how to build one better than with words.


Interesting work and I really liked the LightTable video but I think there's a reason these types of environments haven't taken off.

To understand why programming remains hard it just takes a few minutes of working on a lower-level system, something that does a little I/O or has a couple of concurrent events, maybe an interrupt or two. I cannot envision a live system that would allow me to debug those systems very well, which is not to say current tools couldn't be improved upon.

One thing I've noticed working with embedded ARM systems is that we now have instruction and sometimes data trace debuggers that let us rewind the execution of a buggy program to some extent. The debugger workstations are an order of magnitude more powerful than the observed system so we can do amazing things with our trace probes. However, high-level software would need debugging systems an order of magnitude more powerful than the client they debug as well.


It depends entirely on how much state they need to capture. Ocaml has long had a time travelling debugger (http://caml.inria.fr/pub/docs/manual-ocaml-400/manual030.htm...) which is very useful in the small. Data-centric languages like Bloom (http://www.bloom-lang.net/) can cheaply reconstruct past states using the transaction log. Frameworks like Opis (https://web.archive.org/web/20120304212940/http://perso.elev...) allow not only moving forward and backwards but can exhaustively explore all possible branches using finite state model-checking. The key in each case is to distinguish between essential state and derived state. http://shaffner.us/cs/papers/tarpit.pdf has more to say on that front.
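
The essential-vs-derived split is easy to sketch in a few lines of Python (a toy event log, not any of the systems above): the log is the essential state, and any past state is derived by replaying a prefix of it.

    log = []                              # the essential state

    def apply_event(state, event):
        kind, value = event               # toy domain: a counter with "add" events
        return state + value if kind == "add" else state

    def state_at(step):
        # "time travel": derive any past state by replaying a prefix of the log
        state = 0
        for event in log[:step]:
            state = apply_event(state, event)
        return state

    log += [("add", 5), ("add", 3), ("add", -2)]
    print(state_at(1), state_at(2), state_at(3))   # 5 8 6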


Both the indirect and incidentally complex can be helped with literate programming. We have been telling stories for thousands of years and the idea of literate programming is to facilitate that. We do not just tell them in a linear order, but jump around in whatever way makes sense. It is about understanding the context of the code which can be hard.

But the problem of being unobservable is harder. Literate programming might help in making chunks more accessible for understanding/replacing/toggling, but it would not help with stepping the live flow forwards and backwards. I have recently coded up an event library that logs the flow of the program nicely, though. Used appropriately, it probably could be used to step in and out as well.

I am not convinced that radical new tools are needed. We just have to be true to our nature as storytellers.

I find it puzzling why he talks about events as being problems. They seem like ideal ways of handling disjointed states. Isn't that how we organize things ourselves?

I also find it puzzling to promote Excel's model. I find it horrendous. People have done very complex things with it which are fragile and incomprehensible. With code, you can read it and figure it out; literate programming helps this tremendously. But with something like Excel or XCode's interface builder, the structure is obscured and is very fragile. Spreadsheets are great for data entry, but not for programming-type tasks.

I think creation is rather easy; it is maintenance that is hard. And for that, you need to understand the code.


I have a tremendous respect for people who dare to dream big despite all cynicism and common assumptions, and especially people who have the skills to actually make the changes. Please keep doing the work you're doing.


Toward a better computer UI

The Aurora demo did not look like a big improvement until maybe http://youtu.be/L6iUm_Cqx2s?t=7m54s where the TodoMVC demo beats even Polymer in LOC count and readability.

I've been thinking of similar new "programming" as the main computer UI, to ensure it's easy to use and the main UI people know. Forget Steve Jobs and XEROX, they threw out the baby with the bath water.

Using a computer is really calling some functions, typing some text input in between, calling some more.

Doing a few common tasks today is

  opening a web browser
  clicking Email
  reading some
  replying
  getting a reply back, possibly a notification

  clicking HN
  commenting on an article in a totally different UI than email
  going to threads tab manually to see any response
  
And the same yet annoyingly different UI deal on another forum, on youtube, facebook, etc. Just imagine what the least skilled computer users could do if you gave them a computing interface that didn't reflect the world of fiefdoms that creates it.

FaceTwitterEtsyRedditHN fiefdoms proliferate because of the separation between the XEROX GUI and calling a bunch of functions in Command Line. Siri and similar AI agents are the next step in simple UIs. What people really want to do is

  tell Dustin you don't agree with his assessment of Facebook's UI changes
  type/voice your disagreement
  share with public
And when you send Dustin and his circle of acquaintances a more private message, you

  type it
  share message with Dustin and his circle of designers/hackers
To figure out if more people agreed with you or Dustin

  sentiment analysis of comments about Dustin's article compared to mine
That should be the UI more or less. Implement it however, natural language, Siri AI, a neat collection of functions.

Today's UI would involve going to a cute blog service because it has a proper visual template. This requires being one of the cool kids and knowing of this service. Then going to Google+ or email for the more private message. Then opening up an IDE or some text sentiment API and going through their whole other world of incantations.

Our glue/CRUD programming is a mess because using computers in general is a mess.


The standard deviation is a poor example IMO; in many languages you can get much closer to mathematical notation:

    from math import sqrt

    def stddev(x):
        avg = sum(x)/len(x)
        return sqrt(sum((xi-avg)**2 for xi in x) / len(x))

    stddev xs = let n   = fromIntegral (length xs)
                    avg = sum xs / n
                in  sqrt $ sum [(x-avg)**2 | x <- xs] / n


It's even a poor example of C++. Using valarray, you end up with basically the same thing as your above examples:

    #include <cmath>
    #include <valarray>
    #include <iostream>
    
    double standard_dev(const std::valarray<double> &vals)
    {
        return std::sqrt(std::pow(vals - (vals.sum() / vals.size()), 2.0).sum() / vals.size());
    }
    
    int main()
    {
        std::cout << standard_dev({2, 4, 4, 4, 5, 5, 7, 8}) << '\n';
    }
…and none of those are really much less readable than the math version. All in all, that "example" clearly wasn't made in good faith, and left a bad taste in my mouth.


I think it was a poor choice of example anyway - the code solution is superior because it is far more descriptive of what is happening. The math notation requires existing knowledge; without it, you're basically screwed in attempting to understand what the hell it does. With code you can search for terms and see what they do, or even better, have an intelligent editor provide hyperlinks to definitions.

My personal opinion is that rather than trying to make programming more like math, we should make math more like programming - such that we stop assuming the reader has some magical knowledge needed to understand it.


Hate to break it to you people, but rms was always right- the #1 reason why programming sucks is that everyone wants complete control over all of the bullshit they threw together and thought they could sell.

Imagine an environment like a lisp machine, where all the code you run is open and available for you to inspect and edit. Imagine a vast indexed, cross-referenced, and mass-moderated collection of algorithm implementations and code snippets for every kind of project that's ever been worked on, at your fingertips.

Discussing how we might want slightly better ways to write and view the code we have written is ignoring the elephant in the room - that everything you write has probably been written cleaner and more efficiently several times before.

If you don't think that's fucked up, think about this: the only reason to lock down your code is an economic one, even though making all code freely usable would massively increase the total economic value of the software ecosystem.


Locking down my code for economic reasons has worked pretty well for me. It's allowed me to have a pretty good lifestyle running my business for the last fifteen years and kept my customers happy because they know I have a financial incentive to keep maintaining my products.


and he's oh so healthy

in his body and his mind


I liked this article. I particularly liked the way the author attacked the problem by clearing his notions of what programming is and attempting to come at it from a new angle. I'll be interested to see what his group comes up with.

That said, I think that fundamentally the problem isn't with programming, it's with US. :) Human beings are imprecise, easily confused by complexity, unable to keep more than a couple of things in mind at a time, can't think well in dimensions beyond 3 (if that), unable to work easily with abstractions, etc. Yet we're giving instructions to computers which are (in their own way) many orders of magnitude better at those tasks.

Short of AI that's able to contextually understand what we're telling them to do, my intuition is that the situation is only going to improve incrementally.


I agree. I believe that most of the incidental complexity has to do with the fact that, in the end, every single thing greater than a single bit in the digital realm is a convention.

A byte is a convention over bits. An instruction is a convention over bytes. A programming language is a convention over instructions.

It turns out that every time someone sets out to solve a problem with programming, they create their own convention.

It just so happens that either there is no convention over how to create conventions, or it is just not followed and thus creates a parallel convention.

We cannot get our arbitrary conventions in line with each other unless we plan in advance.

Considering that, it's amazing how far we have come in the middle of this chaos of unrestrained creation.


Leibniz wrote in 1666: "We have spoken of the art of complication of the sciences, i.e., of inventive logic... But when the tables of categories of our art of complication have been formed, something greater will emerge. For let the first terms, of the combination of which all others consist, be designated by signs; these signs will be a kind of alphabet. It will be convenient for the signs to be as natural as possible—e.g., for one, a point; for numbers, points; for the relations of one entity with another, lines; for the variation of angles and of extremities in lines, kinds of relations. If these are correctly and ingeniously established, this universal writing will be as easy as it is common,and will be capable of being read without any dictionary; at the same time, a fundamental knowledge of all things will be obtained. The whole of such a writing will be made of geometrical figures, as it were, and of a kind of pictures — just as the ancient Egyptians did, and the Chinese do today. Their pictures, however, are not reduced to a fixed alphabet... with the result that a tremendous strain on the memory is necessary, which is the contrary of what we propose" http://en.wikipedia.org/wiki/Characteristica_universalis


You might like The Universal Computer: The Road from Leibniz to Turing http://www.amazon.com/The-Universal-Computer-Leibniz-Turing/...


The standard deviation example conflates two questions:

1: Why can't we use standard mathematical notation instead of strings of ASCII?

2: Why do we need lots of control flow and libraries when implementing a mathematical equation as an algorithm?

The first is simple: as others have pointed out here, math notation is too irregular and informal to make a programming language out of it.

The second is more important. In pretty much any programming language I can write:

    d = sqrt (b^2 - 4*a*c)
    x1 = (-b + d)/(2*a)
    x2 = (-b - d)/(2*a)
which is a term-by-term translation of the quadratic equation. But when I want to write the standard deviation in C++ I need a loop to evaluate the sigma term.

But in Haskell I can write this:

    stDev :: [Double] -> Double
    stDev xs = sqrt ((1/(n-1)) * sum (map (\x -> (x-m)^2) xs))
       where
          n = fromIntegral $ length xs
          m = sum xs / n
This is a term-by-term translation of the formula, in the same way that the quadratic example was. Just as I use "sqrt" instead of the square root sign I use "sum" instead of sigma and "map" with a lambda expression to capture the internal expression.

Experienced programmers will note that this is an inefficient implementation because it iterates over the list three times, which illustrates the other problem with using mathematics; the most efficient algorithm is often not the most elegant one to write down.
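
For comparison, a single-pass version (Welford's method, sketched here in Python rather than Haskell) shows the trade-off: one traversal and better numerical behaviour, but noticeably further from the textbook formula.

    from math import sqrt

    def stddev_one_pass(xs):
        # Welford's method: a single traversal, numerically stable,
        # but visibly further from the textbook formula
        n, mean, m2 = 0, 0.0, 0.0
        for x in xs:
            n += 1
            delta = x - mean
            mean += delta / n
            m2 += delta * (x - mean)
        return sqrt(m2 / (n - 1))   # sample standard deviation, matching the (n-1) above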


Historically it has been easy to claim that programming is merely incidentally complex but hard to actually produce working techniques that can dispel the complexity.

The truth is that programming is one of the most complex human undertakings by nature, and many of the difficulties faced by programmers - such as the invisible and unvisualizable nature of software - are intractable.

There are still no silver bullets.

http://en.wikipedia.org/wiki/No_Silver_Bullet http://faculty.salisbury.edu/~xswang/Research/Papers/SERelat...


Sadly I feel that LT has jumped the shark at this point. What started off as a cool new take on code editors has now somehow turned into a grand view of how to "fix programming". I can get behind an editor not based around text files, or one that allows for easy extensibility. But I can't stand behind some project that tries to "fix everything".

As each new version of LT comes out I feel that it's suffering more and more from a clear lack of direction. And that makes me sad.


Forgive me if my understanding is totally out of whack, but it seems here that the writer is calling for an additional layer of abstraction in programming - type systems being an example.

While in some cases that would be great, I'm not entirely sure more abstraction is what I want. Having a decent understanding of the different layers involved, from logic gates right up to high-level languages, has helped me tremendously as a programmer. For example, when writing in C, because I know some of the optimisations GCC makes, I know where to sacrifice efficiency for readability because the compiler will optimise it out anyway. I would worry that adding more abstraction will create more excuses not to delve into the inner workings, which wouldn't be to a programmer's benefit. Interested to hear thoughts on this!


I think this improved programming vision starts at a higher level language like Clojure/JS/Haskell and builds on that.

To allow the everyday Joe to use simplified programming all the way down to machine code is a harder task. Languages like Haskell try to do it with an advanced compiler that can make enough sense of the high level language to generate efficient machine code.

Of course you'll still lose performance on some things compared to manual assembler but with larger programs advanced compilers often beat writing C/manual assembly.

Honestly the bigger performance problem is not whether you can make a high-level language that generates perfect machine code but whether you can get through the politics/economics of JS/Obj-C/Java to distribute it.


I think you could broaden your horizons and try something other than an imperative programming language on a von Neumann machine, perhaps. Knowing how the machine works can be useful if you're working on low-level stuff where efficiency is a priority - but that's only a small subset of programming problems - most people don't care about the how or the how fast - they simply want to convert some user input to some pixels or files in various ways - and the abstractions they should be using for that are user inputs, pixels and files - not interrupts, registers and pointers.

A kind of obvious point on where having a knowledge of internals doesn't really help much is doing any kind of concurrent programming at scale. If you attempt to solve race conditions with the mindset of "knowing how the machine works", you end up inventing fences, mutexes, semaphores and monitors. (And OK, these are useful tools, and have been generally sufficient until recently - but they simply don't scale.) Compare this with something like Erlang's concurrency model - that of having many individual processes communicate through message passing (ie, the actor model), and it becomes much simpler to reason about concurrency (perhaps at the cost of efficiency, but as previously mentioned, that only matters in few cases). Erlang's model is abstract and says nothing about the machine on which it runs - and an existing knowledge of languages like C doesn't help a great deal to learning it over no programming experience at all.
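
To make that concrete without reaching for Erlang, here's a toy actor in Python (a hypothetical Counter "process"): all mutable state is owned by one thread, and everything else talks to it through a mailbox, so no locks appear in user code.

    import threading, queue

    class Counter(threading.Thread):
        def __init__(self):
            super().__init__(daemon=True)
            self.mailbox = queue.Queue()
            self.replies = queue.Queue()
            self.count = 0                 # private state, touched only by this thread

        def run(self):
            while True:
                msg = self.mailbox.get()   # process one message at a time
                if msg == "incr":
                    self.count += 1
                elif msg == "get":
                    self.replies.put(self.count)

    counter = Counter()
    counter.start()
    for _ in range(1000):
        counter.mailbox.put("incr")
    counter.mailbox.put("get")
    print(counter.replies.get())           # 1000, with no locks in user code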

Think of specialized as being the antonym of abstract, and consider that's what you are - a specialist - your skills are relevant for a specialized field, that of implementing efficient programs on specific machines - but what OP wants to do is make programming available to anyone, as a general skill - he doesn't want everyone to become specialists at programming von Neumann machines.


Chris, have you read Prof. David Harel's[1] essay Can Programming be Liberated, Period?[2]

The sentiments expressed in the conclusion of Harel's article Statecharts in the Making: A Personal Account[3] really jumped out at me last year. When I read your blog post, I got the impression you are reaching related conclusions:

"If asked about the lessons to be learned from the statecharts story, I would definitely put tool support for executability and experience in real-world use at the top of the list. Too much computer science research on languages, methodologies, and semantics never finds its way into the real world, even in the long term, because these two issues do not get sufficient priority.

One of the most interesting aspects of this story is the fact that the work was not done in an academic tower, inventing something and trying to push it down the throats of real-world engineers. It was done by going into the lion's den, working with the people in industry. This is something I would not hesitate to recommend to young researchers; in order to affect the real world, one must go there and roll up one's sleeves. One secret is to try to get a handle on the thought processes of the engineers doing the real work and who will ultimately use these ideas and tools. In my case, they were the avionics engineers, and when I do biological modeling, they are biologists. If what you come up with does not jibe with how they think, they will not use it. It's that simple."

[1] http://www.wisdom.weizmann.ac.il/~harel/papers.html

[2] http://www.wisdom.weizmann.ac.il/~harel/papers/LiberatingPro...

[3] http://www.wisdom.weizmann.ac.il/~harel/papers/Statecharts.H...


I haven't seen that, thanks so much for the pointer!

> in order to affect the real world, one must go there and roll up one's sleeves

This has always been our strategy :) Whatever we do come up with, it will be entirely shaped by working with real people on coming up with something that actually solves the problem.


Wolfram Language addresses a lot of these points. Equations and images both get treated symbolically, so we can manipulate them the same way we manipulate the rest of the "code" (data).


It doesn't handle the "true" debugging discussed in the article. One of the goals of the author is to move away from stepping through breakpoints and print statements to watch data "flow" through a program.


With debugging, we'll get there. I have some prototypes, but it's a long way from a research prototype to production, and we're still quite busy on getting actual products out the door.

And even at the moment, the fact that so much of a typical program in the Wolfram Language is referentially transparent means it's easy to pick something up out of your codebase and mess around with it, then put it back. That's a huge win over procedural languages.

But in terms of the language, many of the ideas Chris is talking about are already possible (and common) in the Wolfram Language:

It's functional and symbolic, so programs are all about applying transformations to data. In fact, the entire language is 'data', with the interesting side effect that some 'data' evaluates and rewrites itself (e.g. If).

The mathematical sum notation is unsurprisingly straightforward in WL.

And StandardForm downvalues allow for arbitrary visual display of objects in the frontend.

For example, the card would have a symbolic representation like PlayingCard["Spade", 1], but you could write

  StandardForm[PlayingCard[suit_, n_]] := ImageCompose[$cardImages[suit], $cardNumbers[n]];
to actually render the card whenever it shows up in the FrontEnd.

Graphics display as graphics, Datasets display as browseable hierarchical representations of their contents along with schema, etc...


I love seeing the challenges of programming analyzed from this high-level perspective, and I love Chris's vision.

I thought the `person.walk()` example, however, was misplaced. The whole point of encapsulation is to avoid thinking about internal details, so if you are criticizing encapsulation for hiding internal details you are saying that encapsulation never has any legitimate use.

I was left wondering if that was Chris's position, but convinced it couldn't be.


Black boxing is very, very important and necessary if we're ever going to build a complex system, BUT my point is that you should be able to see what it does if you need to. So I don't think we're at odds in our thinking.


That seems like more of an argument in favor of having all source code available (i.e. not using closed-source libraries) than an argument against OOP. The question of what code executes when you call `person.walk()` is no different than the question of what code executes when you call `(person :walk)`: it depends entirely on the value of `person`! This is the core of dynamic dispatch in OOP and higher-order functions in FP, they enable behavioral abstraction. You can impose restrictions on the behavior through types or contracts, but at the end of the day you can't know the precise behavior except in a specific call. And this is precisely where a live programming environment comes in handy.


Thanks for explaining Chris.


I've been lucky to write at least one small application per year, although most of my work is now on the creative side: books, videos, web pages, and such.

So I find myself getting "cold" and then coming back into it. The thing about taking a week to set up a dev environment is spot on. It's completely insane that it should take a week of work just to sit down and write a for-next loop or change a button's text somewhere.

The problem with programming is simple: it's full of programmers. So every damn little thing they do, they generalize and then make into a library. Software providers keep making languages do more -- and become correspondingly more complex.

When I switched to Ocaml and F# a few years ago, I was astounded at how little I use most of the crap clogging up my programming system. I also found that while writing an app, I'd create a couple dozen functions. I'd use a couple dozen more from the stock libraries. And that was it. 30-40 symbols in my head and I was solving real-world problems making people happy.

Compare that to the mess you can get into just getting started in an environment like C++. Crazy stuff.

There's also a serious structural problem with OOP itself. Instead of hiding complexity and providing black-box components to clients, we're creating semi-opaque non-intuitive messes of "wires". A lot of what I'm seeing people upset about in the industry, from TDD to stuff like this post, has its roots in OOP.

Having said all that and agreeing with the author, I'm a bit lost as to just what the heck he is ranting on about. I look forward to seeing more real tangible stuff -- I understand he's working on it. Best of luck.


I liked the part of the article concerning "what is programming" and how we seemingly see ourselves plumbers and glue makers - mashing together various parts and trying to get them to work.

I felt that the article takes a somewhat depressing view. Sure, these days we probably do all spend a lot of time getting two pieces of code written by others to work together. The article suggests there's no fun or creativity in that, but I find it plenty interesting. I see it as standing on the shoulders of giants, rather than just glumly fitting pipes together. It's the payoff of reusable code and modular systems. I happily use pre-made web servers, operating systems, network stack, code libraries etc. Even though it can be frustrating at times when things don't work, in the end my creations wouldn't even be possible without these things.


I love Chris Granger's work, and LightTable, but jeeez my eyes were going weird by the "Chasing Local Maxima" section.

Turn the contrast down!



#ddd -> #ccc

It seems like I can never win the contrast debate :p Try it now.


The problem is dark backgrounds rarely work well unless you have a nice OLED display. I know they are cooler, and it's the current hotness among young people whose eyes haven't started to give out yet... but dark themes really are limited by current LCD displays. Not to mention, everyone has a different display as well as different eyes, and you can't really predict how the text will bleed from one viewer to the next!

This is what I get from being married to a visual designer.


You might have an easier time with it if you increase the text size a little.

Or use black text on a lighter background like most of the rest of the world.


That makes the point: a color shows differently depending on the person and the hardware setup. #ccc, on the other hand, is specific.


Awesome, cheers!


> programming is our way of encoding thought such that the computer can help us with it.

I really liked this. But I think we're encoding work, not thought.

If I could add to the list of hard problems: cache invalidation, naming things, encoding things.

I think the problem in a lot of cases is that the language came first, then the problem/domain familiarity comes later. When your language lines up with your problem, it's just a matter of implementing the language. Your algorithms then don't change over time, just the quality of that DSL's implementation.


I think this article forgot to emphasize the act of reading documentation, which probably takes 25% to 50% of programming time. I think Google and StackOverflow have already greatly improved it, but maybe there is still room for improvement. Maybe one could crowd-source code snippets in a huge Wikipedia-like repository for various languages. I'm imagining a context-sensitive auto-complete and search tool in which one can quickly browse this repository of code snippets, all of which are prepared to easily adapt to existing variables and function names.


Just a few quotes from Alan Perlis:

There will always be things we wish to say in our programs that in all known languages can only be said poorly.

Re graphics: A picture is worth 10K words - but only those to describe the picture. Hardly any sets of 10K words can be adequately described with pictures.

Make no mistake about it: Computers process numbers - not symbols. We measure our understanding (and control) by the extent to which we can arithmetize an activity.


Chris' criticisms of the current state of programming remind me of Alan Kay's quote, "Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves."

Thank you for all the work on Light Table, and I'm looking forward to seeing what the team does with Aurora.


As someone who is trying to improve the situation (https://dsl-platform.com), it's strange getting feedback from other developers. While we are obviously not very good at marketing, when you talk to other developers about programming done at a higher level of abstraction, the usual responses are:

* I'm not interested in your framework (even if it's not a framework)

* so you've built another ORM just like many before you (even if there is no ORM inside it)

* not interested in your language, I can get most of what I need writing comments in php (even if it's not remotely the same)

It takes a lot of time to transfer some of the ideas and benefits to the other side and no, you can't do it in a one minute pitch that average developer can relate to.


Visual representations are not terribly hard to come by in this day and age. It's almost trivial to write a little script that can visualize your tree data structures or relations. Plenty of good environments allow us to mingle all kinds of data.

I'm more interested in programs that understand programs and their run-time characteristics. It'd be nice to query a system that could predict regressions in key performance characteristics based on a proposed change (something like a constraint propagation solver on a data-flow graph of continuous domains); even in the face of ambiguous type information. Something like a nest of intelligent agents that can handle the complexity of implementation issues in concert with a human operator. We have a lot of these tools now but they're still so primitive.


The author is correct that programming is currently under-addressing a specific set of use cases: solving problems with conceptually simple models in equally simple ways; in other words, "keep simple programs simple."

However, thinking about computation as only simple programs minimizes the opportunities in the opposite domain: using computation to supplement the inherently fragile and limited modeling that human brains can perform.

While presenting simplicity and understanding can help very much in realizing a simple mental model as a program, it won't help if the program being written is fundamentally beyond the capability of a human brain to model.

The overall approach is very valuable. Tooling can greatly assist both goals, but the tooling one chooses in each domain will vary greatly.


Programming is taking the patterns which make up a thought and approximate them in the patterns which can be expressed in a programming language. Sometimes the thoughts we have are not easily expressed in the patterns of the computer language which we write in. What is needed is a computer language which pulls the patterns from our thoughts and allows them to be used within the computer language. In other words we need to automatically determine the correct language in which to express the particular problem a user is trying to solve. This is AI, we need compression - modularisation of phase space through time. The only way to bring about the paradigm shift he is describing in any real sense is to apply machine learning to programming.


I am optimistic about our field.

Things have not stayed stale for the past 20~30 years; in fact, the state of programming has not stayed stale even in the last 10 years.

We've been progressively solving problems we face, inventing tools, languages, frameworks to make our lives easier. Which further allows us to solve more complicated problems, or similar problems faster.

Problems we face now, like concurrency, big data, and the lack of cheap programmers to solve business problems, were not even problems before. They are now, because they are possible now.

Once we solve those problems of today, we will face new problems, I don't know what they would be, but I am certain many of them would be problems we consider impractical or even impossible today.


Yeah it's interesting, every time I hear "software has been stagnant for decades!", I think to myself that my god, it's hard enough to keep up with the stagnant state of things, I can't imagine trying to keep up with actual progress!


Keeping up with actual progress should be easier. The current "stagnant" state could be called that because your attention is wasted on miracle cures that promise the moon, but mostly deliver a minor improvement or make things worse.


I don't see a bunch of miracle cures that promise the moon, I see a bunch of things that promise, and sometimes deliver, hard-won incremental improvement. The OP seems a lot more like a moon-promising miracle-cure than all the stagnant stuff I'm wasting my attention on.


To clarify, my moon examples would be NodeJS, "MongoDB is web scale", HTML5/WebGL/VMs/Flash on mobiles, fast JITs/VMs for languages that aren't designed to be fast from the beginning, etc.

Things that are technically hard and get a lot of hype. And maybe MVC/OOP/DI/TDD design patterns and agile.

The OP is promising something that's more of an architecture design issue like those of MVC libs. If he fails it will be because of a product design that doesn't catch on. It has no guarantee of catching on even if it's good. LISP and Haskell didn't. But their ideas trickle into other languages.


Yeah I had a fairly good sense of what you meant by promise-the-moon technologies, and I believe many of those you mentioned to be exactly the sort of hard-won incremental improvements that I was talking about. Good ideas trickling down is also the exact sort of hard-won incremental improvement I'm talking about.

I suppose my general point is that things aren't stagnant, they are merely at a point where real progress tends to be hard-won and incremental. This may be frustrating to visionaries, but it seems both inevitable and perfectly fine to me.


Except MongoDB's and Node.js's incremental improvements in their marketed use case of easy scalability weren't worth your time if you really were concerned with scalability. You would have been better served by existing systems.

So the marketing pivoted to being simple for MongoDB and being SSJS for Node. In Mongo's case scalability was severely hampered by the fundamental design, but many developers fell for the marketing and it cost them. Node.js can perform on some hello world benchmarks, but writing large scalable systems was a minefield of instability, callback hell bugs, lack of JS support for CPU intensive tasks, etc. It's still catching up to systems that existed in 2007.

The incremental improvement on scalability is nowhere to be seen. They do improve some other metric like programmer enthusiasm. Other newcomers did improve on easy scalability after more careful thought and years of effort but the hype machine largely left the topic.

A similar case can be made for HTML5/Flash promises for mobiles. You can use it but it often makes the process more difficult than writing two native apps in many cases. Good luck guessing which.


This is sort of my point about incremental improvement being hard-won, though. It's really difficult to make something that is actually better than other things, even for pretty narrow criteria. That's why I'm always suspicious of things (like the OP) that claim they will bring a major sea-change of betterness across broad criteria.


But that's exactly why programming is stagnant--it hasn't gotten simpler.


I'm optimistic about our field and hope the machine/deep learning crowd don't crack AI so quickly (allowing computers to program themselves obviously puts us out of business).


You want better programming? Get better requirements and less complexity. Programming languages and IDEs are part of the problem, but a lot of the problems come from the actual program requirements.

In many cases, it's the edge cases and feature creep that make software genuinely terrible, and by the time you layer in all that knowledge, it is a mess.

I don't care if you use VIM, EMACS, Visual Studio, or even some fancy graphical programming system. Complexity is complexity and managing and implementing that complexity is a complex thing.

Until we have tools to better manage complexity, we will have messes, and the best tools to manage complexity are communication-related, not software-related.


This seems reminiscent of the "wolfram language" stuff a couple of weeks ago. Perhaps it's a trend, but I can't shake the feeling like I am seeing a rehash of the 4GL fiasco of the 90s.

I have a lot of respect for Chris. So, I hope I am wrong.


I think a lot could be won by reducing the complexity of our systems. In modern operating systems we stack too many abstraction layers on top of each other. Emacs is a great example of a development environment which avoids a lot of complexity because everything is written in one language (Emacs Lisp), functions are available throughout the system, one can rewrite functions at runtime, and one can easily pinpoint the source code of any function with the find-function command. It would actually be great to have an operating system that simple, extensible and flexible.


What I'd like for programming is a universal translator. Somebody writes a program in Java or Lisp, and I can read and modify it in Python and the author can read my changes in their own pet language. I write an Ant script and you can consume it with rubygems. You give me a program compiled into machine language or Java or .NET bytecode and I can read it in Python and run my modified version in the JVM, CLR, Mac, iPhone, Android, browser. Transparently, only looking at what the source language was if I get curious.


> Writing a program is an error-prone exercise in translation. Even math, from which our programming languages are born, has to be translated into something like this:

The article then compares some verbose C++ with a mathematical equation. That is hardly a fair comparison: the C++ code can be written and read by a human in a text editor; right-click the equation > inspect element... it's a gif. I loaded the gif into a text editor, and it's hardcore gibberish.

Personally, I would stick with the verbose C++.


I wholly agree with this article. The exact point the author is getting at is something that I have been trying to say, but rather inarticulately (probably because I didn't actually go out and survey people and define "what is programming and what is wrong with it").

I really can't wait for programming to be more than just if statements, thinking about code as a grouping of ASCII files, and gluing libraries together. Things like Akka are nice steps in that direction.


I have to disagree somewhat. IMHO the difference is in abstraction. I think good forms of abstraction have allowed computing to proceed as far as it has, and will allow it to proceed further.

I think abstraction may correlate with an IDE's or library's usefulness, popularity, and development time, more so than what your video demonstrates.

I have a question: how many clicks would it take to get this snippet from above to work?

You also have to navigate various dropdown menus? (Dropdowns are pretty terrible UI, and I would think reading different dropdown lists I'm not familiar with would be jarring.) IMHO it would be like writing software with two mouse buttons, dropdowns or other visual elements, instead of with a keyboard, and it would actually be slower - the opposite of my point above.

    #include <cmath>
    #include <valarray>
    #include <iostream>
    
    double standard_dev(const std::valarray<double> &vals)
    {
        return std::sqrt(std::pow(vals - (vals.sum() / vals.size()), 2.0).sum() / vals.size());
    }
    
    int main()
    {
        std::cout << standard_dev({2, 4, 4, 4, 5, 5, 7, 8}) << '\n';
    }


I'm wondering, did the author ever play with Smalltalk/Self? Essentially those environments let you interact with objects directly, in about as much as makes sense. Seems a good fit for the "card game" complaint.

Doesn't help with the mathematical notation, though (Although it would be possible to do something about that, I suppose).


I hope the production release will be editable by keyboard alone, instead of needing the mouse for every little thing.


that prototype is basically nothing like what the end result will be. And yeah, it will be keyboardable :)


Man, I've been thinking about this stuff a lot.

Especially after I saw Rich Hickey's presentation "Simple Made Easy" (my notes on it [1]).

I'm actually on a mission now to find ways to do things that are more straight forward. One of my finds is [2] 'microservices', which I think will resonate with how I perceive software these days.

[1] http://daemon.co.za/2014/03/simple-and-easy-vocabulary-to-de...

[2] http://martinfowler.com/articles/microservices.html


I'm intrigued.

This is a problem that many, many very smart people have spent careers on. Putting out a teaser post is brave and I have to believe you know what you are doing.

I am looking forward to the first taste. Do you have an ETA?


I have been saying stuff like this for years, although not as eloquently or detailed. But now Chris Granger is saying it, and no one can say he's not a "real" programmer, so you have to listen.

I think it boils down to a cultural failure, like the article mentions at the end. For example, I am a programmer myself. Which means that I generate and work with lots of static, cryptic colorful ASCII text program sources. If I stop doing that, I'm not a programmer anymore. By definition. I really think that is the definition of programming, and that is the big issue.

I wonder if the current version of Aurora derives any inspiration from "intentional programming"?

Also wonder when we can see a demo of the new version.


> I wonder if the current version of Aurora derives any inspiration from "intentional programming"?

The long term vision definitely does. At the moment we are mostly focused on building a good glue language. By itself it is already very capable for building CRUD apps and reactive UIs. If we can nail the tooling and make it as approachable as excel then that gives us a solid platform for more adventurous research.


Sounds so philosophical ... almost sounds like something to do with how to get strong A.I and expecting some sort of universal answer ... such as 42.


There are entire families of problems that would be better solved with a far more visual approach to code. For instance, worrydream has some UX concepts on learnable programming that just feel much better than what we use today.

We could do similar things to visualize actor systems, handle database manipulation and the like. The problem is that all we are really doing is asking for visualization aids that are only good at small things, and we have to build them, one at a time. Without general purpose visualizations, we need toolsets to build visualizations, which needs more tools. It's tools all the way down.

You can build tools for a narrow niche, just like the lispers build their DSLs for each individual problem. But even in a world without a sea of silly parentheses and a syntax that is built for compilers, not humans, under every single line of easy, readable, domain-centric code lies library code that is 100% incidental complexity, and we can't get rid of it.

Languages are hard. Writing code that attempts to be its own language is harder still. But those facts are not really the problem: They are a symptom. The real problem is that we are not equipped to deal with the detail we need to do our jobs.

Let's take, for instance, our carefree friends that want to build contracts on top of Bitcoin, by making them executable. I am sure a whole lot of people here realize their folly: The problem is that no problem that is really worth putting into a contract is well defined enough to turn it into code. We work with a level of ambiguity that our computers can't deal with. So what we are doing, build libraries on top of libraries, each a bit better, is about as good a job as we can do.

I do see how, for very specific domains, we can find highly reusable, visual high level abstractions. But the effort required to build that, with the best tools out there, just doesn't make any practical sense for a very narrow domain: We can build it, but there is no ROI.

I think the best we can do today is, instead of concentrating so much on how shiny each new tool really is, to go back to the real basics of what makes a program work. The same things that made old C programs readable work just as well in Scala, but without half the boilerplate. We just have to forget about how exciting the new toys can be, or how smart they can make us feel, and evaluate them just on the basis of how much they really help us solve problems faster. Applying proper technique, like having code with a narrative and consistent abstraction levels, will help us build tools faster, and therefore make it cheaper to, eventually, allow for more useful general-purpose visualization plugins.



Chris Granger sure doesn't make it easy to contact him.


demonstrates an immediate connection with their tool: http://vimeo.com/36579366



