Literate programming interleaves text and code: you tangle the document and the code blocks are pulled out to create a working program. But Org mode's Babel, IPython, and Wolfram's Computational Essays are literate computing, with the code being executed in the document. Howard Abrams has extended the concept to create literate devops (https://www.youtube.com/watch?v=dljNabciEGg) using Org-mode Babel, where the code in the document is used to configure software running on local or remote servers.
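To make the tangle step concrete, here is a minimal sketch in Python (this is not how org-babel-tangle or noweb actually work; `essay.org` and the simplified block syntax are just placeholders for illustration):

```python
import re
import sys

def tangle(literate_source: str) -> str:
    """Collect the code out of a literate document, dropping the prose.

    Toy stand-in for a real tangler: it just concatenates the bodies of
    #+BEGIN_SRC ... #+END_SRC blocks in document order.
    """
    blocks = re.findall(
        r"#\+BEGIN_SRC\s+\w+\n(.*?)#\+END_SRC",
        literate_source,
        flags=re.DOTALL | re.IGNORECASE,
    )
    return "\n".join(block.rstrip() for block in blocks) + "\n"

if __name__ == "__main__":
    # essay.org is a hypothetical literate document; redirect stdout
    # to a file to get the extracted program.
    path = sys.argv[1] if len(sys.argv) > 1 else "essay.org"
    with open(path) as f:
        sys.stdout.write(tangle(f.read()))
```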
I use Babel as part of my daily workflow. Very cool stuff.
If I remember correctly, Mathematica started the notion of having a notebook interface for working with code; Theodore Gray even holds the patent for it [0]. Even though the post reads like a long-form ad for Mathematica and the Wolfram Language, I do agree with the notion of code written to present an idea or solution as a story. Sadly, with Jupyter notebooks being free/open source and compatible with other language kernels, there seems to be a decline in mindshare for Mathematica and its notebook interface among researchers.
[0] Patent US 8407580 B2 - Method and system for presenting input expressions and evaluations of the input expressions on a workspace of a computational system
(https://www.google.com/patents/US8407580)
Mathematica is a great prototyping language for people who know mathematics. It is the equivalent of Microsoft Excel. The problem is that when you want to do something beyond that, you will probably rewrite it in something else. And that something else is increasingly Python.
I do like the term "computational essay". It describes a way of presenting information, while "notebook" feels more to me like a set of calculations that may or may not be commented and don't have a set structure.
It would be neat if we had a standard notebook notation or system that would subsume all of these competing standards. Unfortunately it would probably end up like this https://xkcd.com/927/
>The problem is that when you want to do something beyond that, you will probably rewrite it in something else. And that something else is increasingly Python.
You aren't going to use your Python code in your Jupyter notebook in production, are you? Jupyter is for exploration and exposition. You're going to have to rewrite it anyway.
This is happening, as a direct consequence of Stephen Wolfram's ego, not because there's something inherently wrong with Mathematica, the language (now renamed to "Wolfram language"), or the ecosystem.
I don't think (or maybe just don't know) that literate programming was interactive in the same sense as Jupyter or the Mathematica equivalents. Moreover, literate programming '...enables programmers to develop programs in the order demanded by the logic and flow of their thoughts.' [0]
Jupyter notebooks are closer to Bret Victor's super cool talk "Inventing on Principle" than to literate programming, IMHO.
Literate programming really feels very different than this, at least in the examples that are easily found online. Literate programming reads far more like 'better commenting' (and strikes me as a massive pain to write) and less like a type of long-form writing.
Literate programming, as Knuth presented it, meant that the literate/comment part was the primary document, and the executable code was just a minor detail for the benefit of the stupid computer that can't read English and only cares about arithmetic, not meaning.
I wrote my master's thesis in literate haskell. It was cool to watch the same source material generate either a program or a dissertation, depending on what processor I sent it through.
Pretty much like Jupyter notebooks. It's a great way to present research, especially statistical stuff. It's pretty much impossible to verify any statement about a large dataset without having it to hand and ready to calculate. Even having the data printed in an appendix is going to discourage verification.
Plus, the stories you can tell with this sort of thing are much more vivid. If someone has a question about some part of the data, they can ask the computer instead of the author, at least for a lot of queries.
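For example, many of those questions can be answered by running one extra cell against the same data the essay already loads. A rough sketch (results.csv and the column names are made up for illustration):

```python
import pandas as pd

# Load the dataset the essay is built around (hypothetical file and columns).
df = pd.read_csv("results.csv")

# "Does the headline effect hold in each group, not just overall?"
print(df.groupby("group")["effect_size"].describe())

# "How sensitive is the reported mean to outliers?"
print(df["effect_size"].mean(), df["effect_size"].median())
```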
The article even has an image of the interface from 1988.
> And with the release of Mathematica 1.0 in 1988 came another critical element: the invention of Wolfram Notebooks. Notebooks arrived in a form at least superficially very similar to the way they are today (and already in many ways more sophisticated than the imitations that started appearing 25+ years later!): collections of cells arranged into groups, and capable of containing text, executable code, graphics, etc.
Wolfram really is a terrible liar (also very, very smart). He knows damn well that Maple had notebooks in the mid-1990s, because it was one of Mathematica's main commercial rivals, so "started appearing 25+ years later" is nothing other than a lie.
I considered that interpretation before writing and I found it impossible to read that way. For a start, "imitations" is plural, so no, it doesn't just refer to Jupyter. And "started" certainly implies that other imitations hadn't appeared before that.
While I'm at it: he deserves an uncharitable reading, because he's having a dig at Jupyter (which may be an imitation of Mathematica notebooks) and neglecting to mention any influence of previous work on Mathematica notebooks, such as Knuth's literate programming.
Calling these things "computational essays" is a little like calling essays written on paper "ink essays". Computation is just the means. Human understanding is the goal.
I prefer Bret Victor's "explorable explanations" (http://worrydream.com/#!/ExplorableExplanations) because it captures the essence of what this new media is being used for, while also being more general than Wolfram's conception.
So-called “explorable explanations” are outright harmful. They make it next to impossible for the reader to develop their mental faculties for abstraction.