Ken Iverson originally designed APL as exactly this - an algorithm notation; only later was it noticed that -- since it was well specified -- it could actually be executed. I urge anyone who finds interest in this to read Ken's "The Description of Finite Sequential Processes"[0], especially its description of the simplex algorithm[1], which is perhaps the most elegant I've ever seen.
I am actually surprised that kragen did not mention APL more, as I know he is familiar with it, and he is usually pretty good about giving credit where credit is due.
I don't see the point of precise handwritten software notation. Pseudocode is a thinking tool because of its ambiguity and looseness. We need to process problems in a number of passes of increasing precision and correctness, so we want to start with something that is more vague than a programming language. When we are ready for precision we can write the code, using encapsulation to manage how much of the problem we bite off at a time.
Also, the point about pseudocode being unnecessarily cumbersome is not true IMO, because we don't need to adhere to any language syntax we find redundant. I've never seen an audience at a whiteboard session complain about a missing colon or self/this keyword when the human context made it unnecessary.
I actually have seen folks complain about missing semicolons. Luckily, they didn't hold the position strongly, and getting them to discuss it quickly led them to agree it didn't matter.
I'd go so far as to say that code written in something like Google Docs can omit things like semicolons without me batting an eye. Sure, they are typing. But if it is not their normal coding environment, the aesthetics and finer points of the language are... superfluous at best for the points I generally care about in that kind of correspondence.
Sometimes precision of a certain variety is useful for thinking out a program's design. When writing Haskell code, I will often write out the "obvious" parts as well as the types I think will be appropriate for the smaller, less obvious parts. You can define the type but leave the implementation undefined. Whether or not this typechecks tells me if I'm on the right path.
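For readers who don't write Haskell, here is a rough Python analogue of that workflow (a sketch with made-up names, using type hints and a checker such as mypy in place of GHC): write the "obvious" part in full, leave the less obvious parts as typed stubs, and let the checker tell you whether the pieces compose.

    from dataclasses import dataclass, field
    from typing import List

    Token = str

    @dataclass
    class Node:
        label: str
        children: List["Node"] = field(default_factory=list)

    def tokenize(source: str) -> List[Token]:
        # the "obvious" part, written out in full
        return source.split()

    def parse(tokens: List[Token]) -> Node:
        # less obvious: only the type is decided, the body stays undefined
        raise NotImplementedError

    def evaluate(tree: Node) -> int:
        raise NotImplementedError

    def run(source: str) -> int:
        # if this composition satisfies the type checker, the decomposition
        # is probably on the right path
        return evaluate(parse(tokenize(source)))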
This is really neat! It's fun to imagine what true 2D programming could look like, unconstrained by bitmap fonts.
I'll state the obvious, though: this isn't going to be useful for the #1 time I need to code on a whiteboard: interviews. Both parties need to be familiar with the syntax in order for it to be useful.
Most visual programming environments failed. The successful ones still used text for the logic. Scratch, built to teach kids, was a neat hybrid that made text visual in a Lego-like way, making it easier to connect pieces.
This seems overcomplicated. Theoretical computer science papers just write the algorithm in words, using widely accepted mathematical notation wherever they can (e.g. sigma for sums, set notation, and so on).
It seems that this isn't for CS paper writing, but for writing pseudocode during design, which needs much more flexibility than communicating results in a POPL paper.
Regular pseudocode, formal specs close to programming (e.g., the VDM language), and graphical notations like Statecharts worked well in industry for a long time, including for high-assurance systems where ambiguities lead to catastrophic failure. The middle option was advantageous where it was precise enough for tools to catch errors or prove properties. Any of these worked well enough and didn't look as weird to the average programmer as the OP's notation does.
Those are fairly established notations that, when they first came out, weren't as obvious as they appear now.
It is unfair to expect pseudocode notations to be static at this point in history, when new programming languages that look "weird" pop up all the time.
I've been a huge fan of computer science papers going the literate route. Imagine if more were of the variety of https://smile.amazon.com/dp/0123750792.
Though I also feel that adds to the time it takes to create these papers. So, by my own reasoning, this might be more expensive than it's worth.
To be fair, it seems like you linked to a textbook, rather than a paper. Textbooks have different aims than papers (pedagogy vs. succinctly conveying necessary info to an expert in the field)
I'm not sure which direction that takes the "fair" though. I expect textbooks to take a long time to write. Writing a paper that is also executable should be a lot easier than a textbook that is the same. Or, a textbook period.
In particular, it saddens me that I can't find any examples of literate papers. I started dabbling with the idea here[1], but I think I fundamentally misunderstood some of the ideas when I did that. I keep meaning to take up the idea again. I can't deny that it is a slow process, though.
Nope, writing papers takes longer. The collective amount of time it took to write the papers a textbook is based on (I'm not counting the research time) is much greater than the time it takes to write the textbook.
Also, there is no incentive because of the reasons I stated (pedagogy to someone new vs. succinct explanation to expert)
I can't claim you are wrong. But this is so non-intuitive to me that it hurts. I get why "covered ground" for text books would be easier to do. But many textbooks should be pushing the field, as well. Right?
And I fully note that the book I linked to is exceptional. It won an Oscar.[1] And it was basically required reading at Pixar, if I recall.
There is nothing original in textbooks? They just need to cite the relevant papers? I find it perfectly intuitive that it is harder to come up with an original argument/idea than it is to summarize one you've already heard.
Papers are very condensed and probably scrutinized more than textbooks are, via the peer review process. Papers have to have proofs for any theorems, whereas textbooks might omit them (especially the less important ones).
Also, to be fair, a textbook might rely on a dozen different papers in the span of a single page (just taking key insights or arguments from the papers).
Both of those claims seem... off to me. Specifically, this is a reductionist view of them that makes them sound easier. I do not claim they are harder, just different. Each is difficult in its own way.
I would be interested in seeing the publication times for many textbooks. Empirically, we should see the difficulty reflected in the time it took to create them, right?
Edit, since I saw your point about referencing many papers in a single page: don't papers do the same?
We were comparing the amount of time it takes to write a textbook versus the amount of time it takes to write all the papers the textbook cites (or relies on, I guess). So I don't think "hardness" and the fact that papers also cite other papers really changes this comparison.
I mean, a textbook might take years to write/edit (on and off), but it takes at least a month (on and off) to write/edit a paper.
But you don't think this is a learnable thing? Doing a paper, you already have to write code. Just structure it differently.
As it is, the code is often neglected. And rarely shared. Both bad facets.
And making a full textbook that is executable is what I was comparing. That entire book is a program. I'd imagine that was much harder than the typical paper, many of which are surveys that gloss over deficiencies in the supporting code. (Again, I still concede the black-swan nature of that book.)
There is a real need for a concise notation for collaborative whiteboarding, so I see this as a good start.
I'd even go so far as to say that, as a notation for interviews, if the candidates were informed ahead of time, I can see it working out better than the current approach for both candidate and interviewers.
At the very least, you could see whether the candidate could learn a new notation, and that alone is a pretty valuable signal.
"At the very least, you could see whether the candidate could learn a new notation, and that alone is a pretty valuable signal."
I could see that as a benefit. Much like how they would have to learn unusual requirements or APIs on the job and then solve problems with them. A notation like this and basic specs might be a useful filter.
Clearly there is room for improvement in this area.
But I'd rather see papers include code snippets.
If a widely used and highly readable language is used (Python, for example), then the meaning of what is going on would be much clearer, and in some cases it could be immediately tested out with one's own data, or at least played with in an existing, widely installed, live system.
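For instance (a made-up snippet, not taken from any particular paper), plain Python can serve as pseudocode on the page and still run against a reader's own data:

    def moving_average(values, window):
        # average each run of `window` consecutive values
        averages = []
        for start in range(len(values) - window + 1):
            chunk = values[start:start + window]
            averages.append(sum(chunk) / window)
        return averages

    print(moving_average([1, 2, 3, 4, 5], window=3))  # [2.0, 3.0, 4.0]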
I think ideally pseudocode should be directly translatable into any programming language. It should be dead simple. It should use all of the best practices of the most common programming languages. It should be typable.
None of this garbage of arrows and triangles. I'm not Greek; I don't have that kind of stupid keyboard.
Also, it should follow programming practices for variable names.
Rather than A, B, C, D, X and the lot, use forward_momentum or ForwardMomentum. We're not using typewriters any more. The extra few keystrokes for typing out words, and not symbols, is well worth it.
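A hypothetical illustration of that naming point (the names and numbers are invented): a few extra keystrokes, but the second version reads without a legend.

    # typewriter-era style
    A = 70.0
    B = 3.5
    X = A * B

    # descriptive, typable, directly translatable into most languages
    mass_kg = 70.0
    velocity_m_per_s = 3.5
    forward_momentum = mass_kg * velocity_m_per_s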
Interestingly, that's exactly how it's done in almost every academic paper or article on software I've read, outside some really niche ones. Plus most prototypes in industry that were done as UML, Visual Basic, etc. Python got somewhat popular in this niche as it's quite readable & high-level.
The people downvoting these comments don't represent a majority opinion on pseudocode in any way. Quite the opposite.
I'm an undergrad and I'm currently working with people in physics. All of their code, Python or not, uses N, n, and x as the most common variable names.
There is definitely an antipattern that comes from mathematics where variables are cryptic in an effort to look like math, and also, I believe, to show off how smart you are. However, in some domains, the single-letter variables become standard. In IR, w means word, d means document, D means the document corpus, and the like. Of course, w sometimes means "weight" when applied to an x, which may itself be a document d.
That is a recurring problem. The structure is easy to follow; the variable names often suck. I think they get it from their algebra and calculus classes, because those were the first books I saw using those letters almost exclusively.
It would. Start with the word problem. Define some variables, maybe long names or shorthand closer to the actual description. Then use algebra with those variables to model the problem in math. Solve it. Others understand it more easily in peer review.
"Widely used" only applies at the time the paper was written. Actual code snippets put more future demand on readers when those languages are potentially no longer popular. I would strongly suggest the use of an abstract version, if only for the authors to abstract their own thinking, and then additionally have embedded concrete versions.
I think the real question is, why is the amazing interactive simulation medium that we call a computer so abysmal at capturing and representing ideas (particularly ideas about the computer itself) that we have to resort to inventing a new notation for pencil & paper to deal with it?
I don't think that's the issue here. It's the verbosity of code/pseudocode which this solves. The understandability of code was not raised as a problem.
This notation is not to say that ideas can be expressed better on paper with this notation, but rather that, in situations where you must use pencil (such as whiteboarding), code is overly verbose and this notation is less so.
Then again, I do see some benefits of writing this on paper over writing code on a keyboard, like the flexibility of being able to write wherever you want, and write notes or pictures in the margins. But these are minor.
Also I may have misunderstood your comment.
Edit: actually, upon rereading: I guess you were referring to this line in the opening paragraph
>But our programming languages are very poorly suited for handwriting: they waste the spatial arrangement, ideographic symbols, text size variation, and long lines (e.g. horizontal and vertical rules, boxes, and arrows) that we can easily draw by hand, instead using textual identifiers and nested grammatical structure that can easily be rendered in ASCII (and, in the case of older languages like FORTRAN and COBOL, EBCDIC and FIELDATA too.)
which is an interesting thing to think about. So I guess your criticism of modern computers for representing ideas is valid. But I suppose the advantage of the more restricted inputs of a computer (keyboard into a 1D string of ASCII text) is the structure that it inherently has, that paper and pencil does not. So there are tradeoffs.
One use is working out ideas when you're not sitting at a desk. You can easily carry around a small notebook which is much more usable than a phone for this. (If you have a favorite on-phone programming environment that's better, I'd like to hear of it.)
Minor nit, but this feels very much like it was designed by an academic, in that I think you could do this in LaTeX very easily but not at all in a standard text editor. Those horizontal and vertical lines. Otherwise you need a specialized program or a whiteboard to pull this off. Also all those math symbols; I don't know where those live in my input method.
One of the first sentences on Wikipedia about pseudocode:
> The purpose of using pseudocode is that it is easier for people to understand than conventional programming language code, and [...].
So in common usage, "pseudocode" is supposed to be easily understood. I imagine that isn't one of the goals of this new notation, though. It looks more useful for internal use (e.g., within your team) than external use (e.g., a whiteboard interview), since it's basically unreadable if you haven't seen it before.
Since it abstracts a programming language and allows for simplified syntax and inline explanations, more people can understand it. Why would you think that's not a goal?
That's an explicit goal of pseudocode. It's implied by the fact that it's used by both laypersons and programmers to explore algorithms for any sub-field of CompSci or application in IT. The level of detail and the symbols are what they modify when targeting specific groups.
It really isn't. Pseudocode is a notation to move design along, to communicate and iterate. It isn't meant for the six-year-old kid or the non-programming PHB people manager, laypersons if you will. Pseudocode notations require learning investments just like any other notation.
Perhaps you didn't understand what "for everyone" means. It wasn't meant literally. It's often used by programmers, and even laypersons, as a shorthand for a broad audience. I've seen laypersons write pseudocode and formal specs after training. Many were also taught to use an executable form called 4GLs, which still have a lot of revenue and users. COBOL was the first in this vein, since its language let even accountants and such write programs.
Far as a learning investment, that's true for any precise notation, modeling tool, or formal language. Even English takes Americans years to learn. So, pointing out pseudocode has a learning curve like everything else doesn't invalidate my claim that it reduces detail to aid understanding and by many people.
Note: I'm also not arguing group-specific pseudocodes can't be developed. Only that the normal kind is meant to be accessible and usually is.
Anyone (even lay people!) can learn how to program, anyone can learn math, anyone can learn a formal specification language. I've seen accountants learn and excel at APL, the most obtuse language ever.
So, what is "the notation that everyone can read?" Since you've claimed that it exists and is well defined, it must be describable. Surely, it isn't because this notation is different and everyone doesn't like different things?
The Pascal-looking one in the paper I cited that academic publications and classrooms have used successfully going back decades. Strange you seem so unfamiliar with it.
I'm fine with new things. That's another tangent you're on. The original claim that started all this was that pseudocode wasn't about broad understanding. The Pascal-like pseudocode that I've seen for over a decade now has been there for familiarity and understanding. Even lay people get it without much work. If they didn't want it comprehensible, they'd be using the C they wrote the code in, or who knows what.
Now, the OP designing a new method for perceived benefits is fine. I even encourage experimentation. That's a different topic. I haven't even responded to that one, that I recall. I was countering misinformation about common pseudocode. Discussing the OP's would require me actually using it on a number of problems, along with a diverse set of others, with a meta-analysis of the reported pros and cons. Obviously haven't done that... ;)
Another attempt at articulating what I think is your point: languages have network effects. You're less effective programming in a language if others don't understand what you did. But the notation of OP is intended to not require network effects. Pseudocode is often intended purely to help one privately think through a problem. Even if nobody adopts this notation, it might be useful to you.
Does that seem on the right track? I don't mean to put words in your mouth.
In the PDF, they describe their algorithms in pseudocode that combines the common, BASIC/ALGOL-like text with some common notation from math (e.g., division). I immediately understood the algorithms well enough to implement them myself in about any language, without someone telling me what the notation meant. A common effect of published pseudocode, since it's intended to be widely understood.
This new notation I'd have to think about and practice with. Just using it in a paper with the label "pseudocode" would cause confusion. It's less intuitive in a world of widely deployed, ALGOL-like notations. Maybe it has benefits worth sacrificing the wide usability, but the person switching to it had better be OK with that.
In the PL community, when we want to communicate an algorithm in a paper in something like code, we will say something to the effect of C-style pseudo code. This serves two roles: 1) everyone "knows" C so you don't have to explain your notation very well, and 2) we will skip over all the ickiness of C (hence the C-like).
I get the impression that the work here is more about thinking and iterating on paper, not just communicating on paper, which, IMHO, is a very niche use case that most people aren't going to run into. We don't do anything in our C-style pseudo codes but present.
I agree with this. The notation might also apply to a smallish group of people, like mathematicians in a certain field who use it to communicate and work together. Notations are powerful like programming languages, if you can reuse an existing one or use ones other people you need to work with already know, there is an advantage to that.
[0] http://keiapl.info/eemcd/keisimplexpages/keisimplex447.html
[1] http://keiapl.info/eemcd/keisimplexpages/keisimplex452.html