A Paper Algorithm Notation (2014) (canonical.org)
61 points by kick on Feb 16, 2020 | 30 comments



I've found Joy to be a really interesting and useful notation. https://en.wikipedia.org/wiki/Joy_(programming_language) http://joypy.osdn.io/notebooks/index.html

The syntax is enormously simple (the only keywords are '[' and ']' for quotes), the semantics are elegant, and the implementation is trivial.[0] It seems to me to combine the best aspects of Forth and Lisp.
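As an illustration of that triviality, here is a hypothetical Python sketch (not Joypy's actual code) of a Joy-like evaluator: the program is a flat list of words, quotation with [ ] is the only nesting, and evaluation is a single pass over the list. The word set here ('dup', 'mul', 'i') is just a minimal assumed subset.

```python
def joy(expression, stack=()):
    """Evaluate a Joy-like expression against a stack (a linked tuple)."""
    for term in expression:
        if isinstance(term, list):      # a quote: push it unevaluated
            stack = (term, stack)
        elif term == 'dup':             # duplicate top of stack
            a, rest = stack
            stack = (a, (a, rest))
        elif term == 'mul':             # multiply top two items
            a, (b, rest) = stack
            stack = (b * a, rest)
        elif term == 'i':               # "i" runs the quote on top
            quote, rest = stack
            stack = joy(quote, rest)
        else:                           # anything else is a literal
            stack = (term, stack)
    return stack

# [dup mul] is the "square" function; "i" applies it to 5.
print(joy([5, ['dup', 'mul'], 'i']))    # prints (25, ())
```

No lexer beyond splitting on whitespace and brackets, no parser beyond matching the brackets; everything else is the loop above.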

In working with Joy, I've come to suspect that elaborate syntax, and the parsing thereof, is a kind of MacGuffin: https://en.wikipedia.org/wiki/MacGuffin

> ... a MacGuffin (sometimes McGuffin) is an object, device, or event that is necessary to the plot and the motivation of the characters, but insignificant, unimportant, or irrelevant in itself.

[0] https://osdn.net/projects/joypy/scm/hg/Joypy/blobs/tip/thun/...

Work-in-progress. That code does evaluation, type-checking, and type-inference; and there is a start on a compiler to machine code as well as some meta-programming tools. All in 52 kilobytes, more than half of which are comments.

Compare and contrast with https://git.sr.ht/~sforman/Prolog-Junkyard/tree/master/typet...

> A simple language with semantics based on the language and code from "Formal Methods: A First Introduction using Prolog to specify Programming Language Semantics" by Lutz Hamel. Lexer, parser, evaluation function.


Ha! I was waiting for someone to mention Joy! You can find a bunch of stuff on Joy hanging around if you know where to look:

http://nsl.com/

http://archive.vector.org.uk/art10000360

https://vector.org.uk/conquering-recursion/

It's a neat concatenative language, and everyone reading this thread should at least give it a cursory glance!

NB. Joy is a bit more APL/J/K-inspired than Lisp-inspired.


Yeah, sometimes I feel like a broken record here on HN: certain subjects pop up and if I see them I'm almost guaranteed to mention certain other subjects. But then again, there are always people who haven't heard about <FOO> yet, eh?


See also APL, which was designed as a concise algorithmic notation first and only adopted as an actual language afterwards. (Hence the use of custom symbols)

Although unlike this notation, APL is more typewriter-optimized; many symbols are made by double-striking two other more basic symbols, and the typesetting is simple and linear - it clearly doesn't take advantage of the full expressive power of handwriting. (Plus, its array-based programming flow, while extremely powerful, is very different from the way most people write algorithms these days)


Older APL programs used a 2D layout to show control flow, which probably worked well in handwriting but not so well once APL started being used as a programming language, rather than as a notation for specifying a program to be written. This page has some examples:

http://www.zerobugsandprogramfaster.net/essays/5b.html


That page deserves its own HN submission.


APL was also handwriting-optimized; KEI (Kenneth E. Iverson) came up with it in the first place because he needed something for quickly expressing thought on a chalkboard.


As I said below, I've been intrigued by APL ever since I picked up a couple of discarded APL books in 1995 and read through them repeatedly, taking profuse notes. I keep hoping that someday I will learn APL well enough to be able to just jot down program fragments and be sure that they'll work, rather than giving me a rank error; the same is true of APL descendants like Numpy, although I have a lot more experience with them. I watched Roger Hui's "Tour de Force of APL in 16 Expressions", and although I understood the expressions, I don't understand how to design them. There are a bunch of notes in Dercuano about variants of APL. I've downloaded Aaron Hsu's dissertation, but I haven't read it yet.
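The rank errors mentioned above look something like this in NumPy (a hypothetical illustration, assuming the usual broadcasting rules): whether two arrays combine cleanly depends on trailing-axis alignment, which is easy to misjudge when jotting code on paper.

```python
import numpy as np

a = np.ones((3, 4))
b = np.ones(4)
c = np.ones(3)

print((a + b).shape)       # (3, 4): trailing axes match, broadcast works
try:
    a + c                  # shapes (3, 4) and (3,) do not align
except ValueError as e:
    print('rank/shape error:', e)
```

The fix for the failing case is an explicit reshape, e.g. `a + c[:, np.newaxis]`, which is exactly the kind of detail one wants to get right without running the code.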

For now, though, I can write out algorithms in what seems to me to be a straightforward fashion in paperalgo.

Some APLish things I've done include recoding T$'s 64-byte demo Klappquadrat in Numpy http://canonical.org/~kragen/demo/klappquadrat.html and this keyboard-driven structure-editing calculator with broadcasting over one-dimensional vectors http://canonical.org/~kragen/sw/dev3/rpn-edit#50_iota_2_ln_*...


For a language optimized for pen and paper, `z.@re` seems verbose, when one could use either `z.re` or `z@re` to describe member access. I notice that later on the page he drops into the more obvious notation `l.next` for a linked-list member.

This one may be cultural, but when I write math (including pseudocode), I use × rather than ⋅ for ordinary multiplication. This is 100% a matter of habit, but also it's too easy to mix up ⋅ and ., at least in my handwriting. Dot product I write with a small open circle, ∘.

The ← for assignment is good, but I wish it was used consistently; elsewhere we see =, and then == for equals, which is totally unnecessary in handwriting if one has adopted a distinct symbol for assignment. ↑ for returns is alright though I'd probably write it →, but this is a minor aesthetic judgement call: I like the symmetry that i ← double(x) could expand to i ← (λ(x) → x × x)(x).

The actual meat of the article, starting with loops and conditionals, is good stuff. I wouldn't understand it if I just saw it written out, but it beats the pants off UML-like bubble diagrams, and I don't expect it would take much exposure to render it familiar.


I'm glad to hear your thoughts! I have a great deal of respect for your technical judgment.

There are definitely errors in there. Some are inconsistencies as I experimented over time. However, the distinction intended to be expressed by =/← is the distinction between initialization and mutation. I might have gotten it wrong sometimes.

In the case of UML-like bubble diagrams, have you looked at Alloy?


Aha, that makes sense (the initialization and mutation, that is).

It probably won't surprise you to know that I would scribble a := there; `==` is one of my least favorite of the tokens we've inflicted upon ourselves as a profession. Certainly it's good to draw the distinction between initialization and mutation.

Although as an immutable initialization, `=` is correct as an equation as well... hmm.

I wasn't familiar with Alloy, but I'm about to be. I've cobbled a few things together with PlantUML, and I don't have any use for a diagramming system that doesn't have a canonical textual representation, which it appears Alloy does... it appears Alloy does rather a lot, actually.

My goodness. This thing has a SAT solver strapped to the side of it. I did not see that coming.


This was one of the reasons I started looking at APL: to be able to write code better on paper.

And it's a reason why I like applicative programming in Haskell, with all of its squiggly <$> and <*>.

Another interesting idea might be to look at TLA+, because it reminds me of the way I used to scribble out basic ideas for physics simulations, e.g. distance with accelerating velocity:

s' = s + v /\ v' = v + dv
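A hypothetical Python sketch of the transition that conjunction describes: both primed variables are computed from the same old state, which is exactly what the unprimed/primed TLA+ convention captures (the names `step`, `s`, `v`, `dv` are my own, not from any spec).

```python
def step(s, v, dv):
    """One transition: s' = s + v  /\  v' = v + dv, both from the old state."""
    return s + v, v + dv

# Iterate the transition a few times from rest with constant dv.
s, v = 0.0, 0.0
for _ in range(4):
    s, v = step(s, v, dv=1.0)
print(s, v)   # 6.0 4.0
```

The simultaneous tuple assignment matters: computing `s` first and then `v` from it (or vice versa) would quietly change the semantics, which is the sort of bug the primed-variable notation rules out by construction.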


Pseudo-code, I think, is like "sketching" an algorithm. It doesn't have to be precise; it just needs to produce a "picture", the big ideas, that gives us the characteristics of the algorithm being described.

Thus pseudo-code makes it easier to understand an algorithm. It should be intuitive, like a picture, so it is trivial to understand what it depicts.

I've often wondered why algorithms are described in pseudo-code and not in any actual programming language. The above is my best answer so far to that question.



Oh hey, thanks! I didn't see that in 2016.


> That may be easier to read, but which one would you rather scribble on a whiteboard or a napkin?

The one that's easier to read. And easier to remember.


Am I misreading his notation, or did he get FizzBuzz wrong?


I think you're right: the "Fizz" and "Buzz" conditions need an "= 0" if we're to assume the expression on the right executes when the condition is true.


Ah, no, I had been reading it wrong; the code is correct: given the "i % 3 ∧ i % 5" in the first case, the "i % 5" for the "Fizz" case is effectively an "i % 3 = 0". Clever. Readable? Maybe.


Right, multiples of 3 print "Fizz", and multiples of 5 print "Buzz". It would be clearer to pattern-match on the tuple (i % 3 == 0, i % 5 == 0), or even (i % 3, i % 5), but at that point I haven't introduced pattern-matching.
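The guard ordering under discussion, rendered as Python (an illustrative translation, not the article's notation): each branch relies on the earlier branches having already ruled their cases out, so the "Fizz" branch only needs to check `i % 5`.

```python
def fizzbuzz(i):
    if i % 3 and i % 5:      # divisible by neither: print the number
        return str(i)
    elif i % 5:              # reached only when i % 3 == 0
        return "Fizz"
    elif i % 3:              # reached only when i % 5 == 0
        return "Buzz"
    else:                    # divisible by both
        return "FizzBuzz"

print([fizzbuzz(i) for i in range(1, 16)])
```

The pattern-matching alternative mentioned above would dispatch on the tuple `(i % 3 == 0, i % 5 == 0)` instead, making all four cases explicit rather than order-dependent.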


Clever, yes.

Readable? Eh, Fizzbuzz is an opportunity to stot. I'll allow it ;)


APL is well thought out, battle tested and IMHO a better system. It comes with vector semantics, which simplify a lot of things - but you don’t have to use them.


I use APL daily; I still think Kragen's notation is worth a look. It's important to examine alternative approaches!


I most definitely agree. I should have qualified with “if you think of using such a notation....”

If you are meditating about the concept, by all means examine as many alternatives as you can. But if you are looking for "the right way to do it", APL is a better choice in this case IMO.


Hey, maybe you can help me get APL into my brain? I've been trying to jam it in there for years and it still doesn't fit.


Happily! Have you tried out J's labs? That's what I usually recommend, just because of how varied, deep & consistent they are. If you have, and they didn't work for you, I'd ask if you've read J for C Programmers, which is a useful book if you're having trouble with the paradigm overall.

J is in an interesting spot because it's a bit more complicated than K (though, I'd argue, less complicated than modern APL2, which won out), but has learning resources that actually help you get a feel for the language.

And of course, check out idiom libraries, they sometimes help to make things "click."

(J Labs can be found in every J install; there are some pieces that only work when you have Windows, but overall they work anywhere a computer can be found, and the ones requiring Windows aren't really essential to learning.)

Just ping me if there's anything else!


APL and K do fit mine; J doesn't.

Perhaps it would help you to start with K (e.g. https://github.com/JohnEarnest/ok is free, has a web repl with interactive graphics and stuff)

In a way, K is a cross of C and APL; K is the essence of APL, minimized to the smallest usable subset, even though it's less pure; e.g. complex values are not part of the language, and matrices exist only as vectors of vectors (unlike APL/J, where you can have either). It also does away with user-defined operators and a lot of other stuff.
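That vectors-of-vectors vs. true-matrix distinction can be shown in Python terms (an assumed analogy, using NumPy for the APL/J side): a list of lists has independent, possibly ragged rows, while a rank-2 array has a single shape that whole-array operations can see.

```python
import numpy as np

# K-style: a vector of vectors; rows are independent and may be ragged.
vector_of_vectors = [[1, 2, 3], [4, 5]]

# APL/J-style: a true rank-2 array with one shape.
true_matrix = np.array([[1, 2, 3], [4, 5, 6]])

print(len(vector_of_vectors[1]))  # 2: ragged rows are fine
print(true_matrix.shape)          # (2, 3)
print(true_matrix.T.shape)        # (3, 2): transpose sees both axes at once
```

Operations like transpose or reduction along an axis fall out naturally from the single-shape model; on a vector of vectors they have to be written as explicit loops over rows.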


I've been intrigued by APL ever since I picked up a couple of discarded APL books in 1995 and read through them repeatedly, taking profuse notes. I keep hoping that someday I will learn APL well enough to be able to just jot down program fragments and be sure that they'll work, rather than giving me a rank error; the same is true of APL descendants like Numpy, although I have a lot more experience with them. There are a bunch of notes in Dercuano about variants of APL. I've downloaded Aaron Hsu's dissertation, but I haven't read it yet.

For now, though, I can write out algorithms in what seems to me to be a straightforward fashion in paperalgo.


Have you read Iverson’s works, “Notation as a Tool of Thought” or “The Description of Finite Sequential Processes”? Both are available on the jsoftware website, and give a good (in my biased opinion) introduction to APL

I say biased because I had used APL and K extensively before reading them - they might seem less clear without this background, though it should be noted they are both considered classics by non-APLers.

Personally, TDOFSP contains the simplest, most straightforward description of the simplex algorithm for linear programming/planning that I’ve encountered.


I haven't! Thank you very much! I've started reading NAATOT a couple of times but never finished.



