That was very much not what I was expecting - I was expecting another attempt at producing a more English-like language which could genuinely be read aloud.
I'm not familiar with APL, but I can imagine that the test process for "how well can this language be read aloud" would be to have someone read out a short program over the phone and measure how long it takes and how many bugs are introduced. I wonder how people would handle significant-whitespace languages over the phone, for example?
I feel as if I am missing something here, some previous reading that would make this clearer? I do not understand how the code snippet he provides is "lyrical". Any more stuff I could read on this sort of topic?
I don’t think it’s a strong selling point that you need a 400 page book to make sense of a 1000 line program. I find the style of the Dyalog dfns library much more pleasant to read and much easier to follow.
It’s definitely an impressive piece of software and I realize that his thesis is much more than just “comments” to the code.
My objection was to the implied claim that a 1000 line program is easy to understand and work with, easier than a much longer program. I don’t think that’s true if the 1000 line program is written in an incredibly terse style where a lot of contextual information has been deliberately eliminated.
For example, in this post Aaron comes up with a bit of APL that’s a direct transliteration of “Description #5: Increment the n field of the parent for nodes whose parent is of type 2 and kind 3 for each node by 1.”
My claim is that an experienced APL and compiler programmer could figure out what the APL is doing at the level of “description 5”.
But I think he/she would struggle to recognize the problem statement at the top “Pass Overview: Count the Rank of Indexing Expressions” because a lot of contextual information has been lost in the transliteration to APL. That information could have been included in the code in the form of comments or by assigning a helpful name to this snippet.
Aaron didn’t do that by design, and he didn’t need to because he’s been working on this for years. It’s all in his head and now in his thesis. But if you’re not Aaron you have to read the thesis to learn the missing contextual information.
Dyalog’s dfns library includes the contextual information inline in the form of comments and local utility functions. I prefer that to having to cross-reference some other piece of documentation.
“P first-axis reduce or replicate/compress (commuted)”
Because APL functions take only a left and a right argument, and expressions evaluate right to left (with parentheses done first), you often end up wrapping things on the left in parens to force them to be evaluated first. Commute (⍨) tacked onto a function swaps its arguments, so you can put them the other way round and drop the parens: e.g. 8-3 is 5, while 8-⍨3 is 3-8, which is minus 5.
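The argument-swapping behaviour of commute can be sketched in plain Python (a hypothetical illustration, not APL, with made-up function names):

```python
def commute(f):
    """Return f with its left and right arguments swapped,
    like APL's commute operator (⍨)."""
    return lambda left, right: f(right, left)

def subtract(x, y):
    return x - y

swapped = commute(subtract)

print(subtract(8, 3))   # 5, like APL's 8-3
print(swapped(8, 3))    # -5, like APL's 8-⍨3, i.e. 3-8
```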
p⌿⍨... says (...)⌿p
Exactly what (...)⌿p does depends on what the left side is: it’s a fold (reduce) if given a function, or “copy each item of p this many times” if given a value or an array of values matching the shape of p.
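The two meanings of ⌿ can be sketched in Python (a rough analogy with invented sample data, not real APL semantics in full):

```python
from functools import reduce

p = [4, 7, 1, 9]

# With a function on the left, f⌿ behaves like a fold over the array:
total = reduce(lambda a, b: a + b, p)   # like +⌿p, giving 21

# With an array of counts on the left, ⌿ replicates: each element
# of p is repeated the corresponding number of times.
counts = [2, 0, 1, 3]
replicated = [x for x, n in zip(p, counts) for _ in range(n)]
# like 2 0 1 3⌿p, giving [4, 4, 1, 9, 9, 9]
```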
Looking at the link, it’s a replicate, that is, a filter: it keeps the elements of p at positions where the corresponding entry of t equals 2. And t[p] is an array lookup (multiple indices at once) into another array.
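Put together, the filter-plus-lookup pattern being described looks roughly like this in Python (the arrays here are made up for illustration; the exact expression from the post isn’t reproduced):

```python
t = [0, 2, 1, 2, 2]        # hypothetical: type of each node
p = [0, 0, 1, 1, 3]        # hypothetical: parent index of each node

# t[p] in APL: look up t at every index in p at once.
t_of_p = [t[i] for i in p]          # [0, 0, 2, 2, 2]

# p⌿⍨ with a boolean mask: keep the elements of p whose
# looked-up type is 2, i.e. replicate acting as a filter.
kept = [pi for pi, ti in zip(p, t_of_p) if ti == 2]
# [1, 1, 3]
```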
I have no idea (yet?) how people say APL “reads left to right”; to me it seems more like “decipher a bit at a time, jumping all about from easy sections to hard sections”.
(Yes, I know hardly anyone ever does this)