Hacker News
Writing Code to Be Spoken (2019) (sacrideo.us)
25 points by 7thaccount on Jan 21, 2020 | 12 comments



That was very much not what I was expecting - I was expecting another attempt at producing a more English-like language which could genuinely be read aloud.

I'm not familiar with APL, but I can imagine that the test process for "how well can this language be read aloud" would be to have someone read out a short program over the phone and measure how long it takes and how many bugs are introduced. I wonder how people would handle significant-whitespace languages over the phone, for example?

(Yes, I know hardly anyone ever does this)


I feel as if I am missing something here, some previous reading that would make this clearer? I do not understand how the code snippet he provides is "lyrical". Any more stuff I could read on this sort of topic?


His APL-to-GPU compiler is less than 1,000 lines of code in its entirety, and the "comments" are the accompanying 400-page master's thesis.

There are a few HN discussions with u/arcfide and several YouTube presentations at functional programming conferences. Neat stuff!

https://news.ycombinator.com/item?id=13565743

https://www.sacrideo.us/smaller-code-better-code/


I don’t think it’s a strong selling point that you need a 400 page book to make sense of a 1000 line program. I find the style of the Dyalog dfns library much more pleasant to read and much easier to follow.


When it is an entire compiler and teaches you APL and compiler techniques?

It is part thesis, part tutorial, and part textbook. Cool item. I honestly don't remember the page number, but it was definitely over 100.


It’s definitely an impressive piece of software and I realize that his thesis is much more than just “comments” to the code.

My objection was to the implied claim that a 1000 line program is easy to understand and work with, easier than a much longer program. I don’t think that’s true if the 1000 line program is written in an incredibly terse style where a lot of contextual information has been deliberately eliminated.

For example, in this post Aaron comes up with a bit of APL that’s a direct transliteration of “Description #5: Increment the n field of the parent for nodes whose parent is of type 2 and kind 3 for each node by 1.”
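For concreteness, here is a hedged Python sketch of what Description #5 is doing; the arrays and names below are my own illustration, not anything from Aaron's code or thesis:

```python
# Hypothetical per-node arrays (made-up example data).
parent = [0, 0, 1, 1, 2]  # parent index of each node
typ    = [2, 1, 2, 2, 1]  # node type
kind   = [3, 3, 3, 0, 3]  # node kind
n      = [0, 0, 0, 0, 0]  # the "n" field of each node

# For each node whose parent has type 2 and kind 3,
# increment that parent's n field by 1.
for node in range(len(parent)):
    par = parent[node]
    if typ[par] == 2 and kind[par] == 3:
        n[par] += 1

print(n)  # → [2, 0, 1, 0, 0]; parents 0 and 2 qualify
```

The APL version expresses the same loop as a few array operations, which is where the contextual information gets compressed away.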

My claim is that an experienced APL and compiler programmer could figure out what the APL is doing at the level of “description 5”.

But I think he/she would struggle to recognize the problem statement at the top “Pass Overview: Count the Rank of Indexing Expressions” because a lot of contextual information has been lost in the transliteration to APL. That information could have been included in the code in the form of comments or by assigning a helpful name to this snippet.

Aaron didn’t do that by design, and he didn’t need to because he’s been working on this for years. It’s all in his head and now in his thesis. But if you’re not Aaron you have to read the thesis to learn the missing contextual information.

Dyalog’s dfns library includes the contextual information inline in the form of comments and local utility functions. I prefer that to having to cross-reference a separate piece of documentation.


This is why I love `for` loops, and why I hate functions that accept functions as arguments.

Sustainable code should be easy to read out loud.


I, for one, cannot stand writing for-loops when they're unnecessary.


Might have been more compelling if it had been illustrated using a more mainstream language.

For instance what does

    p⌿⍨
mean?


It says, unhelpfully,

“P first-axis reduce or replicate/compress (commuted)”

Because APL functions only take a left and a right argument, and APL evaluates right to left (but does parens first), you often end up wrapping things on the left in parens to get them evaluated first. Commute (⍨) tacked onto a function reverses its arguments so you can put them the other way round and remove the parens, e.g. 8-3 is 5 and 8-⍨3 is 3-8 is minus 5.
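The effect of commute can be sketched in Python with a hand-rolled helper (my own illustration, nothing to do with APL's implementation):

```python
def commute(f):
    """Return f with its two arguments swapped, like APL's f⍨."""
    return lambda x, y: f(y, x)

sub = lambda x, y: x - y
print(sub(8, 3))           # 8-3  → 5
print(commute(sub)(8, 3))  # 8-⍨3, i.e. 3-8 → -5
```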

p⌿⍨... says (...)⌿p

Exactly what ⌿p does depends on what's on its left: it’s either a fold (reduction) if given a function, or “copy the items in p this many times each” (replicate) if given a value or an array of values matching the shape of p.

Looking at the link, it’s a replicate - that is, a filter. It filters p where the matching positions in t are =2, and t[p] is an array lookup (multiple indices at once) into another array.
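As a rough Python sketch of that reading of p⌿⍨ (the sample data is mine, invented for illustration):

```python
p = [0, 2, 3, 5]        # hypothetical array of node indices
t = [2, 9, 2, 2, 9, 1]  # hypothetical type of each node

# t[p]=2 : index t at every position in p, compare each result to 2
mask = [t[i] == 2 for i in p]

# mask⌿p (written p⌿⍨mask): keep the items of p where mask is true
kept = [x for x, keep in zip(p, mask) if keep]
print(kept)  # → [0, 2, 3]
```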

I have no idea (yet?) how people say APL “reads left to right”, it seems to “decipher a bit at a time jumping all about from easy sections to hard sections”


You gotta dig into APL a bit for these idioms to make sense.


He is out of his mind




