Hacker News

Why don't the symbols just have names instead of stuff like ||\x?

I assume it gets easy to read once you've practiced it a bunch, but I feel like it's such a hindrance to readability/popularity for the sake of appearing simple (look, we can reverse a list with a single character!).



If you want a serious answer: as a k programmer, ||\|x (which I assume your example is based on) is instantly readable as producing a bitmask of ones up to and including the last 1, i.e. 0 1 0 0 1 0 -> 1 1 1 1 1 0. Or: a max scan from the right on integer arrays. I don't know how to even describe that in English succinctly, let alone in most mainstream programming languages. (Haskell: reverse.scanl1(||).reverse - ok, not too bad.)
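For readers without K, here's a rough Python sketch of the behavior described above (the function name is mine, not part of K):

```python
from itertools import accumulate

def ones_up_to_last(x):
    # Reverse, take a running max (the "scan"), then reverse back:
    # the same shape as the K expression ||\|x discussed above.
    return list(accumulate(reversed(x), max))[::-1]

print(ones_up_to_last([0, 1, 0, 0, 1, 0]))  # [1, 1, 1, 1, 1, 0]
```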


I don't see why one can't have both: give them names and translate those down to the base symbols, so it has a readable syntax.

I'd just call it something like exceptLast, lastOf, or endOff.


You've sort of answered your own question. None of those three would obviously mean the same thing to me. And that's just one possible combination. If you consider all possible combinations of 4 symbols in K, most of them will have a distinct but obvious (and useful) meaning that is extremely hard to summarise in a single word.


You can give them names if you want, and Q does so.

However, doing so loses the ease of recognition and the malleability the symbols provide.


The symbols do have names, e.g. | is commonly called a ‘vertical bar’ or ‘pipe’. I think when people read the code, though, they say the name of the operation instead, e.g. your example is ‘reverse max scan x’. One complication is that the meaning of a symbol depends on the surrounding symbols (e.g. | can be reverse or max), and another is that the meaning can depend on context: e.g. | is max, but when the inputs are arrays of bits (where 1 is true and 0 is false), max is equivalent to logical or, and so people may call the operator ‘or’ in those contexts.
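A quick Python check of that last point (an analogy, not K itself):

```python
# On bit arrays, elementwise max and logical or give the same answer,
# which is why the same symbol can be read as either "max" or "or".
a = [0, 1, 0, 1]
b = [0, 0, 1, 1]
maxes = [max(x, y) for x, y in zip(a, b)]
ors   = [int(x or y) for x, y in zip(a, b)]
print(maxes == ors)  # True
```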

I think advocates would claim that the symbols allow for faster interaction with the computer and for larger programs to fit in one’s working memory.


This always comes up in discussions about APL, J, and K.

Have you ever tried reading Euclid's "The Elements"? It's all prose and appreciably less clear than the modern algebraic formulations. Of course, that's only because we're all already familiar with the algebra, but once you know both notations, the terse symbolic one is an obvious win on readability and clarity.

And it's just not that hard to learn some tens of symbols. In practice, languages like Rust and whatnot require you to learn orders of magnitude more words, which require you to learn new mental models to understand what the mnemonic names mean anyway. The "readability" is really just smoke and mirrors, IMHO.

Once you know APL or K, then clusters of "unreadable" symbols become immediately recognizable and, frankly, stupidly straightforward. And to top it off, instead of some opaque identifier, the "name" in APL is usually the entire implementation! That empowers you to make variations as needed, reason about performance, and observe meta-patterns between names. Those are higher-level cognitive tasks that the symbols make much more legible than is possible with "readable names" everywhere.

In our software development industry, the word "readability" is mostly just code for "familiarity to me".


I agree with you for the most part. Brainfuck certainly would be way more readable if you gave a collection of symbols names and used those instead.

But that's mostly because the base-level constructs of Brainfuck are too low level. Nonetheless, pragmatically, what we're already familiar with is part of the equation. If you wrote a language where + actually meant -, then surely you'd consider that harmful to the adoption of the language.

There's also the fact that the time it takes to learn these symbols is multiplied by every person learning them, so if you're going to introduce new constructs/symbols/mental mappings, it has to be worth the tradeoff. If the tradeoff is just fewer characters to type, IMO it's not worth it.

If one language has, say, "sort(x,y,z)" and another says "!" means sort, I just don't think that's particularly interesting.


many of K's symbols are, in fact, familiar. the dyads + - * & | < > = have meanings that should be immediately obvious to most programmers, even if they don't initially appreciate how general the K versions of these primitives are.

some symbols have simple mnemonic associations to operations that are common; the monad # is "count", the dyad @ is "at-index", the dyad ? is "find".
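Rough Python equivalents of those three mnemonics (analogies only; the K primitives are more general than this):

```python
xs = [10, 20, 30, 20]

assert len(xs) == 4        # monadic # ("count")
assert xs[2] == 30         # dyadic @ ("at-index")
assert xs.index(20) == 1   # dyadic ? ("find": index of first match)
```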

primitives have enlightening symmetries that simplify memorization. the adverb / ("over") has a twin that captures a trace of intermediate steps, \ ("scan"). all valences of "over" have corresponding scans.
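In Python terms (again an analogy, not K), the over/scan pairing looks like reduce vs. accumulate:

```python
from functools import reduce
from itertools import accumulate
from operator import add

xs = [3, 1, 4, 1, 5]
# "over" (/) folds down to a single result...
total = reduce(add, xs)
# ...while its twin "scan" (\) keeps the trace of intermediate steps.
steps = list(accumulate(xs, add))
print(total, steps)  # 14 [3, 4, 8, 9, 14]
```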

overall, the set of primitive operations is very carefully chosen so that simple combinations fit together in many useful ways. more than simply saving keystrokes (which has its own benefits for interactively exploring data and algorithms), k provides an efficient way of thinking about and communicating a large range of algorithmic ideas to other k programmers.


That makes a lot of sense. I was being overly skeptical; I'll have to spend time learning it before criticizing it.


Why do Chinese or Japanese have symbols? Why does mathematics have symbols? Symbols represent concepts. Once you know what a symbol means, you can grok a lot from it. These languages are limited to keyboard characters; if the authors had their way, they would have a custom keyboard with custom symbols, but then most people would not be able to use it with any computer, so they stuck to such characters. The idea is notation as a tool of thought. You don't have to think of chunks of words to think of reversing a list. Yet that's how we often think when we program: we read keywords or list expressions or loops and run the code in our head, which slows comprehension.

https://www.eecg.utoronto.ca/~jzhu/csc326/readings/iverson.p... or https://www.jsoftware.com/papers/tot.htm


Probably the same reason in algebra people went from using "plus" to "p." to "+": for readability.


    I assume it gets easy to read once you practiced it a bunch
Yup, but for me at least, the learning curve is far gentler than that of a supposedly simple language like C or Go. After using C professionally for 12 years I still have to look up operator precedence or use cdecl from time to time. Meanwhile, I feel like I can store all of K in my head with room to spare.


That’s roughly where q comes in, as syntactic sugar on top of k.


Nice, just what I was looking for.


It is like asking why all human languages don't just use the Latin alphabet instead of drawings.



