I've been spending some time coding fp. It is a programming language heavily inspired by the language John Backus described in his 1977 Turing Award lecture.
There are plenty of examples in the project's GitHub repository for those interested!
Nice and clean. Keep up the good work. One thing I've come to appreciate in programming languages is their ability to integrate with the rest of the ecosystem. One of the things I like about your approach is how you let users get a REPL without having to install any permanent files on their machine. That's brilliant.
Nice. I can roughly understand it, though I don't quite understand why inner product has a transpose in it (assuming you are using the symbols per APL).
Check my understanding: IP = distribute `+` across an application of `*` across all transposed elements.
If your vectors/arrays are just arrays and column/row vectors are only shape conventions, then there is no need for transposition, right? Unless you have some sort of shape checking.
Hi, yeah, you assume right: the transpose symbol comes from APL!
This language does not have "implicit iteration" à la APL; + and * are just binary operations you apply to a single pair, e.g. +:<1,2> yields 3. This is why inner product needs to transpose first. TBH I don't like this; I just wanted to follow the specification given in the paper as closely as possible. I might diverge from this and implement "implicit iteration" in the future!
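For the curious, here is a rough sketch of what IP amounts to, written in Haskell rather than this project's syntax (the paper defines IP roughly as (/+)∘(α×)∘Trans: transpose the pair of vectors, multiply each resulting pair, then fold with +):

  -- Haskell sketch of Backus's IP = (insert +) . (apply-to-all *) . transpose.
  -- Illustration only; this is not the fp project's actual code.
  import Data.List (transpose)

  ip :: Num a => [[a]] -> a
  ip = foldr1 (+) . map product . transpose

  main :: IO ()
  main = print (ip [[1, 2, 3], [6, 5, 4]])  -- 1*6 + 2*5 + 3*4 = 28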
Turing Award lectures are not acceptance speeches in the normal sense. They've been technical lectures since the start, related to the primary work/interests of the individual receiving the award.
While I'm here, maybe someone can explain something I haven't found an answer to until now: how does FP relate to data-flow languages?
"Classical" data-flow languages are quite different on the surface but fundamentally their inner workings look like data-processing pipelines. FP expression also look like data-processing pipelines… How is this related? Also, if the fundamental paradigm would be data-flow, why nobody designed a (functional) HW description language around that? It would be a perfect match I guess because hardware is all about signals flowing though processing elements! (Something like Haskell's Clash is imho a worse fit as the fundamental semantic of Haskell isn't data-flow; using lazy lambda calculus to model HW doesn't look like a proper fit; there's a quite large "impedance mismatch" you need to bridge first).
I've always been curious about fp, but to my knowledge there aren't any implementations of it or closely related successors to play with -- or I suppose now there's one.
We should stop the ASCII madness at some point. The world speaks Unicode. Only programming languages are mostly stuck in the 7-bit era. That sucks.
People even use extra keyboards to input their emojis, but we "can't" input math or programming symbols? Really?
An "international" keyboard layout gives you already a shitload of "special" signs.
Using the compose key you can input even more.
When this isn't enough it's trivial to add something to ~/.XCompose
I'm not advocating for a purely symbolic programming language. But things that are better expressed through symbols just should be.
There are reasons why you use symbols in math instead of writing everything out in some human language.
Programming is evidently symbolic, just like math: we know that because professionals have been put in an MRI scanner while reading code so their brain activity could be observed. That activity correlates with what is seen when reading (and understanding) math, not with what is seen while processing human language.
We should design programming languages accordingly!
> People even use extra keyboards to input their emojis, but we "can't" input math or programming symbols? Really?
Your usage of emojis may be different from mine, but personally I'd estimate they appear in text about once per sentence (or even couple sentences) at the absolute most. That's quite a bit lower than the symbol density in most code.
An "international" keyboard layout gives you already a
shitload of "special" signs.
Unfortunately not always the _same_ shitload - for instance, the OS X layout for Norwegian has the seemingly-ordinary ( and ) on Option-Shift-8 and Option-Shift-9!
> Your usage of emojis may be different from mine, but personally I'd estimate they appear in text about once per sentence (or even couple sentences) at the absolute most. That's quite a bit lower than the symbol density in most code.
That's completely beside my point.
The point is: People are using special keyboards to input their emojis. (You can even buy those as hardware gadgets!)
So there shouldn't be any issue with requiring some special keyboard (layout) to input code if even casual people manage to use such special keyboards (or layouts) for very mundane purposes.
> > An "international" keyboard layout gives you already a shitload of "special" signs.
> Unfortunately not always the _same_ shitload - for instance, the OS X layout for Norwegian has the seemingly-ordinary ( and ) on Option-Shift-8 and Option-Shift-9!
I was talking specifically about a so-called "international layout", not some country-specific one.
Also, for billions of people there is no direct correspondence between the keys on the keyboard and the symbols they're trying to type. Just think of Asian languages.
There is just no valid reason to limit the symbols used in programming to the ASCII set, as in everyday use ASCII has already been replaced by much richer symbol sets, which even less techy people are perfectly fine using. Only programming languages are caught in the "tradition" of drawing "ASCII art" instead of just using proper Unicode symbols. That's laughable.
Changing this would in the end make programming languages even simpler to read since, like I said, programming is symbolic by its very nature.
Of course there is still a kind of balance needed: using symbols for just everything wouldn't be helpful, as nobody wants to learn hundreds or even thousands of symbols just to read code (I'm not proposing to mimic an Asian language, as this wouldn't be accessible). But using a healthy mixture of symbols and words would be beneficial. Language-level features would be better encoded as proper symbols (instead of the "written-out symbols" we mostly use today, e.g. "keywords"), but most user-level code, like the APIs in libraries, should still remain word-based.
APL got this almost right; it's just that the code is too dense, imho, so it's hard to parse (for humans).
But a modern language that replaced, for example, keywords with symbols would be a step forward again.
Edit: This example actually contains the symbol for a clock which, in an unintentional illustration of the point, renders just fine in an editor and a terminal but not here.
Lots of languages support using special characters in symbols. Clojure, for example:
user=> (defn ⏰ [x] (+ 1 x))
#'user/⏰
user=> (⏰ 7)
8
In most cases this is going to be spurious: a textual description of the operation is easier to type, more descriptive, and more memorable.
Your symbol of choice may have multiple look-alike characters and may not be present in another user's font. Your fellow user may not know how to enter the characters you choose.
The text labels inc or increase are both more memorable, understandable, trivial to disambiguate, and easy to work with, and your fellow user will not be cursing you as they scan your code for the right character to cut and paste, especially if it looks like a square with a hex number in it because they can't render it.
Math is an extremely dense formulation of an idea, full of useless single character labels that make sense only to initiates. This isn't a great feature for code to emulate.
> Lots of languages support using special characters in symbols.
Most don't.
> In most cases this is going to be spurious: a textual description of the operation is easier to type, more descriptive, and more memorable.
"easier to type": depends on you keyboard layout…
"more descriptive": depends on the operation; a lot of math symbols for example need pages long descriptions when written out…
"more memorable": depends on your native language, and the language the text description is written in…
So to summarize: No, no, and no. ;-)
> Your symbol of choice may have multiple look-alike characters and may not be present in another user's font.
This is also true for any text description. Who says ASCII symbols are universal?
Just as it's completely possible not to have, for example, the symbols for an African language installed, it's also completely possible not to have any ASCII font installed!
The solution is trivial: Just install the font needed to display the language you want to read.
"Look alike" symbols are irrelevant in this context. (And by the way one of the many things that Unicode got very, very wrong; the need for Unicode normalization is just brain dead; especially as there's more than one normalization mode, and they yield completely different and incompatible results; but let's not criticize Unicode here as this would become to long for this post).
> Your fellow user may not know how to enter the characters you choose.
Well, that's the point of keyboard layouts, and the mentioned extensions like the compose key…
That's nothing that should be a barrier for someone who likes to program a computer.
If you're unable to cope with switching the keyboard layout, you'll likely be unable to cope with anything related to programming. So I see no issue here.
> The text labels inc or increase are both more memorable, understandable, trivial to disambiguate
Again: please don't assume that everybody on earth is a native English speaker! In case you didn't know: most people aren't. (Yeah, I know, especially US people don't recognize anything beyond their borders; but there's actually quite a lot out there, like most of the world, for example ;-)).
Also the given example is funny, as quite a lot of languages use a symbol for "increase"… Ever seen a `++` somewhere?
Abbreviations, by the way, are symbols! (And they share all the understandability/discoverability issues of symbols.)
Symbols of course need some written-out form, at least for lookup/search.
But abbreviations aren't just aliases (as well-used symbols are), which makes them even worse than proper symbol usage.
> easy to work with
What do you mean by that?
> your fellow user will not be cursing you as they scan your code for the right character to cut and paste especially if it looks like a square with a hex number in it because they can't render it
You need to install fonts…
But that's been a solved "problem" for decades! Most systems have come with preinstalled fonts covering all of Unicode since "forever".
> Math is an extremely dense formulation of an idea […]
Which is exactly what people wanted. Because it fits the use-case extremely well.
And like I've pointed out already: Code is like math. Reading (and understanding) code activates the same brain centers as reading (and understanding) math.
> […] full of useless single character labels that make sense only to initiates.
The "single character labels" are symbols, too.
They're not useless by the way. ;-)
And when we're at it: Calling math "useless" is ridiculous.
That one needs to understand the domain one works in is nothing unexpected. Code in general also doesn't make sense to outsiders, no matter whether the symbols are written out or not. I could simply show you Ada or COBOL code to prove that. ;-)
> This isn't a great feature for code to emulate.
I don't argue to use only symbols; as I stated already before.
But common "language level" syntax would be better expressed in a symbolic way. Long words are only noise. They don't provide any benefit in reading (or understanding!). Quite the contrary as our brain needs more time and capacity to recognize them. That's one of the reasons why mathematicians prefer symbols. Symbols are just way simpler to read and understand quickly!
Of course you need to learn the symbols. But that's equally true for the written out forms! Code doesn't become "intuitively" understandable when written out in human language.
It becomes actually less understandable as you need to parse (and remember!) much larger chunks to get the meaning. A human brain has only max. 5 seconds of working memory (most people have less). Long words use up this capacity much more quickly than short symbols. That's a fact. (Just look up who can do better mental arithmetic on average, and how this correlates with the length of word used for numbers in the native language of those people).
Of course not everything should be abbreviated. Only things where it makes sense.
Someone who knows some language should be able to read code in that language without "learning Chinese" first. (That's why I actually don't like abbreviations in code: because you can't understand such code without first learning the—quite often completely random and ad hoc—abbreviations that someone who has never heard of code completion introduced for no reason.)
But having noisy, long, written-out symbols for language constructs is just not helpful. It's even an obstacle to quick reading of code. (Just go out and try to read larger chunks of Ada. You will quickly realize: reading such code is actually very exhausting!)
Currently, a single set of characters, trivially entered on virtually all current developers' keyboards, is used to create English-language keywords and identifiers formed out of ASCII characters that in any decent programming font are trivially distinguishable and identifiable.
Whether or not you think this is ideal or equitable, people who natively speak any of thousands of languages with numerous character sets can trivially communicate and write mutually intelligible code, with font serving only as an aesthetic choice.
17 people from 17 different places, each using their own characters and whatever special characters they please, produce code that not one of the 17 can type easily, with numerous look-alike characters in it, and it lifts the choice of font from an aesthetic choice to a functional one that affects not only the font selected in the editor but potentially the editor itself, given the need to support interesting things like combining characters. Given this, the logical thing to do would be for people to agree not to stick any weird characters in their code... which is what I think most people do even in languages that have such a feature, indicating it's basically kind of useless.
> "easier to type": depends on you keyboard layout…
Yes you would be advised to use an English layout or something which makes it easy to type these characters. You are also going to be using english keywords for example.
> "more descriptive": depends on the operation; a lot of math symbols for example need pages long descriptions when written out…
Such symbols also have an English name. The alternative to the special character isn't a pages long description of the operation its the name.
> "more memorable": depends on your native language, and the language the text description is written in…
A word in a single small character set is easier to remember than an identifier that could contain thousands of different characters, some of which are found only in a fraction of people's languages and many of which look alike. It takes more bits to encode because it is trivially more complicated.
===MATH VS CODE===
I never said that math was useless. I said single character identifiers, especially ones that change from context to context, are useless. There is no reason to believe that, because code has a similarity to math, a similar notation is valuable, or indeed even that mathematical notation is particularly good.
> It becomes actually less understandable as you need to parse (and remember!) much larger chunks to get the meaning. A human brain has only max. 5 seconds of working memory (most people have less). Long words use up this capacity much more quickly than short symbols. That's a fact. (Just look up who can do better mental arithmetic on average, and how this correlates with the length of word used for numbers in the native language of those people).
My recollection is that it's not a certain number of seconds of working memory but how many balls you can keep in the air, and your brain works around this limit by in effect remembering one "chunk" as opposed to its constituent parts, which is why you can basically function at all. There is no evidence whatsoever that using symbolic representations made of non-ASCII characters improves programming performance, or indeed that any part of that statement is accurate. Seeing as there are several languages in which one can use either entirely ASCII text or whatever Unicode you like, you ought to be able to back up your assertion with a citation.
I liked how Fortress would transform your ASCII file into LaTeX-renderable output. Also, the APL input mode in Emacs and many other editors offers a pretty reasonable compromise: `.i` => ι (iota), as an example. It doesn't take much effort to learn. Then there's also the option of letting people type it longhand and having the editor substitute for you: `\iota` (or maybe `\iota<tab>`) => ι. The prefix option is handier for the common symbols, but the TeX-style ones work well for less common ones.
That's more or less what the compose key does (but it works everywhere, not only in Emacs). :-)
Only that there isn't such a combination defined by default. But adding it is trivial:
<Multi_key> <i> <i> : "ι" U03B9 # GREEK SMALL LETTER IOTA
Put this line in your ~/.XCompose and you can type "ι" by pressing compose-i-i, without switching your whole keyboard layout away from the default.
I use the compose key all the time to input "special symbols"™. It's very intuitive. (For example, you get the "™" by pressing compose-t-m.)
I have the compose key on the otherwise useless CAPS-LOCK (and CAPS-LOCK toggled by pressing left and right Shift simultaneously). Both options are just a check-box away when using Linux.
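If you want math or programming symbols specifically, a couple more lines in the same style will do. (The key sequences below are just made-up examples; pick whatever mnemonics you like.)

  <Multi_key> <l> <l> : "λ" U03BB # GREEK SMALL LETTER LAMBDA
  <Multi_key> <minus> <greater> : "→" U2192 # RIGHTWARDS ARROW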
And what would be the alternative? A joystick maybe? (No, the joystick idea isn't serious, of course.)
You can already program by drag & drop. Try Scratch. But this just does not scale…
Before we give up keyboards and switch to brain interfaces, maybe we should first try to liberate programming from the monadic style, or something like that. ;-)
I'm not sold on this pen & paper idea to be honest.
It looks slow and cumbersome. It misses all the advantages of using a computer.
How would, for example, code completion, context-sensitive features, or refactoring work? How about the editing features of a capable editor like this here:
"Liberating programming form monadic style" was only a pun on the parent post. :-)
If you do FP (functional programming) in an advanced typed language you will likely end up with code written in monadic style, meaning that you wrap all (effectful) computation in some monads.
In my opinion that's in the end not really much better than the usual imperative style—and that closes the circle to the original citation: "Can programming be liberated from the von Neumann style?" (which was the title of a quite important paper).
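To make "monadic style" a bit more concrete, here is a tiny Haskell sketch (purely illustrative, not tied to any particular codebase): even a trivially effectful program is threaded through the IO monad and sequenced with do-notation.

  -- Minimal illustration of monadic style: effects live inside a monad (here IO)
  -- and are sequenced explicitly, rather than happening freely as in imperative code.
  main :: IO ()
  main = do
    line <- getLine            -- effectful read, wrapped in IO
    let n = read line :: Int   -- pure computation
    print (n * 2)              -- effectful write, wrapped in IO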
> It looks slow and cumbersome. It misses all the advantages of using a computer.
It's not a computer. It is a calculator for pen & paper.
> How would, for example, code completion, context-sensitive features, or refactoring work? How about the editing features of a capable editor like this here: [Helix]. It would be very hard, if even possible, to replicate such a user experience with "pen & paper" (even if "pen and paper" were digital).
Exactly. Keyboard-driven programming as it has evolved cannot be replicated in pen & paper form. But this goes both ways. The pen & paper modality lets you do things you can't with a keyboard, e.g. drawing and writing weird symbols like ∞∆⫸ (also outside UTF-8). A mouse is a poor substitute.
The analog pen & paper world lacks computation. What this pen & paper calculator should do well is solve Euler problems in situ, at the speed of thought. Example: https://youtu.be/y5Tpp_y2TBk?t=18
> Instead the "text" (code) should become even more interactive. [Bret Victor] & [Enso] & interactive notebooks.
120% agree. Been doing some work & research in this space for quite some time.
People presenting their own work get a lot of leeway and there's nothing in this that approaches clickbait. Having to click on a thing to learn more about what it is is not in itself clickbait.
Could you describe/explain it? Referencing some obscure language from a lecture in 1977 means nothing to me.
Edited for everyone else's convenience, from Wikipedia:
FP (short for functional programming)[2] is a programming language created by John Backus to support the function-level programming[2] paradigm. It allows building programs from a set of generally useful primitives and avoiding named variables (a style also called tacit programming or "point free"). It was heavily influenced by APL which was developed by Kenneth E. Iverson in the early 1960s.[3]
The FP language was introduced in Backus's 1977 Turing Award paper, "Can Programming Be Liberated from the von Neumann Style?", subtitled "a functional style and its algebra of programs." The paper sparked interest in functional programming research,[4] eventually leading to modern functional languages, which are largely founded on the lambda calculus paradigm, and not the function-level paradigm Backus had hoped. In his Turing award paper, Backus described how the FP style is different:
An FP system is based on the use of a fixed set of combining forms called functional forms. These, plus simple definitions, are the only means of building new functions from existing ones; they use no variables or substitution rules, and they become the operations of an associated algebra of programs. All the functions of an FP system are of one type: they map objects onto objects and always take a single argument.[2]
FP itself never found much use outside of academia.[5] In the 1980s Backus created a successor language, FL, which was an internal project at IBM Research.
The lecture in question is not obscure; it's one of the most influential papers in Computer Science and is foundational to a lot of modern PL research. It's currently sitting at 4166 citations on Google Scholar -- that's twice as many as Dijkstra's "goto considered harmful".
In general I'm all for including context, but in this case and for this audience (people on HN interested in useless programming languages) a reference to the paper is plenty.
Yes, the README contains the exact same referential description as this post. I looked at the examples; I don't know the language, and its purpose is not immediately obvious to me.
If I missed some explanation in the repo, please enlighten me.
I've already read this; it's not an appropriate description of the language. Those are available in every language I've ever worked with. What is differentiating about this language? Why is it interesting? What is the design philosophy that makes it worthwhile? Was it the first language to introduce some of these? Does it enable the user to implement things other languages can't as elegantly?
I feel like those shouldn't be very hard questions to answer. I can answer them about every language I've ever worked with, and every language I've developed.
You could tell me the answer, which isn't all that complicated, but you seem unable or unwilling to do so.
When you post a project somewhere, or generally start a discussion with people, it's generally wise to establish a minimum of context about what you are talking about and not require the participant to read a 40-year-old paper, or lecture, or so on, just to understand the basis of your premise.
Even more so online, where the effort is multiplied by the number of people who will read your post and a two-sentence description can solve it.
Honestly, if you're this opposed to exploring Backus's 1977 Turing Award lecture, you are not the target audience. This isn't some obscure paper that's been lost to time; it's been cited 96 times in 2022 alone [0]. It's a huge freaking deal in the PL community, and OP is perfectly justified in referencing it and moving on.