Besides his contribution to language design, he authored one of the best puns ever. His last name is properly pronounced something like "Virt" but in the US everyone calls him by "Worth".
That led him to quip, "In Europe I'm called by name, but in the US I'm called by value."
The joke goes back to Adriaan van Wijngaarden introducing Wirth at a conference in the 1960s. I'd love to see a video of the audience reaction to that one.
Today I learned (from the Wikiquote page) what an obviously socially witty person he seems to have been!
> Finally a short story for the record. In 1968, the Communications of the ACM published a text of mine under the title "The goto statement considered harmful", which in later years would be most frequently referenced, regrettably, however, often by authors who had seen no more of it than its title, which became a cornerstone of my fame by becoming a template: we would see all sorts of articles under the title "X considered harmful" for almost any X, including one titled "Dijkstra considered harmful". But what had happened? I had submitted a paper under the title "A case against the goto statement", which, in order to speed up its publication, the editor had changed into a "letter to the Editor", and in the process he had given it a new title of his own invention! The editor was Niklaus Wirth.
It is refreshing to see the old-fashioned trope of the genius computer scientist / software engineer as a "foreigner to the world" being contested again and again by stories like this.
Of course people like Niklaus Wirth are exceptional in many ways, so it might be that the trope has/had some grain of truth that just does not correlate with the success of said person :)
And of course people might want to argue about the differences between SE, CS and economics.
After all that rambling... RIP and thank you Niklaus!
The joke really only works if you use his first name! The complete joke is that "by value" means pronouncing first and last name to sound like "Nickles Worth".
It reminds me of my only meeting with Andy Tanenbaum / AAT [0], one of the smartest, nicest computer science guys I've ever met in my life. I can't recall the many puns and jokes he shared, but it was just incredible.
The linked tweet says "Whereas Europeans generally pronounce my name the right way ('Ni-klows Wirt'), Americans invariably mangle it into 'Nick-les Worth'. This is to say that Europeans call me by name, but Americans call me by value."
Okay, either I or the ref. may have been wrong, but I distinctly remember "Veert", because it was non-intuitive to me as the way to pronounce Wirth, being a guy who didn't know any German at the time, only English. So the ref., probably.
It's not really wrong. There are English accents (such as Received Pronunciation) where an "ee" before an "r" is normally pronounced with an [ɪ] like in "wit". In any case, even if you pronounce the "ee" as something else like [i], "Veert" is probably still the sequence of letters that maximises the likelihood that an English speaker will understand by it something close to the true German pronunciation ([vɪʁt] or [vɪɐt]). "Virt", for example, would be read by most people as [vɜrt] (rhyming with "hurt") which to my ear is further off from the correct pronunciation compared to something like [viət].
"Veert" is correct, in the sense that it's how a German would pronounce it. Of course, the great man wasn't German; I don't know how he pronounced his own surname.
"Wit" is just wrong. Perhaps that was a joke that I missed about the man's humour.
He was Swiss, more exactly from the city of Winterthur located in the canton (state) of Zürich. The canton's official language is German, however. Of course, people over there speak in a strong local dialect called "Züritüütsch".
You just elongate the vowel, i.e. pronounce it longer. The double „ü“ just indicates that this vowel is stressed. Dialects do not follow a strict orthography, however, so you might find it written slightly differently in other contexts.
Wirth lived in the United States for some time during his life but was a Zürich native. He must have spoken Züritüütsch („Zurich German“) privately, I am pretty sure (without having known him personally).
Umlauts aren't diphthongs; it's the same sound all the way through. GP used two consecutive ones in order to show that the sound is long. (And whaddoino, if the dialect has an official orthography, maybe that's how it's supposed to be spelled.)
We have both, and I'd tend to pronounce "Wirth" similar to "wit" as far as the "i" goes. It's not always clear just from looking at the letter. But some words have explicit cues: There are "stretching consonants" like a-a, a-h, e-e, e-h, i-e, i-h, etc: Aal, Kahn, dehnen, dienen, sühnen, etc. And sometimes the following consonant gets doubled up to indicate a shorter pronunciation, like in "Bitte".
The "i" sound in "wit" does exist in German and is what is normally indicated by "i" on its own. The long "ee" sound is normally spelt as "ie" in German.
And what's the difference? AFAICT it's pretty much exactly the same sound, except in one case it's longer, in the other shorter. Say "bit"... Then say it again, only looonger... And you get "beet". Say "wit", but longer, and you get "wheat".
Besides all his innumerable accomplishments he was also a hero to Joe Armstrong and a big influence on his brand of simplicity.
Joe would often quote Wirth as saying that yes, overlapping windows might be better than tiled ones, but not better enough to justify their cost in implementation complexity.
RIP. He is also a hero for me for his 80th birthday symposium at ETH where he showed off his new port of Oberon to a homebrew CPU running on a random FPGA dev board with USB peripherals. My ambition is to be that kind of 80 year old one day, too.
Wirth was such a legend on this particular aspect. His stance on compiler optimizations is another example: only add optimization passes if they improve the compiler's self-compilation time.
Oberon also (and deliberately) only supported cooperative multitasking.
>His stance on compiler optimizations is another example: only add optimization passes if they improve the compiler's self-compilation time.
What an elegant metric! Condensing a multivariate optimisation between compiler execution speed and compiler codebase complexity into a single self-contained meta-metric is (aptly) pleasingly simple.
I'd be interested to know how the self-build times of other compilers have changed by release (obviously pretty safe to say, generally increasing).
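For concreteness, here's a rough Python sketch of that acceptance rule; the compiler CLI and paths are hypothetical stand-ins, not Wirth's actual tooling:

    import subprocess
    import time

    def self_build_seconds(compiler: str, compiler_sources: str) -> float:
        """Time a self-build: the compiler compiling its own source tree (hypothetical CLI)."""
        start = time.perf_counter()
        subprocess.run([compiler, compiler_sources], check=True)
        return time.perf_counter() - start

    def keep_new_pass(old_compiler: str, new_compiler: str, sources: str) -> bool:
        # The new pass pays for itself only if the compiler that both contains
        # and is subjected to the pass rebuilds itself faster than before.
        return self_build_seconds(new_compiler, sources) < self_build_seconds(old_compiler, sources)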
Life was different in the '80s. Oberon targeted the NS32000, which didn't have a floating point unit, let alone most of the other modern niceties that could lead to a large difference between CPU features used by the compiler itself and CPU features used by other programs written using the compiler.
That said, even if the exact heuristic Wirth used is no longer tenable, there's still a lot of wisdom in the pragmatic way of thinking that inspired it.
Speaking of that, if you were ever curious how computers do floating point math, I think the first Oberon book explains it in a couple of pages. It’s very succinct and, for me, one of the clearest explanations I’ve found.
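Not the book's treatment, but for anyone who wants to poke at the representation directly, here's a small Python sketch that splits an IEEE-754 double into its sign, exponent, and fraction fields:

    import struct

    def decompose(x: float):
        """Split an IEEE-754 double into its sign, exponent, and fraction fields."""
        bits = struct.unpack(">Q", struct.pack(">d", x))[0]
        sign = bits >> 63
        exponent = (bits >> 52) & 0x7FF            # stored with a bias of 1023
        fraction = bits & ((1 << 52) - 1)
        # for normal numbers: value = (-1)**sign * 2**(exponent - 1023) * (1 + fraction / 2**52)
        return sign, exponent, fraction

    print(decompose(1.5))   # (0, 1023, 2251799813685248), i.e. +1 * 2**0 * 1.5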
Rewrite the compiler to use an LLM for compilation. I'm only half joking! The biggest remaining technical problem is the context length, which is severely limiting the input size right now. Also, the required humongous model size.
That’s not a simple fix in this context. Try making it without slowing down the compiler.
You could try to game the system by combining such a change that slows down compilation with one that compensates for it, though, but I think code reviewers of the time wouldn’t accept that.
His stance should be adopted by all language authors and designers, but apparently it's not. The older generation of programming language gurus like Wirth and Hoare are religiously focused on simplicity, hence they never take compilation time for granted, unlike most popular modern language authors. C++, Scala, Julia and Rust are all behemoths in terms of language-design complexity and hence have very slow compilation times. Popular modern languages like Go and D are a breath of fresh air with their lightning-fast compilation, due to the inherent simplicity of their design. This is to be expected, since Go is just a modern version of Modula and Oberon, and D was designed by a former aircraft engineer, a field where simplicity is mandatory, not optional.
You cannot add a loop skew optimization to the compiler before the compiler itself needs a loop skew optimization. Which it never will, because it is loop skew optimization (which requires matrix operations) that would itself need a loop skew optimization.
In short, the compiler is not an ideal representation of the user programs it needs to optimize.
Perhaps Wirth would say that compilers are _close enough_ to user programs to be a decent enough representation in most cases. And of course he was sensible enough to also recognize that there are special cases, like matrix operations, where it might be wirthwhile.
EDIT: typo in the last word but I'm leaving it in for obvious reasons.
Wirth ran an OS research lab. For that, the compiler likely is a fairly typical workload.
But yes, it wouldn’t work well in a general context. For example, auto-vectorization likely doesn’t speed up a compiler much at all, while adding the code to detect where it’s possible will slow it down.
So, that feature never can be added.
On the other hand, it may lead to better designs. If, instead, you add language features that make it easier for programmers to write vectorized code, that might end up being better for them. They would have to write more code, but they also would have to guess less about whether their code would end up being vectorized.
perhaps you could write the compiler using the data structures used by co-dfns (which i still don't understand) so that vectorization would speed it up, auto- or otherwise
It hasn't won. Threads are alive and well and I rather expect async has probably already peaked and is back on track to be a niche that stays with us forever, but a niche nevertheless.
Your opinion vs. my opinion, obviously. But the user reports of the experience in Rust is hardly even close to unanimous praise and I still say it's a mistake to sit down with an empty Rust program and immediately reach for "async" without considering whether you actually need it. Even in the network world, juggling hundreds of thousands of simultaneous tasks is the exception rather than the rule.
Moreover, cooperative multitasking was given up at the OS level for good and sufficient reasons that I see no evidence that the current thrust in that direction has solved. As you scale up, the odds of something jamming your cooperative loop monotonically increase. At best we've increased the scaling factors, and even that just may be an effect of faster computers rather than better solutions.
in the 02000s there was a lot of interest in software transactional memory as a programming interface that gives you the latency and throughput of preemptive multithreading with locks but the convenient programming interface of cooperative multitasking; in haskell it's still supported and performs well, but it has been largely abandoned in contexts like c#, because it kind of wants to own the whole world. it's difficult to add incrementally to a threads-and-locks program
i suspect that this will end up being the paradigm that wins out, even though it isn't popular today
I was considering making a startup out of my simple C++ STM[0], but the fact that, as you point out, the transactional paradigm is viral and can't be added incrementally to existing lock-based programs was enough to dissuade me.
nice! when was this? what systems did you build in it? what implementation did you use? i've been trying to understand fraser's work so i can apply it to a small embedded system, where existing lock-based programs aren't a consideration
It grew out of an in-memory MVCC DB I was building at my previous job. After the company folded I worked on it on my own time for a couple months, implementing some perf ideas I had never had time to work on, and when update transactions were <1us latency I realized it was fast enough to be an STM. I haven't finished implementing the STM API described on the site, though, so it's not available for download at this point. I'm not sure when I'll have time to work on it again, since I ran out of savings and am going back to full-time employment. Hopefully I'll have enough savings in a year or two that I can take some time off again to work on it.
that's exciting! i just learned about hitchhiker trees (and fractal tree indexes, blsm trees, buffer trees, etc.) this weekend, and i'm really excited about the possibility of using them for mvcc. i have no idea how i didn't find out about them 15 years ago!
Sounds nifty. Did this take advantage of those Intel (maybe others?) STM opcodes? For a while I was stoked on CL-STMX, which did (as well as implementing a non-native version of the same interface).
No, not at all. I'm pretty familiar with the STM literature by this point, but I basically just took the DB I'd already developed and slapped an STM API on top. Given that it can do 4.8M update TPS on a single thread, it's plenty fast enough already (although scalability isn't quite there yet; I have plenty of ideas on how to fix that but no time to implement them).
Since I've given up on monetizing this project, I may as well just link to its current state (which is very rough, the STM API described in the website is only partly implemented, and there's lots of cruft from its previous life that I haven't ripped out yet). Note that this is a fork of the previous (now MIT-licensed) Gaia programming platform (https://gaia-platform.github.io/gaia-platform-docs.io/index....).
The version of this code previously released under the Gaia programming platform is here: https://github.com/gaia-platform/GaiaPlatform/blob/main/prod.... (Note that this predates my removal of IPC from the transaction critical path, so it's about 100x slower.) A design doc from the very beginning of my work on the project that explains the client-server protocol is here (but completely outdated; IPC is no longer used for anything but session open and failure detection): https://github.com/gaia-platform/GaiaPlatform/blob/main/prod....
Meanwhile, in JS/ECMAScript land, async/await is used everywhere and it simplifies a lot of things. I've also used the construct in Rust, where I found it difficult to get the type signatures right, but in at least one other language, async/await is quite helpful.
Await is simply syntactic sugar on top of what everybody was forced to do already (callbacks and promises) for concurrency. As a programming model, threads simply never had a chance in the JS ecosystem because on the surface it has always been a single-threaded environment. There's too much code that would be impossible to port to a multithreaded world.
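To make the "sugar" concrete, here is a hedged sketch in Python's asyncio (the JavaScript story is analogous); fetch_user_id is a made-up stand-in for a network call:

    import asyncio

    async def fetch_user_id() -> int:
        await asyncio.sleep(0.1)          # stand-in for a network round trip
        return 42

    # Callback style: the rest of the computation has to move into a callback.
    async def main_with_callbacks() -> None:
        task = asyncio.ensure_future(fetch_user_id())
        task.add_done_callback(lambda t: print("user id:", t.result()))
        await asyncio.sleep(0.2)          # keep the loop alive until the callback fires

    # await style: the same continuation written as ordinary sequential code.
    async def main_with_await() -> None:
        user_id = await fetch_user_id()
        print("user id:", user_id)

    asyncio.run(main_with_callbacks())
    asyncio.run(main_with_await())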
Mostly won for CRUD apps (yes and a few others). Your DAW, your photo editor, your NLE, your chatbot girlfriend, your game, your CAD, etc might actually want to use more than one core effectively per task.
Even go had to grow up eventually.
A core problem is that it's now clear most apps have hundreds or thousands of little tasks going, increasingly bound by network, IO, and similar. Async gives nice semantics for implementing cooperative multitasking, without introducing nearly as many thread coherency issues as preemptive.
I can do things atomically. Yay! Code literally cooperates better. I don't have the messy semantics of a Windows 3.1 event loop. I suspect it will take over more and more into all walks of code.
Other models are better for either:
- Highly parallel compute-bound code (where SIMD/MIMD/CUDA-style models are king)
- Highly independent code, such as separate apps, where there are no issues around cooperation. Here, putting each task on a core, and then preemptive, obviously wins.
What's interesting is all three are widely used on my system. My tongue-in-cheek comment about cooperative multitasking winning was only a little bit wrong. It didn't quite win in the sense of taking over other models, but it's in widespread use now. If code needs to cooperate, async sure beats semaphores, mutexes, and all that jazz.
Async programming is not an alternative to semaphores and mutexes. It is an alternative to having more threads. The substantial drawback of async programming in most implementations is that stack traces and debuggers become almost useless; at least very hard to use productively.
Indeed; however, the experience with crashes and security exploits has proven that scaling processes, or even distributing them across several machines, scales much better than threads.
In the last 15 to 20 years asynchronous programming --- as a form of cooperative multi-tasking [1] --- did gain lots of popularity. That was mainly because of non-scalable thread implementations in most language runtimes, e.g. the JVM. At the same time the JS ecosystem needed to have some support for concurrency. Since threads weren't even an option, the community settled first on callback hell and then on async/await. The former reason for asynchronous programming's alleged win is currently being reversed: the JVM has introduced lightweight threads that have the low runtime cost of asynchronous programming and all the niceties of thread-based concurrency.
[1]: Asynchronous programming is not the only form of cooperative multi-tasking. Usually cooperative multi-tasking systems have a special system call, yield(), which gives up the processor in addition to I/O-induced context switches.
In .NET and C++, asynchronous programming is not cooperative: it hides the machinery of a state machine mapping tasks onto threads, it gets preempted, and you can write your own scheduler.
But isn't the separation of the control flow into chunks, either separated by async/await or by the separation between call and callback, a form of cooperative thread yielding on top of preemptive threads? If that isn't true for .NET, then I'd be really interested to understand what else it is doing.
async/await has the advantage over cooperative multitasking that it has subroutines of different 'colors', so you don't accidentally introduce concurrency bugs by calling a function that can yield without knowing that it can yield
i think it's safe to say that the number of personal computers running operating systems without preemptive multitasking is now vanishingly small
as i remember it, oberon didn't support either async/await or cooperative multitasking. rather, the operating system used an event loop, like a web page before the introduction of web workers. you couldn't suspend a task; you could only schedule more work for later
The key thing about 2023-era asynchronous versus 1995-era cooperative multitasking is code readability and conciseness.
Under the hood, I'm expressing the same thing, but Windows 3.1 code was not fun to write. Python / JavaScript, once you wrap your head around it, is. The new semantics are very readable, and rapidly improving too. The old ones were impossible to make readable.
You could argue that it's just syntactic sugar, but it's bloody important syntactic sugar.
I never left 1991 and I haven't seen anything that has made me consider leaving ConcurrentML except for the actor model, but that is so old the documentation is written on parchment.
> You could argue that it's just syntactic sugar, but it's bloody important syntactic sugar.
Yes, of course you could, since everything beyond, uh, paper tape, next-state table, and current pen-position (or whatever other pieces there are in a theoretical Turing machine) is basically syntactic sugar. Or, IOW, all programming languages higher than assembly are nothing but syntactic sugar. I like syntactic sugar.
(But OTOH, I'm a diabetic. Gotta watch out for that sugar.)
Exactly. The way I think about it, the "async" keyword transforms function code so that local variables are no longer bound to the stack, making it possible to pause function execution (using "await") and resume it at an arbitrary time. Performing that transformation manually is a fair amount of work and it's prone to errors, but that's what we did when we wrote cooperatively multitasked code.
Sure, that's a good way to look at it. Another way to look at it: because the process of transforming code for cooperative multitasking is now much cleaner and simpler, it's fine to use new words to describe what to do and how to do it.
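As a rough illustration of that transformation (using a plain generator for the "automatic" half, since it's the same suspension machinery that async builds on), here is the same two-step task written by hand as a state machine, and again as a function whose locals survive suspension:

    # By hand: locals become fields, and each suspension point becomes a state.
    class AddTwoNumbersTask:
        def __init__(self):
            self.state = 0
            self.a = None

        def resume(self, value=None):
            if self.state == 0:
                self.state = 1
                return "need_input"              # suspended, waiting for the first number
            if self.state == 1:
                self.a = value                   # a "local" kept alive across suspension
                self.state = 2
                return "need_input"              # suspended again
            if self.state == 2:
                return ("done", self.a + value)

    # With a generator the runtime does the bookkeeping: `a` just stays in scope.
    def add_two_numbers():
        a = yield "need_input"
        b = yield "need_input"
        return a + b

    task = AddTwoNumbersTask()
    print(task.resume(), task.resume(3), task.resume(4))     # need_input need_input ('done', 7)

    gen = add_two_numbers()
    print(next(gen), gen.send(3))                            # need_input need_input
    try:
        gen.send(4)
    except StopIteration as stop:
        print(("done", stop.value))                          # ('done', 7)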
cooperative multitasking, as i use the term, keeps you from having to transform your code. it maintains a separate stack per task, just like preemptive multitasking. so async/await isn't cooperative multitasking, though it can achieve similar goals
possibly you are using the terms in subtly different ways so it appears that we disagree when we do not
that definition is different from the definition i'm using; it covers both what i'm calling 'cooperative multitasking' and things like async/await, the npm event handler model, and python/clu iterators
> In programming, a task is simply an independent execution path. On a computer, the system software can handle multiple tasks, which may be applications or even smaller units of execution. For example, the system may execute multiple applications, and each application may have independently executing tasks within it. Each such task has its own stack and register set.
> Multitasking may be either cooperative or preemptive. Cooperative multitasking requires that each task voluntarily give up control so that other tasks can execute. (...)
> The Mac OS 8 operating system implements cooperative multitasking between applications. The Process Manager can keep track of the actions of several applications. However, each application must voluntarily yield its processor time in order for another application to gain it. An application does so by calling WaitNextEvent, which cedes control of the processor until an event occurs that requires the application’s attention.
that is, this requirement that each task have its own stack is not just something i made up; it's been part of common usage for decades, at least in some communities. the particular relevant distinction here is that, because each task has its own stack (or equivalent in something like scheme), multitasking doesn't require restructuring your code, because calling a normal function can yield the cpu. in the specific case of macos this was necessary so that switcher/multifinder/process-manager could multitask mac apps written for previous versions of macos that didn't have multitasking
thanks! but here we were discussing specifically the distinction between the approaches to concurrency that require you to explicitly structure your code around yield points, like async/await, and the kinds that don't, like preemptive multitasking and what i'm calling cooperative multitasking. this is unnecessarily difficult to discuss coherently if you insist on applying the term 'cooperative multitasking' indiscriminately to both, which i've shown above is in violation of established usage, and refusing to suggest an alternative term
i'll see if i can flesh out the wikipedia article a bit
Where did I mix preemptive and cooperative multitasking?
And why do you think that in the case of an explicit event loop you don't have to yield? You do have to, and have to sort out some way to continue on your own. Which makes the new 'syntactic sugar' approaches much easier of course. Doesn't mean the principle isn't the same and they don't deserve the same name.
if the implied contrast is with cooperative multitasking, it's exactly the opposite: they're there to expose the event loop in a way you can't ignore. if the implied contrast is with setTimeout(() => { ... }, 0) then yes, pretty much, although the difference is fairly small—implicit variable capture by the closure does most of the same hiding that await does
Not asking about old JavaScript vs new JavaScript. Asking about explicit event loop vs hidden event loop with fancy names like timeout, async, await...
do you mean the kind of explicit loop where you write
for (;;) {
int r = GetMessage(&msg, NULL, 0, 0);
if (!r) break;
if (r == -1) croak();
TranslateMessage(&msg);
DispatchMessage(&msg);
}
or, in yeso,
for (;;) {
yw_wait(w, 0);
for (yw_event *ev; (ev = yw_get_event(w));) handle_event(ev);
redraw(w);
}
async/await doesn't always hide the event loop in that sense; python asyncio, for example, has a lot of ways to invoke the event loop or parts of it explicitly, which is often necessary for integration with software not written with asyncio in mind. i used to maintain an asyncio cubesat csp protocol stack where we had to do this
to some extent, though, this vitiates the concurrency guarantees you can otherwise get out of async/await. software maintainability comes from knowing that certain things are impossible, and pure async/await can make concurrency guarantees which disappear when a non-async function can invoke the event loop in this way. so i would argue that it goes further than just hiding the event loop. it's like saying that garbage collection is about hiding memory addresses: sort of true, but false in an important sense
What worries me is we may have a whole generation who doesn't know about the code you posted above and thinks it's magic or worse, real multiprocessing.
(To set the tone clearly - this seems like an area where you know a _lot_ more than me, so any questions or "challenges" below should be considered as "I am probably misunderstanding this thing - if you have the time and inclination, I'd really appreciate an explanation of what I'm missing" rather than "you are wrong and I am right")
I don't know if you're intentionally using "colour" to reference https://journal.stuffwithstuff.com/2015/02/01/what-color-is-... ? Cooperative multitasking (which I'd never heard of before) seems from its Wikipedia page to be primarily concerned with Operating System-level operations, whereas that article deals with programming language-level design. Or perhaps they are not distinct from one another in your perspective?
I ask because I've found `async/await` to just be an irritating overhead; a hoop you need to jump through in order to achieve what you clearly wanted to do all along. You write (pseudocode) `var foo = myFunction()`, and (depending on your language of choice) you either get a compilation or a runtime error reminding you that what you really meant was `var foo = await myFunction()`. By contrast, a design where every function is synchronous (which, I'd guess, more closely matches most people intuition) can implement async behaviour when (rarely) desired by explicitly passing function invocations to an Executor (e.g. https://www.digitalocean.com/community/tutorials/how-to-use-...). I'd be curious to hear what advantages I'm missing out on! Is it that async behaviour is desired more-often in other problem areas I don't work in, or that there's some efficiency provided by async/await that Executors cannot provide, or something else?
> I ask because I've found `async/await` to just be an irritating overhead
Then what you want are coroutines[1], which are strictly more flexible than async/await. Languages like Lua and Squirrel have coroutines. I and plenty of other people think it's tragic that Python and JavaScript added async/await instead, but I assume the reason wasn't to make them easier to reason about, but rather to make them easier to implement without hacks in existing language interpreters not designed around them. Though Stackless Python is a CPython fork that adds real coroutines, also available as the greenlet module in standard CPython [2]; amazing that it works.
[1] Real coroutines, not what Python calls "coroutines with async syntax". See also nearby comment about coroutines vs coop multitasking https://news.ycombinator.com/item?id=38859828
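For the record, a minimal sketch with the greenlet module mentioned above of what stackful, uncoloured coroutines look like: any plain function can switch, no await required (assumes `pip install greenlet`):

    from greenlet import greenlet

    def ping():
        for _ in range(3):
            print("ping")
            pong_glet.switch()    # hand control to the other coroutine, keeping this stack alive

    def pong():
        for _ in range(3):
            print("pong")
            ping_glet.switch()

    ping_glet = greenlet(ping)
    pong_glet = greenlet(pong)
    ping_glet.switch()            # prints ping/pong alternately, three times each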
We used coroutines in our interrupt-rich environment in our real-time medical application way back when. This was all in assembly language, and the coroutines vastly reduced our multithreading errors to effectively zero. This is one place where C, claimed to be close to the machine, falls down.
well some of the things i know are true but i don't know which ones those are; i'll tell you the things i know and hopefully you can figure out what's really true
yes! i'm referencing that specific rant. except that what munificent sees as a disadvantage i see as an advantage
there's a lot of flexibility in systems design to move things between operating systems and programming languages. dan ingalls in 01981 takes an extreme position in 'design principles behind smalltalk' https://www.cs.virginia.edu/~evans/cs655/readings/smalltalk....
> An operating system is a collection of things that don't fit into a language. There shouldn't be one.
in the other direction, tymshare and key logic's operating system 'keykos' was largely designed, norm hardy said, with concepts from sigplan, the acm sig on programming languages, rather than sigsosp
sometimes irritating overhead hoops you need to jump through have the advantage of making your code easier to debug later. this is (i would argue, munificent would disagree) one of those times, and i'll explain the argument why below
in `var foo = await my_function()` usually if my_function is async that's because it can't compute foo immediately; the reasons in the examples in the tutorial you linked are making web requests (where you don't know the response code until the remote server sends it) and reading data from files (where you may have to wait on a disk or a networked fileserver). if all your functions are synchronous, you don't have threads, and you can't afford to tie up your entire program (or computer) waiting on the result, you have to do something like changing my_function to return a promise, and putting the code below the line `var foo = await my_function()` into a separate subroutine, probably a nested closure, which you pass to the promise's `then` method. this means you can't use structured control flow like statement sequencing and while loops to go through a series of such steps, the way you can with threads or async
so what if you use threads? the example you linked says to use threads! i think it's a widely accepted opinion now (though certainly not universal) that shared-mutable-memory threading is the wrong default, because race conditions in multithreaded programs with implicitly shared mutable memory are hard to detect and prevent, and also hard to debug. you need some kind of synchronization between the threads, and if you use semaphores or locks like most people do, you also get deadlocks, which are hard to prevent or statically detect but easy to debug once they happen
async/await guarantees you won't have deadlocks (because you don't have locks) and also makes race conditions much rarer and relatively easy to detect and prevent. mark s. miller, one of the main designers of recent versions of ecmascript, wrote his doctoral dissertation largely about this in 02006 http://www.erights.org/talks/thesis/index.html after several years working on an earlier programming language called e based on promises like the ones he later added to js; but i have to admit that, while i've read a lot of his previous work, i haven't read his dissertation yet
cooperative multitasking is in an in-between place; it often doesn't use locks and makes race conditions somewhat rarer and easier to detect and prevent than preemptive multitasking, because most functions you call are guaranteed not to yield control to another thread. you just have to remember which ones those are, and sometimes it changes even though your code didn't change
(in oberon, at least the versions i've read about, there was no way to yield control. you just had to finish executing and return, like in js in a web page before web workers, as i think i said upthread)
that's why i think it's better to have colored functions even though it sometimes requires annoying hoop-jumping
You will get them in .NET and C++, because they map to real threads being shared across tasks.
There is even a FAQ maintained by the .NET team regarding gotchas like not calling ConfigureAwait with the right thread context in some cases where it needs to be explicitly configured, like a task moving between foreground and background threads.
(it arguably needs to be updated, so that people stop writing single line 'return await' methods which waste performance for no reason (thankfully some analyzers do flag this))
Also (AFAIK) not in JavaScript. An essential property of cooperative multitasking is that you can say “if you feel like it, pause me and run some other code for a while now” to the OS.
Async only allows you to say “run foo now until it has data” to the JavaScript runtime.
IMO, async/await in JavaScript are more like one shot coroutines, not cooperative multitasking.
The quick answer is that coroutines are often used to implement cooperative multitasking because it is a very natural fit, but it's a more general idea than that specific implementation strategy.
interesting, i would have said the relationship is the other way around: cooperative multitasking implies that you have separate stacks that you're switching between, and coroutines are a more general idea which includes cooperative multitasking (as in lua) and things that aren't cooperative multitasking (as in rust and python) because the program's execution state isn't divided into distinct tasks
Yeah thinking about it more I didn’t intend to imply a subset relationship. Coroutines are not only used to implement cooperative multitasking, for sure.
well, i mean, lua's 'coroutines' are full tasks with their own stacks, unlike, say, python's 'coroutines'. so arguably it isn't that one can be used to implement the other; it's that they're two names for the same thing
lua's coroutines aren't automatically scheduled (there isn't a built-in run queue) but explicitly resumed, which is a difference from the usual cooperative-multitasking systems; arguably on that basis you could claim that they aren't quite 'cooperative multitasking' on their own
the last time i implemented a simple round-robin scheduler for cooperative multitasking was in july, as an exercise, and it was in arm assembly language rather than lua. it was 32 machine instructions and 64 lines of code (http://canonical.org/~kragen/sw/dev3/monokokko.S), plus 14 lines of example code to run in the threads. when i went to go look at that just now i was hoping to come up with some kind of crisp statement about the relative importance or complexity of the stack-switching functionality and the run-queue maintenance facility, but in fact there isn't a clear separation between them, and that version of the code creates all the tasks at assembly time instead of runtime. a more flexible version with start, spawn, yield, and exit calls, which respects the eabi so you can write your task code in c (http://canonical.org/~kragen/sw/dev3/einkornix.h et seq.), is 53 lines of assembly and 34 machine instructions, but similarly has no real separation of the two concerns
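For comparison, here is a toy run-queue sketch in Python. It uses generators, which (per the distinction being drawn in this thread) suspend only at the top level rather than having their own stacks, so it shows the scheduling half of the picture, not the stack-switching half:

    from collections import deque

    def run(tasks):
        """Round-robin: run each task until it voluntarily yields, then requeue it."""
        ready = deque(tasks)
        while ready:
            task = ready.popleft()
            try:
                next(task)            # resume until the next cooperative yield point
            except StopIteration:
                continue              # task finished; drop it from the queue
            ready.append(task)

    def worker(name, steps):
        for i in range(steps):
            print(f"{name}: step {i}")
            yield                     # the explicit, voluntary yield

    run([worker("a", 2), worker("b", 3)])
    # a: step 0, b: step 0, a: step 1, b: step 1, b: step 2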
> arguably on that basis you could claim that they aren't quite 'cooperative multitasking' on their own
Right, I think this is where I am coming from. Generators, for example, can be implemented via coroutines, but I would not call a generator "cooperative multitasking."
That's very cool! Yeah, I have never done this myself, but in my understanding implementations in assembly can be very small.
> when i went to go look at that just now i was hoping to come up with some kind of crisp statement about the relative importance or complexity of the stack-switching functionality and the run-queue maintenance facility, but in fact there isn't a clear separation between them
That's fair, but I don't think that's the final say here, as you were building a system for cooperative multitasking explicitly, with no reason to try and separate the concerns. When a system is very simple, there's much less reason for separation.
Actually, this makes me realize why I probably have this bias for thinking of them separately: async/await in Rust. The syntax purely creates a generator, it is totally inert. You have to bring along your own executor (which contains a scheduler among other things). Separating the two cleanly was an explicit design goal.
while python-style generators aren't cooperative multitasking (by the usual definition in which cooperative multitasking maintains a separate stack for each task), they can be implemented using cooperative multitasking, which is (arguably!) what happens if you use lua coroutines to implement generators
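A hedged sketch of that direction, with greenlet standing in for Lua-style stackful coroutines: a generator-like iterator whose body can emit values from any call depth, which a plain Python generator cannot do:

    from greenlet import greenlet

    class Gen:
        """Iterator built on a stackful coroutine; `emit` works at any call depth."""
        def __init__(self, body):
            self._caller = None
            self._worker = greenlet(lambda: body(self._emit))

        def _emit(self, value):
            self._caller.switch(value)        # hand the value back to the consumer

        def __iter__(self):
            return self

        def __next__(self):
            self._caller = greenlet.getcurrent()
            value = self._worker.switch()     # resume the body where it left off
            if self._worker.dead:
                raise StopIteration
            return value

    def deep_count(emit):
        def helper(n):
            emit(n)                           # emitting from a nested call
        for i in (1, 2, 3):
            helper(i)

    print(list(Gen(deep_count)))              # [1, 2, 3]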
it certainly isn't the final say! it's just an analysis of how my own code turned out, not any kind of universal lesson
the implementation in monokokko, which reserves the r10 register to always point to the currently running task, is five instructions
.thumb_func
yield: push {r4-r9, r11, lr} @ save all callee-saved regs except r10
str sp, [r10], #4 @ save stack pointer in current task
ldr r10, [r10] @ load pointer to next task
ldr sp, [r10] @ switch to next task's stack
pop {r4-r9, r11, pc} @ return into yielded context there
interestingly, what you say of rust's generators is also sort of true of monokokko
> The syntax purely creates a generator, it is totally inert. You have to bring along your own executor (which contains a scheduler among other things).
the above five instructions, or arguably just ldr r10, [r10], is the executor. the in-memory task object consists of the saved stack pointer, the link to the following task, and then whatever variables you have in thread-local storage. but from a different point of view you could say that the in-memory task object consists of the saved stack pointer, a pointer to executor-specific status information (which for this executor is the following task, or conceptually the linked list of all tasks), and then other thread-local variables. i think the difference if you were to implement this same executor with rust generators is just that you probably wouldn't make the linked list of all tasks an intrusive list?
I'm gonna have to mull over "implement generators using cooperative multitasking" a bit :)
> i think the difference if you were to implement this same executor with rust generators is just that you probably wouldn't make the linked list of all tasks an intrusive list?
You still could, and IIRC tokio uses an intrusive linked list to keep track of tasks. There's no specific requirements for how you keep track of tasks, or even a standardized API for executors, which is why you'll hear people talk about why they want "to be generic over runtimes" and similar.
That's fascinating. I'd imagine there are actually two equilibria/stable states possible under this rule: a small codebase with only the most effective optimization passes, or a large codebase that incorporates pretty much any optimization pass.
A marginally useful optimization pass would not pull its weight when added to the first code base, but could in the second code base because it would optimize the run time spent on all the other marginal optimizations.
Though the compiler would start out closer to the small equilibrium in its initial version, and there might not be a way to incrementally move towards the large equilibrium from there under Wirth's rule.
The author cited, Michael Franz, was one of Wirth's PhD students, so what he relates is an oral communication from Wirth that may very well never have been put in writing. It does seem entirely consistent with his overall philosophy.
Wirth also had no compunction about changing the syntax of his languages if it made the compiler simpler. Modula-2 originally allowed undeclared forward references within the same file. When his implementation moved from the original multi pass compilers (e.g. Logitech's compiler had 5 passes: http://www.edm2.com/index.php/Logitech_Modula-2) to a single pass compiler http://sysecol2.ethz.ch/RAMSES/MacMETH.html he simply started requiring that forward references had to be declared (as they used to be in Pascal).
I suspect that Wirth not being particularly considerate of the installed base of his languages, and not very cooperative about participating in standardization efforts (possibly due to burn out from his participation in the Algol 68 process) accounts for the ultimately limited commercial success of Modula-2 & Oberon, and possibly for the decline of Pascal.
> ... his 80th birthday symposium at ETH where he showed off his new port of Oberon to a homebrew CPU running on a random FPGA dev board with USB peripherals.
I think I last watched it during the pandemic and was inspired to pick up reading more about Oberon. A demonstration / talk like that is so much better when the audience are rooting for the presenter to do well.
I first wrote it as "worthwhile", but then the pun practically fell out of the screen at me.
I love Wirth's work, and not just his languages. Also his stuff like algorithms + data = programs, and stepwise refinement. Like many others here, Pascal was one of my early languages, and I still love it, in the form of Delphi and Free Pascal.
RIP, guruji.
Edited to say guruji instead of guru, because the ji suffix is an honorific in Hindi, although guru is already respectful.
I'm a former student of his. He was one of the people that made me from a teenager that hacked on his keyboard to get something to run to a seasoned programmer that thinks before he codes.
Even before I met him at the university I was programming in Oberon because there was a big crowd of programmers doing Wirth languages on the Amiga.
I'm also a student of his, and later met him socially on a few occasions as a graduate student (in a different institute).
Undergraduate students were all in awe of him, but I got the impression that he did not particularly enjoy teaching them (Unlike other professors, however, he did not try to delegate that part of his responsibilities to his assistants). He seemed to have a good relationship with his graduate students.
In his class on compiler construction, he seemed more engaged (the students were already a bit more experienced, and he was iterating the Oberon design at the time). I remember an exchange we had at the oral exam — he asked me to solve the "dangling ELSE" problem in Pascal. I proposed resolving the ambiguity through a refinement of the language grammar. He admitted that this would probably work, but thought it excessively complex and wondered where I got that idea, since he definitely had not taught it, so I confessed that I had seen the idea in the "Dragon Book" (sort of the competition to his own textbook). Ultimately, I realized that he just wanted me to change the language to require an explicit END, as he had done in Modula-2 and Oberon.
Socially, he was fun to talk to, had a great store of computer lore, of course. He was also much more tolerant of "heresies" in private than in public, where he came across as somewhat dogmatic. Once, the conversation turned to Perl, which I did not expect him to have anything good to say about. To my surprise, he thought that there was a valid niche for pattern matching / text processing languages (mentioning SNOBOL as an earlier language in this niche).
No, Borland did have a Modula-2 compiler (which Martin Odersky of Scala fame actually worked on), but they decided to focus on Turbo Pascal and sold it.
I'm not suggesting Turbo Pascal was written in Modula2, I'm saying it implemented Modula2, not Pascal. Modula2 is a superset of Pascal. Pascal never had modules AFAIK but Turbo Pascal did.
At least several Pascal, Modula-2, and Oberon-2 compilers.
My very first compiled programming language was Pascal. I got the free "PCQ Pascal" from the Fish disks as I wasn't able to get the C headers from Commodore which I would have needed for doing proper Amiga programming. Likewise later Oberon-A although I don't remember where I got that from.
There were also commercial Modula-2 and Oberon-2 compilers. I just found that the Modula-2 compiler was open sourced some years back. https://m2amiga.claudio.ch/
wirth was the greatest remaining apostle of simplicity, correctness, and software built for humans to understand; now only hoare and moore remain, and moore seems to have given the reins at greenarrays to a younger generation;
young people may not be aware of the practical, as opposed to academic, significance of his work, so let me point out that begin
the ide as we know it today was born as turbo pascal;
most early macintosh software was written in pascal, including for example macpaint;
robert griesemer, one of the three original designers of golang, was wirth's student and did his doctoral thesis on an extension of oberon, and wirth's languages were also a very conspicuous design inspiration for newsqueak;
> wirth was the greatest remaining apostle of simplicity, correctness, and software built for humans to understand;
And yet far from the last. Simple, correct, and beautiful software is still being made today. Most of it goes unnoticed, its quiet song drowned out by the cacophony of attention-seeking, complex, brittle behemoths that top the charts.
In no particular order: 100r.co, OpenBSD (& its many individual contributors such as tedu or JCS), Suckless/9front, sr.ht, Alpine, Gemini (&gopher) & all the people you can find there, Low Tech Magazine, antirez, Fabrice Bellard, Virgil Dupras (CollapseOS), & many other people, communities, and projects - sorry I don't have a single comprehensive list, that's just off the top of my head ;)
I would add Jochen Liedtke (unfortunately he passed away already more than 20 years ago) as inventor of the L4 microkernel.
Several research groups continued work on L4 after Liedtke's death (Hermann Härtig in Dresden, Gernot Heiser in Sydney, a bit of research at Frank Bellosa's group in Karlsruhe and more industrial research on L4 for embedded/RT systems by Robert Kaiser, later a professor in Wiesbaden), but I would still argue that Liedtke's original work was the most influential, though all the formal verification work in Sydney also had significant impact - but that was only enabled by the simplicity of the underlying microkernel concepts and implementations.
i... really don't think kris de decker is on niklaus wirth's level. i don't think he can write so much as fizzbuzz
fabrice bellard is wirth-level, it's true. not sure about tedu and jcs, because i'm not familiar enough with their work. it's absurd to compare most of the others to wirth and hoare
you're comparing kindergarten finger paintings to da vinci
> it's absurd to compare most of the others to wirth and hoare
You're the one trying to directly compare achievement, not me. If you're looking for top achievers, I'd have to name PHP or systemd, and THAT would be out of place ;)
I even said "in no particular order", because I don't think any two can be easily compared.
My main criterion for inclusion was the drive for simplifying technology, and publishing these efforts:
> An apostle [...], in its literal sense, is an emissary. The word is [...] literally "one who is sent off" [...]. The purpose of such sending off is usually to convey a message, and thus "messenger" is a common alternative translation.
Every single project, person, or community I've named here has some form of web page, blog, RSS feed, papers/presentations, and/or source code, that serve to carry their messages.
Achievement can be measured, simplicity can only be appreciated.
uxn is cool but it's definitely not the same kind of achievement as oberon, pascal, quicksort, forth, and structured programming; rek and devine would surely not claim it was
you don't get to be described as an 'apostle of simplicity' just because you like simplicity. you have to actually change the world by creating simplicity. devine and rek are still a long way from a turing award
you don't get to dictate who does or doesn't get recognized for creating awesome works that influence and inspire others. take your persistent negativity elsewhere.
btw, uxn is absolutely the exemplification of "software built for humans to understand" and simplicity. I mean...
> the resulting programs are succinct and translate well to pen & paper computing.
> to make any one program available on a new platform, the emulator is the only piece of code that will need to be modified, which is explicitly designed to be easily implemented
i don't think uxn is trivial, i think it's a first step toward something great. it definitely isn't the exemplification of "software built for humans to understand"; you have to program it in assembly language, and a stack-based assembly language at that. in that sense it's closer to brainfuck than to hypertalk or excel or oberon. it falls short of its goal of working well on small computers (say, under a megabyte of ram and under a mips)
the bit you quote about uxn having a standard virtual machine to permit easy ports to new platforms is from wirth's 01965 paper on euler http://pascal.hansotten.com/niklaus-wirth/euler-2/; it isn't something devine and rek invented, and it may not have been something wirth invented either. schorre's 01963 paper on meta-ii targets a machine-independent 'fictitious machine' but it's not turing-complete and it's not clear if he intended it to be implemented by interpretation rather than, say, assembler macros
i suggest that if you develop more tolerance for opinions that differ from your own, instead of deprecating them as 'persistent negativity' and 'dictating', you will learn more rapidly, because other people know things you don't, and sometimes that is why our opinions differ. sometimes those things we know that you don't are even correct
i think this is one of those cases. what i said, that you were disagreeing with, was that uxn was not the same kind of achievement as oberon, pascal, quicksort, forth, and structured programming (and, let me clarify, a much less significant achievement) and that it is a long way from [meriting] a turing award. i don't see how anyone could possibly disagree with that, or gloss it as 'uxn is trivial', as you did, unless they don't know what those things are
i am pretty sure that if you ask devine what he thinks about this comment, you will find that he agrees with every word in it
Someone sent me this thread so I could answer, and I do agree. I for one think uxn is trivial, it was directly inspired by the VM running Another World and created to address a similar need. It's not especially fast, or welcoming to non-programmers, it was a way for my partner and I to keep participating in this fantastic universe that is software development, even once our access to reliable hardware was becoming uncertain. It's meant to be approachable to people in a similar situation and related interests, and possibly inspire people to look into assembly and stack machines -- but it has no lofty goals beyond that. We're humbled that it may have inspired a handful of developers to consider what a virtual machine designed to tackle their own needs might look like.
A lot of our work is owed to Wirth's fantastic documentation on Oberon, to the p-machine and to Pascal. Niklaus' work has influenced us in ways that it would be very unlikely that we could pass forward. I'm sad to hear of Niklaus' passing; there are people who inspire me in similar ways, that are alive today and that I look up to for inspiration, but to me, Wirth's work will remain irreplaceable. :)
There wasn't a single place I asserted that uxn is specifically novel or unprecedented. In fact, Devine's own presentation[0] about uxn specifically cites Wirth and Oberon, among countless other inspirations and examples. I'm saying it's awesome, accessible, simple and open.
I don't need to "develop more tolerance for differing opinions" - I have no problem with them and am completely open to them, even from people who I feel are communicating in an unfriendly, patronizing or gatekeeping manner. rollcat shared some other people and projects and you took it upon yourself to shoot down as much as possible in that comment - for what purpose? No one said Drecker is "on Wirth's level" when it comes to programming. We don't need him to write FizzBuzz, let alone any other software. I'm sorry you don't recognize the value of a publication like Low-Tech Magazine, but the rest of us can, and your need to shoot down that recognition is why I called your messages persistently negative.
Further, when I give kudos to uxn and recognize it as a cool piece of software, there's absolutely no point in coming in and saying "yeah but it's no big deal compared to ____" , as if anyone was interested in some kind of software achievement pissing contest. The sanctity and reverence for your software idols is not diluted nor detracted from by acknowledging, recognizing and celebrating newer contributors to the world of computing and software.
I have to come back and edit this and just reiterate: All I originally said was "uxn ftw" and you found it necessary to "put me in my place" about something I didn't even say/assert, and make it into some kind of competition or gatekeeping situation. Let people enjoy things. And now, minimizing this thread and never looking at it again.
Yeah, these younguns have a lot to learn. :-) The notion that there's something innovative about using a small VM to port software is hilarious. BTW, here is a quite impressive and effective use of that methodology: https://ziglang.org/news/goodbye-cpp/
Dewey Schorre and Meta II, eh? Who remembers such things? Well, I do, as I was involved with an implementation of Meta V when I was on the staff of the UCLA Comp Sci dept in 1969.
Heh, no way do I remember the details. I just remember that we were rewriting it in 360 assembler but I left before that was completed (if it was), and that I wrote an Algol syntax checker in Meta V that was obliquely referenced at the end of RFC 57.
I think it was a very minor contribution ... they wrote a pseudo-Algol program as a form of documentation of their network protocol and were concerned about checking the grammar/syntax of the program (people actually cared about the quality of documentation back then), and I wrote a syntax checker for it in Meta V, as it was on hand because it was written at UCLA (I don't know whether it was Dewey (Val) who designed and implemented Meta V or someone else) and was used by people in the Comp Sci Dept. to design programming languages. But the dept was shifting to networking at the time (the IMP had just arrived) under the direction of Leonard Kleinrock and through the efforts of pioneers Steve Crocker, Vint Cerf, and Jon Postel (all of whom had attended Taft High School together) ... this is why the authors of that network protocol were visiting UCLA. I got involved because I worked for Crocker, under the direct management of Charley Kline, who was the fellow who made the first ever networked login.
(steps to reproduce: install dependencies; make; rm admu-shell admu-shell-fb admu-shell-wercam admu admu.o admu-shell.o admu_tv_typewriter.o; time make -j. it only takes 0.37 seconds if you only make -j admu-shell and don't build the other executables. measured on debian 12.1 on a ryzen 5 3500u)
i wrote pretty much all of it on october 12 and 13 of 02018 so forgive me if i don't think that writing a terminal emulator without scrollback is an achievement comparable to pascal and oberon etc.
not even if it were as great as st (and actually admu sucks more than st, but it can still run vi and screen)
Last I checked, st was around 8k lines. It's not bad (xterm's scrollbar and button handling is in a similar LOC range), but I'd argue it's not minimalist, so even if writing a terminal had qualified, st isn't it.
WRT the scrollback, it seems like they're going overboard in being difficult about features that add little code but have much impact, while not paying close attention to their dependencies. Things they could've done without: the copy on my system includes libz, libpng, libexpat (assuming for fontconfig, which is itself a giant steaming pile of excessive complexity), and even libbrotlicommon... I'm pretty sure I have no brotli images on my system that st has any business touching...
I used st until I replaced it with my own, and I can't fault it for many things in terms of usability, though. Other than the box drawing - it's not pixel perfect (I only bring that up because I bikeshedded a pixel-perfect override for the box-drawing characters for my font renderer when I was bored a while back, so it's the one place where I can crow about mine being better than st ;) - in every other area it still has warts to clean up.
it would be interesting to see what an even more minimalist and more usable terminal emulator looked like. both your work and st are constrained by having to support terminal control languages with a lousy strength to weight ratio, something oberon opted out of
Yeah, there's a ton of cruft. On one hand I find it fun to see how much of vttest I can make it through; on the other hand, supporting DECALN (the DEC service test pattern - it just fills the screen with capital E) is just a box-ticking exercise, and while that's one of the dumbest ones, there are also dozens that are hardly ever used, or that are used in rare cases but don't really need to be.
That is one area where st's "tmux copout" on scroll somewhat makes sense - it would be a reasonable option to define a clean, sufficient subset that lets you run enough stuff, and tell people to just run anything that breaks under tmux/screen or behind a separate filter.
But from what I see with terminals, there's a lot of reluctance to do this not because people believe all these codes are so important but because it seems to become a bit of a matter of pride to be as precise as possible. I admit to having succumbed to a few myself, like support for double-width and double-height characters, as well as "correct" (bright/dim rather than on/off) blink and support for the nearly unsupported rapid blink... There is also a pair of escape codes to enable and disable fraktur. This is fertile ground for procrastinating terminal developers to implement features used by one person in the 70's sometime.
At the same time I sometimes catch myself hoping some of these features will be used more... A very few I'll probably add support for because I want to use them in my text editor; e.g. differently coloured underlines and squiggly underlines are both easy to do and actually useful.
I think with a cleaner set of control codes, though, you could certainly fit quite a few of those features and still reduce the line count significantly...
I have used DECALN, which is sometimes useful for testing purposes (especially in full screen, although sometimes even if it isn't).
I will want to see support for the PC character set, for EUC character sets (including EUC-TRON), the TRON-8 character code, bitmap fonts (including non-Unicode fonts), Xaw-like scrolling and xterm-like selecting, the ability to disable receiving (not only sending) 8-bit controls (which should be used to switch between EUC-JP and EUC-TRON, as well as for other purposes), a "universal escape" sequence (recognized anywhere, even in the middle of other sequences), and some security features (I have some ideas that I don't even know are possible on Linux or on BSD, such as checking the foreground process, and being able to discard any data the terminal emulator has sent to the application program that has not yet been read, which can prevent a file or remote server from sending answerbacks that will execute commands in the shell if you add a cancellation code into the shell prompt, etc.)
It's a one-liner to fill your screen, and the fraction of people who even know DECALN exists is so small that I wouldn't be surprised if the total invocations of DECALN by terminal users in recent decades is smaller than the number of indirect invocations by terminal implementers via vttest.
It made sense as a service tool on a physical terminal, not much now. It's not that it's a problem - it's trivial to support. It's just that it is an example of one of hundreds of little features that are box-ticking exercises that would cause about a dozen people worldwide to shrug if they noticed they weren't there before they'd do the same thing a slightly different way and not think about it again.
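To make the "it's trivial to support" point concrete, here's a minimal sketch in Python (a toy cell grid, not code from st or any terminal discussed here) of everything DECALN amounts to: recognise ESC # 8 and fill the grid with capital E.

ROWS, COLS = 24, 80
screen = [[' '] * COLS for _ in range(ROWS)]   # the terminal's character cells

def handle_private_escape(seq):
    # DECALN is the two-character escape "ESC # 8"; only the "#8" tail is checked here
    if seq == '#8':
        for row in screen:
            row[:] = ['E'] * COLS              # fill every cell with 'E'

handle_private_escape('#8')
print(''.join(screen[0]))                      # "EEE..." across the first row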
Some of the ones you list are useful in some places, but many of them don't need to be in every terminal. I want more, but smaller, options built from generic reusable components.
E.g. most of my terminal does not care what your character set is, or what type of fonts you want to use, or how your scrolling works, or if you have scroll bars, or whether there is a shell, or whether there's a program being run by a terminal vs. a program embedding the terminal, or if it's running in a window, or whether it has GUI output at all or is entirely headless. This is true for most terminals. Yet these components are rarely separated and turned into reusable pieces of code.
Most of the features you list are features I don't need, won't implement, and don't care about. But what I do care about is that with some exceptions (e.g. the Gnome VTE widget) most terminals reinvent way too much from scratch (and frankly most users of terminal widgets like VTE still reinvent way too much other stuff from scratch), put too much effort into supporting far too many features that are rarely used, instead of being able to pick a terminal that is mostly just pulling in generic components as a starting point.
The result is a massive amount of code that represents the same features written over and over and over again and sucking time out of the bits that differentiate terminals in ways useful to users.
E.g. just now I've been starting to untangle the bits in my terminal that handle setting up the PTY and marshaling IO between the shell or other program running in it, and the bits that handle the output to the actual terminal. The goal is to make it as easy for casual scripts to open a terminal window and control it as it was on the Amiga, without having to spawn a separate script to "run in it".
On the Amiga you could e.g. do "somecommand >CON:x/y/w/h/Sometitle" to redirect "somecommand"'s output to a separate terminal window without any foreground process, with the given dimensions and title, and assorted other flags available (e.g. "/CLOSE" would give you a close button, "/WAIT" would keep the window open after the process that opened it went away, etc.).
If you've written a terminal, then part of your terminal represents 99% of the code to provide something almost like that.
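For the PTY-plumbing half of that, here's a minimal sketch in Python of the general shape (the command and the byte handling are illustrative only, nothing from the terminal being described): open a pty, run a program against the slave end, and marshal its output back to whatever would render it.

import os, pty, subprocess

master_fd, slave_fd = pty.openpty()            # kernel-side terminal pair
proc = subprocess.Popen(['ls', '-l'],          # any program; it sees a real tty
                        stdin=slave_fd, stdout=slave_fd, stderr=slave_fd,
                        close_fds=True)
os.close(slave_fd)                             # only the child keeps the slave end

output = b''
try:
    while True:
        chunk = os.read(master_fd, 4096)       # bytes a renderer would interpret
        if not chunk:
            break
        output += chunk
except OSError:                                # EIO when the child hangs up the pty
    pass
proc.wait()
print(output.decode(errors='replace'))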
Beyond that, I'm going to rip the escape code handling out too, so code that doesn't want to do escape codes can still pretend it's talking to a terminal with a somewhat ncurses-y interface, but with the freedom to redirect the rendering or render on top of it, or whatever; that makes "upgrading" from a terminal UI to something of a GUI far easier (the Amiga, again, had a lot of this, with apps that'd mix the same system console handler used for the terminal with a few graphical flourishes; it lowered the threshold to start building more complex UIs immensely)
Then I'm going to extract out the actual rendering to the window into a separate component from the code that maintains the (text) screen buffer, so that I can write code that uses the same interface to render either to a terminal or directly to a window.
Same for e.g. font handling - I've decided I don't care about bitmap fonts, but the actual bit of my terminal that cares about any kind of fonts is ~40 lines of code; to most of my terminal components it doesn't matter if you output anything anywhere, and even of the remaining code actually dealing with GUI output, 3/4 doesn't care, or know, about fonts at all (managing a window, clearing, filling, and scrolling take up more). Making it pluggable so someone could plug in either a client-side bitmap font renderer or code to use the old X11 text drawing calls is trivial.
Because with all of these things broken out into components it doesn't matter much if my terminal doesn't support your feature set, if "writing another terminal" doesn't mean writing the 95% of the code that implements shared functionality over and over again.
You could write literally half a dozen custom tiny terminals like that before even approaching the line count of xterm's mouse button handling code alone.
That is good, to have separate components of the code that can then be reused. Do you have terminal emulator code with such things? Then we can see, and it can easily be modified.
i've been thinking that maybe nested tables would be better than character cells, for example, accommodating proportional fonts and multiple sizes much better
Everyone has... I'm not sure it's a big loss, but I found it funny to "fix" and the fun of tweaking tiny things like that lies at the core of a whole lot of terminal bikeshedding...
> i've been thinking that maybe nested tables would be better than character cells, for example, accommodating proportional fonts and multiple sizes much better
A lot of simplicity could easily go away if it's not done well, but I like the idea. I want to eventually support some limited "upgrades" in that direction, but will take some cleanup efforts before that'll be priority.
dang, maybe we can change the url to this instead? this url has been stable for at least 14 years (http://web.archive.org/web/20070720035132/https://lists.inf....) and has a good chance of remaining stable for another 14, while the twitter url is likely to disappear this year or show different results to different people
Since Twitter is suppressing the visibility of tweets that link outside their site I think it would be perfectly fair to block links to twitter, rewrite them to nitter, etc. There also ought to be gentle pressure on people who post to Twitter to move to some other site. I mean, even I've got a Bluesky invite now.
Well I didn't mean to just endorse Bluesky but call it out as one of many alternatives.
I'm actually active on Mastodon but I am thinking about getting on Instagram as well because the content I post that does the best on Mastodon would fit in there.
I've been a massive fan of the PhD dissertation of Wirth's student Michael Franz since I first read it in '94. He's now a professor at UC Irvine, where he supervised Andreas Gal's dissertation work on trace trees (what eventually became TraceMonkey)
in the neat/scruffy divide, which goes beyond ai, wirth was the ultimate neat, and kay is almost the ultimate scruffy, though wall outdoes him
alan kay is equally great, but on some axes he is the opposite extreme from wirth: an apostle of flexibility, tolerance for error, and trying things to see what works instead of planning everything out perfectly. as sicp says
> Pascal is for building pyramids—imposing, breathtaking, static structures built by armies pushing heavy blocks into place. Lisp is for building organisms—imposing, breathtaking, dynamic structures built by squads fitting fluctuating myriads of simpler organisms into place.
kay is an ardent admirer of lisp, and smalltalk is even more of an organism language than lisp is
yeah, i wish i had had the pleasure of meeting him. i reimplemented meta-ii 3½ years ago and would recommend it to anyone who is interested in the parsing problem. it's the most powerful non-turing-complete 'programming language' i've ever used
(i mean i would recommend reimplementing it, not using my reimplementation; it takes a few hours or days)
after i wrote it, the acm made all old papers, including schorre's meta-ii paper, available gratis; they have announced that they also plan to make them open-access, but so far have not. still, this is a boon if you want to do this. the paper is quite readable and is at https://dl.acm.org/doi/10.1145/800257.808896
i think adr is the equivalent of '.long' in gcc or 'dw' in masm, though the description of the adr pseudo-operation is not very clear. it says it 'produces the address which is assigned to the given identifier as a constant'. on a stack/belt machine or an accumulator machine, 'produces' could conceivably mean 'pushes on the stack/belt' or 'overwrites the accumulator with', but the meta ii machine doesn't have an operand stack, belt, or accumulator; it has a return stack with local variables, an input card, an output card, and a success/failure switch, so it doesn't make sense to read 'produces' as a runtime action. moreover, 'adr' is not listed in the 'machine codes' section; it's listed along with 'end' in a separate 'constant and control codes' section, which makes it sound like a pseudo-operation like '.long'. i suspect 'end' tells the assembler to exit
i agree, it would make much more sense to say
'.syntax' .id .out('cll ' *) .out('hlt')
and thus eliminate the otherwise-unused adr, or simply to put the main production of the grammar at the beginning of the grammar, which is what i did in meta5ix. i think they do define 'r' on an empty call stack as a machine halt, btw
they do mention this startup thing a bit in the text of the paper (p. d1.3-3)
> The first thing in any META II machine program is the address of the first instruction. During the initialization for the interpreter, this address is placed into the instruction counter.
so i think the idea is that their 'binary executable format' consists of the address of the entry point, followed by all the code, and the loader looks at the first word to see where to start running the code. this sounds stupid (because why wouldn't you just start running it at the beginning?) but elf, a.out, and pe all have similar features to allow you to set the entry point to somewhere in the middle of the executable code, which means you have total freedom in how you order the object files you're linking. so even though it's maybe unnecessary complexity in this context, it's well-established practice even 60 years later, and maybe it already was at the time, i don't know
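a tiny hypothetical sketch of that loading convention in Python (made-up instruction names, not real meta ii code): the first word of the image is the entry address, and the "loader" just copies it into the instruction counter before interpreting.

image = [3,                      # word 0: address of the first instruction to run
         'set', 'r',             # code the startup skips over
         'cll SYNTAX', 'hlt']    # execution actually begins at index 3

pc = image[0]                    # "this address is placed into the instruction counter"
while image[pc] != 'hlt':
    print('executing', image[pc])
    pc += 1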
i hope this is helpful! also i hope it's correct, but if not i hope it's at least helpful :)
It is a good paper, and I give much respect for ACM opening up their paywall of old papers. They even stopped rate limiting downloads. I'd like to think my incessant whining about this had some effect. :) It is such a wonderful thing for curious people everywhere to be able to read these papers.
thanks! i like lua a lot despite its flaws; that's what i wrote my own literate programming system in. lua's 'bytecode' is actually a wordcode (a compilation approach which i think wirth's euler paper was perhaps the first published example of) and quite similar in some ways to wirth's risc-1/2/3/4/5 hardware architecture family
i hope they do go to open access; these papers are too valuable to be lost to future acm management or bankruptcy
Are you familiar with the 'Leo' editor? It is the one that comes closest to what I consider to be a practically useful literate programming environment. If you haven't looked at it yet I'd love it if you could give it a spin and let me know what you make of it.
i read a little about it many years ago but have never tried it. right now, for all its flaws, jupyter is the closest approximation to the literate-programming ideal i've found
Yes, Jupyter is definitely a contender for the crown, it's a very powerful environment. I've made use of a couple of very impressive notebooks (mostly around the theme of automatic music transcription) and it always gets me how seamless the shift between documentation and code is. I wish the Arduino guys would do something like that; it would make their programming environment feel less intrusive and less 'IDE'-like (which mostly just gets in the way with endless useless popups).
there are a lot of design decisions that are pretty debatable, but the ones that seem clearly wrong to me are:
- indexing from 1 instead of 0;
- the absence of a nilness/nonexistence distinction (so misspelling a variable or .property silently gives the wrong answer instead of an exception);
- variables being global by default, rather than local by default or requiring an explicit declaration;
- printing tables by default (with %q for example) as identities rather than contents. (you could argue that this is a simplicity thing; lua 5.2 is under 15000 lines of code, which is pretty small for such a full-featured language, barely bigger than original-awk at 6200 lines of c and yacc plus 1500 lines of awk, and smaller than mawk at 16000 lines of c, 1100 lines of yacc, and 700 lines of awk. but a recursive table print function with a depth limit is about 25 lines of code.)
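To back up that last estimate, here is a rough sketch in Python (hypothetical; dicts standing in for Lua tables) of the kind of recursive print-with-a-depth-limit function being described. It really is only a couple of dozen lines.

def dump(value, depth=3, indent=0):
    pad = '  ' * indent
    if not isinstance(value, dict):             # non-table values print as-is
        print(pad + repr(value))
    elif depth == 0:
        print(pad + '{...}')                    # depth limit reached
    else:
        print(pad + '{')
        for key, item in value.items():
            print(pad + '  ' + repr(key) + ' =')
            dump(item, depth - 1, indent + 2)
        print(pad + '}')

dump({'a': 1, 'b': {'c': {'d': 2}}}, depth=2)   # the innermost table prints as {...}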
none of these are fatal flaws, but with the benefit of experience they all seem like clear mistakes
It's been long time since I last used lua and only positive memories remained :) I used it for adding scripting in the apps I worked on and the experience was very good -- sandboxed from the start, decent performance.
Perhaps having 0-based indexes would've been bad for our users but I don't think they used arrays at all.
> wirth was the greatest remaining apostle of simplicity, correctness, and software built for humans to understand
Absolutely!
And equally important was his ability to convey/teach CS precisely, concisely and directly in his books/papers. None of them have any fluff nor unnecessary obfuscation in them. These are the models to follow and the ideals to aspire to.
As an example see his book Systematic Programming: An Introduction.
Just thought about that when Donald Knuth's Christmas lecture https://www.youtube.com/live/622iPkJfYrI led me to one of his first TeX lectures https://youtu.be/jbrMBOF61e0 : If I install TeX on my Linux machine now, is that still compiled from the original Pascal source? Is there even a maintained Pascal compiler anymore? Well, GCC (as in GNU compiler collection) probably has a frontend, but that still does not answer the question about maintenance.
These were just thoughts. Of course researching the answers would not be overly complicated.
> If I install TeX on my Linux machine now, is that still compiled from the original Pascal source?
If you install TeX via the usual ways (TeX Live and MiKTeX are the most common), then the build step runs a program (like web2c) to convert the Pascal source (with changes) to C, and then uses a C compiler. (So the Pascal source is still used, but the Pascal "compiler" is a specialized Pascal-to-C translator.) But there is also TeX-FPC (https://ctan.org/pkg/tex-fpc), a small set of change (patch) files to make TeX compilable with the Free Pascal compiler (https://gitlab.com/freepascal.org/fpc/).
Free Pascal does very infrequent releases though the compiler is under active development and even has a bunch of new features both for the compiler (e.g. a wasm backend) and the language itself. There are always multiple daily commits in the git log by several developers.
The point of Whitney's array languages is to allow your solutions to be so small that they fit in your head. Key chunks should even fit on one screen. A few years ago, Whitney reportedly started building an OS using these ideas (https://aplwiki.com/wiki/KOS).
I'm aware of the idea. I'm also aware that I can read and understand a whole lot of pages of code in the amount of time it takes me to decipher a few lines of K, for example, and the less dense code sticks far better in my head.
I appreciate brevity, but I feel there's a fundamental disconnect between people who want to carefully read code symbol by symbol, who often seem to love languages like J or K, or at least be better able to fully appreciate them, and people like me who want to skim code and look at the shape of it (literally; I remember code best by its layout and often navigate code by appearance without reading it at all, and so dense dumps of symbols are a nightmare to me)
I sometimes think it reflects a difference between people who prefer maths vs languages. I'm not suggesting one is better than the other, but I do believe the former is a smaller group than the latter.
For my part I want a grammar that makes for a light, casual read, not one I have to decipher. I want to be able to get a rough understanding at a glance, and gradually fill in details, not read things start to finish (yes, I'm impatient)
A favourite example of mine is the infamous J interpreter fragment, where I'd frankly be inclined to prefer a disassembly over the source code. But I also find the ability to sketch out such compact code amazing.
I think Wirth's designs very much fit in the "languages that are skimmable and recognisable by shape and structure" category. I can remember parts of several of Wirth's students' PhD theses from the 1990s by the shape of procedures in their Oberon code to this day.
That's not to diminish Whitney's work, and I find that disconnect in how we process code endlessly fascinating, and regularly look at languages in that family because there is absolutely a lot to learn from them, but they fit very different personalities and learning styles.
> I sometimes think it reflects a difference between people who prefer maths vs languages. I'm not suggesting one is better than the other, but I do believe the former is a smaller group than the latter.
This dichotomy exists in mathematics as well. Some mathematicians prefer to flood the page with symbols. Others prefer to use English words as much as possible and sprinkle equations here and there (on their own line) between paragraphs of text.
The worst are those that love symbols and paragraphs, writing these dense walls of symbols and text intermixed. I’ve had a few professors who write like that and it’s such a chore to parse through.
i keep hoping that one day i'll understand j or k well enough that it won't take me hours to decipher a few lines of it; but today i am less optimistic about this, because earlier tonight, i had a hard time figuring out what these array-oriented lines of code did in order to explain them to someone else
from numpy import right_shift, array, arange  # imports assumed from the notebook context
textb = 'What hath the Flying Spaghetti Monster wrought?'
bits = (right_shift.outer(array([ord(c) for c in textb]),
                          arange(8))).ravel() & 1
and i wrote them myself three months ago, with reasonably descriptive variable names, in a language i know well, with a library i've been using in some form for over 20 years, and their output was displayed immediately below, in https://nbviewer.org/url/canonical.org/~kragen/sw/dev3/rando...
i had every advantage you could conceivably have! but i still guessed wrong at first and had to correct myself after several seconds of examination
i suspect that in j or k this would be something like (,(@textb)*.$i.8)&1 though i don't know the actual symbols. perhaps that additional brevity would have helped. but i suspect that, if anything, it would have made it worse
by contrast, i suspect that i would have not had the same trouble with this
bits = [(ord(c) >> i) & 1 for c in textb for i in range(8)]
however, as with rpn, i suspect that j or k syntax is superior for typing when you're immediately evaluating expressions rather than writing a program to maintain later, because the amount of finger typing is so much less. but maybe i just have a hard time with point-free style? or maybe, like you say, it's different types of people. or maybe i just haven't spent nearly enough time writing array code during those years
> If you go to the link and press 'help' you'll see some docs for ngn/k
Almost as impenetrable as the code unless you already know the language. But that's ok - I guess that's the main audience...
E.g. trying to figure out what "\" means in that help is only easier now because you gave me the whole line, as there are 64 occurrences of "\" in that doc and I wouldn't have known what pattern to search for to limit it...
It's back to the philosophical disconnect of expecting people to read start to finish/know far more detail inside out rather than skimming and relying on easy keyword lookups... (yes, we're lazy)
> 'reduce with concat'
So "flatten" in Ruby-speak, I take it (though "flatten" without an argument in Ruby will do this recursively, so I guess probably flatten(1) would be a more direct match).
> you don't need to do any mapping, it's automatic.
After these pointers (thanks!), here's - mostly for my own learning - what I ended up with. It's not an attempt to get closer to the line noise (we could do that with a horrific level of operator overloading that'd break most of the standard library, though we can't match k precisely). Please don't feel obliged to go through this unless you're morbidly curious; I just had to, but I'm sure you'd suffer going through my attempt at figuring out what the hell k is doing...:
textb="What hath the Flying Spaghetti Monster wrought?"
# Firstly, I finally realised after a bunch of testing that "(8#2)" does something like this.
# That is, the result of 8#2 is (2 2 2 2 2 2 2 2), which was totally not what I expected.
def reshape(len, items) = Array(items) * len
class Integer
# For the special case of (x#y) where x is a positive integer, which is frankly the only one
# I've looked at, we can do:
# So now 4.reshape(2) returns [2 2 2 2] just like (4#2) in ngn/k
def reshape(items) = Array(items)*self
# Now we can do something somewhat like what I think "encode" is
# actually doing - this can be golfed down, but anyway:
# With this, "a".ord.encode(8.reshape(2)) returns [0,1,1,0,0,0,0,1],
# equivalent to (8#2)\ "a" in ngn/k
def encode(shape)
rem = self
Array(shape).reverse.map do |v|
val = rem % v
rem = rem / v
val
end.reverse
end
end
# Now we can break Array too.
class Array
# First a minor concession to how Ruby methods even on the Array
# class sees the focal point as the Array rather than the elements.
# E.g. `self` in #map is the Array. If the focus is to be on applying the
# same operation to each element, then it might be more convenient
# if `self` was the element. With this, we can do ary.amap{reverse}
# instead of ary.map{|e| e.reverse} or ary.map{ _1.reverse}.
# To get closer to k, we'd have needed a postfix operator that we could
# override to take a block, but unfortunately there are no overridable
# postfix operators in Ruby. E.g. we can hackily make
# ary.>>(some_dummy_value) {a block} work, but not even
# ary >> (some_dummy_value) { a block} and certainly not
# ary >> { a block }
#
def amap(&block) = map { _1.instance_eval(&block) }
# If we could do a "nice" operator based map, we'd just have left it
# at that. But to smooth over the lack of one, we can forward some
# methods to amap:
def encode(...) = amap{encode(...)}
# ... with the caveat that I realised afterwards that this is almost certainly
# horribly wrong, in that I think the k "encode" applies each step of the
# above to each element of the array and returns a list of *columns*.
# I haven't tried to replicate that, as it breaks my mind to think about
# operating on it that way. That is, [65,70].encode(2.reshape(10))
# really ought to return [[6,7],[5,0]] to match the k, but it returns
# [[6,5],[7,0]]. Maybe the k result will make more sense to me if I
# take a look at how encode is implemented...
def mreverse = amap{reverse}
end
# Now we can finally get back to the original, with the caveat that due to
# the encode() difference, the "mreverse.flatten(1)" step is in actuality
# working quite differently, in that for starters it's not transposing the arrays.
#
p textb.bytes.encode(8.reshape(2)).mreverse.flatten(1)
# So to sum up:
#
# textb -> textb.bytes since strings and byte arrays are distinct in Ruby
# (8#2) -> 8.reshape(2)
# x\y -> y.encode(x) ... but transposed.
# |x -> x.mreverse
# ,/+x -> x.flatten(1) .. but really should be x.transpose.flatten(1)
#
# Of course with a hell of a lot of type combinations and other cases the k
# verbs supports that I haven't tried to copy.
i can't decode it either but i think you're right. note that this probably gives the bits in big-endian order rather than little-endian order like my original, but for my purposes in that notebook either one would be fine as long as i'm consistent encoding and decoding
The big caveat being it clicked too late that 1) "encode" is not "change base and format", but "for each element in this array, apply the modulus to the entire other array and pass the quotient forward", and 2) encode returns a list of columns of the remainders rather than rows (the output format really does not make this clear...).
So you can turn a list of seconds into hour, minute, seconds with e.g.: (24 60 60)\(86399 0 60), but what you get out is [hour, minute, second] where hour, minute, second each are arrays.
If you want them in the kind of format that doesn't break the minds of non-array-thinking people like us, where the order actually matches the input, you'd then transpose them by prepending "+" - because why not overload unary plus to change the structure of the data?
+(24 60 60)\(86399 0 60)
Returns
(23 59 59
0 0 0
0 1 0)
Or [[23,59,59], [0,0,0], [0,1,0]] in a saner output format that makes it clear to casuals like me which structure is contained in what.
Now, if you then want to also flatten them, you prepend ",/"
I feel much better now. Until the next time I spend hours figuring out a single line of k.
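In case it saves the next person those hours, here is my reading of that behaviour as a short Python sketch (a hypothetical model, not ngn/k's actual implementation): mixed-radix encode applied element-wise, with the remainders coming back as columns, and a transpose giving the row-per-input view from the example above.

def encode(bases, values):
    # peel off remainders from the least significant base, carrying quotients forward
    columns, remainders = [], list(values)
    for base in reversed(bases):
        columns.insert(0, [r % base for r in remainders])
        remainders = [r // base for r in remainders]
    return columns                                  # one row per base, like (24 60 60)\...

cols = encode([24, 60, 60], [86399, 0, 60])
print(cols)                                         # [[23, 0, 0], [59, 0, 1], [59, 0, 0]]
print([list(row) for row in zip(*cols)])            # transposed: [[23, 59, 59], [0, 0, 0], [0, 1, 0]]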
I think the K would likely be both simpler and harder than your first example by reading very straightforwardly in a single direction but with operators reading like line noise. In your case, my Numpy is rusty, but I think this is the Ruby equivalent of what you were doing?
textb = 'What hath the Flying Spaghetti Monster wrought?'
p textb.bytes.product((0...8).to_a).map{_1>>_2}.map{_1 & 1}
Or with some abominable monkey patching:
class Array
def outer(r) = product(r.to_a)
def right_shift = map{_1>>_2}
end
p textb.bytes.outer(0...8).right_shift.map{_1 & 1}
I think this latter is likely to be a closer match to what you'd expect in an array language in terms of being able to read in a single direction and having a richer set of operations. We could take it one step further and break the built-in Array#&:
class Array
def &(r) = map{_1 & r}
end
p textb.bytes.outer(0...8).right_shift & 1
Which is to say that I don't think the operator-style line-noise nature of K is what gives it its power. Rather that it has a standard library that is fashioned around this specific set of array operations. With Ruby at least, I think you can bend it towards the same Array nature-ish. E.g. a step up from the above that at least contains the operator overloading and instead coerces into a custom class:
textb = 'What hath the Flying Spaghetti Monster wrought?'
class Object
def k = KArray[*self.to_a]
end
class String
def k = bytes.k
end
class KArray < Array
def outer(r) = product(r.to_a).k
def right_shift = map{_1>>_2}.k
def &(r) = map{_1 & r}.k
end
p textb.k.outer(0...8).right_shift & 1
With some care, I think you could probably replicate a fair amount of K's "verbs" and "adverbs" (I so hate their naming) in a way that'd still be very concise but not line-noise concise.
that all seems correct; the issue i had was not that python is less flexible than ruby (though it is!) but that it required a lot of mental effort to map back from the set of point-free array operations to my original intent. this makes me think that my trouble with j and k is not the syntax at all. but conceivably if i study the apl idiom list or something i could get better at that kind of thinking?
I think you could twist Python into getting something similarly concise one way or another ;) It might not be the Python way, though. I agree it often is painful to map. I think in particular the issue for me is visualizing the effects once you're working with a multi-dimensional set of arrays. E.g. I know what outer/product does logically, but I have to think through the effects in a way I don't need to do with a straightforward linear map(). I think I'd have been more likely to have ended up with something like this if I'd started from scratch, even if it's not as elegant.
p textb.bytes.map{|b| (0...8).map{|i| (b>>i) & 1} }.flatten
EDIT: This is kind of embarrassing, but we can of course do just this:
textb.bytes.flat_map{_1.digits(2)}
But I think the general discussion still applies, and it's quite interesting how many twists and turns it took to arrive at that
As far as I know, Henry Baker is still with us. I had a dream where I interviewed Wirth for like 20 hrs so we could clone him with an LLM. We need to grab as much video interviews from folks as possible.
henry baker has made many great contributions, but last time i talked to him, he was waiting for somebody to start paying him again in order to do any more research
but i'm sure he'd agree his achievements are not in the same league as wirth's
Relevant excerpt of Dijkstra's own account (from EWD1308 [1]):
Finally a short story for the record. In 1968, the Communications of the ACM published a text of mine under the title "The goto statement considered harmful", which in later years would be most frequently referenced, regrettably, however, often by authors who had seen no more of it than its title, which became a cornerstone of my fame by becoming a template: we would see all sorts of articles under the title "X considered harmful" for almost any X, including one titled "Dijkstra considered harmful". But what had happened? I had submitted a paper under the title "A case against the goto statement", which, in order to speed up its publication, the editor had changed into a "letter to the Editor", and in the process he had given it a new title of his own invention! The editor was Niklaus Wirth.
Prof Wirth was a major inspiration for me as a kid. I eagerly read his book on Pascal, at the time not appreciating how unusual it was for its elegance and simplicity. I also followed with interest his development of the Oberon language and Lilith workstation. When I was 13, he gave a talk not too far away, I think it might have been Johns Hopkins, and my dad took me to it. It was a wonderful experience, he was very kind and encouraging, as I think the linked photo[1] shows.
A sad day. He was a titan of computing and still deserved even more attention than he got. If his languages had been more prevalent in software development, a lot of things would be in better shape.
After playing around a bit with Basic on the C64/128, Pascal became the first "real" programming language I learned. In the form of UCSD Pascal on Apple II at my school as well as Turbo Pascal 3.0 on an IBM PC (no AT or any fanciness yet). Actually a Portable PC with a built-in amber CRT.
When I got my Amiga 500, Modula 2 was a very popular language on the Amiga and actually the M2Amiga system was the most robust dev env. I still think fondly of that time, as Modula 2 made it so easy to develop structured and robust programs. The module concept was quite ahead of its time, while the C world kept recompiling header files for so many years to come. Today, Go picked up a lot from Modula 2, one reason I immediately jumped onto it. Not by chance, Robert Griesemer was a student of Wirth.
During the '90s, while MS-DOS was still in use, Turbo Pascal was still the main go-to language on the PC for everyone, as it was powerful yet approachable for non-full-time software developers. It picked up a lot of extensions from Modula-2 too and also had a nice object system. It peaked with versions 6 and 7. Probably to this day my favorite development environment, partially because of the unmatched speed of a pure character-based UI. And Turbo Pascal combined the nice development environment with a language which found a great compromise between power and simplicity.
Unfortunately, I was only vaguely familiar with his later work on Oberon. I ran the Oberon system natively on my 386 for some toying around. It was extremely impressive with its efficiency and full GUI in the time of DOS on the PC. A pity it didn't receive more attention. It would probably have been very successful if it had gained traction in the not-too-late '80s; in the early '90s, of course, Windows came along.
From a purist's point of view, the crowning achievement was of course when he really earned the job title of "full stack developer", not only designing Oberon and the OS, but the CPU to run it as well. Very impressive and of huge educational value.
Wirth was the chief designer of the programming languages Euler (1965), PL360 (1966), ALGOL W (1966), Pascal (1970), Modula (1975), Modula-2 (1978), Oberon (1987), Oberon-2 (1991), and Oberon-07 (2007). He was also a major part of the design and implementation team for the operating systems Medos-2 (1983, for the Lilith workstation), and Oberon (1987, for the Ceres workstation), and for the Lola (1995) digital hardware design and simulation system. In 1984, he received the Association for Computing Machinery (ACM) Turing Award for the development of these languages.
But we also had syntax highlighting in the early '90s, so using uppercase to denote language elements was already an archaic approach that hurt readability.
I'm kind of a fan of Lola, an easy-to-learn HDL which was inspired by Pascal/Oberon vs. Verilog (inspired by C) and VHDL, inspired by Ada.
I like Wirth's whole software stack: RISC-5 (not to be confused with RISC-V) implemented in Lola, Oberon the language, and Oberon the environment. IIRC Lola can generate Verilog - I think the idea was that students could start with an FPGA board and create their own CPU, compiler, and OS.
I also like his various quips - I think he said something like "I am a professor who is a programmer, and a programmer who is a professor." We need more programmer/professors like that. Definitely an inspiration for systems people everywhere.
Also collaborated with Apple on Object Pascal initial design, his students on Component Pascal, Active Oberon, Zonnon, and many other research projects derived from Oberon.
For those who don't know, Pascal was what a lot of the classic Mac software was written in, before Objective-C and Swift. It grew into Delphi, which was a popular low-code option on Windows.
I wouldn’t describe Delphi as low code, it is an IDE. Wikipedia also describes it like this[1] and does not include it in its list of low code development platforms[2].
It was a RAD platform though. From following your links:
> Low-code development platforms trace their roots back to fourth-generation programming language and the rapid application development tools of the 1990s and early 2000s.
> Delphi was originally developed by Borland as a rapid application development tool for Windows as the successor of Turbo Pascal.
It's a shame that Pascal was largely abandoned (except for Delphi, which lived on for a while); I believe several Pascal compilers supported array bounds checking, and strings with a length field. In the 1980s this may have been considered overly costly (and perhaps it is considered so today as well), but the alternative that the computing field and industry picked was C, where unbounded arrays and strings were a common source of buffer overflow errors. Cleaning this up has taken decades and we still probably aren't done.
Better C/C++ compilers and libraries can help, but the original C language and standard library were certainly part of the issue. Java and JavaScript (etc.) may have their issues but at least chasing down pointer errors usually isn't one of them.
A side effect of UNIX adoption, C already being in the box, whereas anything else would cost money, and no famous dialect (Object Pascal, VMS Pascal, Solaris Pascal, UCSD Pascal) being portable.
Unfortunately Pascal only mattered to legions of Mac and PC developers.
My father celebrated his 60th two weeks back and told me he bought a license for the new Delphi and loves it; I was quite surprised by the development he described.
I considered telling him that he could get most of those things (he also buys various components) for free today, but then... he is about 5 years from retirement and won't relearn all his craft now.
Myself, I am not sure whether it's nostalgia, but I miss the experience of Delphi 7 that I started with 20 years back. In many ways, the simplicity of VLC and the interface is still unbeaten.
> I considered telling him that he could get most of the things (he also buys various components) for free today, but then.. he is about 5 years before retirement and won't relearn all his craft now.
Free Pascal / Lazarus shouldn't be all that much to relearn.
> Myself, I am not sure whether its nostalgia but I miss the experience of Delphi 7 I started with 20 years back.
Delphi 1, 28 years now.
> In many ways, the simplicity of VLC and the interface is still unbeaten.
1) Yup.
2) VCL, btw.
3) Now that Embarcadero is hiking up the price of Delphi with every release, I think the standard-bearer for best library / framework is probably the LCL, the Lazarus Component Library.
I learned Pascal and MODULA-2 in college, in my first two programming semesters. MODULA-2 was removed shortly afterwards but Pascal is still used in the introductory programming course. I'm very happy to have had these as the languages that introduced me to programming and Wirth occupies a very special place in my heart. His designs were truly ahead of their time.
I had Pascal and some Modula as well (on concurrent programming course).
I learned C++ later myself, as a Pascal with bizarre syntax. I always felt like the semantics of C++ were taken entirely from Pascal. No two languages ever felt closer to each other for me. Like one was just a reskin of the other.
I've already told this story multiple times: when I came to learn C, I already knew Turbo Pascal from 4.0 up to 6.0. Luckily, the same teacher that was teaching us about C also had access to Turbo C++ 1.0 for MS-DOS.
I adopted C++ right away as the sensible path beyond Turbo Pascal for cross-platform code, and never saw a use for C's primitive and insecure code, beyond being asked to use it in specific university projects and some jobs during the dotcom wave.
On Usenet C vs C++ flamewars, there might be still some replies from me on the C++ side.
I learned C that way (algorithms class was in C), even had a little printout table of the different syntaxes for the same instructions (here's how you write a for, if, record, declare a variable, etc). At the time I remember thinking that the C syntax was much uglier, and that opinion has stayed with me since -- when I learned Python everything just seemed so natural.
I started my first company based on Delphi, which itself was based on Turbo Pascal. Wirth was a great inspiration, and his passing is no small loss.
May his work keep inspiring new programmers for generations to come.
One of his quotes: "Whereas Europeans generally pronounce my name the right way ('Ni-klows Wirt'), Americans invariably mangle it into 'Nick-les Worth'. This is to say that Europeans call me by name, but Americans call me by value."
He was indeed! I wrote my bachelors thesis on bringing modularity to a language for monitoring real time systems and his work, especially on MODULA-2, was a huge source of inspiration.
A sad day for the history of computing, the loss of a great language designer who influenced many of us toward better ways to approach systems programming.
I'm much more sad when life sort of decays (Alzheimer's, dementia, or simply becoming slow/stupid/decrepit), ends early, or when life is simply wasted.
This is beautiful phrasing, very much how I think about life myself. Let's hope the last days of Mr Wirth were free from physical pain. Thinking of my grandpa who died all of a sudden, apparently without serious physical impairments or aches, at age 90, after a happy, well-lived, ethical life.
Heaven is happier by one person now for sure, again. And maybe some compilers over there also need tinkering. Rest in peace, Mr Wirth.
He gave a talk at the CHM (he was inducted as a fellow in 2004). I got to talk with him and was really struck that someone who had had such a huge impact was so approachable. When another person in the group challenged Modula-2, he listened respectfully and engaged based on the idea that the speaker's premise was true, then nicely dissented based on objective observations. I hope I can always be that respectful when challenged.
Pascal was my first "real" language after Basic, learned it in the late 80s, wrote a couple of small apps for my dad in it.
Learned most of it from a wonderful book whose name I have forgotten, it had a wrench on its cover, I think?
Anyway, still rocking Pascal to this day, since I still maintain 3 moderately complex installers written with InnoSetup, which uses RemObjects Pascal as a scripting language.
4 years ago, a new guy on our team, fresh from school, who never even knew this language existed, picked up Pascal in a week, and started maintaining and developing our installers much further. He did grumble a bit about the syntax but otherwise did a splendid job. I thought that was a tribute to the simplicity of the language.
> Pascal was my first "real" language after Basic, learned it in the late 80s
Me too, word for word. I spent a few years in my pre-teens immersed in the Turbo Pascal IDE, which was a full-on educational tool of its own that explained everything about the language. I moved on to C after that, but I still get a nostalgic vibe from looking at Pascal syntax. It was a great foundational experience for me as a programmer.
Also, his Oberon system provides a rich seam to mine. This, from a symposium held at ETH Zurich on the occasion of his 80th birthday in 2014, is a whirlwind retrospective.
"Reviving a computer system of 25 years ago" https://www.youtube.com/watch?v=EXY78gPMvl0
I just needed a feature of Pascal yesterday in one of my Rust libraries: ranged integers. I know, you can go about it in different ways, like structs with private fields and custom constructors, or just with a new generic type. But, having the ability to specify that some integer can only be between 15..25 built-in is a fantastic language feature. That's even so with runtime bounds checking disabled because the compiler would still complain about some subset of violations.
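As a rough illustration of the "structs with private fields and custom constructors" workaround mentioned above, here is a Python sketch (names and bounds are made up; this only gives runtime checks, nothing like the compile-time complaints a Pascal-style subrange gives you):

class Ranged:
    """An integer constrained to lo..hi; construction fails outside the bounds."""
    def __init__(self, value, lo=15, hi=25):
        if not (lo <= value <= hi):
            raise ValueError(f'{value} is outside {lo}..{hi}')
        self.value, self.lo, self.hi = value, lo, hi

    def __add__(self, other):
        # arithmetic re-checks the bounds, so out-of-range results are caught
        return Ranged(self.value + int(other), self.lo, self.hi)

speed = Ranged(20)        # ok
speed = speed + 3         # ok: 23 is still within 15..25
# speed + 10              # would raise ValueError: 33 is outside 15..25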
What an innovator and a role model. I hope I can be as passionate about my work in my 80s as he was.
Not only did Pascal (TP more precisely) teach me about systems programming with a safer language, it was also my first foray into type-driven programming, learning to use the type system to rule out conditions that aren't supposed to happen.
Ranged numerics and enumerations were part of that.
Pascal was my second language, after BASIC. I was about twelve, and pointers took me a little while to understand. But the first hurdle was not having line numbers. It seemed weird.
In the end, it was definitely worth the effort, and I learnt good habits from it. I used it in college, and I suppose I kinda still do, because I do a lot of PL/SQL.
He was hugely important for generations of coders.
R.I.P. Niklaus Wirth. Your ideas, languages and designs were the inspiration for several generations of computer scientists and engineers. Your Lilith computer and Modula-2 language kindled a small group of students in Western Siberia's Akademgorodok to create KRONOS - an original RISC processor-based computer, with its own OS and Modula-2 compiler, and lots of tools. I was very lucky to join the KRONOS team in 1986 as a 16 yo complete beginner, and this changed my life forever as I became obsessed with programming. Thank you, Niklaus.
When I first got to play with Turbo Pascal (3.something?), I was more impressed by the concise expression of the language in the EBNF in the manual than by Turbo Pascal itself, and it was what made me interested in parsers and compilers, and both Wirth's approach to them and the work his students undertook in his research group has been an interest of mine for well over 30 years since.
I hold an old print of his Pascal language report near and dear on my bookshelf. he bootstrapped oberon with one peer in 1-2 years.
his preference for clarity over notational fancyness inspired so many of us.
the Pascal family of languages are not only syntactically unambiguous to the compiler, they are also clear and unambiguous to humans. the Carbon successor to c++ strives for the same iirc.
Wirth made one of the most critical observations in the whole history of computing: as hardware gets faster, software gets more complicated, more than compensating for the gains and slowing things down even further.
Still remember at 14 scrounging $$ together to buy a 2nd hand copy of a Modula-2 compiler for my Atari ST and then eagerly combing through the manual as my parents drove me home from the city. Really was a different era. Like a lot of other people who have posted here who probably came of age like me in the 80s, I went from BASIC to Pascal to Modula-2 and only picked up C later. Wirth's creations were so much a part of how I ended up in this industry. The world of software really owes him a lot.
The greatest of all quiche eaters has just passed away. May he rest in peace. https://www.pbm.com/~lindahl/real.programmers.html But seriously, PASCAL was the first programming language I loved that's actually good. Turbo Pascal. Delphi. Those were the days. We got a better world thanks to the fact that Niklaus Wirth was part of it.
I haven't read that one yet, but "Algorithms + Data Structures = Programs" is just an absolutely beautiful gem of a book. It embodies his principles of simplicity and clarity. Even though it's outdated in many places, I adored reading it.
Speaking of which, that book is one of the very few sources I could find that talks about recursive descent error recovery and goes further than panic mode.
There's also an interesting book "A model implementation of standard Pascal" - not by Wirth - that implements an ISO Standard Pascal compiler in Standard Pascal, written as a "literate program" with copious descriptions of every single aspect. Basically, the entire book is one large program in which non-code is interspersed as very long multiline comments.
compiler.pas can be compiled with a modern Pascal compiler, but the resulting compiler cannot compile itself. I don't know if that's caused by a transcription error, a bug in the modern compiler or a bug in the Model Implementation.
I would love it if somebody gets this working. I don't think I myself will continue with this project.
This is a huge loss in computer science. Everyone interested in computing, no matter if using other languages than Pascal or derivatives, should read his "Algorithms + Data Structures = Programs" book.
R.I.P.
"Algorithms + Data Structures = Programs" was a seminal book for me when I was learning about software development and it has influenced how I think about programming. Also, Pascal (in its various dialects) was my main language for many years on multiple platforms (CP/M, MS-DOS, Windows).
After having read some of the comments on Pascal here -- fellow HNers, what's your view on Pascal as a teaching/introductory language in 2023, for children aged 10+? Mostly thinking of FreePascal, but TurboPascal in DOSBox/FreeDOS/SvarDOS is also a possibility.
I'm also thankful for references to "timeless" Pascal books or online teaching materials that would be accessible for a 10+ year old kid who is fine with reading longer texts.
(My condolences are below, fwiw. His death is, interestingly, a moment of introspection for me, even if I'm just a hobbyist interested in small systems and lean languages.)
Niklaus Wirth is most famous for Pascal but the best language is his last, namely Oberon which is both smaller and more capable than Pascal. If you are interested in a freestanding compiler (separate from the operating system Oberon), have a look at OBNC.
> what's your view on Pascal as a teaching/introductory language in 2023, for children aged 10+?
I think it's still the best language to start with.
And don't let yourself be dissuaded by comments here about "no ecosystem" etc; that's BS, IMnsHO. There are tons of compilers and IDEs you could use, from Free Pascal (with or without Lazarus), via PowerPascal (IIRC) and other smaller implementations, to the old versions that Borland / Inprise / CodeGear / Idera / Embarcadero have released as freeware over the years.
I wouldn't teach Pascal any more. While the ecosystem around it is not quite dead it is not alive either. So, everything feels a bit fallen out of time. At least to me it would be demotivating to learn the Latin of computer science.
My very first language was Pascal. I have since forgotten it, but distinctly remember the feeling computers are fun! And the red pascal book. Thank you Niklaus, for all the fun and impact you had on subsequent languages.
I really appreciate his work. He had a full life. Since yesterday, without knowing, I was just studying a section of a book detailing the code generation of one of the first Pascal compilers for the CDC 6400.
I was just exploring Pascal last month. I've been meaning to do some more programming in it. I think it's a good compromise for someone who wants a lower level language but doesn't want to use C or C++. The FreePascal compiler also rips through thousands of lines of code a second so the compile times are really short
RIP King. 2nd language I learned was Pascal (Turbo 5 then 6) in high school. Tried UCSD P-System from a family friend with corporate/educational connections on 5.25" but didn't have a manual, and this was before the internet. I could/should have tried to use the library to get books about it, but gave up.
Fond memories; I feel like the 90s kids were the last ones to really get to appreciate Pascal in a "native" (supportive, institutional) setting.
I also loved learning Oberon/Bluebottle (now A2 I guess), which I was so fascinated with. I think that and Plan 9's textual/object interface environments are super interesting and a path we could have taken (may converge to someday?)
RIP, and thanks for helping indirectly to put me on my career path.
I learned pascal fairly late in the grand scheme of things (basic->6502 assembly->C and then eventually pascal) but it was used for the vast majority of my formal CS education first by instruction, then by choice, and eventually in my first real programming job. The later pascal dialects remain IMHO far better than any other languages I write/maintain/guide others in using. Like many others of his stature it was just one of his many hits. Niklaus Wirth is one of the giants I feel the industry stands on, and I thank him for that.
"All those moments will be lost in time, like tears in rain..."
I bought the Art of Computer Programming volume 4A a few years ago and didn't even start reading it. 1-3 I read when I… god, my youngest child is almost that age now.
I think tonight is the time to start on 4A, before we lose Knuth too.
And as I picked it down I noticed that, almost by coincidence, AoCP stood next to Wirth's PiM2. It wasn't intentional but it feels very right. There's a set of language books that end with Systems Programming with Modula 3, the Lua book. Thinking Forth, PiM2, then a gap, then the theory-ish section starts with five volumes of Knuth. Sigh.
Coming from ZX Spectrum at home and seeing the beauty of Turbo Pascal on an IBM PC-compatible has greatly contributed to my love of programming. R.I.P., Professor Wirth.
... and you never will, since OSes don't provide anything like that out of the box, and to make a design ecosystem work like that, a major effort for fat clients would be needed. In a time when dev tools are mostly free (apart from IntelliJ IDEA, but that approach has its own drawbacks) and focused on other technologies / platforms.
Pascal (Turbo Pascal on a PC) was my second programming language after Assembler and some C (had a copy of the Aztec C compiler) on an Amiga when I was 17ish. Pascal taught me modular programming, breaking down large systems. I'd written my own matrix calc library and would program animations for my physics class. And I learned the basic concepts of OO. It was a joy to program in.
sad. after learning basic on a zx81 my father found thrown out in the trash bin (i still have that machine today), my parents got me an 8088 PC. when my friends were playing games on those atari st and amiga machines, i was programming in turbo pascal 3.0, which i found via a cheap book that came with a 5.25" floppy. pascal is the first true language i learned and i would probably not be coding today without Niklaus. he changed my life, and he had a huge, huge impact on computing: pascal, oberon, delphi, and many more things. i will miss him dearly.
Pascal was the first programming language I ever learned, and a book on it that he coauthored was the first programming book I ever purchased. I hold him (and Pascal) in a special place in my heart.
Modula-2 had a huge influence on my early understanding of Software Engineering and Computer Science. I feel it is one of his under-valued contributions. RIP Niklaus. One of the great ones.
algol isn't a piece of software, so it doesn't have maintainers. i don't know if the algol committee ever officially disbanded but wirth had already resigned before algol 68 came out
Oh sorry, I misremembered the list and meant Oberon but fair point. I had just noticed that the last stable release was in 2020.
If I had read the wording more closely, the language was _designed_ by Wirth, but that doesn't necessitate him being fingers-to-keyboard (or whatever modality), despite it saying he was the developer.
Perhaps perversely (or maybe it's just a reflection of my own middle age?), the HN black bar is one of my favorite aspects of HN. Death rites are essential, but their significance is often lost on the young who (naturally) pervade tech; what HN has developed with the black bar is really perfect.
Anyway: I trust we're just seeing natural human latency here, but this clearly merits the HN black bar. RIP, Nik Wirth -- truly one of the giants, and someone whose work had a tremendous personal influence for so many of us!
Not trying to be crude, but is someone passing away after a long, rich life of 89 years something to mourn? Isn’t that kind of the best case scenario?
For me something like a black banner signifies a tragedy, not merely a death. A bunch of children being shot, a war, a disease ravaging a country, etc.
I’m curious to learn others’ perspectives however.
I think mourning is about more than just tragedy. It's a recognition of loss. And the tradition of black things around death has seemed more a sign of respect than an indication of some tragic underpinning. But I actually don't know the history of the tradition, so I am happy to be corrected.
> Not trying to be crude, but is someone passing away after a long, rich life of 89 years something to mourn? Isn’t that kind of the best case scenario?
It can be "kind of a best case scenario" and yet you still mourn the loss. Mourning doesn't require a tragedy.
My grandmother died in her sleep at 94, pretty healthy all things considered (still had a good head, could putter along, and was in her own home of more than 60 years), after having had a great day. Pretty much the best death she and we could have hoped for. I still wouldn't have minded having her nearby for a few more years.
You know, I had a comment earlier about the importance of death rites being broadly lost on the young (and without meaning to sound pejorative, I have to believe that you are relatively young). I had thought to myself that I was perhaps being unfair -- surely even a child understands the importance of a funeral? -- but your comment shows that I wasn't wrong.
So, as it apparently does need to be said: we're humans -- we mourn our dead. That is, the black bar denotes death, not tragedy; when we mourn those like Wirth who lived a full life, we can at once take solace in the fullness of a life lived and mourn that life is finite. The death rite allows us to reflect on the finiteness of our own lives, on the impact that Wirth had on us, and on the impact that we have on others. You are presumably too young to have felt this personal impact, but I assure you that many are brought back to their own earliest exposure to computing -- which, for many of us, was Pascal.
Again, RIP Nik Wirth; thank you for giving so many of us so much.
While very out of fashion these days, a black armband used to be a signal of mourning someone's death, whether the death was a "tragedy" (likely meaning unexpected, particularly violent, particularly early, or something similar) or not. The black bar is a digital imitation of that.
Niklaus Wirth contributed quite a bit to our field, and, directly or indirectly, impacted many of the people who frequent this (programming technology oriented) site.
A lot of tragedies happen in the world, but you're not going to see a black bar on HN for every one of them. It's not so much about the magnitude of the loss, but the contribution that person made to the history of computing.
Some people say that it doesn't matter if someone dies at age 89 -- after they have lived a full life and contributed all they had to give -- it's still just as sad and shocking.
Personally, I don't agree, to me it's just not as sad or shocking. People don't live forever and Wirth's life was as successful and complete as possible. It's not a "black day" where society truly lost someone before they fulfilled their potential.