A Plea for Lean Software (1995) [pdf] (ethz.ch)
163 points by tosh on Aug 5, 2020 | 137 comments



It is worth re-posting this, and re-reading Wirth's works every couple of years. The module concept of Oberon's predecessor Modula-2 is still unrivalled today: .def modules specify the interface and can be compiled separately from their .mod implementations, which need not even exist yet while a client application is already being coded against the compiled interface in a type-safe way.

Also, Wirth's book Compilerbau (in German; not sure if it was translated) is a piece of pristine clarity; at just ~100 pages in pocket paperback form, anyone reading it immediately feels that writing a compiler or byte-code interpreter is simple and something they can do.


I love Wirth's work, but that has been available in Ada, Mesa, Mesa/Cedar and plenty of other languages as well.

And since it is available in Ada, it is rivalled today.

More to the point, Ada allows for multiple interface packages, an idea also copied by Modula-3, where the same package can be exposed in different ways to multiple clients.

For example, the same package can have a public interface for in-house consumption that is wider than the official public interface for third parties.


Ada came to mind when the parent mentioned separate interface and implementation files. Making methods in the implementation private unless they appear in the interface was an inspired design decision. I use only Oracle's PL/SQL dialect but I appreciate the design of Ada more the longer I code. Honestly, I'd consider using full-blown Ada in modern software development. It gives you the ability to write really clean code.


When I wrote Ada code long ago, it was only used on military projects.

To me, it was hard to write code in Ada. Lots of niceties from other languages were unavailable in Ada, by design. For example there were no variable argument lists.

It grew on me though, and several years later I worked on a commercial project that used Ada. I was surprised because I expected adopting Ada to be like adopting the tax code.

Then I realized one thing - although Ada is harder to write, it is nice to have an existing Ada project. And people who have done Ada for a while learn to think in Ada, and it's not as hard to be expressive.

It's also possible to be quite precise in Ada. You can know exactly what the largest or smallest integer is. Moreover, you can define integers of a specific range, like -11 to 219.
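For comparison, a rough sketch of the same idea in OCaml (the Score name is made up; Ada checks the range natively on every assignment, while this analog can only check at construction time):

    (* Hypothetical analog of Ada's "type Score is range -11 .. 219". *)
    module Score : sig
      type t
      val make : int -> t option   (* None when out of range *)
      val to_int : t -> int
    end = struct
      type t = int
      let make n = if n >= -11 && n <= 219 then Some n else None
      let to_int n = n
    end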

Nowadays all of that has matured and I think Ada is a viable commercial language, and interesting things like SPARK have happened.

Too bad in the intervening years other languages haven't changed much.

For example, C could have added modules. I guess nobody cares about C.


I think that by Ada 95 they were already available; were you still using Ada 83?

Yeah, just check the list of features for C2X; WG14 isn't that keen on innovating much, nor on fixing C's flaws.


I notice that Ada 95 included inheritance. My experience in other languages with inheritance is that the feature creates a lot of complexity. Have you used inheritance in Ada and, if so, has it created any issues?


The problem with many other languages is that they do everything through inheritance. In most popular languages, inheritance is set up such that it does anything you want it to. This is what creates complexity, not the inheritance in and of itself.

Ada does things slightly differently. It manages to separate out the various parts of OOP into different language constructs, and this makes it possible to pick and choose what you need, and not get everything including the kitchen sink when you try to use one thing (like inheritance).


I have been using OOP languages since I learned OOP with Turbo Pascal 5.5 back in 1991.

So no, I have never had any big problem with inheritance in any language, and as far as Ada is concerned, its tag-based dispatch is also quite interesting as an idea.


I'd use Ada if there was a really good open source version with an MIT or similar license. It does "feel" nice.


Nim has a type system that is heavily inspired by Modula/Oberon/Ada.


Did you have a look at GNAT?

Your generated code is free:

https://en.m.wikipedia.org/wiki/GNAT#License


OCaml (and probably Standard ML) also have powerful module systems that I would argue rival Modula-2's module system.


Don't forget C, which has separate compilation as well ;-). And you can trivially make multiple interfaces, too.


Separate compilation without type safety is hardly much better than macro assemblers.

So yeah, you can go that route, and I have done so; poor man's modules helped keep me sane with C, but it requires discipline.


> but it requires discipline.

That pretty much defines software engineering.

For as long as I have been coding, I’ve watched people and corporations chase the will-o’-the-wisp of the “undisciplined coder,” where we can hire naive, barely-educated children, straight out of school, and mold them into our personal “coding automatons,” or even better, let people who are experts in other domains create software, without having to deal with “annoying” engineers.

So...how’s that working out?

Even when we have AI creating software (and we will), the AI will still need disciplined requirements, which, I suspect, will look a lot like...code.


Pretty good: every solution that expects discipline is a source of revenue for security consulting, code-quality automation products and conferences.

The outcome of Azure Sphere having chosen C as their main SDK language is playing out without surprises:

https://techcommunity.microsoft.com/t5/internet-of-things/az...


> That pretty much defines software engineering.

Sure, but moving things that require discipline into type systems and tools makes working with others easier.

Not to mention that no matter how disciplined you are, you will make mistakes, and having the compiler catch those for you is valuable.

It also means that the discipline applied by the programmer can be focused on areas that can't be checked or enforced by a compiler.


> So...how’s that working out?

Fantastically if the goal is to set up recurring revenue to maintain the produced systems.

> Even when we have AI creating software (and we will), the AI will still need disciplined requirements, which, I suspect, will look a lot like...code.

https://github.com/webyrd/quines is an interesting example of writing code to create code based on a specification. Perhaps not the AI code generator of some people's dreams, but it exists today.


There’s just a certain amount of effort that you can spend in a certain amount of time. And discipline takes effort. If you need less discipline you can spend your effort somewhere else. For instance, you can put effort into fitting your code into an ownership model like in Rust, or prove the code with Coq. The difference is that with C you can never know if there was enough discipline (usually there isn’t).


It’s my experience that discipline is “front-loaded.” It takes conscious effort for some time, while establishing a habit, then, it becomes pretty much “free.”

For example, when I was writing ObjC and PHP, I got used to using Whitesmiths indenting. Once I started writing Swift, it was more appropriate to use KNF style.

It took a couple of months of having to remember to not use the old indenting style, but I haven’t given indenting any thought in years.

“We are what we repeatedly do. Excellence, then, is not an act, but a habit.” -Attributed to Aristotle


> So...how’s that working out?

So well that it was a large part of the reason I accepted an offer (today in fact) somewhere else, life is too short for that mess.


How are modules related to type safety?


Because if you program in a type-safe language and call something that is compiled separately from you, you'd still like to maintain type safety across that call boundary.


You absolutely get type safety across module boundaries with C, in that if provider and user both compile against the same interface, the result will be typesafe.

You could even have this type safety on the linker level as far as C is concerned. You just need an object file format that exports C types for symbols. This is not done on any of the (few) systems I know, and probably for practical reasons.

Some other languages give you this link time safety, but I assume at the cost of less interoperable object files.


> For example, the same package can have a public interface for in-house consumption that is wider than the official public interface for third parties.

Isn’t this similar to package private in Java or internal in C#?


Not really, because the interface is separate from the implementation and you can provide multiple interface packages for the same implementation package.

So client A sees mylib-public-A, client B sees mylib-public-B, but both link to the same mylib so to speak.
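A rough OCaml sketch of that arrangement (module names invented; Ada expresses it with separate interface packages):

    (* One implementation... *)
    module Mylib = struct
      let query x = x * 2
      let debug_dump x = "state: " ^ string_of_int x
    end

    (* ...seen through a narrow signature by third parties, *)
    module Mylib_public_a : sig
      val query : int -> int
    end = Mylib

    (* ...and through a wider one by in-house clients. *)
    module Mylib_public_b : sig
      val query : int -> int
      val debug_dump : int -> string
    end = Mylib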


Ahhh, neat!


Yes, except that still limits you to two consumers.


That would be "Compiler Construction", the last version being freely available[1].

Given that Oberon is a simpler language than pretty much all of its predecessors and that the latest revision went even further, I'd be interested in what Wirth thinks about contemporary strongly typed systems languages like Rust or Go (the latter being quite, erm, influenced by Oberon). Or heck, Eiffel, being the language of his successor at ETH Zürich.

IIRC he didn't have a high opinion of functional programming.

[1]: https://people.inf.ethz.ch/wirth/CompilerConstruction/index....


Ah, found it here[1]:

"To postulate a state-less model of computation on top of a machinery whose most eminent characteristic is state, seems to be an odd idea, to say the least. The gap between model and machinery is wide, and therefore costly to bridge."

[1]: https://people.inf.ethz.ch/wirth/Articles/GoodIdeas_origFig....


Those sentences form the opening of a rather peculiar paragraph. You don't have to read past the abstract of the seminal paper on functional programming, Backus's "Can Programming Be Liberated from the von Neumann Style?", to see that the goal isn't to eradicate state, just to tame it. "Unlike von Neumann languages, these systems have semantics loosely coupled to state." (emphasis mine) Loose coupling is not the same thing as elimination.

In the next paragraph, Wirth further indicates that he has chosen to argue against a caricature of functional programming when he suggests that "[Functional programming] probably also considers the nesting of procedures as undesirable." That's another strange thing to insinuate against a programming style that is noted for its use of closures.

(For that matter, where would closures be without state?)


This is a weird phenomenon. There seem to be two conflicting philosophies with regard to reducing accidental complexity in software.

The "less is more" crowd adheres to avoiding and reducing feature bloat and writes lower level, often efficient, very consistent code that is easy to grok.

And the "correctness by concept" crowd, with many variants thereof. Expressive type systems, functional programming, abstraction and general "higher order-ness" are dominating themes here.

Languages and paradigms often land somewhere on the spectrum between these two. I wonder if these concepts can be married in some way, and what we would have to give up to do so.


I'm not sure that they're all that different in the first place. At the one "less is more" place I worked, they also relied pretty heavily on abstraction and general higher-order-ness. They just had a different way of doing it: Service boundaries and protocols. Arguably the Unix philosophy is similar: A bunch of small programs that do one thing and do it well, which you can chain together with pipes.

The CTO's official reason for the "less is more" philosophy was not that he thought more powerful language features weren't useful; it was that sticking to less powerful features discouraged the growth of individual modules into large complicated tangles, by making it actively painful to do that.

My one, somewhat guarded, criticism of that approach is that I think it may have depended critically on the company being in a position to maintain some very selective hiring practices. When you limit yourself to only hiring people who can really appreciate Chuck Moore's style, well, you've limited your hiring quite a bit. I could be convinced that the "correctness by concept" approach is less fragile and dependent on having a rigid corporate monoculture in order to work out properly.


> That's another strange thing to insinuate against a programming style that is noted for its use of closures.

Let's be honest: there are two completely opposite meanings of "functional programming" that we're burdened with having to put up with, and most of what passes for "functional programming" straight up isn't. Somehow the people making heavy use of closures manage to pass themselves off as doing FP, even though that style is decidedly unfunctional. I wish there were wide recognition of the distinction between this pseudo-functional (PF) style and actual FP, and that we'd call it out as appropriate.

Frustratingly, the inhabitants of this bizarro world who program in the PF style still tend to undeservedly hold the same smug expressions on their faces as the FP folks do with regard to OO, noses lifted about OO being unclean, even though the PF folks' closure-heavy style is no better: PF is equivalent to OO, except for being less syntaxy, which only leads to it being harder to spot the trickery employed in the PF folks' programs. This is fairly annoying.


Huh? I don't know if you are aware of this, but higher-order functions make heavy use of closures. I don't think there is even a purpose to higher-order functions without closures. I don't know how Haskell is implemented internally, but conceptually, it makes use of closures quite heavily. It uses closures for currying, it uses closures for monads, and for everything else. If you assert that heavy use of closures alone is decidedly unfunctional, then Haskell must be unfunctional, which is obviously false.

I don't know what definition of functional programming you are using[f], but you don't have to be arrogant about it. Your comment is fairly annoying.

[f] Let me guess, only immutable variables and pure functions? How PF.


> higher-order functions make heavy use of closures

No, you're conflating higher-order functions with closures. Higher-order functions that use closures make use of closures. Higher-order functions that don't use closures do not.

> I don't think there is even a purpose to higher-order functions without closures.

I'm not sure how anyone can say this with a straight face, let alone someone who considers themselves to be in a position to challenge somebody about whether or not they grok functional programming.
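To make the distinction concrete, a small OCaml sketch:

    (* succ captures nothing from its environment: a higher-order
       call that involves no closure at all. *)
    let bumped = List.map succ [1; 2; 3]

    (* (fun x -> x + n) captures n: that argument is a closure. *)
    let add_to_all n xs = List.map (fun x -> x + n) xs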


> No, you're conflating higher-order functions with closures

Well, if you are actually doing the real hardcore FP™, and not just the lame pretentious PF, then yes, higher-order functions will indeed very much make heavy use of closures. Did you miss the part where I gave Haskell as an example? And note that I didn't say that closures and higher-order functions are the same thing.

> Higher-order functions that use closures make use of closures. Higher-order functions that don't use closures do not

How are these tautologies even an argument? This does not say anything meaningful, like saying wet water is wet. Don't worry, I'm not even trying to keep a straight face while reading what all you have said so far.

> I don't think there is even a purpose to higher-order functions without closures.

Heh, I did qualify my statement with "I don't think" since I didn't really give that much thought to that one.

But okay, I admit that statement is dumb and invalid, since the usual map, filter, reduce functions are good examples where a higher-order function doesn't need closures. But more often than not, you really do need to use closures to do anything beyond simple cases like map(array, x => x*x).

My overall point still holds. I'm still in a very good position to challenge your dogmatic belief that heavy usage of closures is pseudofunctional and unfunctional.


> How are these tautologies even an argument?

They're not, and that was exactly the point of my comment: it's a circular argument that you have to take responsibility for, not me. You seem to have missed that—it's your nonsense claims that are in focus when the tautology is being spelled out.

Higher-order functions and closures are different things.

> Well, if you are actually doing the real hardcore FP™, and not just the lame pretentious PF

I wouldn't call the pseudo-functional style "hardcore"—any more than OO is hardcore, given that they're equivalent. It's frequently portrayed as the naive/easy way out. Actual FP, on the other hand, is hardcore. (And pretentious—which is an odd attempt to try to stir me up; do you think I'm an advocate of FP or something? I suggest re-reading.)

> But more often than not, you really do need to use closures to do anything

Yes, which is why I'm not an FP advocate.

I was very clear in my original comment. The pseudo-functional style is a preference for how to write programs, and therefore immediately defensible as valid. What's not defensible, though, is equivocating on the meaning of "function" while simultaneously trying to lump the pseudo-functional style in with FP. The moment one starts making heavy use of closures and carrying around state is the moment one forfeits the right to be smug about how unclean OO is, given the equivalence of objects and closures and given that one is no longer actually practicing FP.

> I'm still in a very good position to challenge your dogmatic belief that heavy usage of closures is pseudofunctional and unfunctional

No, you're not. It's unfunctional by definition.


It sure is easy moving the goalposts around when you have no ground to stand on. Please, please: you have said this much and still haven't even once defined what true functional programming™ is.

> It's unfunctional by definition.

There you go, more self-fulfilling tautologies. And for some magical reason, it's me who is making nonsense claims? How is "higher-order functions make heavy use of closures" a nonsense claim?

I have provided a very clear and direct counter-example that falsifies your core argument. On the other hand, you have provided zero actual rebuttals. In case it isn't clear, calling mine "nonsense, circular and tautological" and yours "by definition" doesn't count as an argument.

> The pseudo-functional style is a preference for how to write programs, and therefore immediately defensible as valid

Is the word style even relevant here? You can call it style, paradigm, or computational model, it doesn't change your point.

> What's not defensible, though, is equivocating on the meaning of "function" while simultaneously trying to lump the pseudo-functional style in with FP.

Ugh, I'm guessing your definition of "function" is a special amorphous one that changes meaning to conveniently support your claims.

> The moment one starts making heavy use of closures and carrying around state is the moment one forfeits the right to be smug about how unclean OO is, given the equivalence of objects and closures and given that one is no longer actually practicing FP.

No, repeating your statements doesn't make them true. Once again, see my original counter-example with Haskell. If you insist on ignoring it, then fine with me. I'm done here.


> It sure is easy moving the goalposts around

If I've moved the goalposts, you should be able to show where it happened. So do—point to it or fuck off.

As for the rest of your comment and being "done", that's fine. There's zero chance that I'm going to waste my time on a point-by-point rebuttal for anyone who's acting in this much bad faith, ignoring the points I've already made, and trying to pawn off the flaws in your arguments as mine.


Internally, Haskell's intermediate representation is a version of the lambda calculus. Which would mean that, practically speaking, Haskell is largely just one big pile of closures.

Which really shouldn't be a surprise. After all, you can't curry if you can't close.
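For instance, in OCaml (the story is the same in Haskell), partially applying a function hands back a closure over the argument already supplied:

    let add x y = x + y

    (* add 10 closes over the 10: add10 is a closure. *)
    let add10 = add 10
    let () = assert (add10 5 = 15)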


In the case of the former, I think you're misunderstanding Wirth. His statement isn't predicated on the idea that functional programming necessarily requires the language to eliminate state 100%; just that functional programming discourages state in favor of primitives less aligned with the machine.

For instance, functional programmers would almost all tell you that `map (x => ...) xs` is "better" than `for i from 0..len(xs): xs[i] = ...`. But the former, implemented trivially, is very slow: from the allocation of the closure to the allocation of the new list to the function calls on each iteration and the lack of tail recursion in `map`'s implementation (this is a trivial implementation, remember?)
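For concreteness, a sketch in OCaml: the textbook map below is the "trivial implementation", and the tail-recursive rewrite is the kind of optimization being alluded to:

    (* The trivial map: one stack frame per element, not tail-recursive,
       and it allocates a whole new list. *)
    let rec naive_map f = function
      | [] -> []
      | x :: rest -> f x :: naive_map f rest

    (* The tail-recursive version bounds stack usage at the cost of an
       extra List.rev pass. *)
    let map_tr f xs =
      let rec go acc = function
        | [] -> List.rev acc
        | x :: rest -> go (f x :: acc) rest
      in
      go [] xs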

Of course, the functional programmer would tell you, "Well, it's easy to optimize that, the performance issues are just because your implementation is too trivial", and Wirth would rejoin, "Too trivial? What's that?"


I don't think this is a fair point, since the two snippets do different things. One creates a new list, and the other does not. If, as a functional programmer, I was actually interested in mutating the existing sequence (e.g. for performance reasons), I would definitely write the loop.

If you're interested in maximum constness (which I tend to be, because I find it's almost always easier to read code where values don't unexpectedly change in a branch somewhere) then you'd be comparing

    let ys = map f xs
to

    let ys = []
    for i from 0 .. len(xs):
        ys.push(f(xs[i]))
where the former obviously makes it much more clear what's going on.

Sure, it's using "primitives further from the physical machine" but that is exactly what programming is about! You create a new layer of primitives on top of the old ones, where the new layer makes it slightly easier to express the solution to the problem you're solving. You do this incrementally.

When someone has built a more easily handled set of primitives for you, it would be silly not to use them, all else equal.

----

In other words: the only real reason to mutate values is to improve performance at the cost of readability, and at the cost of losing the ability to safely share that data across concurrent threads.

If, indeed, that is a cost you're willing to pay for the additional performance, no functional programmer I know would shy away from the imperative mutating loop.


They do two different things, but each does what the style it's written in encourages. Performance considerations aside, the functional programmer would rather create a new list (or, as you said, they're "interested in maximum constness" - the precise preference for statelessness Wirth is calling out); the imperative programmer would mutate the existing list.

Wirth is not talking about clarity in the sense of "can I look at the code and understand the high-level intent of the programmer"; Wirth is interested in clarity in the sense of "can I look at the code and understand exactly what it's doing, at every level?"

For Wirth, programming is not about using an endless stack of primitives that get you further and further from the physical machine, so much that they start to obfuscate what's happening at lower layers. It's about building the smallest, simplest stack of primitives such that you can express yourself effectively while still understanding the entirety of the system. The Oberon system includes everything from the HDL for the silicon all the way up to the OS and compiler in around 10,000 lines of code because you're supposed to be able to keep all of it in your head.

I'm not saying that any of this is correct, per se, nor am I arguing for it - I'm sympathetic to it in some ways and disagree with it in others (I am, in fact, very much into FP). I'm just trying to give a charitable and clear interpretation of his perspective. FP may not want to get rid of state in one sense, as you've pointed out; but it wants to get rid of state in another, and Wirth doesn't like that because it necessitates complexity - and Wirth hates that.


> For Wirth, programming is not about using an endless stack of primitives that get you further and further from the physical machine, so much that they start to obfuscate what's happening at lower layers. It's about building the smallest, simplest stack of primitives such that you can express yourself effectively while still understanding the entirety of the system.

Now that is an interesting perspective I hadn't even considered. I'm also not sure I would agree, but if I were interested in finding out more and found the OP unconvincing, where would I go?



Easy: he thinks they are full of bloat, including Go.

Each revision of Oberon-07 drops features; it is reduced to a C-like language with GC, with a single form of loop construct.


Got any specific citations? His general opinion seems pretty clear, but I would like to see him go into some details.

I think I remember him saying that if one would want to design a language, starting with Oberon would be his recommendation. In that regard Go at least does something right.

And it does at least have a specification, too, which is another item that Wirth is pretty adamant about.

I'd pay good money to have him and Meyer argue about design, syntax and semantics.


Easy, compare 1992's Oberon with Oberon-07 revisions from 2011, 2013, 2014, 2015 and 2016.

Each Oberon-07 revision, as mentioned, drops language features.

Also note that, as far as I know, he wasn't too keen on the offspring of Oberon, namely Active Oberon, Oberon.NET, Component Pascal and Zonnon.

Oberon-2 was his last collaborative work in the context of Oberon language family.

And while for me Active Oberon is the best one for systems programming (still in use in ETHZ OS classes), with support for several low-level features that in original Oberon require assembly, I doubt Wirth would appreciate it, given that it is Modula-3-like in size and features.

http://cas.inf.ethz.ch/projects/a2/repository/raw/trunk/Lang...


Oberon has three (not one) loop statements, namely WHILE, REPEAT and FOR:

https://www.miasap.se/obnc/oberon-report.html

If anyone is interested in using the language outside of the Oberon operating system, here is a freestanding compiler:

https://www.miasap.se/obnc/


Oberon yes, but I guess you missed the Oberon-07 part of the comment.


I'm referring to Oberon-07, which is the latest version of Oberon, last updated in 2016.


I stand corrected: what was dropped was LOOP and EXIT; I somehow mixed it up.

Sorry about that.


Given how many languages you know, and how many revisions of languages, you might be forgiven for having mixed up one detail on one revision...


Though this is why we shouldn't be snarky when replying to others. Aside from just being nice, we might be the one in error and not know it.


Exactly. Oberon is now a purely structured language in which each statement sequence is fully executed. There are no goto-like constructs.


> The module concept of Oberon's predecessor Modula-2 is still unrivalled today

I'm not familiar with Modula-2's module system. What does it provide that the module system of OCaml does not?

> .def modules specify the interface and can be compiled separately from their .mod implementations, which need not even exist yet while a client application is already being coded against the compiled interface in a type-safe way.

I believe .mli files can be compiled separately from the matching .ml files, and client modules can be compiled against an .mli that does not have a corresponding .ml file.
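For instance (a minimal sketch; the file and function names are made up):

    (* math_utils.mli -- compile the interface on its own:
         ocamlc -c math_utils.mli     (produces math_utils.cmi) *)
    val double : int -> int

    (* client.ml -- compiles against the .cmi in a type-safe way even
       though math_utils.ml does not exist yet:
         ocamlc -c client.ml *)
    let () = print_int (Math_utils.double 21)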

And OCaml also supports functors, so modules can be parameterized.

Sorry, not arguing that Modula-2's module system is not good, guess I'm just not convinced that it's unrivalled today. And for all I know ML's module system was probably influenced by Modula-2.


> The module concept of Oberon's predecessor Modula-2 is still unrivalled today

The module concept of Oberon is also very good (leaner than Modula's). There are also other languages with good module concepts, e.g. Ada or the CLR-based languages.

> Wirth's book Compilerbau ... is a piece of pristine clarity

For a certain type of compiler (rarely used today).


Sounds like COM type libraries.


"With Project Oberon we have demonstrated that flexible and powerful systems can be built with substantially fewer resources in less time than usual. The plague of software explosion is not a "law of nature."

What then, in the software world, is a "law of nature", and how might we discover such laws by examining evidence, such as the evidence deposited over the last 25 years of software proliferation?

Might we examine source code, or executable code?

Or number of users?

Or revenue from sales?

Or aesthetic qualities of the source code, such as structure, legibility, or maintainability, or portability; or qualities of the executable code, such as size, performance, or intuitiveness of the UI?


One thing that has been sorely missing IMO in the last 25 years of software proliferation is a complete usable system that can be understood in its entirety by a single human being in one lifetime.

It doesn't have to be your day-to-day desktop, but for learning, research and experimenting I think solutions with that goal would still be worthwhile.


Viewpoints Research Institute (VPRI) tried to make such a system around 10 years ago http://www.vpri.org (e.g. http://www.vpri.org/pdf/tr2012001_steps.pdf )

IIRC it bootstraps from a prescheme-like language into a Smalltalk-like dynamic language ( https://www.piumarta.com/software/maru https://www.piumarta.com/software/cola ), and uses PEG parsers ( https://en.wikipedia.org/wiki/OMeta ) to implement domain-specific languages, e.g. the Nile graphics language ( https://github.com/damelang/nile )


Thanks, I am aware of the research at VPRI. However, nothing coherent seems to have come out of it, yet I am running a complete Oberon system on my laptop as we speak, and all source and documentation is available at http://www.projectoberon.com. I really liked the idea, though.


I really wish he would release the updated Oberon book in another format besides PDF.


> a complete usable system that can be understood in its entirety by a single human being in one lifetime.

Wait, what does that even mean? I assume it cannot, then, run on hardware executing any form of microcode because that adds many man-years of what's essentially software complexity to the problem.

What I'm trying to say is that at some point you have to draw a line and say "anything below this line is considered the platform the software runs on, and does not need to be included in the understanding" and where you draw this line is completely arbitrary.

One might argue that x86 is a platform that doesn't have to be understood, as long as one understands its interface. Someone else can argue that the JVM is a platform whose internals need not be understood. Yet other people popularly picture the web browser as their platform. In the extreme case in the other direction, electrical circuits (with a dash of quantum mechanics, I think?) could be considered the platform of the x86.

I have yet to hear a rational argument for why a particular thing counts as "the platform" more than any other.


> Wait, what does that even mean? I assume it cannot, then, run on hardware executing any form of microcode because that adds many man-years of what's essentially software complexity to the problem.

Have you looked at the hardware design at http://www.projectoberon.com ?

> What I'm trying to say is that at some point you have to draw a line and say "anything below this line is considered the platform the software runs on, and does not need to be included in the understanding" and where you draw this line is completely arbitrary.

What I am trying to say is that Project Oberon proves it's possible to run it on a hardware design that can be understood by that same human being in one lifetime.

> I have yet to hear a rational argument for why a particular thing counts as "the platform" more than any other.

Again, I am talking about the complete thing.

And how is this even a discussion? The amount of added complexity when you compare an Oberon system with a modern phone or desktop is staggering. Especially if you go as deep as the microcode level.


Maybe part of the problem is that I underestimate how much a human can understand in a lifetime! Thinking more critically about it, you might be right that it's possible to accomplish more than I expect.


Yeah, I guess it depends on what legacy you're willing to part from.

Something like https://www.amazon.com/Elements-Computing-Systems-Building-P... is not hard, but it's arguably a toy system.

IMO Project Oberon starts from the same principles and builds up to a productive environment including text processing, a compiler, very basic hypertext, basic networking, and hardware to run it on.


NetBSD (and I'd guess OpenBSD) are closer to that than any other similarly functional operating systems I know of. Still quite a lot of code (and arguably not complete, depending on how you define that) but way easier than Linux.


I would say that Xv6 or Xinu are much closer than any BSD if you're willing to disregard the implementation of the compiler, graphical display and hardware/firmware it runs on.


If you reduce essential complexity to a minimum, such a task is trivial for experienced engineers and computing scientists, like Wirth himself.

Like many such products, it will work so long as you're able-bodied and only need the same human language as the author, preferably one that doesn't have complex requirements and can fit in 7-bit ASCII.

I no longer have the source, but a great example someone once gave was that Oberon had pretty minimal support for anything, to the point that sharing information by email was hard due to the "how do I share my paper" problem.


http://www.projectoberon.com/ has all of the source and the Oberon book


I don't really buy the core premise that "Enhanced user convenience and functionality supposedly justify the increased size of software, but a closer look reveals these justifications to be shaky".

Users can still choose to use vi, or emacs, or textpad, or whatever stripped-down, fast, minimalist software they prefer; the old stuff hasn't gone away. Meanwhile, my IDE re-compiles/re-parses my program as I type, and tells me immediately if there are errors. I choose to use the bigger, "slower" program because it makes me dramatically more productive. Sure, input latency might not be as good as a barebones editor, but that cost more than pays for itself (as always, for me -- YMMV).

I can appreciate the aesthetic desire for a lean, minimal program, but to ignore the very real productivity benefits that come from tolerating larger programs is, I think, myopic.

This comes up around here a lot in the context of slow web pages; a very important thing to note that I think is often overlooked is that it's not just the user's productivity that matters. If developers have to spend 2x the time optimizing their code to fit in a limited memory allocation, then that's 2x the cost to the consumer for the same amount of features (or 1/2 the features for the same price). If speed/performance is a feature that your customers want to pay for, they will let you know. If it's not, then your competitor will eat your lunch by building more features that users actually care about, while you optimize your existing feature-set.

Programs are bounded by "feels slow" on one side, and "expensive to optimize further" on the other. If 99.9% of your users (i.e. the lay users, excluding the experts in the HN crowd) don't perceive your program/page to be slow, why would you optimize further? Sure, if you're Google, a few ms faster can equate to millions of dollars of revenue, but that's not the fitness landscape that most software evolves in.


> "If speed/performance is a feature that your customers want to pay for, they will let you know. If it's not, then your competitor will eat your lunch"

This doesn't really disagree with the argument from the piece. You could summarize this line of thinking this way: The pressures at work in the software market drive us to create slower and slower software. It seems like everyone actually agrees on that point.

The only real disagreement seems to be how to respond to the fact. It's either, "Well shucks, that's just the way it is. At least I have auto-complete" or a feeling that, despite the pressures to just bloat and expand and slow down forever, it would be nice if we tried to combat it.


I think that's a fair comment.

My counterargument would be that if the new way of building software proposed in the OP were actually a significant improvement, someone would have built a new IDE with it and eaten JetBrains' lunch. Instead, we got Atom, prioritizing pluggability/extensibility/hackability over footprint/latency.

To be clear, I'm not arguing for a fatalistic position that "there's nothing we can do about it". We can, and do, optimize performance when required.

I'm just arguing for a more nuanced appreciation of the cost/benefit calculation that's at play here.

Either there's an explicit cost/benefit being done (e.g. a PM weighs the "go faster" story vs. the "add new widget" story) or an implicit one (app #1 prioritizes speed, app #2 prioritizes features, and customers vote with their dollars; market share reveals which is more valuable).


It is not possible to eat JetBrains' and others' lunch when only a subset of the current generation is willing to pay for their tools; that is how Atom and Electron-based GUIs get born.


Thanks for your comment; as a Vim user who often turns up my nose at “slow” IDEs, it helped me to understand a perspective that I would otherwise ignore, and reminded me that ultimately it’s what works for the individual that matters over any dogmatism that I may have.

To offer an alternative perspective on this bit,

> If 99.9% of your users (i.e. the lay users, excluding the experts in the HN crowd) don't perceive your program/page to be slow, why would you optimize further?

For me personally as a practitioner of software engineering; because I can, and it’s meaningful to me to optimize in the broader context of designing and implementing a system, in part to see what’s possible in addition to the personal enjoyment of going through the process.

That being said, for a company focusing on optimizing value to customers as a function of engineering resource allocation, I agree that it doesn’t make sense to optimize. You summed it up nicely with “...but that’s not the fitness landscape that most software evolves in.”


> Thanks for your comment; as a Vim user who often turns up my nose at “slow” IDEs

Note that the mention of Vim by the original commenter is a red herring arising out of lack of familiarity with Wirth. In Wirth's eyes, even Vim would appear monstrous.

To appreciate Wirth's point requires realizing that his frame of reference is the Oberon (eco)system, which includes an OS, a mouse-driven graphical shell, a compiler, and the underlying CPU in an HDL all in a few tens of thousands of lines of code.


> If 99.9% of your users (i.e. the lay users, excluding the experts in the HN crowd) don't perceive your program/page to be slow, why would you optimize further?

And how do you know if they perceive it to be slow? I suspect that if you did optimize it to be faster (e.g. lower latency interactions), they would notice the improvement, even if they weren't sure why it "felt" better.

> If speed/performance is a feature that your customers want to pay for, they will let you know.

I don't buy that. Customers often don't even know it's a possibility that their software could be faster.

In my experience, often lay users don't think of individual programs as being fast or slow. They either assume the "internet" is being slow, or their computer is slow. However, I think they do notice when performance is better. They say it "works better" or "feels more reliable", but they don't necessarily know why.


> And how do you know if they perceive it to be slow?

Ask them, or listen to what they are saying without you asking. (I.e. UI/UX research 101.) For an IDE with a large userbase (for example JetBrains' product line), there are plenty of bug reports / user reviews which complain about performance, and so it's possible to get a picture of how many users perceive your app to be slow.

Note, I chose my words carefully there -- if your users don't perceive the site to be slow, they may still respond positively to imperceptible performance improvements. To measure that you do A/B experiments. There's a lot of ink spilled on this subject; Google has done some really good research here.

> Customers often don't even know it's a possibility that their software could be faster.

Fair, I probably oversimplified there. More precisely, "If speed/performance is a feature that your customers will pay for, you should be able to measure that fact."

You don't necessarily need to invest in the performance improvements to measure this; when Google investigated this they simply added artificial delay and measured the effects on revenue. From this you can estimate the gradient of the $-revenue / ms-latency slope, and figure out how much it's worth investing in improving your app/site's latency.


> so it's possible to get a picture of how many users perceive your app to be slow.

No, it isn't. That's "Recognizing Survivorship Bias 201".


People, depending on their expectations, will perceive your app as one of these categories:

- A: not slow

- B: slow, but still using

- C: intolerably slow and no longer use it

Indeed, you can't measure C with a survey. But for most apps, it's probably reasonable to assume a distribution of thresholds where, if B/(A+B) (i.e., the result you get on a survey of users for "is it slow") is less than 5%, there probably aren't many in C.


That's bad science.


User surveys are never going to be good science. Surveys are confounded by massive variation in response rates. For instance, I haven't filled out a user survey in many years because I have better things to do. They can only ever give a vague indication that something might or might not be a common issue.

The scientific approach to find out if app slowness is a problem is to make a much faster version, give that to some fraction of users in an RCT and see if their usage goes up. But that makes no sense for a business. If you make the effort to develop a fast version, just give that to everyone and move on to the next thing that might get you more users.



Please point to the section of the paper you linked to that says Google believes that survivorship bias doesn't exist.


> it's possible to get a picture of how many users perceive your app to be slow.

> [paper showing Google measuring how many users perceive their app to be slow]


What are you not getting here? I was willing to believe that you were familiar with survivorship bias before and had just made a mistake, but now it's seeming as if you're not just completely unaware of it, but unwilling to even look it up. And doubling down like this comes across as incredibly obnoxious.

Why don't you write to Jake, Hilary, and Maria and see if they'll explain to you why the existence of their paper doesn't mean what you're now trying to argue?


After a threshold the business value drops off substantially. How often does a consumer pick between competitors and choose based on speed? I would assume that's vastly in the minority.

You see some shakeups here and there. A large push to a new browser once a decade, etc. but features are king most of the time.


> After a threshold the business value drops off substantially.

Sure, but I don't see any reason better tools couldn't get us closer. Instead of starting out with languages and tools that will almost guarantee slow software, maybe we work on designing languages that provide productivity while at the same time encouraging leaner software.

I don't disagree that overall the environment software is developed in doesn't encourage lean software. But I do think it's worthwhile to see if there are ways to improve the situation.

Also, faster software can often provide opportunities for new features that weren't possible before. For example, something that used to be a batch or background process can now become a real-time feature. That is something that customers likely would find useful.


> Instead of starting out with languages and tools that will almost guarantee slow software, maybe we work on designing languages that provide productivity while at the same time encouraging leaner software.

The key point I'm making is that you need to consider the whole system/environment; developers aren't choosing "slow languages" because they simply don't care about performance, they are choosing those languages because they are more productive in other dimensions, and the return on that productivity gain exceeds the return on optimizing for more speed.

> Also, faster software can often provide opportunities for new features that weren't possible before.

Absolutely -- we do see performance improvements happen, when they give users something that they actually value; for example the reason IntelliJ triumphed over Eclipse was by doing more sophisticated compiling/parsing in real-time, which was only made possible by significant performance optimizations.


https://www.youtube.com/watch?v=k8gIJOy0c2g

It's not just a developer writing a program for one user. It's a very small number of developers writing programs for many, many users (unless you make custom software). The impact of what devs do is multiplied by the number of users. That includes the negative impacts of being slow or requiring more resources than necessary.

Imagine Photoshop users having to buy more RAM not just because it is needed by some functionality, but because that functionality used more RAM than was actually necessary. The sheer waste, in money, carbon footprint, and pollution...

Granted, optimising takes time, time that would otherwise be spent writing features. There's a tradeoff there. But we should keep the competition in mind here: there's a difference between a feature being unavailable because you spent time optimizing, and a feature being available elsewhere instead. While it makes sense for Photoshop devs to push features as fast as they can, it doesn't do users any good if Krita already offers those features. (I'm ignoring incompatibility here, but you get the idea.)

The incentives are all wrong in my opinion. We should have fast, lean, correct programs to work with. They just don't happen in the current economic system.


> Nikolaus Wirth: A Plea for Lean Software (1995)

He spells it Niklaus, see https://en.wikipedia.org/wiki/Niklaus_Wirth. And yes, he has a sound, pragmatic attitude towards software systems.

Nikolaus is another famous person.


Nikolaus with an "o", aka St. Nicholas or (Saint) Nicholas of Bari, was an early Christian bishop of Greek descent based in what is now Demre, Turkey.


In 1931 the Coca-Cola Company began placing Coca-Cola ads in popular magazines. Archie Lee, the D'Arcy Advertising Agency executive working with The Coca-Cola Company, wanted the campaign to show a wholesome Santa who was both realistic and symbolic. So Coca-Cola commissioned Michigan-born illustrator Haddon Sundblom to develop advertising images using Santa Claus.

For inspiration, Sundblom turned to Clement Clarke Moore's 1822 poem "A Visit From St. Nicholas" (commonly called "'Twas the Night Before Christmas"). Moore's description of St. Nick led to an image of a warm, friendly, pleasantly plump and human Santa. (And even though it's often said that Santa wears a red coat because red is the color of Coca-Cola, Santa appeared in a red coat before Sundblom painted him.)

In the beginning, Sundblom painted the image of Santa using a live model — his friend Lou Prentiss, a retired salesman. When Prentiss passed away, Sundblom used himself as a model, painting while looking into a mirror.


> (Saint) Nicholas of Bari

Of Myra, actually — his relics were stolen from his tomb in Myra by sailors from Bari, where they remain today, but he probably never even visited there.


Long before computer science, even though the ideological attitudes might be similar ;-)


ty!


'Strongly typed language increases the productivity.' How was this tested? Was there a control group? I would argue that currently JavaScript and Python have been greatly improving productivity. The type system works well in maintenance, API definitions and 'no unit tests' scenarios, but when you have to do something you don't yet have any clue about, for example at the start of a years-long app development project, the type system is just in the way.


I think that JavaScript and Python, from my experience, certainly increase speed... but that definitely != productivity.

If you want to get something up and running fast, then JS/Python should absolutely be your go-to. We have an "innovation sprint" every quarter where everyone gets to try out changes and new features and anything else they wish to hack with our system, and I would say 99% of people choose to do this work in JS/Python.

However, my personal opinion is that productivity's first and most important pillar should be maintainability, followed closely by readability, with speed relatively far behind.

Again, this is just my two cents, but I equate Python or JavaScript to sending a message without things like capitalization and punctuation. Works perfectly fine for Slack, but not as well for writing a novel.


There were (arguably unsuccessful) attempts at testing such things in labs: https://danluu.com/empirical-pl/

My interpretation is that the conclusion, at the moment, is that we can't know for sure, scientifically, which one is "better".

That being said, I still have to find the randomized, double-blind test that proves a hammer is the right way to hit a nail.


I don't see how you could test a statement like that scientifically. Maybe you could argue for most people? For certain people? People conceive of problems and solutions differently, and I doubt it is particularly standardizable in terms of type systems.


> How was this tested? Was there a control group?

I agree this deserves some study. But...

> when you have to do something you don't yet have any clue about, the type system is just in the way

Even if you don't have any clue, you usually know the types of your functions and data structures.

I mostly write Python and OCaml code. Each language has its use cases, but I rarely feel the OCaml type system is in my way. When it's in my way, it's because my code is incorrect in an obvious way.


Yeah, this pretty much echoes my experience working with type-safe languages.

In the case of my anecdote here, I'm mostly talking about typescript, but I've worked with a bunch of other ones in a non-web context.

Every single time, without fail, when I get a "snag" from the type-system, it's complaining about something real. Sometimes it's a trifling bug, like a typo, but ... even there it's usually pretty nice to have the type system immediately jump on it and report it, without me having to deploy the thing and run it and only then find out that something is wrong.

But the other class of bugs - that's where it's solid gold. It'll often catch really sneaky bugs, bugs related to "nullability", where some object I'm blindly using isn't guaranteed to stay allocated during the use case I expect it to be useable in, and holy smokes are those a lifesaver. Having had to deal with those bugs from the opposite direction, they're an unleaded nightmare to try to fix without the type system pinning down exactly the culprit that would be causing it. Every time I see one, I immediately think "wow, this would have been a 5-10 hour nightmare if I had to fix this because of some production bug". I've been in the office till 10pm, and ... I never want to do that again if I can avoid it.


Yeah, the type system helps with that, but for a unit-testable product you need factory methods which produce some valid and invalid cases. If you write simple tests for those, it is a type checker on its own and the type checker is just overhead. If I could choose my next project from two similar implementations, one type-checked and the other unit-tested with basic datatype factories (at least), I would choose the unit-tested one.

I would use typed APIs though, because it saves documentation-reading time and makes the editor autocomplete work like magic.


> If you write simple tests for those, it is a type checker on its own and the type checker is just overhead.

This is not true. Unit tests cannot replace a type system, just like a type system cannot replace unit tests.

You need unit tests because a type system cannot check for all possible types of correctness.

However, unit tests can only check for the presence of bugs; they cannot prove their absence. On the other hand, a type system can prove that certain classes of errors cannot exist in the program.
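A tiny OCaml illustration of that asymmetry: a test can only sample particular inputs, while a type like option forces every consumer, everywhere in the program, to handle the missing case:

    (* A unit test witnesses one input... *)
    let () = assert (String.trim "  hi " = "hi")

    (* ...whereas the compiler rejects any match on an option value
       that omits the None branch, in every function that uses one. *)
    let greet (name : string option) =
      match name with
      | Some n -> "hello " ^ n
      | None -> "hello, stranger"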


> the type system is just in the way

That's a usual symptom of trying to write untyped code in a typed language.

There are few cases where types do get in the way, but in the huge majority of the cases the types are there for you to explore your ideas on them first, and only mess with the code once they make sense.


> for example at the start of a years long app development project, the type system is just in the way.

Subjectively, my experience is that writing out the type definitions and signatures of main functions is a great way to start exploring an unknown problem space.


I assume we agree that "productivity" doesn't just cover writing code, but also avoiding/finding bugs and maintaining the code over a long period by different people. TypeScript is a big progress over JavaScript in this respect.


I'm not convinced of TS as of yet for my needs. The vast majority of bugs and complexity in UI code seem to come from state management, asynchronicity, complex validation and the like. Static typing doesn't help with any of these or at least not to a degree that is sufficient. TS also doesn't seem to help at all with performance, which I find to be the most important trade-off for introducing types and the implied complexity.

I'm heavily biased towards small teams and small to medium programs though. I can at least imagine how TS improves ad-hoc documentation in some cases, which can definitely help in the "maintaining the code over a long period by different people"-scenarios.


That's not my experience, sadly. We switched from Javascript to Typescript over a year ago, on a fairly new project, but it mostly results in more errors that need to be fixed, that wouldn't have been errors in Javascript.

Being able to specify interfaces is absolutely nice, but overall I'm not convinced it's worth the trouble.


> sadly. We switched from Javascript to Typescript (...) I'm not convinced it's worth the trouble.

Judging by the seismic shift in the industry away from vanilla JS towards TS I'd say that qualifies as an extraordinary claim.

It would be interesting to hear some of the details behind your experience.


It's mostly a lot of extra boilerplate that's suddenly required. We're using Vue, and every time we write a method or computed property for a component that uses the this pointer, we need to pass it (this: any) as a parameter. Any, because every component is different, has different properties, and it's constantly changing, so writing interfaces for those isn't worth the effort, since they only call their own methods anyway. Forget it, and it might still work fine locally, but the build server complains, so we have to fix it.

Most of the functional errors will be caught either by unit tests or by functionality noticeably not working. These are not things that would be caught by Typescript anyway.

The irony is that we're using typescript in the front-end, where it mostly gets in the way. I think Typescript would have been more useful in the backend, but we're not using it there, because originally our backend was trivially simple. Now that the backend is becoming bigger, I can imagine Typescript would be more useful there.

It could be that Typescript doesn't work well with our version of Vue. (I think the latest version is designed around Typescript which will hopefully make the process a lot easier.)


> These are not things that would be caught by Typescript anyway.

In my experience working with React, which is pretty heavily invested in typescript these days, if you go reasonably deep on doing typescript interfaces, it's like a switch gets flipped.

A light dusting of typescript really does barely anything; it's just boilerplate. But once you get up to about 80-90% coverage, all of a sudden it's really, really good at detecting discrepancies.

I had a case today: a cute little svg icon component in the giant SPA we're writing. I was reusing it and attaching a click handler, and suddenly typescript starts griping about this component we hadn't touched in months. And I'm like "oh come on, this is so basic - what the hell could be wrong about passing in a simple onclick handler?" Well, it turns out nobody had ever needed to use the "event" param on that function, so it didn't even declare one internally - what I was passing in would, in plain JS, have just been thrown away, because the internal 'passthrough' version of the function had no parameter at all.

I hadn't noticed it in light testing (we have TS set to emit our program even when checks fail). I tested the component, and because the behavior is invisible/internal, it seemed like it was probably fine. Maybe I would have caught it with really earnest, aggressive testing later, but I didn't even need to - typescript just nailed it instantly.
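
A miniature reconstruction of that scenario (invented names; the real component was more involved):

    // The prop is (correctly) typed as taking no arguments, because the
    // internal passthrough never declared an event parameter.
    interface IconProps {
        onClick: () => void;
    }

    declare function Icon(props: IconProps): unknown;

    // Icon({ onClick: (event: MouseEvent) => console.log(event.target) });
    // ^ compile error: a handler that needs 'event' is not assignable to
    //   '() => void' - in plain JS the event would just silently never arrive.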

I've had the privilege of working on some game development outside of a web stack, and holy smokes does working in a language with a complete, algebraic type system change everything. When you go from 80-90% type coverage to "hard 100%", it's a complete 180°. It's just _freaky_ how good it is at catching errors. I'll change one little thing, and it can tell me "oh yeah - you know that cutscene an hour into the game? Yeah, you broke that." It's uncanny. It just absolutely changes how I work.


Yeah, I guess at least part of the problem is that we're using a version of Vue that's not natively designed around Typescript. There is a patch for it, but it's not that thorough. Half-hearted typescript doesn't work. A version of Vue that assumes you're using typescript and has interfaces defined for everything, would probably make a massive difference, and I think that's what Vue 3 does, but we're not in a position to migrate at this moment.

Basically, any use of `any` should be avoided. Once you tolerate one `any`, you're on the way down.

One thing that I really, really do like about typescript is that you need to be explicit about whether a value can be null. Java lacks that; the difference between `foo: string` and `foo: string|null` is stark.
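
A small sketch of what that buys you (assumes the standard "strictNullChecks" compiler option; getSubtitle is a made-up source of a maybe-null value):

    declare function getSubtitle(): string | null;

    let title: string = "A Plea for Lean Software";
    const subtitle = getSubtitle();

    // title = null;   // rejected: 'null' is not assignable to 'string'

    if (subtitle !== null) {
        console.log(subtitle.toUpperCase());   // narrowed to plain 'string' here
    }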


The Typescript compiler can be configured to silence many kinds of errors; these options are especially useful while migrating a large Javascript codebase. Also, look at Typescript not just from an error-catching perspective but also from a tooling and documentation point of view: with types at hand, modern IDEs work much better, and reading types often helps in understanding the code. All that said, if the project is small and more of a throwaway, with only a few members contributing code, Typescript may not add much benefit.


What sort of errors wouldn’t have been errors in JS?


The ones that are hellishly-difficult-to-diagnose bugs waiting to be discovered, I'll bet.


No, just the trivial boilerplate stuff. Forgetting to declare `this` as a parameter, for example.


There are some "cute" uses of truthiness checks that work in JS (and, in some cases, weren't flagged as errors by earlier versions of TS) that are probably a bad idea and are trivial to make more explicit, so better, but do technically run OK. Example: "if(obj.some_method) {obj.some_method();}". Not an uncommon form in JS in the wild, but TS (correctly) flags it as a problem.
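
Assuming the method is declared as an optional member, the more explicit spelling might be something like:

    interface Plugin {
        someMethod?: () => void;   // optional hook; may be absent
    }

    function run(p: Plugin) {
        p.someMethod?.();   // explicit optional call, checked by the compiler
    }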

Otherwise I dunno what this could be - in particular, what TS could be disallowing that'd be all of: valid in JS, a good idea in JS, and especially time-consuming to fix.


Honestly it sounds like it saved you some trouble.


It didn't. It caused it.

I honestly wanted to believe in the value of Typescript and was enthusiastic about the switch, but it really hasn't proven itself over the past year.


Implementing Oberon the language went hand in hand with implementing Oberon the operating system. So when we're talking about productivity, he's mostly talking about operating systems and interfacing with hardware, where you usually have clearer concepts. The direct comparison here would be assembly or C.

For web gadgetry or exploratory ad statistics, you might end up with different productivity-enhancing features.


Adding types at the start of a years-long project is going to help massively later on, at the cost of some slight friction early on. Static typing makes explicit the connections between pieces of code that are usually implicit in dynamic languages. If left untyped, these connections will break sooner or later, once more programmers start contributing code without the implicit knowledge of how the code is connected.
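
One hypothetical example of such a connection made explicit (invented names):

    // Every consumer of User is now tied to its declared shape.
    interface User {
        id: number;
        email: string;
    }

    function sendWelcome(user: User): string {
        // If 'email' is later renamed or removed, this line fails to
        // compile, instead of failing at runtime for the next programmer
        // who never knew the connection existed.
        return `Welcome, ${user.email}!`;
    }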


I think in software engineering, the answer to almost everything is "it depends".

Strongly typed languages increase productivity if the cost of type-mismatch bugs exceeds the cost of defining and maintaining the types. That's true for some programs, and it's not true for others.


Strong typing over the long term will always produce a more maintainable software product when used correctly. For many applications, you might not encounter enough complexity for it to matter strongly one way or another. Once you do finally encounter that application with 40 different entity types, 12 different business contexts that each of those can uniquely interact within, and hundreds of properties for each, you will be scrambling to find some way to bring order to your chaos.

Dynamic typing is useful if you are forced to develop software before you understand the business model, or if you need to expose some DSL to your users. I normally view it as a short-term option that is typically reached for when not enough actual engineering has occurred (in more complex systems). It's also mandatory for working within certain domains (i.e. the web). I think this last point is why so many seem to think it's a perfectly acceptable way to carry on in any sense.


Python is strongly typed. Maybe you are confusing strong vs static typing.


But I bet Python has loopholes in its type system as well. I mean, you can take the stack and modify the running code in every possible way at runtime, though it is a slightly mad thing to do.


Yeah, I did mix them up. Thanks.


> when you have to do something you don’t have yet any clue about

This is an experience issue, not a type-system issue. Rapid refactoring is also possible during initial passes, while the design is still settling.



Thank you very much for this official resource; I only knew djb's copy. There's also the German-language predecessor, "Die Software Explosion": https://darknet.mro.name/1994-wirth-explosion.txt

IMO a great read for everybody interested in long-term reliability or sustainability.


Crazy that back then people were already complaining about C and C++. C is an easy language to get started with, but a bazooka pointed at your foot and head.


I feel like C is like the old joke where a man goes to the doctor and says, "Doc, I got broken bones in a dozen places." And the doctor says, "Well, stay out of them places."


See Azure Sphere for such an example: while MSR advocates moving away from C, the Azure Sphere team has decided otherwise, advocating its use as the main SDK language due to market culture.

The expected end result, in spite of the mitigations put in place, is the Azure Sphere 20.07 Security Enhancements bug-fix release.

https://techcommunity.microsoft.com/t5/internet-of-things/az...


Joke cuts both ways.

I write firmware in C because flash memory costs us money. I wouldn't recommend C or C++ otherwise. Stay away.

On the other hand, C is kind of docile as long as you avoid doing 'insert long list of sketchy stuff'.


Didn't we just complain about C++ this weekend?



