Hacker News
[flagged] Obvious and possible software innovations (scottlocklin.wordpress.com)
144 points by haltingproblem on June 20, 2021 | 149 comments



I only had to read the first few sentences of the first point to see that the author thinks problems are easy because they don't understand the difficulties.

Parsing C function prototypes doesn't give you enough information to write safe language bindings. For example if you see "char* make_stuff(const char* param);", you don't know whether the memory pointed to by 'param' can be reused after the function call, or whether the function actually took ownership of it. You don't even know how many bytes of memory 'param' has to point to, because you don't know whether 'param' points to a null-terminated string or something else. Likewise you don't know how many bytes of memory the returned pointer points to, you don't know whether the caller is responsible for freeing it, and if they are you don't know how to free it. You don't know whether it's safe to call this function concurrently from multiple threads. You don't know whether 'param' or the return value are allowed to be NULL or not. And this is nearly the simplest possible example!


My thoughts exactly. For somebody complaining that these tools don't exist because people are too lazy, he himself was too lazy to do a quick Google search and see that they do exist. I was using SWIG (http://www.swig.org/) 15+20 years ago to bind C/C++ to Perl and Java and it still exists. However, your point above stands: you have to annotate the headers with enough information about memory ownership to make it work.

The tone of the article also turns bigoted when he starts whining about "AI" (Aliens and Immigrants) taking our jobs for less money.


As someone who found an old I-94 in his wallet last week and spent a while as an alien, I try to be sympathetic to those sorts of rants. It is genuinely frustrating when employers solve problems by finding a cheaper labour force instead of automating an unnecessary process or fixing the underlying technical issue.

A company I once worked for that will remain nameless paid an entire team of people in the Philippines to read global weather reports and respond to them instead of integrating with the relevant APIs, solely because it would have been a higher upfront cost to get expensive US-based engineers to do the work, and I found it infuriating: partly because the time of the two groups was valued so differently, but also because it was just so inelegant and pointless.


* I was using SWIG 15+20 years ago to bind C/C++ to Perl and Java and it still exists.*

(OK, who broke the * for italics markup?)

I've used it. It took the entire thing you want to call and generated one giant file of munged C. That's sort of where that idea takes you.

But the author is on to something. There's a C to Rust translator. It sucks, because it works by emulating C pointer arithmetic in unsafe Rust, using its own set of primitives along the lines of "offset this pointer by this much". Now you have ugly, unsafe Rust.

What's needed is something that infers the meaning of an ambiguous function call from the code. Something that reasons like this:

Function signature:

    int read(char* buf, size_t n)
Analysis:

Is "buf" an array, or a reference to one item?

Examine C code for "read". Is it ever used in an array context, like subscripting or pointer arithmetic? Possible results are "no, definitely not an array", "yes, definitely an array", and "can't decide, code is too confusing". One would like the third case to be rare.

If it's an array, how big is it? The size can be inferred from the highest subscript seen, as an expression. Is that expression in terms of some variable at the interface? If it can be determined that the length of buf is n, the parameter can be changed to a Rust slice, for which .len() will work.

Pointer arithmetic needs to be analyzed by symbolic execution, treating each pointer as an (array, pointer into array) pair. If the array associated with a pointer never changes during the lifetime of the pointer, tracked all the way down the call tree, then the pointer can be represented as a subscript. Subarrays passed by pointer become slices.

This won't work all the time. Sometimes you'll need hints from the user. It would be tempting to guess using something like GPT-3, doing the conversion to safe code, and seeing if it worked. Most of these things are idioms, which is how humans convert them. Translation can be wrong in two ways - subscripts going out of range and being caught, and things using too much memory because a worst case overestimated something.

All the heavy work is in figuring out the equivalent, but not identical, data representation. Once that's figured out, converting the executable code is reasonably straightforward. There's already a dumb C to Rust translator for most of that.


I agree that static or dynamic analysis of the implementation or users of the API would let you do some interesting things here. But that is not easy and not the OP's point. The OP's point was that all you need to do is parse the prototypes and why hasn't anyone implemented that already.


> (OK, who broke the * for italics markup?)

You did by including a space after the asterisk ;)

The trailing one can have a space but the * leading* can't have one


I didn’t take the AI bit to be about “tik er jurbs” but rather about employers’ attitude towards developers. That would be more in keeping with the theme of the overall rant.


All those difficulties are real, and yet, FFI exists and is useful. Of course it won't work in 100% of cases, but it would work in many, because all those things you describe aren't random - usually libraries have certain conventions about what the pointers mean, who frees them, etc. What the author is asking for is extending the basic service FFI provides to make it more usable and accessible. It still wouldn't solve 100% of cases, and questions like "is it thread safe" would never be part of it - so what? There are many use cases which can live without answering that question.


> I only had to read the first few sentences of the first point to see that the author thinks problems are easy because they don't understand the difficulties.

It's because you haven't read it till the end. He explains that these things haven't been solved exactly because they are difficult and (he believes) unmonetizable.


... and yet, Ada95 managed to include (as part of the standard for the language) the ability to bind to C, C++, Fortran and Cobol. All of which have absurdly different calling conventions and assumptions about memory.


I don't know anything about Ada95's FFI but it can't magically answer the questions I posed. Either it requires extra information not in the prototypes, or it makes assumptions about the C code that won't always be true (and will have catastrophic consequences when they're not true), or it gives up on safe binding and offloads responsibility for handling those issues to the programmer.


You're right.

Because C conflates pointers with arrays. It conflates giving an array to a function with giving a space for it to write because there's no difference (besides the 'in'/'out' keyword which I'm not even sure is official)

C is defective out of the box.


You can perfectly well make a tool that emits incomplete JS APIs with some human interaction to specify details of ownership and object representation.

And you could reuse much of it for multiple target languages.

In fact, you can also make FFI wrappers automatically, and handle the memory safety with wrappers on the JS side.


You raise a good point, but the OP's point still stands. All we'd need is comments in header files with the extra type information encoded, and automated FFI would be possible.

Not to say that would be easy... building a type system on top of a language that lacks it is anything but easy, but it's possible. TypeScript is an excellent example.
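As a sketch of what such comment-encoded type information might look like (this annotation syntax is entirely hypothetical, not an existing standard, and the function names are placeholders):

```c
/* @param buf    out-array, length n bytes, borrowed (caller retains ownership)
 * @returns      bytes written, or -1 on error
 * @threadsafe   yes */
int read_stuff(char *buf, size_t n);

/* @param param  in-string, null-terminated, borrowed
 * @returns      heap string; caller must free() it; may be NULL on error */
char *make_stuff(const char *param);
```

A generator could parse these alongside the prototypes and emit, say, a Rust binding taking `&mut [u8]` for buf and returning an owned string, with the free call hidden in a Drop impl.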


The OP's point is very clearly "we could just parse C header files and do it all automatically", not "we could parse C header files and a bunch of extra handwritten metadata and then do the rest automatically". The latter point is certainly more defensible but it's not the point they made, and it's much weaker ... I mean of course if you add extra metadata you can generate whatever FFI glue you need. The devil is then in the details.


I took his point in general to be a rant about the unwillingness to give priority and effort to things that are schleppy, but doable. An unwillingness to address the devil in the details.

The main idea being that an unwillingness to solve these problems once, generally and completely is dooming "us" to solve it partly, inefficiently and repeatedly.

I would take the detail he gives for the specific cases as incomplete. Specifically this one.

I would NOT say that the article as a whole is "clearly "we could just parse C header files and do it all automatically""

All of these should be doable. Each of these obviously has significant schleppy or technical difficulties/constraints.

For none of these does he address the actual reasons they haven't been done or suggest plans for overcoming them.

I kinda agree with him at a high-level in this case FFI _should_ be solvable.

I think for this case he's just saying "there should be _some_ way to automatically generate FFI glue code (and we should be reusing it)"


I respect you for generously steelmanning the OP's argument here but it is very clear the OP's intent is "just parse C header files and do it all automatically":

> not only could you technically parse .h files and turn them into JNI

Well, no, technically you can't do that at all.


OK. Fair interpretation.

The OP didn't have much leg to really stand on for this point and glosses over any of the actual difficult work that would need to be done to make this utopia from the 70/80/90s materialize.

It's not a great post, and its ideas are not that clearly worked out. He could seriously work on his tone.

It could be parsed minimally as just a grumpy rant with a bit of "get off the lawn" and a few wrong ideas.

Mostly a waste of time.

I'm steelmanning it because personally I'd prefer to get as much as possible from what I've read, even if it wasn't necessarily put there in the first place (maximizing my utility from it, for my purposes). I'm not debating the OP.

If you're acting from a position of keeping him honest. Kudos.

So I don't really need to engage you on the point then.

I'm agreeing that you'd probably be able to take down the OP in an argument. Or at least would force him to up his game considerably to be able to actually make his points clearly.

BTW, I really like your use of "steelmanning" here.

Actually looking at his other points after this exchange:

(2) I think modern VMs have amazing engineering - is he asking for hacks? Or what?

(3) Cloud offerings could be more coherent. But there are real market and organisational constraints.

(4) Drag and drop UI designers would be nice, and I miss them, but even the good ones used to produce terrible code when used innocently.

(5) I think modern compilers are pretty awesome. Not sure what he wants done.

(6) Yeah. We could build better more compact systems.

His unhappiness at the state of the world.

Hmm. Moaning about it like that doesn't really do much to solve it...


> All we'd need is comments in header files with the extra type information encoded, and automated FFI would be possible.

That’s SWIG. It’s something like 15 years old, possibly more.
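For the curious, a minimal SWIG interface file looks roughly like this (example.h and make_stuff are placeholder names); directives like %newobject carry exactly the ownership information the bare prototype lacks:

```swig
%module example
%{
#include "example.h"   /* pulled verbatim into the generated wrapper */
%}

/* Tell SWIG the returned pointer is newly allocated, so the target
 * language takes ownership and is responsible for freeing it. */
%newobject make_stuff;
char *make_stuff(const char *param);
```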

> building a type system on top of a language that lacks it is anything but easy, but it's possible. TypeScript is an excellent example.

That has nothing to do with the issue at hand. You can compile Haskell to C; that does not help with FFI-ing to unannotated C functions.


https://en.wikipedia.org/wiki/SWIG

>Initial release February 1996; 25 years ago

http://www.swig.org/history.html

>July, 1995. Dave develops SWIG while working in the Theoretical Physics Division at Los Alamos National Laboratory. Originally, it was conceived as an extension building tool for a customized scripting language that had been created for the Connection Machine 5.

David Beazley, the author of SWIG, is a brilliant programmer, mad scientist, and excellent presenter.

https://en.wikipedia.org/wiki/David_M._Beazley

https://www.dabeaz.com/

Check out his many talks about programming, especially his epic PyCon 2014 talk on his work as an expert on a patent infringement case.

https://www.youtube.com/user/dabeazllc/videos

https://www.youtube.com/watch?v=RZ4Sn-Y7AP8

>David Beazley: Discovering Python - PyCon 2014

>So, what happens when you lock a Python programmer in a secret vault containing 1.5 TBytes of C++ source code and no internet connection? Find out as I describe how I used Python as a secret weapon of "discovery" in an epic legal battle.

>Slides can be found at: https://speakerdeck.com/pycon2014 and https://github.com/PyCon/2014-slides


I think having to manually annotate C headers with extra type information starts to reduce the benefits of automatically generating FFI interfaces. Rust already has bindgen which goes pretty far.


Did you just imply that we put the comments of C headers into our types?! Comments are lower than whitespace as far as the compiler is concerned.

As I'm sure you are aware https://github.com/rust-lang/rust-bindgen does basically what you want.

I'm actually quite curious where exactly rust-bindgen falls short in the eyes of the author. I'm not as familiar with the other FFI libraries the OP linked.


When is SWIG finally going to support Generalized Whitespace Overloading in C++ 2000? It's been 23 years, 2 months, and 19 days since the technical details were published in AT&T Labs Technical Report no. 42!

https://www.stroustrup.com/whitespace98.pdf


I actually really dislike this whole post. There’s a really condescending tone lurking just beneath the surface for most of it. It’s also extremely hand-wavy about the problems, their causes, AND possible solutions. Also, so whiny. At this point I don’t care if he even has any good ideas because he’s made me so mad just by his writing style.


I started with some interest in this post but stopped reading when the author started saying nasty stuff about Bezos's wife. How is she remotely connected to this post? Why try to shame her for her looks or cosmetic choices?

The author is trying too hard to be edgy without being humourous or having the depth of ideas that could have saved this.


Author belongs to the special class of people who think sounding like a dick makes them more authoritative. All their posts are like that.


> class of people who think sounding like a dick makes them more authoritative

It's a writing style. Like it or hate it.

Nonetheless, criticisms of writing style are far too cheap and shallow for HN. Be better.


I disagree, I think it's perfectly reasonable to critique someone's writing, and by extension the author.

You can't be rude and justify it by saying "it's just my management style" or "it's just my personality".


You did not actually prove how exactly launching criticisms of the writing style over the content is not cheap. You can hate and critique the writing style all you want, but focusing on that in your HN comments is in fact the very notion of cheap commentary.

So, would you like to try again? Are you intellectually capable of providing a proper retort?


> and by extension the author.

I love how idiots on HN are justifying blatant ad hominem attacks in their commentary as being perfectly valid.

WTF is HN becoming? Are people really getting this stupid?


If the "writing style" consists of calling a random woman, one who has no connection to the topic at hand, cheap and nasty names - then it's better that such drivel is not even submitted on HN. We are better than this.


Contrarily I love this post. The first point I don't care about because I avoid C whenever possible but I agree with all the rest.

Electron is awful. So are phones. These things don't have to be so.


I don't find it whiny at all.

I do agree it's a little condescending, and that it's got a bit of a rambling flow; from semi-technical to straight bitching, all the way around to political.


And randomly insults Lauren Sánchez’s appearance; it seems like a weird thing to throw into an article about software.


Perhaps he's a generation or two older, and doesn't want to go into a lot of technical detail to back up his opinion.

I think his perspective is valuable. He's seen a lot, and he's right that we could do to learn a few things from the past.


I'm probably older than he is, and it seems whiny to me.


What value is an opinion if you can't back it up?


David Hilbert published a list of 23 open problems in mathematics near the turn of the 20th century, and they served to goad the mathematics community into action for the next 75 years at least (some are still unsolved, but most were solved).

Saying "this is a problem that should be solved" might betray a lack of knowledge of an existing solution, but hand-waving such an assertion away and saying "he can't back up his opinion" isn't helpful.

If he's wrong, anyone can show why by demonstrating a correct solution to the problem - and then we'll all learn something.


David Hilbert's list is like the opposite of this list. It was a list of pretty fundamental problems with huge implications.

The author of this list specifically states that these are problems that should be easy to work on. That's very different. Most of the ideas have been tried in the past. Some are just bad ideas; others just didn't catch on, perhaps due to chance or fashion. None of them really have compelling stories behind them.

There's millions of things in the world that could be solved. A list of some subset is boring. The "Why" is the interesting part.


That person is no David Hilbert.

As multiple comments say, many of the problems he talks about are already solved. And some of them were solved in the past and the solutions abandoned.

If you want to learn something, you will be much better off starting from a post which does not ignore existing state of the field.


I'm mostly interested in this post because the contrast he draws between AWS and z/OS deeply resonates with me.

What if someone built an OS to deal with cloud-scale problems at the ground level - in the kernel? This isn't to say the likes of S3 or DynamoDB should be implemented in kernel space (far from it). But the mishmash of services AWS does offer seems to be more about solving cloud-scale problems without implementing an OS, while creating a lot of DevOps jobs and a ton of vendor lock-in.

At this point, AWS reminds me a lot of Windows Server: lots of UI, object-heavy scripting, a huge suite of expensive vendor-specific products, and a sheer inelegance about how the whole thing is built.

I also see a lot of personality similarities between Windows admins in corporate IT of the 2000s and today's AWS DevOps professionals.

Linux finally took down Windows Server because it had superior qualities as a development OS and was cheaper to run on server hardware. It also had a long history of people just building cool stuff on it...

I have no idea what innovation will finally take down AWS, but perhaps mainframe OS's are a promising trail to go down (if infrastructure-as-code could be applied).


Well, that might be an interesting research project, but AWS (especially at the beginning) was not the right party to do it. AWS's success came because it was a drop-in replacement for existing Linux servers with easier management.

I do agree this is one of the more interesting ideas in the post. But it's also the least satisfying because it's so vague. "Imagine AWS but less shitty and more mainframe-like" is beyond vague.


AWS does not have to be "lots of UI" and "Windows admins" -- Terraform, CloudFormation, and other infra-as-code solutions are totally a thing, and bring modern, UI-less practices to infrastructure services. If you are in a company whose corporate culture is "let's use AWS today like we used Windows 20 years ago," it may seem that the whole world is like that... but it is not; there are plenty of other companies with different cultures!

Re "OS to deal with cloud-scale problems at the ground level - in the kernel" -- I cannot make sense of that. Why on earth would it matter which OS the services use? For all I care, Amazon could rewrite DynamoDB using AmigaOS and I would not even notice. It is all the same HTTP calls -- and this is one of the good things about the system. People can use AWS services from Windows, Linux, Mac, mobile, even things like FreeRTOS, using any programming language they like. I guess you could create one more OS designed with AWS use in mind, but I doubt it would be a big game changer.


I actually tried doing the native GUI thing, but I gave up when I found out I couldn’t do that without having to reimplement something as simple as an autocomplete dropdown. Yes, the native select has something like it, but it only reacts to typing the first characters, and you can’t style the results.

I guess some would call this a feature, but the customizability of HTML makes for a lot of good things as well.


> You can't style the results

Good.


Gerald Sussman of SICP fame was asked why they stopped teaching SICP, and he said that the way programming was done changed in the mid-90s. It moved from programming from first principles to programming against an API. This is still the reality for most.

You have to be big enough or brave enough to move back to reinventing the whole universe.

In theory, any large company could use projects like Oberon and "STEPS Toward The Reinvention of Programming" as an inspiration and create a full stack that runs GUIs on all platforms.

In practice this is a monumental undertaking that not even companies like Apple could pull off. They still reused BSD for macOS and KHTML for Safari.


Apple could absolutely do it (or Microsoft, or Google, or Amazon) but there’s no justification for it from a business perspective.


Google sort of did it with Android and Apple with iOS and it worked out really well for them.


Fuchsia seems like a solid attempt by goog to write a ground-up OS


What are the advantages/disadvantages of Zircon over seL4?

I've read up a bit on seL4, but can't seem to find the rationale or design decisions behind Zircon. Not sure why Google needs to roll their own microkernel when there is a fast, secure, formally verified one they could use.


A web search shows some speculation suggesting it could be Zircon's larger feature set and a desire for an in-house ground up solution.

Fuchsia leads are on twitter; they seem very nice and some have open DM's. They'd probably be happy to answer


> could use projects like Oberon and "STEPS Toward The Reinvention of Programming"

Like Dart, Flutter & Fuchsia?


I actually really like this whole post. Every time I think about quitting HN altogether a rare gem like this comes along. You should be able to have a little fun with common pain points we all have to deal with. The author has some pretty good suggestions as well. But I almost lost my drink with this one:

“ Imagine if the EC2 were as clean as, I dunno, z/OS, which has more or less been around since the 1960s. That would be pretty cool. I could read a single book instead of 100 books on all the myriad tools and services and frameworks offered by Oligarch Bezos. He would be hailed as a Jobs-like technical innovator if he had some of his slaves do this, and he would be remembered with gratitude, rather than as the sperdo who dumped his wife for sexorz with lip filler Cthulhu.”

Comedy gold.


Part of what makes it funny is that if Bezos somehow got IBM to license z/OS to AWS to offer z/OS as a service on-demand, that it would likely be incredibly lucrative.


Feels like a list of stuff where he massively underestimates the complexity of the fields he's talking about.

Drag and drop UIs have, as he identified, been tried. They were a leaky abstraction where incremental functionality was difficult to implement.


I think he means things like XCode's Interface Builder.

He's not massively underestimating the complexity as much as saying that huge swaths of that complexity are unnecessary, were it not for business requirements that often require us to do things the cheapest way possible (like hiring "fungible" web engineers to build an electron app instead of paying native platform engineers to build a first-class native app) -- and thus cause long-term harm.


I have lightly used both web UI stuff and native UI stuff, and on the desktop at least, web UI concepts are lightyears ahead. I think native UI development for PCs will return when and if the concepts there catch up with the present state of the abstractions and patterns that work for web dev. GTK, the one I know the best, is still in an era where object orientation is a new and technically difficult thing to achieve. Not bad at all, but we can do things so much more advanced than that if we port over the lessons learned when inventing ways to manipulate and efficiently push updates to the document object model.

There are some projects working on this but they are not yet at the level of "copy paste works," or "you can use either a retina or non-retina display and it won't look badly scaled or blurry on either." But I hope this will be sorted out soon because there is a lot of software I want to write with it.


No, Visual Basic worked fine. People built an entire generation of native business apps in it. Drop a bunch of labels, buttons and textboxes, wire the list view up to the database, and away you go.

The hard problem is native vs. cross platform.


So the answer is _Electron?_

I think way too many people massively underestimate the complexity they add to a problem when their solution involves Electron, React, and npm…

I’m less convinced than most commenters here that he’s underestimating the difficulty of some of what he proposes; he ends with:

“ The reality is they’re all quite possible, but nobody makes money doing them. Engineers are a defeated tribe; it’s cheaper to hire an “AI” (Alien or Immigrant) slave to write the terraform or electron front end rather than paying clever engineers well enough to build themselves useful tooling to make them more productive and the world a better place. ”

Amidst his snark (which I’ll acknowledge will put some readers off) and some borderline, probably racist dog whistling (which certainly put me off a bit), he is totally acknowledging that these ideas require smart engineers with budgets and mandates to spend the time making them come true. He’s not suggesting that 12-week boot camp front-end engineers should be parsing C header files and inspecting deeper into the C code to discover the stuff you need to auto-generate interfaces for other languages. But surely some of the senior AWS engineers should be spending their time investigating his ideas about EC2 and the cloud and OSes?


I've never seen a web UI that couldn't be built, and built better, in an hour with an old school drag and drop gui builder. Unfortunately the web was built on markup, and markup is a terrible way to define a UI. There is really no way to make a good gui builder for html/css, so we are stuck with literally the worst possible way to build UIs today. CSS is 24 years old, and nobody has figured out a sane way to use it yet. We've gone from inline styles to "semantic" to BEM, and we've come full circle to what are basically inline styles (tailwind) and they all suck. Even html frames and tables were better than what we are using today. In many ways the history of building UIs in HTML is just people trying to find ways to mimic a frames/tables layout without using frames and tables.


There are a ton of effective drag and drop GUI builders for HTML/CSS: Webflow, Squarespace, Wix, and Plasmic to name a few, and that's just the current generation (see also: Dreamweaver et al.).

> Even html frames and tables were better than what we are using today

How? CSS and HTML can do practically everything frames could do (the only thing I can think of that doesn't apply is independent histories). There are a lot of different technologies at your disposal -- grid, which is probably the most intuitive; flex box; or just `position: fixed` divs.


Are those building a GUI? Or just a website? As far as I know they’re mostly about markup.


That's a strange distinction to make. Websites are a type of GUI. You can build some pretty complicated things without JavaScript, but if you need JS, then yes, most of them support it in some form or another, Plasmic probably being the most sophisticated.


Calling websites GUIs is a bit stretched. A phone can be called a computer, but the computations it does are limited to data collecting. Websites also could have good UIs (even GUIs), but most are limited to (again) data collection, and the UI is about the same as the one between Ice Age humans and the first domesticated animals.


I think you're overcomplicating this; the meaning of these words is in the acronyms themselves. What's a UI? A user interface -- any layer that goes between the user and some machine. What's a GUI? A user interface that is graphical, or: a layer that uses graphics to go between the user and a machine. Since websites use graphics (pixels, on a screen), and they sit between the user and a computer, they are GUIs.


The whole HTML-ization of GUIs is crazy... It's like "ok, let's make a language that represents the semantics of documents and leaves most of the presentation to the tools, so we can concentrate on content and not focus too much on irrelevant details of presentation" - "ok, cool idea, now let's take this language and use it to build pixel-precision GUIs that have no semantic content at all". Ugh, talk about the right tool for the job!


Is a GUI tool really faster or better than directly writing the code? It is like using a CLI vs using a GUI... probably a hybrid approach would be best...


"The reality is they’re all quite possible, but nobody makes money doing them."

The fact that he thinks they would take money implies that he believes they would be complex. Simple problems shouldn't cost much to fix.



Yeah, like could start list with just solve PvPN problem :)


> Engineers are a defeated tribe; it’s cheaper to hire an “AI” (Alien or Immigrant) slave to write the terraform or electron front end rather than paying clever engineers well enough to build themselves useful tooling to make them more productive and the world a better place.

Offensive passage that subtly implies immigrants from the developing world are slavish (and dehumanising them as well, using the word alien), incapable of intelligence, and who're undercutting honest-to-god 'engineers', and preventing the world from becoming a better place.

I'm astonished that hordes of people are still ready to sell their first born if they can get a citizenship amid such a culture of passive racism and superiority complex.


There is no racism here, subtle or otherwise, and I say this as a black engineer watching this happen every single day. Engineers are not allowed to talk about how their jobs are being replaced by web development in every possible corner even where it doesn’t fit.

It is much cheaper to hire 5 offshore web devs for 5 weeks to force a solution into a web interface and then have an onshore dev spend 2 weeks turning web UI into an Electron app than it is to build any other type of solution.

He is simply stating the issue rather than not stating it.


> I'm astonished that hordes of people are still ready to sell their first born if they can get a citizenship amid such a culture of passive racism and superiority complex.

Can you imagine how bad it is where they come from?


No. He is just astonished. I am also astonished that a so-called good manager will outsource a project to a "third world" country because he thinks it is cheaper, and the poor guy in the third-world country must reinvent the wheel and be "creative" because he is denied money (for equipment) and knowledge ("just take this code and continue").


> have all the functions described in it turned into reasonably safe FFIed function calls

Many langs get close with the ease of importing C headers (Go, Rust, etc), but once you ask for reasonably safe FFI'd calls, you're asking a bit too much since ownership goes out the window. Otherwise, if you mean turned into acceptably-unsafe FFI'd function calls, agree just about every lang with C interfacing built in should have that.

> It’s fascinating to me that people find it easier to write a pile of React and HTML on top of electron rather than dragging and dropping native widgets for a framework like we did in the old days.

Ok, on my mark, you write a reasonably complex native GUI for the 5 common platforms (and including the wiring, not just drag-drop) and I'll write a reasonably complex GUI w/ web tech. Once you see the difference in time to market, not to mention reduced maintenance cost, it'll be less fascinating.

Other than those points, I somewhat agree w/ the others, or rather don't strongly disagree (the condescending tone notwithstanding).


> Ok, on my mark, you write a reasonably complex native GUI for the 5 common platforms (and including the wiring, not just drag-drop) and I'll write a reasonably complex GUI w/ web tech.

Two things on this: this was Java's promise (and Swing delivered on it for the desktop). And the expectations of a web app/electron app are still lower than those of a native app.


>Ok, on my mark, you write a reasonably complex native GUI for the 5 common platforms

oh, it has to run in a browser too so that users don't need to "install" it.


Can Qt help with this? I'm not sure about its WASM support, but it seems like just a matter of time before they have all the GUI elements.


Many Electron apps are native on mobile. So we really mean on all 3 desktop platforms.

The answer is obviously Qt or Java. It works.


And many Electron apps are not native on mobile, just use a web view, and share many resources with their Electron cousins. Same with the web apps.

As someone who has written multiple Qt and Java GUI apps, it always seems fine at first until your UI needs become more unique (and you're subclassing QWidget/JComponent) and you have to jump through hoops to get something a web view has (e.g. streaming video).

I don't like the lack of native any more than the next guy, but I can understand sacrificing native benefits for cohesion/features. If you can tolerate a middleground of non-native but no-browser-bloat, Flutter or .NET MAUI may mature enough to help for the complex uses.


Let's be honest, if you're making a webview for the mobile app it's going to be pretty different from the desktop Electron version. You're going to share many resources, yes, but there's going to be a lot of divergence.

Sure, writing your own UI components sucks, but it's not that much worse than writing a UI component for Electron. If you need to stream something from the web, QMediaPlayer works fairly well from my experience.

Also personally I consider Flutter to be native.


> Ok, on my mark, you write a reasonably complex native GUI for the 5 common platforms (and including the wiring, not just drag-drop) and I'll write a reasonably complex GUI w/ web tech.

I think you've missed the point of the title.


I’ve never heard of C function “ownership”. What does it mean?


Pointers passed in and out have a very important property that's critical for writing secure, reliable programs but is not encoded anywhere in the language: lifetime and ownership.

If you pass a pointer to a function, will it free() it? Or store it somewhere where it will later free it? Can you now free it yourself safely or not?

Same applies to returned pointer values: don't free or must free?


I really dislike the article, here are some reasons (referencing the numbered arguments in the article).

1. There are many such parsers. Rust's bindgen [1] is one of them, and I wrote a proprietary one last year. This is pretty common to do for narrow use cases; there just isn't one for "convert a C API to Ruby".

2. "Most VM designs I’ve seen are basically just student exercises". Seriously? Create a better one and get rich then! I'm pretty sure Google would pay good money for something better than v8.

3. You can run z/OS on EC2. They do very different things. It's like saying I wish that cars were as simple as a strawberry.

4. "People used to make GUI frameworks which did more than electron apps, looked better and fit in the tens of kilobytes range." That's correct and that's why lots of apps are based on native UI frameworks. For some use cases, electron seems to hit a sweet spot (mostly not having to write UI for each platform and the web too). If you don't like electron apps, don't use them, most run in the browser too.

5. "Compilers and interpreters should learn how modern computers work." I don't know where to begin here. Modern compilers optimize for the latest hardware all the time; one recent example out of a thousand others is this [2], where V8 redundantly inserts short functions into memory regions close to the code they are called from in order to get more instruction cache hits.

[1] https://github.com/rust-lang/rust-bindgen

[2] https://www.techradar.com/news/google-chrome-is-now-dramatic...


Look, I agree that the article could've used a more productive tone but saying "just don't use Electron apps" is misguided.

We don't have a choice. IMO it's perfectly fine to be consistently unhappy on that point and to rant about it.


You can be perfectly unhappy, sure. My point was just to go a little deeper than just rant and talk about the economics of why companies choose electron in spite of all the disadvantages it comes with.


I agree on that but from my side the answer of "why use Electron?" is fairly obvious at this point. I could be wrong.

For a guy like myself it'd be more interesting to have the much more tricky discussion of "how can we not use Electron without spending millions on inventing a cross-platform native GUI toolkit?".

I recognize that's a personal preference, yep.


> Automated FFI parsers. In 2021 I should be able to point any interpreted language at a C include file and have all the functions described in it turned into reasonably safe FFIed function calls,

The big issue with doing this is that C does not have enough of a type system to exactly specify the interface. Is that char pointer a null-terminated string, or is it a pointer to an untyped buffer? What is the ownership of the pointer you are passing in or that is being returned from the function? Are you allowed to pass in null for the pointer parameter?

All of these are questions that the C type system does not answer. You have to rely on some kind of documentation to figure it out. And if you get it wrong, you have a memory leak at best or a security hole at worst.


There's definitely still a lot of scut work the compiler could (and therefore should, in my view anyway) be doing for you. Even if it just got you to the point where you can set up those safety invariants in your interpreted (or whatever) language, that would be nice.


I think this points to the real answer: libraries should be defined by something other than the C headers we are using now. All automated solutions for parsing C headers require a C compiler present, and as you write, there is a lot left unspecified.


Windows has partly moved and is moving in this direction - WinRT (higher-level, C#-like but without requiring the .NET runtime and GC) APIs are already defined in metadata, and now Win32 (lower-level, C-style) APIs are being defined that way as well: https://github.com/microsoft/win32metadata/blob/master/docs/...


Automated ffi parser soon (TM) to be in standard Java: https://github.com/openjdk/panama-foreign/blob/foreign-jextr...


1. Some modern PLs and frameworks are making FFI easier, without requiring manual wrappers or code generators. I can't recall project names now, but they do exist. But yes, they're not widely used or supported.

2. BEAM (Erlang VM) has several native mechanisms to communicate with the outside world (both in-process and out-of-process). In-process ones: linked-in drivers (.so/DLL), NIFs (.so/DLL, like JNI), dirty schedulers. Out-of-process: ports (external executables), C-Nodes/JInterface (an Erlang cluster node interface can be written in any programming language, not just C or Java). See [1].

3. Morphing into a Single Image System OS would be ideal evolution for the cloud vendors. I don't think Mainframe or Heroku-style PaaS are good directions, though. I have many ideas in this field, but they need to be fleshed out first.

--

[1] https://www.slideshare.net/nivertech/erlang-on-osv-49278675#...


I’ve written software for a few different Single System Image systems. They are uniformly terrible in my experience because they actively hide material information about the system architecture that is critical for software performance and robustness. I wouldn’t recommend them at all — the “benefits” are greatly outweighed by the downsides in real operational environments.

There is a reason they disappeared even though they were popular at one time. I thought they were a great idea until I actually used them. It is very difficult to write scalable high-performance software on those types of systems, so many edge cases.


There are two different things called Single System Image systems:

1. LISP machines, Smalltalk dev environments, etc. A modern example would be the DarkLang.

2. Distributed OS with process migration like MOSIX - https://en.wikipedia.org/wiki/Single_system_image

(1) is a good direction, but such systems are usually not distributed. I guess you're talking about (2), which was never properly implemented, or maybe they were problematic by design.

BTW, what do you think about Inferno and Plan 9? They still have hobbyist communities around them, and some of their ideas influenced current mainstream software, like the 9P protocol and Go channels.


JNA looks reasonable for accessing native code from Java: https://github.com/java-native-access/jna/blob/master/www/Ge...


Do elaborate on (3)?


1. The first step would be to simplify and unify services. It's especially relevant to AWS, which has lots of overlapping services, built by independent teams.

Just look at this flowchart ("Which AWS container service should I use?"):

https://twitter.com/forrestbrazeal/status/140063975921564057...

2. The longer-term solution would be something like DarkLang, a "deployless" single-image-system development environment. Of course it will be opinionated. It should include all best practices by default, i.e. think 128-factor instead of Heroku's 12-factor.

3. Another direction is cloud-native or web-services oriented programming languages.


I agree with the general sentiment of this post, but like others, I think some explanations lack depth and possibly miss important active developments.

This passage made me laugh though:

> The funny thing is, the same people who absolutely insist that the Church Turing thesis means muh computer is all-powerful simulator of everything, or repeat the fantasy that AI will replace everyone’s jobs will come up with elaborate reasons why these things listed above are too hard to achieve in the corporeal world, despite most of them being solved problems from the VLSI era of computer engineering.


> The reality is they’re all quite possible, but nobody makes money doing them.

You don't get nice things for free.

Seriously, we could build a space station around Jupiter if we reallllly wanted to. It would be hard, and people would probably die, but if you really wanted to, you could do it.

Similar story for these problems: Solvable? yes.

Valuable?

Well... maybe; but probably hard enough that you can't do them for free, and not valuable enough to justify the cost and effort of doing them.

> Engineers are a defeated tribe...

Is that the take-away here? Be sad, give up? Go and build some electron apps?

There are two problems here: doing (hard task), and paying for it; you can solve either of them by either a) volunteering your time to work on (hard task) or, b) helping fund people who are.

Not to say that the points raised are all invalid, and enumerating things which are worth working on is also helpful, but yeah, well, when people are trying to address them, and all you've got to say is:

> You see pieces of this idea here and there, but like everything else about modernity, they suck.

...complaining that no one else is doing either of these two things, or the ones that are trying suck, is... well, I'm going to be generous and say, entitled.


At rev.ng we're developing ludwig, a clang-based automatic generator of wrappers for C++ APIs. First we generate a C API, and then we generate wrappers for dynamic languages such as Python/Ruby/JS.

It's basically SWIG done right. Trying to write your own C++ parser is a design doomed to fail.

In C++, ownership is much more explicit than in C on average. The idea is to have a Python wrapping object for each C++ pointer. This wrapper can be owning or non-owning: if it's owning, it calls `delete` once the wrapper is destroyed. If a C++ function returns a std::unique_ptr, we map it to an owning wrapper. If a C++ function returns an object by value, we `std::move` it onto a new object on the heap and map it to an owning wrapper. If a C++ function returns naked pointers, we map it to a non-owning wrapper. The system is extensible; for instance, you can say that an owner<int *> (see C++ Core Guidelines) is actually owning.

The wrapper can also be const or non-const, exposing the appropriate methods accordingly.

Also you have a lot of patterns you can exploit to provide high level constructs in scripting languages (e.g., `.begin` + `.end` ranges can become Python generators rather easily).

Here you can find the design document:

https://pad.rev.ng/s/__WrFSmm_#

Right now, we're struggling with default template arguments. Many STL classes have default template arguments which make the names of types look ugly.

Also, we currently instantiate all the methods of template classes. But not all methods are supposed to be instantiated with all the possible template values; if you do instantiate those, you can run into compile errors. This means we have to resort to a sort of trial-and-error approach to see what actually makes sense to instantiate.

ludwig will be open source, but still needs some love before going public.


Rust is not an interpreted language, but it does attempt to satisfy the first point. The bindgen and cxx crates will attempt to generate the API from a C/C++ header file.


"Pretty much all compilers and interpreters think computers are a PDP-11 stack machine" shows that the author has no idea how modern compilers work. A ton of low-level optimization and code generation relies on intimate knowledge of the underlying computer architecture.


There are some conflicting things: why would you do a clean OS design (this would be great, and I would very much like to see an experiment with serious effort) and write it in C? You would already be throwing away most of the useful programs that exist today. And if you no longer write it in C, why would you need to parse C headers for calling "foreign" code?

The point about software not taking enough advantage of hardware is on point. I would also like the hardware (e.g. SSDs) to expose a bit more of how it works rather than have another CPU emulating spinning disks.


> Cloud providers should admit they’re basically mainframes and write an operating system instead of the ad-hoc collection of horse shit they foist on developers.

Oh I like this one so much. I love the era and idea of the mainframe. I caught the tail end of it in college. Our research professors would send their statistics jobs over to the mainframe and get the dot matrix printouts back showing what treatments showed significance. Maybe I watched Tron too much too. I just love the idea of one big central computer doing all the processing.


> I just love the idea of one big central computer doing all the processing.

And yet at the same time, having a super computer in my pocket is spectacularly cool!


Which can only siphon your data and play games.


You can run your own code on your phone. Even iOS lets you do that (I don’t even think you need to pay the Apple developer fees to do that?). You just can’t widely distribute it in a form that's easy for other people to run. You _can_ give your friends .apk files. You _can_ write iOS apps that violate App Store rules. It’s the “publishing” bit that is locked down.


There are many reasons why developers use Electron for GUI development. The size of the final package is not very important, and creating many times the same native application is not that appealing. Having a cross platform, responsive, and a modern high quality framework is much more appealing.


> The size of the final package is not very important

You can only install so many packages with this philosophy before size starts to be important after. More to the point, you can only run a few of them at a time, on machines with gigabytes of RAM. This is a profound embarrassment to our industry. If by "responsive" you're referring to input latency, Electron apps are at best on par with native, usually worse IME. If by "responsive" you mean accommodating different displays... it's a desktop app. Your statement about a "high quality framework" is basically orthogonal to reality, but I'd like to see you justify rating React "higher quality" than Qt5.


I understand the performance criticism. But people don't have that many apps, storage is cheap and we have plenty of ram. Today a high end smartphone has 128gb of storage and 8gb of ram. Some have 512gb of storage and 12gb of ram. An actual high end desktop computer has at least 64gb of ram and enough storage to have quite a lot of Electron runtimes installed side by side.

My M1 MacBook Pro with only 8GB of RAM (I asked for 16GB but someone made an ordering mistake) has no issues running multiple Electron apps at the same time, and storage is not something I think about.

By responsive I mean accommodating different display sizes: having a layout that makes sense on a small laptop with a touchscreen or on a 4K 32" external display.

For most usages, I will rate React much higher than Qt5, like almost everyone in the industry. The only Qt interface I think is nice and up to modern standards, and that I remember, is the Tesla user interface. There may be a few others, because sometimes Qt makes sense, but not for desktop in my humble opinion.


> "But people don't have that many apps, storage is cheap and we have plenty of ram."

Appliance maker: "Electricity is cheap and there's plenty of it, who cares if all the devices we sell costs the consumer on their electricity bill and causes power plants to pollute the atmosphere more."

Car manufacturer: "Fossil fuel is cheap and there's plenty of it, who cares if all the cars we sell cost the customer extra at the pump and adds to air pollution."

There was a reckoning for both of those fields and there will eventually be a reckoning for software development as well.


I am a bit too lazy to do the maths, but I would guess that a single Electron development team consumes a lot less energy than many native development teams.


I can start building in React today, and style literally everything without really thinking about it.

I can start with Qt5 today, and give up screaming in rage tomorrow because I can’t get my select dropdown to display what I want.


> I can start building in React today, and style literally everything without really thinking about it.

And be cursed into oblivion by the first user.

> I can start with Qt5 today, and give up screaming in rage tomorrow because I can’t get my select dropdown to display what I want.

File a bug report. Oh, I forgot, developers want to develop, not fix bugs.


> Really, they should all run like Heroku and you’d never notice they were there.

right that sounds like a great business idea because then your cloud service company you're building will be easier to commoditize! Can't believe that Bezos guy didn't think of it.


Some responses:

1. Java is adding this with Panama. Definitely a sore point and should be standard when folks are making their languages. https://openjdk.java.net/projects/panama/

2. Meh.

3. This is exactly how to get zero people to use your cloud. Everyone would love to do this but adoption risk is too high.

4. This will never pass muster when it encounters design. There are plenty of tools that do this but generate half-hearted interfaces. Xcode (interface builder) does a pretty good job of it anyway but folks will still want to customize their UIs.

5. Fair criticism.

6. I think that CoreOS was a step in the right direction. Not sure if they have seen it.


Every time I read something like this, I think how much I want a true cross platform GUI library with the customizability of HTML/CSS, determine to build it myself, then give up before I even start when I realize how much work it’s going to be.


Same here. A native cross-platform GUI library with HTML/CSS-style markup would set the world aflame.

It would take many man-years to build, and the dedication of Torvalds and the core Linux team to bring into the world.


Front ends could be drag and drop native GUIs instead of electron apps

We had Visual Basic. We had Dreamweaver. We had Microsoft FrontPage. What went wrong?

Most web pages really aren't doing anything that exciting.


> Front ends could be drag and drop native GUIs instead of electron apps

> We had Visual Basic. We had Dreamweaver. We had Microsoft FrontPage. What went wrong?

Visual Basic was only used to run macros in Office documents. And then viruses. And then MS killed it. Dreamweaver? Hello, Adobe. FrontPage? Broken by design.

Software is only about innovation. It does not need to last. It must be new.

> Most web pages really aren't doing anything that exciting.

Data collection is a very exciting thing, so I heard.


>Visual basic was only used to run macros in Office documents. And then viruses. And then MS killed it.

No, it got used to do a lot of things, so did Delphi.

VB6 was awesome, then they decided to abandon a good thing and went off on the stupid .NET folly.


The world is the way it is because of a series of compromises and negotiations by competing interests. It's easy to look from one perspective and only see the hilltop, and think you can get there by taking the straight road.


Operating systems don’t have to look like your crazy hoarder aunt’s house.

If all the bare metal OS is doing is running containers in a data center, it should look more like Xen and less like Linux.


7. software that allows you to add an empty line in between numbered lists


Operating systems don’t have to look like your crazy hoarder aunt’s house

This.


I haven't read this and I am very puzzled that it's flagged. I'm lucky to have faved it before it got flagged. At a glance, it doesn't seem to be insulting or anything.


Re: Point 4:

I think C# Windows Forms are a great example of that (drag and drop GUI for creating GUI forms), and I wonder why no one is making things that easy in a cross-platform manner anymore!


It would take the resources of a Microsoft, with their decades of language and OS experience to create such a thing. And, Microsoft has no interest whatsoever in encouraging a cross platform desktop ecosystem.


Microsoft is working on it, it’s called MAUI: https://github.com/dotnet/maui


Looks like nobody noticed the date at the top of the page:

"Posted in tools by Scott Locklin on April 1, 2021"


Why was this article censored? Was it done without an explanation? I do not find one.


I got downvoted at least twice, but no one bothered to link to the explanation of why this article is no longer visible.


So obvious and possible the author is ranting about them instead of writing them...


who's this guy again?


“He would be hailed as a Jobs-like technical innovator if he had some of his slaves do this, and he would be remembered with gratitude, rather than as the sperdo who dumped his wife for sexorz with lip filler Cthulhu.”

Misogyny is really cool! Thanks Scott Locklin!


There is nothing in this quote which implies a hatred of women, and perhaps you inferring it says more about you than about the quote.


Did you miss the "lip filler Cthulhu"?


Well, yes, but it’s preceded by an equally flattering statement about Bezos himself.


Misogyny isn't just about hate; sexualization and denying agency count too:

https://www.nytimes.com/2019/03/08/style/misogyny-women-hist...


It was a joke about Bezos's bad judgment re: life choices, not hatred of women. Let a guy make a joke about someone doing an ugly thing to his wife. Bezos became King of the World and then left his wife for a TV personality.

I give the guy credit for humor.


I hate this post.


An incredible optimization of Dunning–Kruger × Bravado.


Somebody call this clown a whaaaaambulance.


If they're so easy and obvious, why hasn't the author done them, at least as a demo? The article also reeks of "this doesn't work for me, therefore it won't work for anyone." I doubt I'm the only one who absolutely does not concede that everything should have a GUI interface.

"Why does shit like DPDK exist?" I don't know, but I bet you could find out with a little investigation, which might make this sound more like a well-researched position and less like a tantrum.

"people who absolutely insist that the Church Turing thesis means muh computer is all-powerful simulator of everything". Yeaaah...we're done here.


The author's about page reports that he is a former physicist with experience in automotives and law enforcement. It also contains the statement "I have a particular dislike of self-anointed 'experts';" with no apparent irony.

And my personal favourite: "People may think I’m fighting above my weight class, because many of the people I label as clowns are on television and in important newspapers, much like the stars of 'The Bachelor.' "

Uh huh.


The guy has a sense of humor that is both effective and rubs The Offended the wrong way.


> Cloud providers should admit they’re basically mainframes and write an operating system instead of the ad-hoc collection of horse shit they foist on developers. Imagine if the EC2 were as clean as, I dunno, z/OS, which has more or less been around since the 1960s. That would be pretty cool. I could read a single book instead of 100 books on all the myriad tools and services and frameworks offered by Oligarch Bezos. He would be hailed as a Jobs-like technical innovator if he had some of his slaves do this, and he would be remembered with gratitude, rather than as the sperdo who dumped his wife for sexorz with lip filler Cthulhu. There’s no excuse for this from an engineering perspective; Bezos was smart enough to know he was going to do timesharing, he was also smart enough to constrain the spaghetti into something resembling an OS. Same story with all the other cloud services. Really, they should all run like Heroku and you’d never notice they were there. You could also draw flowcharts for most of this shit and replace devops with something that looks like labview. Nobody will do that either, as innovation in core software engineering, or even learning from the past in core software engineering is basically dead.

Heh, no, Bezos was not smart enough to realize he was building a horizontally scalable mainframe out of commodity parts. All he knew was that they'd driven IT costs down below what anyone in the business had seen at other companies, to the point where they could drop some APIs on it and sell it. Plus Google was publishing papers like GoogleFS and coming up with Gmail, and everyone wanted to be seen to be as smart as them. This bit in particular, "he was also smart enough to constrain the spaghetti into something resembling an OS", is "lol, no". The big pile of web APIs was his vision. Literally, it's called Amazon Web Services.



