I think the Raspberry Pi has made itself a kind of de facto desktop standard, and so has Ubuntu. It's no good having two standards, however, and that is one problem: to make one standard you'd probably have to kill off all the distros somehow.
The other route is where so much software ends up running in the browser that "desktop" Linux becomes an irrelevant concept.
Cross-compiling Go is easy, and static binaries work everywhere. The cryptographic library is the foundation of CAs such as Let's Encrypt and is excellent.
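To illustrate (my addition, not the original comment's): the whole cross-compilation story amounts to a couple of environment variables. This assumes the Go toolchain is installed, and the package path is hypothetical:

```shell
# Build a fully static Linux/ARM64 binary from any host OS.
# GOOS/GOARCH select the target platform; CGO_ENABLED=0 avoids linking libc.
GOOS=linux GOARCH=arm64 CGO_ENABLED=0 go build -o myapp ./cmd/myapp
```

The resulting binary can be copied to any Linux/ARM64 machine and run with no runtime dependencies.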
The green threads (goroutines) are very interesting, since you can create thousands of them at low cost, and that makes different designs possible.
I think this complaining about defer is a bit trivial. The actual major problem for me is the way imports work: the fact that the toolchain knows about GitHub, the difficulty of replacing a dependency there with some other one (including a local one), the forced layout of files, cmd directories, etc.
I can live with it all, but modules are the thing on which I have wasted the most time and struggled the most.
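For what it's worth, the escape hatch for the local-dependency problem is the replace directive in go.mod; a minimal sketch, with hypothetical module paths:

```
module example.com/myapp

go 1.22

require github.com/upstream/somedep v1.2.3

// Point the dependency at a local checkout (or any fork) instead of
// the published GitHub module.
replace github.com/upstream/somedep => ../somedep-local
```

It works, but you have to know it exists, which rather proves the point about the learning curve.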
I think this highlights some problems with software development in general, i.e. the code isn't enough - you need to have domain knowledge too, and a lot of knowledge about how and why the company needs things done in some way or another. You might imagine that dumping the contents of your wiki and all your chat channels into some sort of context might do it, but that would miss the hundreds of verbal conversations between people in the company. It would also fall foul of the way everything tends to work in every way you can imagine except the way the wiki says.
Even if you transcribed all the voice chats and meetings and added them in, it would still challenge a human to work out what is going on. No-context human developers are pretty useless too.
One cannot learn everything from books, and in any case many books contradict each other, so every developer is a variation based on what they have read, experienced, and thought along the way. How can that be summed up into one thing? It might not even be useful to do so.
I suspect that some researchers with a very different approach will eventually come up with a neural network that learns and works more like a human, though. Not the current LLMs but something with a much more efficient learning mechanism, one that doesn't require a nuclear power station to train.
What is baffling to me is how otherwise intelligent people don't really understand what human intelligence and learning are about. They are about a biological organism following its replication algorithm. Why should a computer program learn and work like a biological organism if it is in an entirely different environment with entirely different drives?
Intelligence is not some universal abstract thing achievable once a certain computational threshold is reached. Rather, it's a quality of the behavior patterns of specific biological organisms following their drives.
...because so far only our attempts to copy nature have proven successful...in that we have judged the result "intelligent".
There's a long history in AI where neural nets were written off as useless (Minsky was the famous destroyer of the idea, I think) and yet in the end they blew away the alternatives completely.
We now have something useful in that it can glom a huge amount of knowledge, but the cost of doing so is tremendous, and in many ways it's still ridiculously inferior to nature because it's only a partial copy.
A lot of science fiction has assumed that robots, for example, would automatically be superior to humans - but are robots self-repairing or self-replicating? I was reading recently about how the reasons many developers like Python are the same reasons it can never be made fast. In other words, you cannot have everything - all features come at a cost. We will probably have both less-human and more-human AIs, because they will offer us different trade-offs.
I am not sure this is avoidable. WhatsApp (and perhaps Telegram) is the dominant messaging/chat app, for example, and that is European tech, but it was inevitably going to be bought by some bigger company that wanted to be dominant, and that was obviously going to be an American one, since they managed to make big money first.
Skype was at one point extremely popular, and it too was European, but it was bought and squashed under the mountain of American poo that is MS Teams. Forgive me the rudeness, but I wish to dispel the thought that American tech is automatically superior or that it wins by being good.
Then there's Linux - another European development that has rocked the world but has been bought and ruled by mostly American companies, with the notable exception of Ubuntu (and a few others).
The World Wide Web - a blow for freedom and the spread of information, coming from CERN - has again been captured and perverted into an advertisement-delivery and spying system more powerful than the East German Stasi could possibly have imagined.
We have Big Tech to thank for Nazi saluters, quite possibly for the attempt to break the world economy, and for the idea of turning all of humanity into basic-income serfs, a class which will not, of course, include the owners of Big Tech itself.
The EU is the only powerful entity that hasn't been completely perverted by the power of big tech and we have to hope like hell that it won't be. To all those with shares in big tech or jobs in it who want to expand and rule - go ahead and vote me down - who would expect anything else!
SUSE seems invisible to me, whereas Android has probably made many, many billions of dollars, and I think it counts as potentially the world's largest Linux distribution.
...because if there isn't then your democracy will turn into an oligarchy. The advantage needs to be somewhat against the richest and for the poorest if you're going to protect that.
Fair enough, but for many years I wasn't aware of what bash COULD do. I mean, one should get to learn more about [[ ]] and how it does regexps, and about while-read loops:
ls *.txt | while read -r FILENAME; do do_something "$FILENAME"; done
and so on. Once you know, you can get a lot done on e.g. a Docker image without having to install lots of other things first.
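To make the [[ ]] regexp point concrete, here's a small self-contained sketch (the version string and filenames are invented for illustration). Inside [[ ]], =~ matches a POSIX extended regexp and captures land in BASH_REMATCH:

```shell
#!/usr/bin/env bash

version="release-2.14.7"
# =~ matches an extended regexp; parenthesised groups go into BASH_REMATCH.
if [[ $version =~ ^release-([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
  echo "major=${BASH_REMATCH[1]} minor=${BASH_REMATCH[2]} patch=${BASH_REMATCH[3]}"
fi

# The while-read pattern from above, with -r and quoting so odd
# filenames survive intact.
printf '%s\n' alpha.txt beta.txt | while read -r FILENAME; do
  echo "processing $FILENAME"
done
```

This prints major=2 minor=14 patch=7, then one processing line per file - all in pure bash, with nothing extra installed.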
I think it's a great idea not to have to have two libraries - so it's a "tick" from me for any idea that permits it.
The thing that bothers me in general about asynchronous code is how you test it, so that you know with some confidence that if it passes the tests today you have covered all the scenarios/orderings that might happen in production.
You have the same problem with threads, of course, and I've always found multithreaded programs to be much harder to write and debug... such that I personally use threading only when I feel I have to.
The actual problem is communicating that caution to developers. I recently had to work on a Python system where the developers were obviously doing Javascript half the time. So... hooray... they put out a huge changeset to make the thing async... and threaded. Oddly enough, none of them had ever heard of the GIL, and I got the feeling of being seen as an irritating old bastard as I explained it to their blank stares. Didn't matter. Threading is good. Then I pointed out that their tests now always passed no matter whether they broke the code. Blank stares. They didn't realise that Mangum forced all background tasks and async things to finish at the end of an HTTP request, so their efforts to shift processing off the request path to speed up the response were for nothing.
Knowing things doesn't always matter if you cannot get other people to see them.
We plan to have in Zig a testing `Io` implementation that will potentially use fuzzing to stress test your code under a concurrent execution model.
That said, I think a key insight is that we expect most library code out there not to make any calls to `io.async` or `io.asyncConcurrent`. Most database libraries, for example, don't need any of this and will still contain simple synchronous code. That code can then be used by application developers to express asynchrony at a higher level:
io.async(writeToDb)
io.async(doOtherThing)
This makes things far less error-prone and simpler to understand than having async/await sprinkled all over the place.
More powerful than a "fuzzing" test `Io` would be a deterministic test `Io`, i.e. one where you can tick the various concurrent branches forward deterministically to prove that various races are handled safely. That makes it possible to capture all the "what if thread A executes this line first, then B executes" cases, etc. - something that is missing from most concurrent frameworks.
That resonates. Testing asynchronous and multithreaded code for all possible interleavings is notoriously difficult. Even with advanced fuzzers or concurrency testing frameworks, you rarely gain full confidence without painful production learnings.
In distributed systems, it gets worse. For example, when designing webhook delivery infrastructure, you’re not just dealing with async code within your service but also network retries, timeouts, and partial failures across systems. We ran into this when building reliable webhook pipelines; ensuring retries, deduplication, and idempotency under high concurrency became a full engineering problem in itself.
That’s why many teams now offload this to specialized services like Vartiq.com (I’m working here), which handles guaranteed webhook delivery with automatic retries and observability out of the box. It doesn’t eliminate the async testing problem within your own code, but it reduces the blast radius by abstracting away a chunk of operational concurrency complexity.
Totally agree though – async, threading, and distributed concurrency all amplify each other’s risks. Communication and system design caution matter more than any syntax or library choice.
I never thought of the idea of printing out a stack trace. A logging function like that is an idea so good and so obvious that I didn't think of it :-)
I use set -e sometimes, but I really dislike scripts that rely on it for all error handling instead of handling errors and logging them.
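Since stack traces came up: bash can in fact print its own call stack via the caller builtin. A minimal sketch of such a logging function (the function names here are invented):

```shell
#!/usr/bin/env bash

# Log a message to stderr, followed by the bash call stack.
log_error() {
  echo "ERROR: $*" >&2
  local frame=0
  # caller N prints "line function file" for stack frame N and
  # returns non-zero once we run off the top of the stack.
  while caller "$frame" >&2; do
    frame=$((frame + 1))
  done
}

inner() { log_error "db write failed"; }
outer() { inner; }
outer
```

Combined with trap 'log_error "command failed"' ERR, this gives you set -e style failure detection while still logging where the failure actually happened.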
^^ This tool has proven very useful for avoiding some of the silliest mistakes and making my scripts better. If you're maintaining scripts with other people, then it is a great way of getting people to fix things without directly criticising them.
David Perigo, ESA’s chemical propulsion engineer and the programme’s technical lead, explains: “The INVICTUS programme will prove the suitability of a hydrogen-fuelled precooled air-breathing propulsion system for horizontal take-off and hypersonic flight. It will provide an invaluable opportunity to test the complete engine flow path, from intake to afterburner, at full scale in an integrated aircraft.”
INVICTUS – Europe’s new hypersonic test platform
The precooler system, building on technology developed through ESA's SABRE study, was designed by UK-based Reaction Engines Ltd and funded through ESA’s GSTP in its initial stages.