
I fully understand that sentiment. For several years now, I have also felt the strong urge to develop something in pure C. My main language is C++, but I have noticed over and over again that I really enjoy using the old C libraries: the interfaces are just so simple and basic, with no fluff. When I develop methods in pure C, I always enjoy being able to concentrate 100% on algorithmic aspects instead of architectural decisions that are forced on me only by the complexity of the language (C++, Rust). To me, C is so attractive because it is so powerful, yet so simple that you can hold all the language features in your head without difficulty.

I also like that C forces me to do stuff myself. It doesn't hide the magic and complexity. Also, my typical experience is that if you have to write your standard data structures on your own, you not only learn much more, but you also quickly see possible performance improvements for your specific use case that would otherwise have been hidden below several layers of library abstractions.

This has put me in a strange situation: everyone around me is always trying to use the latest features of the newest C++ version, while I increasingly try to get rid of C++ features. A typical example I have encountered several times now is people building elaborate setups around std::string_view to avoid string copying, when exactly the same functionality could have been achieved with less code using just a simple raw const char* pointer.
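
To make the string_view point concrete with a toy helper of my own (skip_prefix is a hypothetical name, not from any real codebase): in plain C, a zero-copy "view" is just a pointer into memory someone else owns.

  #include <string.h>

  /* Return a pointer to the part of s after the given prefix,
     or NULL if s does not start with it. Zero copies, no ownership:
     the result simply points into the caller's buffer. */
  const char *skip_prefix(const char *s, const char *prefix) {
      size_t n = strlen(prefix);
      return strncmp(s, prefix, n) == 0 ? s + n : NULL;
  }

  /* skip_prefix("/api/v1/users", "/api/v1") -> "/users" */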


Rewriting GPL software under the MIT license is a terrible thing to do. The GPL is meant to protect and preserve what should be basic human rights. So-called "permissive" licenses are meant to provide big tech with free labour.

I've been using vim for 20+ years and it's been fantastically stable software for me. I have never cared about what version I am using because I've never needed to. An exception to this is that I'm now accustomed to using `:term` which came with vim 8.0 (2016?).

I think it's silly when colleagues point out that VS Code has vim bindings. The bindings are what I know and enjoy, but they're only half the point. I like to work with the machine, in the terminal. I don't like my text editor being a browser that pings M$, one that will eventually start doing so again even after I've had to search up(!) how to disable it.

There's a lot of value in moving slow and not breaking things. I'm glad there are maintainers willing to carry that spirit forward.


Not transferable? In the context of the most widely-used server operating system? I have transferred Unix skills for decades, from one job and project to the next. Even Windows can run a Linux subsystem now.

Having learned Unix/Linux and many of the tools myself, I challenge the claim that it "requires years and years of time investment to become proficient." You missed that the knowledge and skills grow incrementally: on the way to proficiency, your skills are constantly improving. You don't have to stop everything, study Unix for years, and then go back to doing something "productive" once you've memorized it all.

Unix has one file format: plain text. The tools all compose around that. You can of course find exceptions -- you can't express everything in plain text -- but you can get remarkably far with simple single-purpose tools redirecting their output to the input of another tool.

The core tools have more consistency than you give them credit for when you describe them as hundreds of DSLs. They all have man pages, and they more or less follow the same conventions for I/O and flag names. Plenty of exceptions, sure, but no more than you'd find in any large legacy system that has grown and adapted over time.


It's so bad that I'd actually take the decades of cruft over it.

It was made by people who never learned how to use the existing tools or what the actual problems are. It's like the people who keep creating "new" text editors without ever having learned the existing ones well. So they end up with a crippled, half-baked thing: some concept from the '80s that was discovered to be a dead end in the '90s, but with modern cosmetics.

PowerShell is just the incompetence and hubris of people working for a large company that can push its random garbage software on a lot of users and developers thanks to its huge market presence.


Hi troad, I read your book: https://justine.lol/sectorlisp2/troades.html

I don't remember saying that. You might be thinking about https://justine.lol/ape.html where I said we should be focusing on the old things that matter which aren't going away, like UNIX magic numbers, C libraries, and computer science. But I've got nothing against the new. I think AI for example is exciting. Ultimately you should focus on whatever summons your passion and curiosity. Since if you're tapped into that divine energy within, then you can make anything work, and others will agree. Even if it's just boring old numbers.


Rust isn't standardized. Last time I checked, everyone who uses it depends on its nightly build. Their toolchain is enormous and isn't vendorable. The binaries it builds are quite large. Programs take a very long time to compile. You need to depend on web servers to do your development and use lots of third party libraries maintained by people you've never heard of, because Rust adopted NodeJS' quadratic dependency model. Choosing Rust will greatly limit your audience if you're doing an open source project, since your users need to install Rust to build your program, and there are many platforms Rust doesn't support.

Rust programs use unsafe a lot in practice. One of the greatest difficulties I've had in supporting Rust with Cosmopolitan Libc is that Rust libraries all try to be clever by making raw assembly system calls rather than going through libc. So our Rust binaries break mysteriously when I run them on other OSes. For everyone who does AI or scientific computing with Rust, if you profile their programs, I guarantee you 99% of the time will be spent inside C/C++ code. If better C/C++ tools can give us memory safety, then how much difference does it really make whether it's baked into the language syntax? Rust can't prove everything at compile time.
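
To illustrate with a sketch of my own (x86-64 Linux only, GCC/Clang inline asm): a library that inlines the kernel's calling convention like this hard-codes one OS's syscall number and ABI, which is exactly what breaks when the binary lands on another kernel. Going through libc's write() instead leaves that decision to the platform.

  /* write(2) via a raw syscall: 1 is the Linux x86-64 syscall
     number, and the register convention is Linux-specific too. */
  long raw_write(int fd, const void *buf, unsigned long n) {
      long ret;
      __asm__ volatile ("syscall"
                        : "=a"(ret)
                        : "a"(1), "D"(fd), "S"(buf), "d"(n)
                        : "rcx", "r11", "memory");
      return ret;
  }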

Some of the Rust programs I've used, like Alacritty, will runtime panic all the time, because the language doesn't actually save you. What saves you is smart people spending 5+ years hammering out all the bugs. That's why the old tools we depend on every day, like the GNU programs, never crash, and their memory bugs are rare enough to be newsworthy. The Rust community has a reputation for toxic behavior that raises questions about the reliability of its governance. Rust evangelizes its ideas by attacking other languages and socially ostracizing the developers who use them. Software development is the process of manipulating memory, so do you really want to be relinquishing control over your memory to these kinds of people?


I like that paper, but I don’t think it was the first to suggest separating x and y and using multi-pass for distance transforms.

Here are a few that predate it and, I think, make the same observation:

https://dl.acm.org/doi/10.1016/j.ipl.2006.12.005

https://www.sciencedirect.com/science/article/abs/pii/002001...

That second paper, from 1996, references an even older paper from 1994, saying “Dividing rows and columns alternately, Chen and Chuang reduced the time complexity to O(N^2) which is optimal.”

https://www.sciencedirect.com/science/article/abs/pii/002001...
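
For readers who haven't seen the trick, here is a minimal C sketch of the row/column separation for the simpler L1 (city-block) metric; the papers above treat the Euclidean case, which needs a cleverer 1D pass, but the multi-pass structure is the same.

  /* One 1D pass: forward and backward sweeps, so each cell ends up
     with min over j of d[j] + |i - j|. stride lets us walk columns. */
  static void dt1d(int *d, int n, int stride) {
      for (int i = 1; i < n; i++)
          if (d[i*stride] > d[(i-1)*stride] + 1)
              d[i*stride] = d[(i-1)*stride] + 1;
      for (int i = n - 2; i >= 0; i--)
          if (d[i*stride] > d[(i+1)*stride] + 1)
              d[i*stride] = d[(i+1)*stride] + 1;
  }

  /* Separable L1 distance transform of a w*h binary mask
     (mask[i] != 0 marks a feature pixel). O(w*h) total. */
  void dt2d(const unsigned char *mask, int *d, int w, int h) {
      for (int i = 0; i < w * h; i++)
          d[i] = mask[i] ? 0 : w + h;           /* w+h acts as infinity */
      for (int y = 0; y < h; y++) dt1d(d + y * w, w, 1);  /* x pass */
      for (int x = 0; x < w; x++) dt1d(d + x, h, w);      /* y pass */
  }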


This question reminds me of the first time I met a blind programmer.

I asked him how he managed to code, and he replied with something that stayed with me: a good programmer should organize software in such a way that every piece of code has a clear and logical place. The organization should be so intuitive that anyone could build a mental model of the structure and navigate it easily, even without seeing it.

It felt like something out of a Yoda or Mr. Miyagi lesson. Skeptical, I asked his colleagues if he was truly able to code or if he was just exaggerating. To my surprise, they told me not only was he capable, but he was the best programmer they had ever worked with. They said no one else came close to writing code as organized as his.

That conversation changed my perspective. Ever since, whenever I’m unsure where to place new code, I don’t think about DDD or any specific methodology. Instead, I try to follow the logic and structure of the project in a way that feels natural and easy to follow later.

Later in life, I met two other blind programmers and heard similar stories about their ability to produce well-organized code.

To bring this back to the original question: I view LSP/IDE features the same way those programmers view "visual aids." Code should be organized according to a clear and logical structure that makes it easy to navigate.

Relying on features like Ctrl+Click to find where things are located worries me. Why? Because it can mask structural flaws in the codebase. If we can't intuitively figure out where something belongs, that’s a sign the codebase lacks structure—and that should motivate us to refactor it.

Not only do I avoid using LSP features, but I’m also opposed to their use. While they can help with navigation, they may prevent developers from experiencing and addressing the underlying structural issues in their code.


> We’ve established that, yes, pathnames can include newlines. We have not established why they can do that. After some deliberation, the Austin Group could not find a single use-case for newlines in pathnames besides breaking naive scripts. Wouldn’t it be nice if the naive scripts were just correct now?

Finally. Now let's do the rest: https://dwheeler.com/essays/fixing-unix-linux-filenames.html

Filenames should be boring printable normalized UTF-8. I have never, not once, seen a good reason for a filename to be able to contain random binary gobbledygook.
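
As a rough illustration of what "boring" could mean (a toy check of my own, not the essay's exact proposal; it skips UTF-8 validation and normalization):

  /* Reject path components that are empty, ".", "..", start with '-',
     or contain ASCII control characters (including newline) or DEL. */
  int filename_is_boring(const char *name) {
      if (!name[0] || name[0] == '-') return 0;
      if (name[0] == '.' && (!name[1] || (name[1] == '.' && !name[2])))
          return 0;
      for (const unsigned char *p = (const unsigned char *)name; *p; p++)
          if (*p < 0x20 || *p == 0x7f) return 0;
      return 1;
  }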


I don't run "apps" on my PC. I run programs from a trusted source repository. I expect those programs to not be hindered in their attempts to serve me.

Society is not a business.

Markets are a useful tool that societies use to optimize certain kinds of capital allocation and goods production, where that makes sense. No more no less.

Most of the things societies value most highly don't fit into the market hole.

If trains were exclusively a mechanism for capturing or avoiding carbon, then the metric of $ / tons would be valid. Trains very obviously are not that.


libgen is backed up by annas-archive, and you can help seed the torrents: https://annas-archive.org/datasets https://annas-archive.org/torrents

The Julia AD ecosystem is very interesting in that the community is trying to make the entire language differentiable, which is much broader in scope than what Torch and JAX are doing. But unlike Dex, Julia is not a language built from the ground up for automatic differentiation.

Shameless plug for one of my talks at JuliaCon 2024: https://www.youtube.com/live/ZKt0tiG5ajw?t=19747s. The comparison between Python and Julia starts at 5:31:44.


Hi, I made this demo. This is actually all from the paper Particle-based Viscoelastic Fluid Simulation (Simon Clavet, Philippe Beaudoin, and Pierre Poulin). It is an SPH-type method (particle-particle interactions), not MPM (particle-grid-particle). I do a lot of MPM nowadays, and I have a multi-grid thing to help with incompressibility, but this was me re-implementing the first fluid sim paper I ever implemented.

I did the implementation in JS to help other people reading the paper, and tried to keep everything as similar as possible to the pseudocode from just this single paper. Maybe it would be cool to integrate an additional grid on which incompressibility is enforced better, but I didn't want to make the source confusing.
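
For readers following along, the core of that paper is the "double density relaxation" step. Here is a rough C transcription of the pseudocode as I remember it (O(n^2) neighbor search and made-up constants for brevity), not the demo's actual source:

  #include <math.h>

  typedef struct { float x, y; } Vec2;

  /* One double-density relaxation step (Clavet et al. 2005).
     h: interaction radius, k/k_near: stiffness, rho0: rest density. */
  void relax(Vec2 *p, int n, float h, float k, float k_near,
             float rho0, float dt) {
      for (int i = 0; i < n; i++) {
          float rho = 0, rho_near = 0;
          for (int j = 0; j < n; j++) {          /* measure density */
              if (j == i) continue;
              float r = hypotf(p[j].x - p[i].x, p[j].y - p[i].y);
              if (r >= h) continue;
              float q = 1 - r / h;
              rho += q * q;
              rho_near += q * q * q;
          }
          float P = k * (rho - rho0);            /* pseudo-pressures */
          float P_near = k_near * rho_near;
          for (int j = 0; j < n; j++) {          /* apply displacements */
              if (j == i) continue;
              float dx = p[j].x - p[i].x, dy = p[j].y - p[i].y;
              float r = hypotf(dx, dy);
              if (r >= h || r == 0) continue;
              float q = 1 - r / h;
              float D = dt * dt * (P * q + P_near * q * q) / 2;
              p[j].x += D * dx / r; p[j].y += D * dy / r;
              p[i].x -= D * dx / r; p[i].y -= D * dy / r;
          }
      }
  }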

It is also a little difficult to do density ratios with just what is shown in the paper; here the masses are set to (1, .8, .6, .4). This is what causes the lightest particles to get launched so violently into the air. It would probably be useful to integrate some ideas from the paper Density Contrast SPH Interfaces (Barbara Solenthaler, Renato Pajarola).

I started revisiting SPH because I have some new ideas to combine it with an MPM/FLIP grid for closeups. I'm trying to do a multi-scale MPM simulation that can handle better surface tension droplets in closeups while also doing extremely large scale scenes. You can see some larger scale parallel sims on my YouTube: https://www.youtube.com/c/GrantKot


I'll take a stab at it, since I'm one of those privacy advocates (and also prone to making sweeping statements like this).

Let's say Alice and Bob are doing life and emailing each other about normal life stuff. Charlie runs their email server.

Charlie also runs an advertising business to fund his email server. He somehow reads (not necessarily manually, but the details don't matter) the emails coming through his server to learn what people are more likely to be interested in buying. Everyone benefits, right? Alice and Bob get free email, the advertisers get well-targeted ads, and Charlie gets paid by the advertisers.

Well, along comes the Police. They know that Charlie is able to access contents of emails going through his server, because it's how he funds his email server. The police would need a warrant to search Alice and Bob's communication for something that might incriminate them in an investigation, but Charlie doesn't need a warrant. The police strike a deal with Charlie of mutual benefit. Information for another revenue stream. But still, the police are upholders of justice and only use this "email tap" for good.

Time goes on and our glorious democracy erodes into an autocratic state (ask Germany - it happens!). Suddenly our justice-loving Police have become the Gestapo, but money talks and it's in Charlie's interest to stay on the Gestapo's good side, so the email tap remains in place and we have Alice and Bob, good people that they are, collaborating on how to resist the autocratic state, which gets funneled straight to the Gestapo. Bad guys win.

Essentially it boils down to this: the means for the public to resist tyranny is a necessary prerequisite for freedom. Conversely, the more power (and information is power, especially personal information) is centralized, the more impactful a potential hostile takeover becomes, and the easier to orchestrate (much easier to infiltrate/control one source of information than thousands).


It seems most of these are the result of a poor application of a thesaurus with no regard to context, but here are some tortured phrase gems from these "gobbledygook sandwiches" [0]:

"artificial intelligence" => "counterfeit consciousness" / "man-made brainpower" / "fake knowledge"

"mean square error" => "mean square blunder"

"sensitive data" => "touchy information"

"signal to noise" => "flag to clamor"

"breast cancer" => "bosom peril"

"big data" => "huge information"

"ant colony" => "underground creepy crawly region"

"Navier-Stokes" => "Navier-Stocks"

"NP-hard" => "NP-difficult"

"end-users" => "stop-customers"

"phising attack" => "phishing assault"

"emission of CO2" => "excretion of CO2"

"deep learning" => "profound education"

"decision tree" => "choice bush"

"system failure" => "framework disappointment"

"real time" => "genuine time"

"fuzzy logic" => "feathery rationale"

"child nodes" => "tyke hubs"

"state-of-the-art" => "United States of America-of-the-cleverness"

"directional (graph) axes" => "directional tomahawks"

"magic mushrooms" => "wizardry mushrooms"

"max pooling" => "Georgia home boy pooling" (!?)

"malicious parties" => "compromising get-togethers"

[0] https://dbrech.irit.fr/pls/apex/f?p=9999:5


For maximum data isolation of hardware devices from Apple:

  - avoid storing anything on iCloud
  - disable iCloud via MDM / Apple Configurator policy profile
  - router block Apple network (17.0.0.0/8) connections
  - router block Apple CDNs via dnsmasq wildcard domains
  - router allow Apple servers for notifications and app/OS updates
  - login via App Store only, not Settings/iCloud
Apple list by service: https://support.apple.com/en-us/101555

The issue is that usually these terminals mean "higher throughput" when they say "faster", not "lower latency". The lowest-latency terminal in every test is Xterm, often by a LOT. Alacritty for a long time was actually quite bad at latency--and notably had a high variance on its latency, which is particularly miserable--but I think it improved recently? From what I remember of these benchmarks, someone using urxvt isn't going to be impressed by the supposed speed of Alacritty, if we are talking latency (and I agree: we should be, and everyone should use Xterm, which is actually an insanely good terminal).

As for throughput, I have lived in the terminal for decades, and as long as the various layers don't have massive buffers I honestly don't care how slow the terminal is: if I am dumping megabytes into my terminal backscroll I probably am going "oh shit" and am frantically hitting Ctrl-C... a slow terminal with a small buffer handles that almost immediately. I get the impression that there are maybe some use cases involving high-rate screen updates for apps that happen to run in consoles but are really GUIs... I don't use many of those and in fact try to avoid them, but I could maybe see an advantage for a high-throughput terminal to improve their simulated frame rate?


> If a country chooses not to comply, the only option is for the ICC to wage a war to enforce its judgement

Not only does the US criminal elite not recognize the ICC, but they took it one step further by spelling out[1] what might happen if a US criminal is charged by the court:

"The Hague Invasion Act", allows the president to order U.S. military action, such as an invasion of the Netherlands, where The Hague is located, to protect American officials and military personnel from prosecution or rescue them from custody.

... so not only should Israeli and Hamas war crimes be prosecuted, but, in order not to appear utterly hypocritical and "to do right by history", US/UK war criminals like Dick Cheney, G.W. Bush, Tony Blair, and all the other despicable criminal soldiers should also face the music for what they did in Abu Ghraib, Gitmo, and other places. Kidnapping from a sovereign country, torture, etc. Just utterly barbaric.

But the US especially is a lost cause, considering how it treats the worst transgressors and war criminals, e.g. the execution without trial of Osama bin Laden. So just imagine if anyone proposed having US war criminals meet that very same fate. It would get you banned on every Internet site for "hate speech", LOL. Which is why it's pointless to cite laws, the justice system, or pen and paper to solve something that is immune to them.

[1] https://en.wikipedia.org/wiki/American_Service-Members%27_Pr...


I was using Signal, too. One day it phoned home, got some new instructions from its creators, refused to work any longer, and demanded that I download an upgrade. I was not excited about this, since upgrades generally break things, but after a few days of grumbling I knuckled under. Sure enough, the upgrade is broken: it insists that I have to upgrade Google Play Services, which I can't do, because I deleted it along with the rest of the Google code I don't want to use.

I can't use Signal anymore, by Open Whisper Systems' choice, which means I can't easily communicate with a fair number of my friends anymore, and that's apparently just the way it is. Open Whisper Systems demands that I let Google have a level of access to my phone I don't want them to have, or they won't let me use Signal. Their proprietary Signal software is the only way to use the Signal service, so I can't just switch to an app whose privacy characteristics might be a better fit for my needs, as no other apps exist.

Am I going to convince everyone I want to communicate with that they should stop using Signal and start using something else? No, I am not, because why would they do that? They're all happily using Signal now, and I'm the odd one out, so I lose. Sure feels a lot like lock-in; I have no good choices here.


I think an underestimated issue with k8s (et al.) is on the cultural level. Once you let in complex generic things, it doesn't stop there. A chain reaction has started, and before you know it, you've got all kinds of components reinforcing each other, suddenly required due to some real, or just perceived, problems that are only there in the first place because of a previous step in the chain reaction.

I remember back when the Cloud first started getting a foothold that what people were drawn to was that it would reduce the complexity of managing the most frustrating things, like the load balancer and the database. At a price, of course, but it was still worth it.

Stateless app servers, however, were certainly not a large maintenance problem. But somehow we've managed to squeeze things like k8s in there anyway; we just needed to evangelize microservices to create a problem that didn't exist before. Now that this is part of the "culture", it's hard to even get beyond hand-wavy rationalizations that microservices are a must, presumably because they were the initial spark that triggered the whole chain reaction of complexity.


I have often mused that, in some ways, it seems like the transistor is really being wasted in AI applications. We use binary states in normal computing to reduce entropy. In AI this is less of a concern, so why not use more of the available voltage range? Basically, re-think the role of the transistor and re-design from the ground up - maybe NAND gates are not the ideal fundamental building block here?

The problem is text editor support. People don't trust binary data and want to see it in a text editor. ASCII has four characters reserved for separation, so ridiculous formats like xSV, where x is an in-band character, are unnecessary. It's just the text editor support that is lacking; even Emacs can't seem to do it well.
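
To make it concrete, here is a toy writer of my own using the reserved characters (US = 0x1F between fields, RS = 0x1E after each record); fields can then contain commas, tabs, and newlines with no quoting or escaping, and a reader just splits on the same two bytes:

  #include <stdio.h>

  /* Write one record: fields separated by ASCII US (0x1F),
     record terminated by RS (0x1E). */
  void put_record(FILE *f, const char **fields, int nfields) {
      for (int i = 0; i < nfields; i++) {
          if (i) fputc(0x1f, f);
          fputs(fields[i], f);
      }
      fputc(0x1e, f);
  }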

First, they annoy the elders.

Then when they grow up to be teenagers, they live in a cave and communicate mostly through grunting. I am at this phase with my children, and I make inhuman efforts not to react and to let it flow.

I hope that when their brains actually fire up, when they are 25 or 30, they will be nice kids who realize what their parents went through to raise them.

Please comment only to confirm this, I am desperate.


Here's a thought:

This article is exploring the wrong problem.

The filesystem where my files reside should already be a fully-distributed filesystem covering all the devices I own, right from the moment I buy and enroll them.

With that precondition, my file is already where I need it, and any physical bit shuffling is just a detail managed by the OS, using whatever connectivity is already available (or prompting me to "please connect cable type X between devices Y and Z for N minutes" if devices are out of sync).


Because logging is persistent. Once I have good logging implemented for a section of code, I can always enable it and see what's happening every time the application runs.

Debuggers are one-offs. Running a debugger (and rerunning the application) every time I'm suspicious about something, or need confirmation, is more effort.

Good logging is like having an always-on debugger that's enabled for everybody at once. It's an investment that pays off.
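
As a sketch of what "always enable it" can look like (my own minimal example, not from any particular codebase): a debug statement that is compiled in everywhere and flipped on at runtime by an environment variable, so no rebuild or debugger session is needed.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  static int log_debug_on = -1;

  /* Always compiled in; enabled at runtime with LOG_LEVEL=debug. */
  #define LOG_DEBUG(...) do { \
          if (log_debug_on < 0) { \
              const char *lv = getenv("LOG_LEVEL"); \
              log_debug_on = lv && strcmp(lv, "debug") == 0; \
          } \
          if (log_debug_on) { \
              fprintf(stderr, "DEBUG %s:%d: ", __FILE__, __LINE__); \
              fprintf(stderr, __VA_ARGS__); \
              fputc('\n', stderr); \
          } \
      } while (0)

  /* Usage: LOG_DEBUG("retrying request, attempt=%d", attempt); */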

Yes, I know how to use a debugger, but rarely has it been more valuable than good logging.


A linker typically only includes the parts of the library it needs for each binary, so some parts will definitely exist in many copies across binaries when you statically link, but it will not make complete copies.

But I wouldn't consider this bloat. To me it is just a better separation of concerns. To me, bloat would be a system that has to keep track of all library dependencies instead, both from a packaging perspective and at runtime. I think it depends on where you are coming from. To me static linking is just cleaner; I don't care much about the extra memory it might use.


The biggest issue I have with YAML is that it forbids tabs.

The argument is that tabs are shown differently in every editor, which is actually something I like. When you're looking for something deeply nested, you can reduce the tab width a bit; when that's not needed, you can increase it to make the nesting levels more visible.

And forbidding tabs turns a one-keystroke action into a two- or four-keystroke one.

I really don't understand the Python/YAML hatred of tabs, and as a result I don't really use either.


A little bit of history about the book series may help understand what is in it.

In 1956, Knuth graduated high school and entered college, where he encountered a computer for the first time (the IBM 650, to which the series of books is dedicated). He took to programming like a fish to water, and by the time he finished college in 1960, he was a legendary programmer, single-handedly writing several compilers on par with or better than the professionals (and making good money too). In 1962, when he was a graduate student (and also, on the side, a consultant to Burroughs Corporation), the publisher Addison-Wesley approached him with a proposal to write a book about writing compilers (given his reputation), as these techniques were not well known. He thought about it and decided that the scope ought to be broader: programming techniques were themselves not well known, so he would write about everything: “the art of computer programming”.

This was a time when programming a computer meant writing in that computer's machine code (or in an assembly language for that machine) — and some of those computers were little more than simple calculators with branches and load/store instructions. The techniques he would have to explain were things like functions/subroutines (a reusable block of assembly code, with some calling conventions), data structures like lists and tries, how to do arithmetic (multiplying integers and floating-point numbers and polynomials), etc. He wrote up a 12-chapter outline (culminating in "compiler techniques" in the final chapter), and wrote a draft against it. When it was realized the draft was too long, the plan became to publish it in 7 volumes.

He had started the work with the idea that he would just be a “journalist” documenting the tricks and techniques of other programmers without any special angle of his own, but unavoidably he came up with his own angle (the analysis of algorithms) — he suggested to the publishers to rename the book to “the analysis of algorithms”, but they said it wouldn't sell so ACP (now abbreviated TAOCP) it remained.

He polished up and published the first three volumes in 1968, 1969, and 1973, and his work was so exhaustive and thorough that he basically created the (sub)field. For example, he won a Turing Award in 1974 (for writing a textbook, in his free time, separate from his research job!). He has been continually polishing these books (e.g. Vols 1 and 2 are in their third edition that came out in 1997, and already nearly the 50th different printing of each), offering rewards for errors and suggestions, and Volume 4A came out in 2011 and Volume 4B in 2023 (late 2022 actually).

Now: what is in these books? You can look at the chapter outlines here: https://en.wikipedia.org/w/index.php?title=The_Art_of_Comput... — the topics are low-level (he is interested in practical algorithms that one could conceivably want to write in machine code and actually run, to get answers) but covered in amazing detail. For example, you may think that there's nothing more to say about the idea of “sequential search” than “look through an array till you find the element”, but he has 10 pages of careful study of it, followed by 6 pages of exercises and solutions in small print. Then follow even more pages devoted to binary search. And so on.

(The new volumes on combinatorial algorithms are also like that: I thought I'd written lots of backtracking programs for programming contests and whatnot, and “knew” backtracking, but Knuth exhausted everything I knew in under a page, and followed it with dozens and dozens of pages.)

If you are a certain sort of person, you will enjoy this a lot. Every page is full of lots of clever and deep ideas: Knuth has basically taken the entire published literature in computer science on each topic he covers, digested it thoroughly, passed it through his personal interestingness filter, added some of his own ideas, and published it in carefully written pages of charming, playful prose. It does require some mathematical maturity (say, at the level of a decent college student, or a strong high school student) to read the mathematical sections, or you can skim through them and just get the ideas.

But you won't learn about, say, writing a React frontend, or a CRUD app, or how to work with Git, or API design for software-engineering in large teams, or any number of things relevant to computer programmers today.

Some ways you could answer for yourself whether it's worth the time and effort:

• Would you read it even if it wasn't called “The Art of Computer Programming”, but was called “The Analysis of Algorithms” or “Don Knuth's big book of super-deep study of some ideas in computer programming”?

• Take a look at some of the recent “pre-fascicles” online, and see if you enjoy them. (E.g. https://cs.stanford.edu/~knuth/fasc5b.ps.gz is the one about backtracking, and an early draft of part of Volume 4B. https://cs.stanford.edu/~knuth/fasc1a.ps.gz is “Bitwise tricks and techniques” — think “Hacker's Delight” — published as part of Volume 4A. Etc.)

• See what other people got out of the books, e.g. these posts: https://commandlinefanatic.com/cgi-bin/showarticle.cgi?artic... https://commandlinefanatic.com/cgi-bin/showarticle.cgi?artic... https://commandlinefanatic.com/cgi-bin/showarticle.cgi?artic... are by someone who read the first three volumes in 3 years. For a while I attended a reading group (some recordings at https://www.youtube.com/channel/UCHOHy9Rjl3MlEfZ2HI0AD3g but I doubt they'll be useful to anyone who didn't attend), and we read about 0.5–2 pages an hour on average IIRC. And so on.

I find reading these books (even if dipping into only a few pages here and there) a more rewarding use of time than social media or HN, for instance, and wish I could make more time for them. But everyone's tastes will differ.



