To appreciate the “fast” part, nothing beats reading through LuaJIT’s lj_opt_fold.c, none of which would work without SSA.

Of course, LuaJIT is cheating, because compared to most compilers it has redefined the problem to handling exactly two control-flow graphs (a line and a line followed by a loop), so most of the usual awkward parts of SSA simply do not apply. But isn’t creatively redefining the problem the software engineer’s main tool?..
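
For a taste of what those rules look like, here is a toy sketch of the mechanism (my own made-up IR and names, nothing like LuaJIT’s actual structures): because the trace IR is in SSA form, an instruction’s operands can never change after it is emitted, so a fold rule is a pure, local pattern match on opcodes, applied once at emission time.

    #include <stdio.h>

    enum { KINT, SLOAD, ADD };
    typedef struct { int op, a, b, k; } Ins;
    static Ins ir[64];
    static int n;

    static int emit(int op, int a, int b, int k) {
      /* ADD(KINT x, KINT y) => KINT(x + y): constant folding. */
      if (op == ADD && ir[a].op == KINT && ir[b].op == KINT)
        return emit(KINT, 0, 0, ir[a].k + ir[b].k);
      /* ADD(ADD(v, KINT x), KINT y) => ADD(v, KINT(x + y)): reassociation,
         safe here only because these are toy integers, not floats. */
      if (op == ADD && ir[a].op == ADD && ir[ir[a].b].op == KINT &&
          ir[b].op == KINT)
        return emit(ADD, ir[a].a, emit(KINT, 0, 0, ir[ir[a].b].k + ir[b].k), 0);
      ir[n] = (Ins){op, a, b, k};  /* no rule matched: emit as-is */
      return n++;
    }

    int main(void) {
      int v = emit(SLOAD, 0, 0, 0);                 /* some variable */
      int t = emit(ADD, v, emit(KINT, 0, 0, 2), 0); /* v + 2 */
      int u = emit(ADD, t, emit(KINT, 0, 0, 3), 0); /* (v + 2) + 3 */
      printf("folded to ADD(v, %d)\n", ir[ir[u].b].k);  /* prints 5 */
      return 0;
    }

With a mutable IR, neither rule would be sound without first proving that nothing redefines the operands in between; with SSA, that proof comes for free.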


I’ve found the SSA book to be... unforgiving in its difficulty. Not in the sense that I thought it to be a bad book but rather in that I was getting the feeling that a dilettante in compilers like me wasn’t the target audience.

I was involved in making the book. It is very much a book for academics, and came out of an academic conference bringing together people working at the forefront of SSA-based research.

I mean, once again, I’m not really complaining about the book. It’s fairly mathy, sure, but so what. I also actually welcome that it’s a coherent book rather than a bunch of papers in a trenchcoat or a menagerie of neat things people have thought of (*cough* Paxos variants).

It’s rather that it’s a bit unusual in that it’s a coherent book whose prerequisites (on top of an old-timey compilers course, say) I don’t think actually exist in book form (I’d love to be proven wrong, as that’s likely what I need to read). The introductory part does make it self-contained in a sense, but it’s more like those grad-level maths books that include a definitions chapter or three: technically you don’t need to know any of that stuff beforehand, true, but in reality if it does more for you than just fill a few gaps and fix terminology, then you’re not going to have a good time with the rest. Again, just my experience, I don’t know if that’s what you were going for.

If there was a criticism implied in my initial comment, it’s that I think that the kind of person that goes looking for literature recommendations in this thread isn’t going to have a good time with it, either; so at the very least they should know what they’re signing up for. But I don’t think you’re really disagreeing with that part?..


Oh for sure, I was agreeing with you. The target audience is academics.

For non-experts, I love "Engineering a Compiler" by Cooper and Torczon.


Like so many compiler books from academia.

Still visible on desktop but not anymore on mobile, it seems. Grey on white (provided your browser requests light mode) is also less easy to spot than their earlier white on black. So I, for example, was sure it had disappeared until you prompted me to recheck.

> The name itself is fairly transparent in implying that there's really no security

A password-capability system is a password-capability system. Not requiring an account does not make it not an access control. (Though it does make it e.g. not selectively revokable, which is a known weakness of password capabilities.)


Correct me if I am misunderstanding your point but unlisted YouTube videos don’t need a password or anything to be accessed. Anyone who has the URL can access it. It’s just not indexed/searchable on YouTube.

Right. And neither do Google Docs shared by a no-login link (which used to be the only option) or for that matter RSA signing keys. You could in theory guess any of these, given all of the time in the universe (quite literally). A “password capability” is any mechanism where knowing the designation of an object (such as the “unlisted” link) is a necessary and sufficient condition to access it. The designation has to be hard to guess for the system to make sense.

(The intended contrast is with “object capabilities”, where the designation is once again necessary and sufficient but also unforgeable within the constraints of the system. Think handles / file descriptors: you can’t guess a handle to a thing that the system did not give you, specifically, a handle for.)
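
To put the two side by side in code (a toy model of my own, not any real system’s API): a password capability is a random bit string checked on presentation, while an object capability is an index into a table that only the system can write.

    #include <stdint.h>
    #include <string.h>

    #define NOBJ 4
    static const char *objects[NOBJ] = {"doc0", "doc1", "doc2", "doc3"};

    /* Password capability: each object's designation is a 128-bit random
       token; presenting it is necessary and sufficient for access. */
    static uint8_t tokens[NOBJ][16];  /* filled from a CSPRNG at creation */

    const char *access_by_token(const uint8_t token[16]) {
      for (int i = 0; i < NOBJ; i++)
        if (memcmp(token, tokens[i], 16) == 0)  /* knowing it suffices... */
          return objects[i];
      return 0;  /* ...and guessing 2^128 bits takes cosmic amounts of time */
    }

    /* Object capability: the designation is an index into a per-subject
       table only the system can write, like a file descriptor. */
    typedef struct { const char *handles[8]; int n; } Subject;

    int grant(Subject *s, int obj) {  /* only the system mints handles */
      s->handles[s->n] = objects[obj];
      return s->n++;                  /* nothing to guess: just an index */
    }

    const char *access_by_handle(const Subject *s, int h) {
      return (h >= 0 && h < s->n) ? s->handles[h] : 0;
    }

    int main(void) {
      Subject alice = {0};
      int h = grant(&alice, 2);             /* alice can now reach doc2,   */
      return !access_by_handle(&alice, h);  /* but can't forge one to doc3 */
    }

An unlisted YouTube URL is the first kind: the video ID is the token.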


I get people won’t reasonably guess it, but an unlisted link is still an exposed link literally anyone with internet access can open. It’s simply not the same as a login + password, neither functionally nor technically.

The fact that this site exists says it all: https://unlistedvideos.com/indexm.html


I strongly suspect (as I’ve said elsewhere) that there’s no simulation going on, just a bunch of precomputed refraction maps. Two dependent texture fetches are not nothing (and neither of course is the sequentializing nature of rendering a transparent thing over another thing), but I wouldn’t lose sleep over them if there was a point. Thus far I’m not convinced that there is.

Has noöne located and disassembled the thing yet? The speculation is getting tiresome. (I don’t own an up-to-date macOS device and have never owned an iOS one, so no help from my end, sorry.)


Ain't no way precomputed reflection maps are burning through the full TDP of the processor. Even the desktop experience is miserable perf-wise

The XZ project’s build system is and was hermetic. The exploit was right there in the source tarball. It was just hidden away inside a checked-in binary file that masqueraded as a test for handling of invalid compressed files.

(The ostensibly autotools-built files in the tarball did not correspond to the source repository, admittedly, but that’s another question, and I’m of two minds about that one. I know that’s not a popular take, but I believe Autotools has a point with its approach to source distributions.)


I thought that the exploit was not injected into the Git repository on GitHub at all, but only in the release tarballs. And that due to how Autoconf & co. work, it is common for tarballs of Autoconf projects to include extra files not in the Git repository (like the configure script). I thought the attacker exploited the fact that differences between the release tarball and the repository were not considered particularly suspicious by downstream redistributors in order to make the attack less discoverable.

First of all, even if that were true, that wouldn’t have much to do with hermetic builds as I understand the term. You could take the release tarball and build it on an air-gapped machine, and (assuming the backdoor liked the build environment on the machine) you would get a backdoored artifact. Fetching assets from the Internet (as is fashionable in the JavaScript, Go, Rust, and to some extent Python ecosystems) does not enter the equation; you just need the legitimate build dependencies.

Furthermore, that’s not quite true[1]. The differences only concerned the exploit’s (very small) bootstrapper and were isolated to the generated configure script and one of the (non-XZ-specific) M4 scripts that participated in its generation, none of which are in the XZ Git repo to begin with—both are put there, and are supposed to be put there, by (one of the tools invoked by) autoreconf when building the release tarball. By contrast, the actual exploit binary that bootstrapper injected was inside the Git repo all along, disguised as a binary test input (as I’ve said above) and identical to the one in the tarball.

To catch the modification, the distro maintainers would have needed to notice the difference between the M4 file in the XZ release tarball and its supposed original in one of the Autotools repos. Even then, the attacker could instead have shipped an unmodified M4 script but a configure script built with the malicious one. Then the maintainers would have needed to run autoreconf and note that the resulting configure script differed from the one shipped in the tarball, which would have produced a ton of false positives, because the regenerated script only matches if you use the exact same versions of the Autotools components as the upstream maintainer did. Unconditionally autoreconfing things would be better, but risks breakage, because the backwards-compatibility story in Autotools has historically not been good; they’re not supposed to be used that way.

(Couldn’t you just check in the generated files and run autoreconf in a commit hook? You could. Glibc does that. I once tried to backport some patches—that included changes to configure.ac—to an old version of it. It sucked, because the actual generated configure file was the result of several merges and such and thus didn’t correspond to the output of autoreconf from any Autotools install in existence.)

It’s easy to dismiss this as autotools being horrible. I don’t believe it is; I believe Autotools have a point. By putting things in the release tarball that aren’t in the maintainer’s source code (meaning, nowadays, the project’s repo, but that wasn’t necessarily the case for a lot of their existence), they ensure that the source tarball can be built with the absolute bare minimum of tools: a POSIX shell with a minimal complement of utilities, the C compiler, and a POSIX make. The maintainer can introduce further dependencies, but that’s on them.

Compare this with for example CMake, which technically will generate a Makefile for you, but you can’t ship it to anybody unless they have the exact same CMake version as you, because that Makefile will turn around and invoke CMake some more. Similarly, you can’t build a Meson project without having the correct Python environment to run Meson and the build system’s Python code, just having make or ninja is not enough. And so on.

This is why I’m saying I’m of two minds about this (bootstrapper) part of the backdoor. We see the downsides of the Autotools approach in the XZ backdoor, but in the normal case I would much rather build a release of an Autotools-based project than a CMake- or Meson-based one. I can’t even say that the problem is the generated configure script being essentially an uninspectable binary, because the M4 file that generated it in XZ wasn’t, and the change was very subtle. The best I can imagine here is maintaining two branches of the source tree, a clean one and a release one, where each release commit is notionally a merge of the previous release commit and the current clean commit, and the release tarball is identical to the release commit’s tree (I think the uacme project does something like that?); but that still feels insufficient.

[1] https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee78b...


> Unconditionally autoreconfing things would be better, but risks breakage, because the backwards-compatibility story in Autotools has historically not been good; they’re not supposed to be used that way.

Yes and no. "make -f Makefile.cvs" has been a supported workflow for decades. It's not what the "build from source" instructions will tell you to do, but those instructions are aimed primarily at end users building from source who may not have M4 etc. installed; developers are expected to use the Makefile.cvs workflow and I don't think it would be too unreasonable to expect dedicated distro packagers/build systems (as distinct from individual end users building for their own systems) to do the same.


Focusing on the technical angle is imo already a step too far. This was first and foremost a social engineering exercise, and only secondarily a technical one.

this is very true, and it honestly troubles me that it’s been flagged.

Even I’m guilty of focusing on the technical aspects, but the truth is that the social campaign was significantly more difficult to understand and unpick, and is so much more problematic.

We can have all the defences we want in the world, but all it takes is to oust a handful of individuals, or in this case just one, or to bribe or blackmail them, and then nobody is going to be reviewing, because everybody believes that it has been reviewed.

I mean, we all just accept whatever the project believes is normal right?

It’s not like we’re pushing our ideas of transparency on the projects… and even if we were, it’s not like we are reviewing them either; they will have their own reviewers, and the only people left are package maintainers, who are arguably more dangerous.

There is an existential nihilism that I’ve just been faced with when it comes to security.

unless projects become easier to reproduce and we have multiple parties involved in auditing, I’m going to stay a bit concerned.


> I mean, we all just accept whatever the project believes is normal right?

Not in this thread we don’t? The whole thing has been about the fact that it wasn’t easy for a distro maintainer to detect the suspicious code even if they looked. Whether anyone actually does look is a worthy question, but it’s not orthogonal to making the process of looking not suck.

Of course, if we trust the developer to put software on our machine with no intermediaries, the whole thing goes out the window. Don’t do that[1]. (Oh hi Flatpak, Snap. Please go away. Also hi NPM, Go, Cargo, PyPI; no, being a “modern programming language” is not an excuse.)

[1] https://drewdevault.com/2021/09/27/Let-distros-do-their-job....


If you squint so hard that SSA is functional programming[1] and register renaming is SSA, modern CPUs are kind of functional, but that of course has nothing to do with functional programming done by the user; it’s just the best way we know to exploit the state of the art in semiconductors to build CPUs that execute (ostensibly) serial programs.

[1] https://www.cs.princeton.edu/~appel/papers/ssafun.pdf
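
To make the squint concrete (a toy rendering of the correspondence in [1], not an excerpt from it): the phi nodes of a loop in SSA form are exactly the parameters of a tail-recursive function.

    /* The imperative loop              ...in SSA form:
         int s = 0, i = 0;                s0 = 0; i0 = 0
         do { s += i; i++; }              loop: i1 = phi(i0, i2)
         while (i < n);                         s1 = phi(s0, s2)
                                                s2 = s1 + i1; i2 = i1 + 1
                                                if (i2 < n) goto loop
       Each phi becomes a parameter of a tail call, and every SSA name
       becomes an ordinary immutable binding: */
    static int loop(int i1, int s1, int n) {
      int s2 = s1 + i1;
      int i2 = i1 + 1;
      return i2 < n ? loop(i2, s2, n) : s2;
    }

    int sum(int n) { return loop(0, 0, n); }  /* the entry supplies i0, s0 */

Register renaming then gives each definition of an architectural register its own physical register, which is the same single-assignment property one level down.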


Freelists[1] is still around, LuaJIT hosts its mailing list there. So is Savannah[2]. Would also be interesting to know if it’s actually realistic to ask Sourceware[3] to give you a list or if those pages only reflect the state of affairs from two decades ago. (Only the last of these uses public-inbox, which I personally much prefer to Mailman.)

[1] https://www.freelists.org/

[2] https://savannah.nongnu.org/, https://lists.nongnu.org/

[3] https://sourceware.org/mission.html#services


nVidia has dropped 32-bit PhysX support in 50-series cards, significantly impacting some older-but-not-old games, and there’s no real solution yet except to own an older card.

A decent newspaper can afford this because it also has a fact checker, a copyeditor, a line editor, and an expectation that a journalist will be fired[1] if they systematically fuck up the substance of their writing. It’s difficult to find a decent newspaper.

[1] Or otherwise not employed—newspapers perfected not treating their core workforce as employees decades before everyone else.


Even in the heyday of profitable journalism, fact checkers were a magazine thing. Newspapers generally did not use them; they moved too quickly for that and had too much space (newsprint between the ads) to fill.

On the other hand, in that era a much higher proportion of the news in a paper was directly reported by the journalists: things they physically saw, people they physically talked to or called. They weren’t using some half-baked thing from the internet, because there was no internet. They might run something dodgy from another newspaper or a wire service, but that was pretty rare, at least outside of the celebrity gossip and film columns (which were, sexist-ly, considered women’s news and thus not held to the same standards).


A decent newspaper today in 2025 writes slop for their website to ensure daily engagement with their readers. To the point that people are talking about AI articles, literally serving slop.

Maybe they have a few AP articles thrown in there.

We have to acknowledge what has changed in our world and why things are the way that they are. Perhaps daily news is simply not profitable enough to provide us with quality information, and our economic incentives (namely advertising dollars from websites, YouTube, TikTok and the like) are having an adverse effect on quality.


Did you mean it was decent in the past?

I think the GP's statement was that there are almost no decent newspapers anymore, which I think nobody would disagree with.


> A decent newspaper today in 2025 writes slop for their website to ensure daily engagement with their readers. To the point that people are talking about AI articles, literally serving slop.

> Maybe they have a few AP articles thrown in there.

I've seen signs of AI slop on AP (and Reuters).

