‘Zero-click’ hacks are growing in popularity (bloombergquint.com)
218 points by taubek on Feb 19, 2022 | 381 comments



Why don't Apple & Google spend a few billion dollars over a few years to rewrite their (non-crypto) unix stack from scratch? It seems like that would be an enduring competitive advantage, good for their users, and reduce future liabilities.

Every programming language can result in bugs, but some are worse/more frequent/harder to solve afterwards than others.

Better yet, why wasn't "rebuild commonly used standard libraries" in the US Infrastructure bill last year? The government could pay programmers a lot, and pay whitehat pen-testers a lot (+ per bug discovered) and in a few years of iteration, we'd have incredibly hardened, durable software infrastructure that would benefit us for decades to come, in the public domain.


Mostly because it's not clear that from-scratch rewrites produce better results. They can if the entire architecture needs to be different, but for many libraries it just devolves into an exercise in bikeshedding.

This is particularly true of the US government which, if you've seen their IT systems, is not going to be anyone sane's first choice for doing from-scratch rewrites.


I disagree. I think governments prioritise spending, jobs and votes over "better results".


so you agree with the comment then? you are both criticizing the government


I wrote "governments"... plural. And while both comments are critical of "governments", my comment clearly disagrees with the parent's assertion that governments do not fund whole rewrite projects "Mostly because it's not clear that from-scratch rewrites produce better results".


I don't dispute that governments do such things and I'm not sure how you can read that sentence to say that they do. I just said they don't generally get good results from it.


Isn't Google (allegedly) already doing this with Fuchsia?


They've done their own version of a lot of things, but then have open sourced what they can to grow the base of people familiar with that tech who can make things with it.

I mean, on some level you can try to make your own custom TempleOS for everything, but that only gets you (possibly) reduced scrutiny and hiring issues simply because nobody knows how to use it. But if you're already a target, the reduced scrutiny is probably a bad thing since the good guys won't point out the bugs to get them fixed.


Fuchsia isn't Unix afaik


"Fuchsia implements some parts of Posix, but omits large parts of the Posix model."

-- fuchsia.dev


There is an RFC for full Linux binary compatibility: https://fuchsia.dev/fuchsia-src/contribute/governance/rfcs/0...


IMHO that's kind of the point; to get a substantially different security posture, it's not sufficient to just rewrite the code but it's also necessary to change the design, and that would not be fully Unix anymore.

It would not be enough to provide new, more secure options, and not enough to make them the default; to actually reduce the attack surface, you'd want to remove the insecure options, even at the cost of compatibility.


I think to a large extent this is a mythical man-month thing. Beyond a small scale, you probably can't improve or speed up operating system design by throwing money and person count at it.


Remember when Apple replaced mDNSResponder with discoveryd? It was a total disaster: 95% CPU usage and all kinds of connectivity issues. They had to bring back mDNSResponder not long after.

So there's no guarantee that the replacements would be bug-free. If anything, the current stuff is battle-tested through the years and gets better with each scar. I would guess that they are also employing all kinds of hacks, i.e. things that are not supposed to work that way but do, and they will break a lot of things if they make the new code work the way it is supposed to.

There's even an XKCD for that: https://xkcd.com/1172/


See also Hyrum's law:

> "With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody."


It’s a good long term investment. Just not viable in the short term. Too expensive.


Qubes OS defends even against such attacks: it doesn't show non-ASCII symbols in window titles in dom0: https://www.qubes-os.org/doc/config-files.

I think this OS deserves more attention. By the way, new version 4.1 is out: https://www.qubes-os.org/news/2022/02/04/qubes-4-1-0/.


Not supporting unicode as a feature leaves out most of the world’s population. I’m not interested in such “features” as a non-native English speaker.


In window titles, that's fine. I like my French accents too, but I can give them up for the hypervisor communication...


Japanese, Korean, Chinese, Arabic, Hebrew, Russian, Tamil, Thai, etc. beg to be remembered.


It's not just accents; non-Latin scripts are affected too.


If you open my link, you will find out how to switch it on.


> it doesn't show non-ASCII symbols in window titles in dom0:

Seems like one of the least interesting aspects of qubes. Was there a zero day in the font renderer? I would assume such a thing would be more about homograph attacks.


There have been many exploits related to Unicode text rendering.


> Was there a zero day in the font renderer?

As far as I'm concerned, freetype is another spelling for CVE. There have been multiple high impact vulns. Though it usually seems to require crafted fonts, so I wouldn't be too concerned about window titles using system fonts. Web fonts on the other hand.. disable 'em.


I used Qubes as a daily driver for much of 2021. It hogged too much ram so I stopped. Never disliked the lack of non-ascii support. Security is always more important.


> Never disliked the lack of non-ascii support.

Ah, the elusive quadruple-negative.


Each negative means +1 standard deviation verbal IQ.


+1 for the reader. -1 for the writer.


I’ll have you know I’m in the top 50 on the wordle leaderboard.


What language? Prolog?


SHA-256 Passwordle, of course.


Also segregates every app/workspace into a different virtualized system IIRC


Yes this is in fact the main feature :-) Though it's not exactly as you say it. VMs are a first-class entity. You can easily make as many as you want to represent different security domains. But it's not every app (unless you want it to be).

I didn't even notice the unicode thing. But it doesn't surprise me. They have various similar conservative features. For instance, by default an app in a VM cannot get full screen access. To full-screen a video in youtube you have to full-screen in app and then hit Alt-F11. The concern is that the app somehow tricks the user into thinking that they're interacting with the host OS desktop. Also the host OS doesn't have Internet access; update package files are downloaded by another VM and copied over.

It's fairly paranoid by design, and their tagline is "a reasonably secure operating system".


If you have so little faith in your system that Unicode characters will lead to an exploit that you block them in window titles, your problem isn't Unicode, the problem is your code that processes and renders it. You still have a problem, you're just making it the user's burden to bear.


You should not have faith in software. You should verify and isolate.


I very explicitly didn't say "software". You should be distrustful of software. If you can't trust the system you've created, you've used or created bad software and then failed to build necessary safeguards. The answer is not to restrict the user, it's to make more robust systems.


What do you mean by "system you've created"? I didn't create anything. I use third-party software (e.g. Firefox) on my third-party OS (e.g. Debian).

Qubes runs Debian in a VM and isolates it to defend the user from threats.


They should just force the use of bitmap fonts. Unicode isn't the enemy.


"no way to stop them" = the economic impact to Apple isn't big enough to justify the engineering / rewrites required to completely prevent them.


It's baffling that they won't at least disable previews for senders not in your contacts. Ideally they would provide a way to block certain types of senders outright. I will NEVER want to receive an iMessage from an unknown email address, but that's where all of the spam crap comes from.

Recently I was on my phone when I received an iMessage from an email address and the toast showed an absolutely insane link. When I opened iMessage (not even that conversation) to go delete the thread, my phone screen went blank quickly 2 or 3 times in a row, something I've never seen before. I deleted the thread and turned the thing off.


Does it solve this to set a different app as your default app for texts? (Hopefully a more secure app.)


I want to trust Apple more than a random 3rd party app in general but regardless I don't think you have an option for alternate SMS on iOS. Anyway the problem here, I think, is that you can somehow send an iMessage (not SMS?) via an account that is backed by an email address instead of a phone number. So even if texts/SMS could have an alternate app, iMessage would still be accepting messages from bad accounts.


"Apple does not allow other apps to replace the default SMS/messaging app." https://support.signal.org/hc/en-us/articles/360007321171-Ca...

Oops, I must've been remembering Android. Well, it's one way Apple could fix this unconscionably lasting security hole.


Sounds like it was triggering a crash or running 70,000 logic gates?


Yes but practically this isn't viable.

Nothing is impossible unless it disobeys the laws of physics (which are also limited to what we currently know).

It would be equivalent to saying, well, it isn't economical enough for energy companies to simply create nuclear fusion reactors...

Some things are just extremely hard and there's no obvious answer even if you had "unlimited" funds.


More likely: our gov gag ordered us to keep up.


Or perhaps one step further (albeit verging into conspiracy theory territory): they intentionally push ahead with known-flawed approaches, projects and engineering practices because it's profitable and there's generally a net benefit to them in being more-aware and more-in-control of the vulnerabilities within that ecosystem than anyone else could be.

(instead of taking the time to wait for research results, best practices, security reviews and privacy concerns up-front at design-time, and even -- shock -- perhaps deciding not to build some societally risky products in the first place)


Apple spends more on security than all but 2 other industry firms (they may spend more than those 2 as well), and has a comparable computing footprint to those firms. This is a facile complaint.


My comment may have been facile and poorly-argued, sure, but if consumer devices are being sold that can be remotely exploited without user interaction during something as commonplace as rendering images.. surely it's worth considering the potential for structural improvements in industry?

Perhaps the associated billions of dollars of spending is indeed the answer, and will translate into measurable improvements. If so, very well.

Perhaps there are Conway-style architectural issues at hand here as well, though. Can disparate teams working on (a large number of) proprietary interconnected products and features reliably produce secure results?

It seems wasteful that similarly-functioning tools -- like messaging apps -- are continuously built and rebuilt and yet the same old issues (generally exacerbated by increasing web scale) mysteriously re-appear time and again.


This isn't a facile argument. I might disagree with it --- I think things are more complicated than they seem --- but I can't call it facile.


Apple, or Microsoft, or Samsung, or Ubuntu, or Google, or whoever can do all the system level bulletproofing they want. People will still write apps. And those apps, probably upwards of 99.999999% of them will be unsafe.

It would take a sea change in the mindsets of software engineers globally to centralize the software development process around a security mindset. That's not going to happen unfortunately. The vast majority of us have neither the expertise, nor the time, to develop 100% secure code. The best most of us conscientious types can do is to provide comprehensive monitoring, so that a user can use those tools to know if and when something is amiss.


> And those apps, probably upwards of 99.999999% of them will be unsafe.

Apps are sandboxed, so the damage should be limited to only the exploited app.

Pegasus exploited iMessage et al., which are Apple's own apps with special permissions.


What special permissions were used to enable the attack?

AFAIK, the hacker broke out of the sandbox in addition to rooting iMessage.

Also, the surface area available to a sandbox is too large. Firecracker-like VM isolation is required for safety, which Apple seems to be moving towards, at least when it comes to parsing in their own apps.


Again, from the perspective of a cyber security expert, all that is great!

Or rather, it would be great if Pegasus were the only 0-day out there. It'd be even better if Pegasus were the only 0-click out there.

Here's the thing though, it's not.

That's the world we live in. So the question is, given that fact, how do we get to a world where we can have some level of security? My belief is that everyone from the users to the app devs have to adopt a security mindset.

Users should not download that free app that lets you see what you would look like as your favorite French pastry. They should not click on the link in that sms they got from that strange phone number. They should be careful about giving out their phone number. Give everyone your gmail google phone number instead and let them send texts to that. Then check those texts on your gmail google phone if you're a high profile target. (Or even just a guy/gal who has a few people out there who really don't like them.) Keep a buffer between the world and your phone. Etc etc etc.

Devs want access to the file system. Awesome, but they'd better make sure in using that filesystem they are not inadvertently allowing users to take any actions deleterious to the system. Devs want access to the GPU. Again, no problem. But you'd better know how to write secure GPU code. There is no way a browser, or .NET, or Python or an OS can provide you access to a GPU "safely". If they give you the gun, they expect you will use it responsibly.

Browsers and other platform providers should also act responsibly. I understand developers want features. At the same time, is it responsible to hand out access to these features without some kind of plan to keep irresponsible devs from compromising security at scale? Sometimes there just is no way to do that, and I understand. (Access to the GPU is an example. Devs just have to know what they're doing.) But sometimes it is possible to do things in a more secure fashion, or to just wait on delivering that feature altogether.

Point is, for a secure environment, everyone has to play their part. There are so many of these 0-clicks and 0-days out there in the wild. Everyone wants to make a better environment. Well, I'm not seeing how that happens without getting everyone's cooperation. Or, at a minimum, getting everyone to be a bit more careful with their behaviors.


Do you have concrete examples of how a developer could, say, write secure code to run on the GPU?


Is there a way to rescind said permissions?


Short of not using those apps, no.


>People will still write apps. And those apps, probably upwards of 99.999999% of them will be unsafe.

This can be avoided if you have a cross-platform, high-level language, say C#, with a big standard library like .NET; the field then needs to make sure the language and core library are safe, since most programs use existing libraries and put some business logic on top. I remember that memory safety was a thing before Rust was born; the issue was that the languages were either too slow, not cross-platform, had weird licenses, or were "garbage".

If giants like Apple, Google, and Facebook contributed to rewriting the core libraries they use, or to proving them correct, then things would improve, but how would they continue to increase their obscene profits?


Respectfully, an enormous amount of work has gone into making sure things like Python, .NET, and Rust are secure. And security researchers still regularly find bugs and sell 0-days. That's not even counting the work that's gone into the gold standard that is the JVM. Any serious-minded security expert could tell you that guaranteeing security on any of these platforms is a Sisyphean effort. Your platform is state of the art with respect to security, until it is not.

The essential problem is features. Devs want features. So Python, .NET, etc, and even the browsers try to provide access to those features. But some of those features are simply inherently unsafe. Someone will find a way to compromise this feature or that. How does one provide 100% safe access to the GPU? The file system? And so on. It's not really possible. At some point, the app level dev will have to keep a security mindset when writing his code. Don't do things on the GPU that compromise the system. But that has to be on the app developer if that developer is demanding that the browsers give him/her access to the GPU.

I don't know if I'm being clear? But I hope you can see what I'm trying to say.


You are using big words like "impossible"; such a big word would need a proof. There are languages that can produce programs proven to be correct as per their specifications. We don't have such programs because we prefer moving fast, breaking things, having fun while coding, making money, etc.

Do you have a proof that it is impossible to have a secure calculator application?

About .NET and Python: they use a lot of wrappers around old unsafe code, so we would need to put in more work and eliminate that. MS failed because of their shitty Windows-first ideals and their FUD.


> How does one provide 100% safe access to the GPU?

Presumably through pointer capabilities.

> The file system?

File system namespacing and virtualization.

I disagree with most of your assertions.
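For what it's worth, here's a minimal sketch of the filesystem half of that idea (my own illustration, not anything a real sandbox ships; the root path is made up): every app-supplied path is resolved against a per-app virtual root and can never climb out of it.

    use std::path::{Component, Path, PathBuf};

    // Treat `root` as the only filesystem the app can see; normalize the
    // untrusted path so it can never escape it. (A real sandbox also has to
    // deal with symlinks, which this sketch ignores.)
    fn confine(root: &Path, untrusted: &str) -> PathBuf {
        let mut inside = PathBuf::new();
        for part in Path::new(untrusted).components() {
            match part {
                Component::Normal(p) => inside.push(p),
                Component::ParentDir => {
                    inside.pop(); // ".." can only climb within the virtual root
                }
                _ => {} // drop "/", "." and drive prefixes
            }
        }
        root.join(inside)
    }

    fn main() {
        let root = Path::new("/sandbox/app-1234"); // illustrative per-app root
        println!("{}", confine(root, "photos/cat.png").display());
        println!("{}", confine(root, "../../../etc/passwd").display());
    }

Both calls end up under /sandbox/app-1234, which is the whole point: the app can ask for anything, but it only ever sees its own namespace.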


> Don't do things on the GPU that compromise the system.

Easier said than done…


If, say, a game or 3D program crashes the system or causes a security issue, the problem is the driver or the hardware. A correct driver and correct hardware should not allow any user-level application to cause issues.

I know it's hard: these GPU companies need to keep backward compatibility, support different operating systems (and versions), and support old stuff that worked by mistake. And probably some benchmark cheating might be hidden in the proprietary drivers too.


Drivers and hardware are not designed by application developers.


I understand. But my point is that we could have safe applications if GPU makers cared about safety; at this moment they care about impressing people with benchmarks so they make money. People that run GPU servers will probably virtualize them and put zero pressure on the driver maker to provide safety that might cost a bit of speed.


"If we could just have one more layer of abstraction, THEN we would be secure".

You'll end up making a standard library so big that it will never be secure. And even more importantly, you'll strangle innovation by disallowing improvements to the standard library.


I would not make it illegal for cool devs to invent their own languages or libraries. I want a good default for string manipulation, HTTP, file manipulation, JSON/XML/zip and other format parsing; you could still rewrite your own in CoolLang using QuantumReactiveAntiFunctionalPatterns. It is your choice whether you use a proven-correct zip library or a different one written by some stranger in a weekend in CoolLang.

Something like the JVM or .NET would be part of the solution, because you could pacify developers: they can use their darling language but target the same platform as the others. We still need true engineers to create an OS and standard library from the ground up, designed for security and not chaotically evolved.


> I want a good default for string manipulation,http, file manipulation, json/xml/zip and other format parsing

Wanting those things is fine but delivering those things is extremely difficult. JSON/XML/Zip have so many weird edge cases it's maybe impossible to write parsers that are complete to the spec yet also truly secure. XML and Zip bombs aren't explicit features of either format but they're side effects of not being explicitly forbidden.

You also want "good" parsers without specifying in which dimension you want them to be "good". You can have a complete parser that's reasonably secure, but then you pay for that with CPU cycles and memory. You can have a small and fast parser that's likely incomplete or has exploitable holes.
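For the bomb case specifically, the usual mitigation is to cap the output rather than trust the input. A rough sketch (my own, generic over any decompressor that exposes a Read; the byte slice stands in for a real decoder stream):

    use std::io::{self, Read};

    // Cap how much decompressed output we will accept, no matter what the
    // (attacker-controlled) archive headers claim.
    fn read_limited<R: Read>(decoder: R, limit: u64) -> io::Result<Vec<u8>> {
        let mut limited = decoder.take(limit + 1); // +1 so we can detect "over the cap"
        let mut out = Vec::new();
        limited.read_to_end(&mut out)?;
        if out.len() as u64 > limit {
            return Err(io::Error::new(
                io::ErrorKind::InvalidData,
                "output exceeds decompression limit (possible zip bomb)",
            ));
        }
        Ok(out)
    }

    fn main() {
        // Stand-in for a real decompressor stream; any `Read` works here.
        let pretend_decoder: &[u8] = &[0u8; 4096];
        match read_limited(pretend_decoder, 1024) {
            Ok(bytes) => println!("accepted {} bytes", bytes.len()),
            Err(e) => println!("rejected: {}", e),
        }
    }

It doesn't make the parser complete or correct, it just bounds the damage a hostile input can do, which is the "secure but incomplete" corner of the tradeoff above.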


I mean a good parser should be correct. We can have different implementations; say, if you work with trusted data and you know that your JSON/HTML/XML has a specific form, you can create a correct parser that handles this subset and is faster than the generic complete one.

We probably need to create better specifications, probably using a logic/math language that can verify the specifications are valid and clear. It will be hard, since people will need to learn to be more precise, but it might also simplify things; say, we would have a simple replacement for HTML and CSS if the guys creating them had to do it in such a language.

The giants use JSON, right? So they could put some money together and find some experts to write a specification; if JSON is flawed they can write a new version of it that is correct. Once the specification is done they can release it, and after that these giants can pay some developers to implement it and prove the implementation's correctness.

They should repeat it for one image format (they can choose whatever decent image format they like), then do it for HTML, audio, video... It will save them money if fewer security issues happen on their servers or on their users' devices. But it will save them money only if it actually costs them when their devices get owned, which means we should stop excusing these bugs with extreme fake claims like "99.9% of applications have security issues".


That's a defeatist position.

99% of the problem is just wanting to not have to rewrite a hundred parsers in memory-safe languages.

It's just economics and engineering.

They don't have to change everyone's minds or fix the world. They'd need to invest a lot but so far nobody really thinks it's worth it.


People try to address that with simpler solutions that don't break backwards compatibility or require a full rewrite.

Isolation, mitigation and prevention of exploitation are common.


As a software engineer I still don't understand how this is even possible.

What kind of logic behind a URL preview can bypass everything? I think companies like NSO Group are just finding backdoors not software bugs.


This one is a good example: https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...

Really worth the read, it was quite eye-opening.

> JBIG2 doesn't have scripting capabilities, but when combined with a vulnerability, it does have the ability to emulate circuits of arbitrary logic gates operating on arbitrary memory. So why not just use that to build your own computer architecture and script that!? That's exactly what this exploit does. Using over 70,000 segment commands defining logical bit operations, they define a small computer architecture with features such as registers and a full 64-bit adder and comparator which they use to search memory and perform arithmetic operations. It's not as fast as Javascript, but it's fundamentally computationally equivalent.

> The bootstrapping operations for the sandbox escape exploit are written to run on this logic circuit and the whole thing runs in this weird, emulated environment created out of a single decompression pass through a JBIG2 stream. It's pretty incredible, and at the same time, pretty terrifying.
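To make the "arbitrary logic gates become a computer" part concrete, here's a toy version (mine, not the exploit's actual encoding): a 64-bit adder built purely out of AND, XOR and shifts, the same kind of primitive the JBIG2 segment commands were abused to provide.

    // Toy version of building arithmetic out of nothing but bitwise logic:
    // a 64-bit adder made only from AND, XOR and shifts (no `+` anywhere).
    // The NSO exploit did the same trick, except its "gates" were JBIG2
    // segment commands operating on attacker-controlled memory.
    fn add_with_gates(mut a: u64, mut b: u64) -> u64 {
        while b != 0 {
            let carry = (a & b) << 1; // bits that carry into the next column
            a ^= b;                   // sum without carries
            b = carry;                // feed carries back in until none remain
        }
        a
    }

    fn main() {
        assert_eq!(add_with_gates(70_000, 12_345), 82_345);
        assert_eq!(add_with_gates(u64::MAX, 1), 0); // wraps like a real 64-bit adder
        println!("ok");
    }

Once you have an adder and a comparator built this way, loops over memory, searches and pointer arithmetic follow; that's the "weird machine" the post describes.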


You have to admire the ingenuity. Just wish it was being put to better use. I can't even fathom the amount of effort required to, basically, create an entire scripting language running in an environment like that.


I feel like this is the same amount of ingenuity put into a typical techie diy project "I turned a coffee pot into a robot that presses my shirts"


Probably an order of magnitude less than was put in to creating that environment ;)


This is an impressive example, but is it really a common example? I think typical examples are much more mundane and possible only due to poorly written code and memory overflow exploits, etc, no?


Difficult to say. I'd keep in mind that NSO Group is a private company, with limited funding and limited privileges. There are also government actors out there with secret services. Who knows what they have been up to recently.


definitely rare and highly targeted exploits.

Exploits for mobile phones in the "open market" are in the millions of dollars, for a single working exploit.

But the incentive is growing as these devices are becoming the center of our lives.


It is so improbable and complicated that it is easier to believe it is just parallel construction to hide the fact that backdoors are used.


I'm not sure if you're being sarcastic, but for parallel construction they'd still need to find this exploit. Are you saying Google Project Zero is out there to hide the traces of backdoors?


It doesn't seem all that unrealistic. These companies buy and research every single bug they can get for iOS, and eventually you have enough that you can glue them together into full exploits. When you have enough funding, this stuff becomes realistic.


Never underestimate the extremes computer science types will go to in order to prove a point.


I believe this line of thought has not been given enough attention recently.


Terrifyingly smart folks there.


Holy shit


URL preview is a pretty big attack surface, you have to fetch over network using complex protocols, parse the result for a variety of formats, and then render it.


Right. Showing an “image preview” for myriad file types means executing them, essentially, and perhaps on buggy code.


AFAIK, they are exploiting vulnerabilities in image and video decoders


Yep it’s usually that or body parsers, that sort of thing.


From the article:

> In December, security researchers at Google analyzed a zero-click exploit they said was developed by NSO Group, which could be used to break into an iPhone by sending someone a fake GIF image through iMessage.

And the thread from back then:

https://news.ycombinator.com/item?id=29568625

That Project Zero blog post lays out the details under the "One weird trick" header.


I like how all of the replies to this are basically "No, they exploited things that were already there". Yeah, and the things that were already there were written by..? Robots? Monkeys? Oh, employees. Got it. *rolls eyes* I think it's completely reasonable to assume that any OS vendor has enemy spies working for them. How could they not?


There are software engineers who sometimes write code that’s not perfect.


So frustrated with the slow adoption/transition to memory-safe languages.


True. Perl has existed for more than 30 years already.


Ada throws out its back with a chuckle.


From what I recall the stagefright vulnerability might be a good example.


Zero click hacks have been around for all of computing. Nothing connected to the internet, connected to a network, has ever, ever been safe.

All you can do is reduce attack surface, and most of all, monitor.

Another comment blames Apple, and financial incentives. Sure, there may be some of that.

But the reality is that safe code is impossible. Now, you may say "But...", yet think about this.

For all of computing history, all of it, no matter what language, no matter how careful, there is always a vulnerability to be had.

Thinking about software, and security any other way, is an immediate fail.

Arguing the contrary, is arguing that the endless litany of endless security updates, for the stuff discovered, doesn't exist.

And those updates only cover stuff discovered. There are endless zero days right now, being exploited in the wild, without patches, of which we are unaware.

We've seen vulnerabilities in every kernel, in mainline software, on every platform, sitting for years too. And you know those are discovered by black hats, and used for a long time before being found out by the rest of the community.

Humans cannot write safe software. Ever. No matter what.

Get over it.

Only detailed, targeted monitoring can help you detect intrusion attempts, expose as little as possible, keep updated, and do your best.


You're missing the point.

Humans cannot write bug-free software[1]. So if your software has security-related things to do (like credential management etc.) you cannot be sure there won't be a way to bypass it.

But here we're talking almost exclusively about remote execution bugs coming from memory-safety issues, which are indeed preventable. Any managed language does the trick, and if they are not fast enough for your use-case there is Rust. (And, before anyone mentions it, since this isn't some low-level/hardware related thing, you don't need to use unsafe Rust).

Rewrites are costly and take time, but we're talking about the wealthiest company on Earth, and these issues have been around for years, so they don't really have an excuse…

[1]: at least without using formal verification tools, which are admittedly not practical enough…


> (And, before anyone mentions it, since this isn't some low-level/hardware related thing, you don't need to use unsafe Rust).

That only makes sense if all your code is in Rust, but I suspect this is not true for foreign libraries, even bulletproof ones. It was my understanding that to convince Rust of their safety, you need blocks of "unsafe" code...or not? For example, if I want to embed Chez Scheme into a Rust program, even safely, I apparently can't do that without "unsafe" code. And that's by no means exclusively a "low-level/hardware related thing".


There is no safe way to code anything, ever.

(Yes people, and Apple should try, but...)


This absolutist statement is basically meaningless.

Taking Rust as an example (use Swift or even Java if that works better for your use-case), we know how to write Rust code that is guaranteed to be free from common classes of bugs that these zero-click attacks exploit.

Yes, we aren't going to get rid of all bugs, yes, zero-click attacks might still be possible once in a while, but we can make it much, much harder and more expensive, and therefore greatly reduce the set of people who have access to such attacks, and reduce their frequency.


> we know how to write Rust code that is guaranteed to be free from common classes of bugs that these zero-click attacks exploit

No we don't.

You are trying to shift the sands, by saying "But.. this one thing we can do...", except even that isn't true.

If we did, it wouldn't keep happening, year after year, decade after decade.

But even with peer reviews, with people supposedly knowing how, well.. it just keeps happening.

Do you think every occurrence is random chance? Or is it, maybe, just maybe, that humans can't write bug free code?


In practice safe Rust code never causes use-after-free bugs, for example, and UAF bugs are a large fraction of exploitable RCE bugs.

Safe Rust code could trigger a compiler bug that leads to use-after-free, or trigger a bug in unsafe Rust code (i.e., code explicitly marked "unsafe") that leads to use-after-free; the latter are rare, and the former are even rarer. In practice I've been writing Rust code full time for six years and encountered the latter exactly once, and the former never. In either case the bug would not be in the safe code I wrote.

I'm certainly not claiming that humans can write bug-free code. The claim is that with the right languages you can, in practice, eliminate certain important classes of bugs.
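For anyone who hasn't seen what that means in practice, a tiny sketch (mine): the classic dangling-reference pattern from C/C++ simply doesn't compile in safe Rust.

    // The use-after-free shape that plagues C and C++, written in safe Rust.
    fn main() {
        let data = vec![1, 2, 3];
        let reference = &data;        // the compiler tracks this borrow
        println!("{:?}", reference);  // fine: `data` is still alive here
        drop(data);                   // ownership ends; the Vec's heap memory is freed
        // println!("{:?}", reference);
        // ^ uncommenting this is rejected at compile time ("cannot move out
        //   of `data` because it is borrowed"), so the use-after-free never
        //   exists in the compiled binary at all.
    }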


Can you now?

So there will be no human error? And Rust will have zero compiler bugs, ever?

I'm not against improvement, but against the absurd assumption that anything is safe. Because nothing is.


I guess this kind of nihilist conservatism ("why bother changing anything since there's nothing we can do") may explain why we are in such a bad situation today…


How are you sure there are no undiscovered bugs in Rust itself?


We don't need to be sure of that. We already have ample evidence that code written in Rust has far fewer vulnerabilities than, say, code written in C.


Quoting from https://forum.nim-lang.org/t/8879#58025

> Someone in the thread said he has 30 years experience in programming, and the only new lang which is really close to C in speed is Rust.

He has a point. Both C and release-mode Rust have minimal runtimes. C gets there with undefined behavior. Rust gets there with a very, very robust language definition that allows the compiler to reject a lot of unsafe practices and make a lot of desirable outcomes safe and relatively convenient.

However, Rust also allows certain behavior that a lot of us consider undesirable; they just define the language so that it's allowed. Take integer overflow, for instance https://github.com/rust-lang/rfcs/blob/26197104b7bb9a5a35db2... . In debug mode, Rust panics on integer overflow. Good. In release mode, it wraps as two's complement. Bad. I mean, it's in the language definition, so fine, but as far as I'm concerned that's about as bad as the C https://stackoverflow.com/a/12335930 and C++ https://stackoverflow.com/a/29235539 language definitions referring to signed integer overflow as undefined behavior.

I assume the Rust developers figure you'll do enough debugging to root out where integer overflow happens, and maybe that's true for the average system, but not all! I once had to write a C++ program to compute Hilbert data for polynomial ideals. The data remained relatively small for every ideal that could reasonably be tested in debug mode, since debug mode is much slower, after all. But once I got into release mode and worked with larger ideals, I started to encounter strange errors. It took a while to dig into the code and add certain manual inspections and checks; finally I realized that the C++ compiler was wrapping the overflow on 64-bit integers! which is when I realized why several computer algebra systems have gmp https://gmplib.org/ as a dependency.

OK, that's the problem domain; sucks to be me, right? But I wasted a lot of time realizing what the problem was simply because the language designers decided that speed mattered more than correctness. As far as I'm concerned, Rust is repeating the mistake made by C++; they're just dressing it up in a pretty gown and calling it a princess.

This is only one example. So, sure, Rust is about as fast as C, and a lot safer, but a lot of people will pay for that execution boost with errors, and will not realize the cause until they've lost a lot of time digging into it... all to boast, what? a 1% improvement in execution time?

IMHO the better design choice is to make it extremely hard to override those overflow checks. There's a reason Ada has historically been a dominant language in aerospace and transportation controls; they have lots of safety checks, and it's nigh impossible to remove them from production code. (I've tried.) Nim seems more like Ada than Rust in this respect: to eliminate the overflow check, you have to explicitly select the very-well-named --danger option. If only for that reason, Nim will seem slower than Rust to a lot of people who never move outside the safe zone of benchmarks that are designed to test speed rather than safety.

To be fair, once you remove all these ~1% slowdown checks, you get a much higher performance boost. And Rust really is a huge improvement on C/C++ IMHO, with very serious static analysis and a careful language design that isn't encumbered by an attempt to be backwards compatible with C. So if you're willing to make that tradeoff, it's probably a perfectly reasonable choice. Just be aware of the choice you're making.
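For readers who haven't hit this, a small sketch (mine) of the behaviors being argued about, plus the explicit alternatives Rust offers; note also that setting overflow-checks = true in Cargo's release profile turns the panicking behavior back on, as a couple of replies below mention.

    // Sketch of the overflow behaviors under discussion. With default settings,
    // the commented-out line panics in a debug build and wraps in a release
    // build; the checked/wrapping/saturating forms make the intent explicit
    // and behave the same in both profiles.
    fn main() {
        let x: u8 = 250;

        // let y = x + 10;  // debug: panic; release (default): wraps to 4

        assert_eq!(x.checked_add(10), None);          // overflow reported as None
        assert_eq!(x.wrapping_add(10), 4);            // wrap on purpose, documented in the code
        assert_eq!(x.saturating_add(10), u8::MAX);    // clamp instead of wrapping
        assert_eq!(x.overflowing_add(10), (4, true)); // value plus an "it overflowed" flag

        println!("all overflow cases handled explicitly");
    }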


This is again confusing perfect bug-freedom with memory safety. Rust doesn't guarantee your code won't have bugs (like integer overflow), but in safe Rust they will never lead to memory vulnerabilities, which means you'll never encounter a remote code execution caused by an integer overflow in Rust.

The key takeaway is the following: Rust programs will contain bugs, but none of those bugs will lead to the kind of crazy vulnerabilities that allow those “zero-click attacks”. Is that perfect? No, but it's an enormous improvement over the status quo.


Wrapping on integer overflow isn't a memory safety bug in Rust. It's often a memory safety bug in C because of how common pointer arithmetic is in C, and the likelihood that the overflowed integer will be used as part of that pointer arithmetic. But pointer arithmetic is so exceedingly uncommon in Rust that I've never seen it done once in my ten years of using it. This is a place where familiarity with C will mislead you regarding accurate risk assessment of Rust code; wrapping overflow isn't in the top 20 things to worry about when auditing Rust code for safety. And if you want the overflow checks even in release mode, it's trivial to enable it permanently. And a future version of Rust reserves the right to upgrade all arithmetic to panicking even in release mode, if hardware ever sufficiently catches up.
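A quick sketch (mine) of that distinction: even if a wrapped integer does end up as an index, the access is bounds-checked, so you get a clean panic instead of a read of adjacent memory.

    use std::env;

    fn main() {
        let data = [10u8, 20, 30];
        // Simulate a length calculation that wrapped around. The value comes
        // from runtime input so nothing here is resolved at compile time.
        let untrusted = env::args().count();       // 1 when run with no arguments
        let bad_index = untrusted.wrapping_sub(2); // 1 - 2 wraps to usize::MAX
        // In C, feeding this into pointer arithmetic is undefined behavior and
        // a classic route to exploitation. In safe Rust the access below is
        // bounds-checked and panics with an "index out of bounds" error instead.
        let value = data[bad_index];
        println!("never reached: {}", value);
    }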


You can enable integer overflow checking in Rust release builds. Android does. I think that trend will continue and at some point even become the default.


You can enable Rust's overflow checks in release mode, FYI. Doesn't help the default case, but you can at least choose to do so.


Apparently, formal proof of algorithms being safe and sound has been repeatedly demonstrated, just not so toward Apple’s closed (proprietary) software specifically their large 14-format image decoders running outside a sandbox.


Apple sandboxes their image decoder.


NOW they do. it is STILL a large attack area.


> safe code is impossible

> Humans cannot write safe software. Ever. No matter what.

Formally proven code does what it says on the box? Do we have different definitions of safe perhaps?


One can be fundamentally mistaken about what "it says on the box" See WPA2/KRACK for example.

It becomes an infinite recursion of "how do we know the proof of the proof of the..." is what we actually want?


You mean, when you look at your code, or someone else does, they think it's ok?

I guess that's why security issues, even in massively peer reviewed code, are a thing of the past, right?

Do your best, code as safely and securely as you know how, peer review and test and fuzz...

Then when you deploy your code, treat it as vulnerable, because history says it likely is.

Treat your phone as compromised. Anything network connected as compromised.

Because history says it can be, and easily.

Monitoring is one of the most important security measures for a reason.


There does actually exist such a thing as formally proven code, which is mathematically according to spec. https://www.sel4.systems/Info/FAQ/proof.pml


I don't even see the point you are making.

Are you trying to claim that the above proof will never be invalidated?

You're really just proving my point here. You think things can be secure.


what do you mean invalidated? the point of proofs is that they will still be as true millennia into the future as they are today. Pythagoras' theorem is just as true today as it was millennia ago.


Yes, that's what I'm claiming.

That's of course only a part of the story, the spec or the hardware can still be broken.


You could formally prove Unicode renderers are 100% correct. It's because they are 100% correct that they can be relied on to be exploited.

The weak spot when it comes to security is not the hardware or the software, it's the human mind.


The proof may be valid but the implementation may have a mistake, or the compiler, or the operating system, or the hardware.


Or if it dynamically loads anything that isn't formally proved.


but does the hardware? Formally proven code does not prevent you from hardware bugs like rowhammer.


Well designed software with attack surface within the bounds of human understanding does not have these problems.

OpenSSH has been exposed to the public Internet for over two decades, with nothing resembling this type of security problem. OpenSSH runs the protocol parser without permissions on the local filesystem, yet Apple thinks an ancient TIFF library with scripting abilities can be run with full permissions. Of course there is a discussion of financial incentives and customer expectations to be had here.

URL previews are an anti-feature for many users. We could not care less. But they get shoved upon users by product feature teams for whom a continuous stream of new features is their reason for being. That's how we develop commercial software, but it's not the only way.
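The OpenSSH pattern (privilege separation) is worth sketching, since it's the shape an iMessage-style pipeline could copy. This is an illustrative toy of my own, not OpenSSH's actual code: the risky parse runs in a throwaway child process that holds nothing worth stealing and, in a real design, would also be sandboxed and deprivileged.

    use std::env;
    use std::io::{self, Read, Write};
    use std::process::{Command, Stdio};

    // Stand-in for a complex, potentially exploitable parser.
    fn risky_parse(input: &[u8]) -> String {
        format!("parsed {} bytes", input.len())
    }

    fn main() -> io::Result<()> {
        let args: Vec<String> = env::args().collect();
        if args.len() > 1 && args[1] == "--parse-worker" {
            // Child: read untrusted bytes from stdin, emit a result, exit.
            // A real design would also drop privileges / sandbox here.
            let mut buf = Vec::new();
            io::stdin().read_to_end(&mut buf)?;
            println!("{}", risky_parse(&buf));
            return Ok(());
        }

        // Parent: keep the untrusted data at arm's length by handing it to a
        // short-lived worker over a pipe; if the parser is exploited, the
        // attacker lands in a process with no secrets and no filesystem access.
        let mut child = Command::new(env::current_exe()?)
            .arg("--parse-worker")
            .stdin(Stdio::piped())
            .stdout(Stdio::piped())
            .spawn()?;
        child.stdin.as_mut().unwrap().write_all(b"untrusted message bytes")?;
        let output = child.wait_with_output()?; // closes the child's stdin, then waits
        println!("worker replied: {}", String::from_utf8_lossy(&output.stdout).trim());
        Ok(())
    }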


> Apple thinks an ancient TIFF library with scripting abilities can be run with full permissions.

It doesn't. That is just one step in a chain of exploits.


What's the evidence that zero-click hacks are growing in popularity? TFA doesn't seem to provide any, and given that in the not so distant past, every other Windows PC was infested with viruses and/or trojans, it's hard to believe device security is on a downward trajectory.


Yeah, there was a time when installing Windows XP with an Ethernet cable plugged in was impossible, because the PC would get infected before even finishing the setup, and reboot.


You can just count ITW exploits against Chrome, for example, to see that they're increasing over the last 3 years. I assume the same is true for some other software.


I have a Galaxy Tab 3 which was on sale in my area until 2018. It's a perfectly usable device. Samsung refuses to upgrade past Android 7. The last security patch is more than a year old.

Mobile security is a huge mess because of planned obsolescence. There should exist no security reasons that force me to junk a device faster than 10 years if the manufacturer is still in business.

Regulatory action is required, and it should think past the traditional "warranty" periods: removing security support is a remote confiscation of private property.


Similar story here. The worst part is that the locked bootloader means that I can't upgrade it myself, either.


Recent and related:

A Saudi woman's iPhone revealed hacking around the world - https://news.ycombinator.com/item?id=30393530 - Feb 2022 (158 comments)

Before that:

A deep dive into an NSO zero-click iMessage exploit: Remote Code Execution - https://news.ycombinator.com/item?id=29568625 - Dec 2021 (341 comments)


Are there no laws in Israel preventing companies like NSO from building and selling zero-day and zero-click exploits? Without proper regulations the Israeli government is creating a sophisticated and dangerous platform for these kinds of illegal attacks.


A super reductive way of explaining it is it's because a lot of state actors (including NSA) have a lot of skin in the game through active, deep investment in the cyber weapon market. State actors strongly incentivize the 'attack' side of the market while companies historically disincentivize the 'defense' side. A solid elucidation of the system (for laypeople like me) can be found in Nicole Perlroth's book "This is How They Tell Me the World Ends": https://browse.nypl.org/iii/encore/record/C__Rb22352302__STh...

Anyone interested in learning more about how NSO group operates can check out digitalviolence: https://www.digitalviolence.org/#/


The Israeli government is exploiting NSO as a global diplomacy leverage. In exchange for approving the export of NSO software, they request foreign State support for their interests abroad, ranging from UN voting to commercial deals and anything in between.


Essentially works like arms exports, which makes a lot of sense.


What would outlawing NSO Group accomplish? The trade would simply move to jurisdictions with even less oversight.


In the article it states that it’s illegal to sell to over 90 countries. But maybe resellers are getting it to these countries?


Israel is arguably the world's biggest beneficiary of the arms trade. Why would they have anything against selling weapons?


Only in your active imagination. In reality it is roughly in 8th place with 3% marketshare.

https://www.weforum.org/agenda/2019/03/5-charts-that-reveal-...


According to that link, it's #1 per capita. Add to this the enormous annual gifts from the US, and you'll find that I'm correct. Per capita, they're the biggest beneficiary of the arms trade, and the world's largest arms dealer.

I don't blame you for not seeing that though. The propaganda about Israel being in serious danger from stones and home made rockets is quite effective.


That doesn't include the free weapons the US gives them with their annual stipend.


Now try that analysis per capita.


I don't think NSO is building & selling exploits. They're buying and renting them out. Exploit-as-a-service.


Why aren't these used to steal cryptocurrencies? According to the article you can buy a similar exploit for just $1-2.5 million. Considering the amount of money floating around that space, that it hasn't happened yet is surprising to me (or maybe I just don't pay attention to people who own crypto and are public about it, maybe they do get hit by zero-days all the time?).


We need a security focused phone. General purpose consumer phones are focused on features; security is not a top priority for the average person.

What are the options now?


Not sure who "we" is here, but yes, I agree: a general-purpose consumer phone can't be considered secure against state-level hackers; there MUST be tradeoffs.

As an example, I consider that a secure phone MUST have a boot-time full-disk-encryption passphrase, which needs to be different from the lockscreen one. For obvious reasons (namely that users will tend to forget their password), you can't have this even as an option on general-purpose phones.

That being said, GrapheneOS is IMO a pretty good option wrt security (for example, they chose to disable JIT, which hurts performance but supposedly improves security), even though lately their focus is no longer security, for business reasons.

Architecture-wise, the best smartphones are Pinephones/Librem phones, because of the separation of the modem (which is, in the case of state actors, an actual danger), and you can force encryption of all communications (it's even possible to do VoLTE encryption CPU-side rather than modem-side). But I think at the moment their OSes really lag behind Android when it comes to security.


> even though lately their focus is no longer security for business reasons.

Context?


Their latest developments are about making GrapheneOS more usable, not more secure. For example, they are working on a camera app and are integrating Google apps. (Don't mistake me, I totally respect them for what they are doing.)


I would imagine https://grapheneos.org/ is the state of the art.


It's high time governments and mega-corps funded projects that rewrite all media decoding libraries in pure Rust (or some managed language if performance is not a concern).

People keep saying RIR (rewrite it in Rust) is somehow pointless, but the reality is that it's impossible to stay ahead of the vulnerabilities that haven't been found yet.


Interesting. Would a whitelist approach have prevented this (i.e., no random person being able to send you a GIF)?


> 'Zero-click' hacks

A.K.A. 'Hacks' (as opposed to social engineering)


A one-click hack can still be a hack. Clicking a link should not be able to break out of a sandbox, run arbitrary code, etc and should still be considered a hack.

I think both should be considered hacks, but zero click is much scarier, so it makes sense to distinguish them


I have always wondered whether the increasing technical complexity of the world, and the bugs it brings, is outpacing the efforts of bug hunters and the security industry. Often it feels like a losing battle, but I would love to see some research on the subject; it might be hard to get any solid data, however.


Just use dumb phones. Maybe one old smartphone purely for business to minimize personal information leak.


Dumb phones still had Bluetooth https://en.wikipedia.org/wiki/Bluebugging


Dumb phones still had security issues and usually had no way to update the software or firmware


dumb phones have their own problems


It seems one way to stop many of them would be to only access email using a web client.


Text email. HTML email provides access to a vast attack surface on the user's device. Normally webmail clients will cheerfully send along all the HTML. Filtering is a futile game of whack a mole.

If you are only doing text email (or some very restricted HTML interpretation) then there is no extra risk in using a local email client. The lack of a HTML interpreter probably means you would be safer than with a webmail client.

If you are doing some form of email as a precaution, you still need a secure place to do it. That might not be a typical smart phone.


Can we change this clickbait title?

Both statements are untrue: a) zero-click hacks have always been popular; b) of course there are ways to stop them.


Are there messaging apps on Android that make you more vulnerable?


Why would anybody work for those companies?


Look how many people work for arms manufacturers. How is this any different?


It's probably a very interesting domain and pays exceptionally well.


They have always been popular... lol


Rust won't be the miracle stopping this. Porting unveil/pledge to all OSes will.
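For anyone who hasn't seen them, a rough sketch of what pledge(2)/unveil(2) look like on OpenBSD. The path and promise strings are illustrative, return-value checks are elided, and the hand-written extern declarations only resolve on an OpenBSD target:

    use std::ffi::CString;
    use std::os::raw::{c_char, c_int};
    use std::ptr;

    // OpenBSD's C prototypes, declared by hand to keep the sketch dependency-free.
    extern "C" {
        fn unveil(path: *const c_char, permissions: *const c_char) -> c_int;
        fn pledge(promises: *const c_char, execpromises: *const c_char) -> c_int;
    }

    fn main() {
        let dir = CString::new("/var/myapp").unwrap();     // illustrative path
        let perms = CString::new("r").unwrap();            // read-only visibility
        let promises = CString::new("stdio rpath").unwrap();

        unsafe {
            // Expose only /var/myapp (read-only), then lock the view in place.
            unveil(dir.as_ptr(), perms.as_ptr());
            unveil(ptr::null(), ptr::null());
            // Drop every capability except basic stdio and read-only file access.
            // Real code must check these return values.
            pledge(promises.as_ptr(), ptr::null());
        }

        // From here on, opening sockets, exec'ing, or writing files is fatal.
        println!("running with a reduced kernel surface");
    }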


There's no silver bullet. Pledge on a complex application that does too many things [requests too many permissions] doesn't help much.

IMO complexity and churn remain the biggest problems, but people are not willing to engage with that. There's always at least one legitimate use case for some faddy, trendy new feature, always a reason for more complexity, fuck anyone who doesn't want it. And so you get a massive body of constantly changing code that auditors can't keep on top of.

What would it be like if your chat app was max 3000 lines of code and received no more than a handful of small patches per year since 2008? You could audit that in an evening or two and be reasonably confident in its security, and you could also be reasonably confident that it hasn't grown a bunch of new vulns in the next three releases, and you could quickly audit it again to be sure.

Alas, practically nobody takes you seriously if you advocate for simplicity. Usually it's the opposite; I tend to get attacked if I suggest that a program/system might be too complex.


> Pledge on a complex application

I don't think you've ever seen how pledge works.

SELinux is complex. Pledge can be a piece of cake.


I think you failed at reading comprehension. I said nothing about the complexity of pledge.


Are you familiar with how sandboxing works on iOS?


Grsecurity for iOS/Android would stop them.


Not to go all 'Rust Evangelism Strike Force' but almost universally, these exploits leverage memory unsafety somewhere in the stack, usually in a parser of some kind (image, text, etc). The fact that this is still tolerated in our core systems is a pox on our industry. You don't have to use Rust, and it won't eliminate every bug (far from it), but memory safety is not optional.

We truly need to work more towards eliminating every memory unsafe language in use today, until then we're fighting a forest fire with a bucket of water.


It's worth engaging with the fact that essentially nobody disagrees with this (someone will here, but they don't matter), and that it's not happening not because Apple and Google don't want it to happen, but because it's incredibly, galactically hard to pull off. The Rust talent pool required to transition the entire attack surface of an iPhone from C, C++, and ObjC to Rust (substitute any other memory safe language, same deal) doesn't exist. The techniques required to train and scale such a talent pool are nascent and unproven.

There is probably not a check Apple can write to fix this problem with memory safe programming languages. And Apple can write all possible checks. There's something profound about that.


Well, to some extent these companies are self-sabotaging by centering interviews around algorithm problems: not only by selecting a certain kind of talent for further investment of resources, but also by signaling to the market the kinds of training needed to land a good job.

If instead, the talent pool were incentivized to increase their ability to understand abstractions, and we selected for that kind of talent, it might not be so hard to use new languages.


Abstractions are fun, security isn't. I doubt there are even that many programmers who enjoy writing (correct, safe) Rust.


Wait, this whole thread is about moving to languages that eliminate classes of security holes by virtue of the language itself. The premise is that being a security conscious programmer is not by itself enough to achieve good security.


In Apple's case they wouldn't need to move everything to Rust. Swift is a little bit higher level and a lot of stuff could be moved into it, with Rust as the lower level layer to replace ObjC / C / C++.

Still a gargantuan effort, but for them it doesn't require everyone learn Rust, just to learn Swift, which is kind of table stakes for a lot of user facing dev I'm sure there.


I honestly don't understand this. If Google or Apple wanted it to happen, they could force those developers to learn Rust. Are you saying the people that wrote the products in question can't learn Rust well enough to achieve the goal?


Forcing their employees to learn rust doesn't mean Google has the capacity to rewrite all their software in rust. They have tons and tons of code which would need to be rewritten from scratch.

Of course, if they dropped all other development and told their employees to rewrite everything in Rust, they might end up with a piece of software written in Rust but no customers.


I agree, but there's so many people at Google (132,000 if you can believe the search results), it's hard for me to believe they couldn't devote a small percentage of them to moving to a secure stack.


Let's say 50,000 write code and a small percentage is 10%. So your idea is that 45,000 would continue to write code in the unsafe languages and 5,000 would rewrite the old code plus the newly written code in Rust? How many years do you think it would take for that 10% of developers to rewrite all the old code and everything newly written by the other 90%?


"Moving to a new stack" implies rewriting much of the code (and: a surprising amount of the code) built on the existing stack.


And you don't realize their codebase must also be really large like their number of employees. They must have a lot of code per employee.


Start with the fact that practically all software development at Apple and Google would cease for multiple months while people dealt with the Rust learning curve, which is not gentle, and proceed from there to the fact that Rust demands (or, at least, urgently requests) architectural changes from typical programs designed in other languages.

Now: rewrite 20 years worth of code.

Let's make sure we're clear: I agree --- and, further, assert that every other serious person agrees --- that memory safety is where the industry needs to go, urgently.


When you say architectural changes, what do you mean? Most of the memory stuff isn't particularly exotic; there's a lot of different syntax and some functional-programming influences, but I'm curious why it would be wildly exotic compared to most C++ code. Or have I misunderstood?


It doesn't matter how exotic something is when you're talking about rewriting an entire platform: the sheer amount of man-hours required to reimplement something, for an advantage the vast majority of customers simply don't value enough, is the limiting factor.

In that context, even a small architectural difference can be seen as a high barrier.


It'd still be like replacing the engines of an aeroplane mid-flight surely? I know Rust can do C interop and it'd probably be done piecemeal but it'd still be an absolutely gargantuan task. I'd say there's a fair chance the sheer time and effort such an undertaking would involve would cost more than the memory safety bugs using C or C++ introduces.


You don't need to move the entire attack surface of the iphone to Rust. There are plenty of smaller areas that tend to have the most vulnerabilities. They could absolutely write a check to radically reduce these sorts of issues.

It'll take years to have impact, but so what? They can start now, they have the money.

> nobody disagrees with this (someone will here, but they don't matter)

There are so many people out there who don't understand the basics. HN can be sadly representative.


I don't think the real question is “how feasible is it to rewrite everything in Rust”, because as you say, the answer to this question is clearly “not at all”. But “rewriting all parsers and media codec implementations” is a much smaller goal, and so is “stop writing new codec implementations in memory-unsafe languages”, yet neither of those two more achievable goals is being pursued either, which is sincerely disappointing.


They wrote it the first time, didn't they? C isn't special, and training isn't special.


macOS runs the Darwin kernel (developed at NeXT using the Mach kernel, then at Apple). NeXTSTEP was based on a BSD UNIX fork. Development of BSD at Berkeley started in 1977. NeXT worked on their kernel and the BSD UNIX fork in the '80s and '90s before being purchased by Apple. NeXTSTEP formed the base of Mac OS X (which is why much of the Objective-C base library starts with `NS-something`). There are 45 years' worth of development on UNIX, and Linux is a completely different kernel with a completely different license. The Linux kernel has been in development for about 31 years.

Languages and understanding them is not special, but decades of development of two different kernels is a huge time investment. Even though Linus Torvalds wrote the basic Linux kernel in 5 months, it was very simple at first.

I doubt writing an entire POSIX-compatible replacement for a kernel would be a small or quick endeavor, and Apple has shown resistance to adopting anything with a GPL 3 license iirc. That is why they switched to ZSH from Bash.


The earlier post seemed more focused on the userland, so we should very much consider excluding the kernel before we decide the idea is too hard.


Time is pretty special. iOS alone is over a decade old and a constantly evolving target and it's itself a direct descendant of a 30+ year old system.


> a constantly evolving target

That decreases the amount of code to replace, doesn't it?


How so? New not-in-safe-languages code is being added all the time.


For code added in the future, you need devs no matter what language they use, so switching their language is the easy part of this large hard project.

For code added in the past, more evolution means that for every X lines of code written, a smaller and smaller fraction of X still exists. Which means less work to replace the end product.


It took decades.


So make the initial goal a portion. It's not like Apple is going to go away any time soon. The second best time to start is now.

And a lot of that was design work that still holds, and a lot of that was code that has been obsoleted.


As far as I'm aware, every major company in the industry is working on exactly this. I'm telling you why we don't just have an all-memory-safe iPhone right now, despite Apple's massive checking account. I'm not arguing with you that the industry shouldn't (or isn't) moving towards memory safety.


Do you think Apple is already at the frontier of what can be done to detect or refactor out these bugs in their existing languages? Static analysis, Valgrind, modern C++, etc?


Is that more so due to a lack of Rust engineers or a lack of firmware engineers capable of rebuilding the iOS stack?


Honestly at this point I’ve given in and am now advocating that we rewrite every damned widget from scratch in Rust, because by the time we’re mostly done, my career will be winding down, and seeing that shit still gets pwned like, exactly as much, will be “good TV”.

Rust is cool because it’s got a solid-if-slow build story that doesn’t really buy into the otherwise ubiquitous .so brain damage. Rust is cool because Haskell Lego Edition is better than no Haskell at all, and Rust is cool because now that it’s proven affine/linear typing can work, someone will probably get it right soon.

But if I can buy shares in: “shit still gets rocked constantly”, I’d like to know where.


> Honestly at this point I’ve given in and am now advocating that we rewrite every damned widget from scratch in Rust, because by the time we’re mostly done, my career will be winding down, and seeing that shit still gets pwned like, exactly as much, will be “good TV”.

Rust won't solve logic bugs but it can help bring up the foundations. So long as memory safety bugs are so pervasive we can't even properly reason on a theoretical level about logic bugs. The core theorem of any type system is "type safety" which states that a well-typed program never goes wrong (gets stuck, aka UB). Only then can you properly tackle correctness issues.

> Rust is cool because Haskell Lego Edition is better than no Haskell at all, and Rust is cool because now that it’s proven affine/linear typing can work, someone will probably get it right soon.

I don't understand the condescending remarks about "Haskell Lego Edition". I do agree that Rust has shown that substructural type systems work and are useful, and that they will be a 'theme' in the next batch of languages (or I can hope).


How much do I win if I can panic Rust without any “unsafe” whatsoever? Maybe I’ll index into some Unicode or something, haven’t decided.


A crash is significantly better than corruption. If you can force an `unwrap` you can cause a denial of service, but with corruption, all bets are off.


Panicking in Rust isn't a memory-unsafe operation.
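
For example, here's a minimal sketch (toy values, no `unsafe` anywhere) of the kind of panic being described:

    fn main() {
        let s = String::from("héllo");
        // 'é' occupies bytes 1..=2, so byte index 2 is not a char boundary.
        // This panics with a char-boundary error -- a controlled abort with
        // an error message, not undefined behavior:
        let _ = &s[0..2];
        // An out-of-bounds Vec index panics the same way (never reached here):
        // let v = vec![1, 2, 3]; let _ = v[10];
    }

Worst case that's a denial of service; the process never reads or writes out-of-bounds memory.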


And frankly I don’t see how it’s even remotely fair to call a no-nonsense statement (that some things are simplified versions of other things, phrased with a cheeky metaphor) “condescending”.

I could just as easily throw around words like “anti-intellectual” if my goal was to distract from the point rather than substantively replying.


But Rust isn't remotely a simplified version of Haskell, and I'm not sure where you got that impression. It's inspired by several languages, but is predominantly a descendant of ML and C++. The only similarity they have is that Rust traits resemble Haskell typeclasses, but even there they are quite different in semantics and implementation.


I like Rust in a lot of ways, I write a fuckload of it and I get value from doing so. Not “praise the lord” value, but real value.

But the attitude is an invitation to getting made fun of. It’s absurdly intellectually dishonest when Rust-as-Religion people actively hassle anyone writing C and then get a little precious when anyone mentions Haskell and then extremely precious when they step on the landmine of the guy who likes Rust enough to know the standard, the compiler, the build tool, the people who wrote the build tool, and generally enough to put it in its place from a position of knowledge.

SSH servers? Yeah, I’d go with Rust. Web browsers? In a perfect world, lot of work. Even for Mozilla who timed the fuck out on it.

Everything ever so no security problem ever exists ever again? Someone called it the “green energy of software security” on HN like this year.

It’s not the coolest look that one of my “blow off some steam” hobbies is making those people look silly, but there are worse ways to blow off some steam.


It sounds like you’re saying, you spent a lot of time focused on learning rust, so now you like to discuss its shortcomings as abrasively as you can for sport.


Upthread I’ve already surrendered. There are certain gangs you just don’t pick a fight with. I’m a slow learner in some ways but I get the message. Got it, learning Rust nuts and bolts only makes it worse to say anything skeptical about it.


Nearly every answer you gave in this thread doesn't address the parent comment's point at all.

It seems you are just raging and reading subtext and drama where there is none.

Further up someone mentioned Rust and Haskell aren't similar and you go on about Rust-religion and where to use Rust. Why don't you just address the point? "Lego" is also not a synonym or metaphor for simplified.


Your argument seems to mostly boil down to "Rust isn't magic", which nobody is really arguing. It does help eliminate one class of really nasty bugs, which tend to repeatedly show up in a lot of massive security hacks, and which generally everyone would like to see eliminated. Therefore: use Rust.

Comparisons to other languages like Haskell don't really work, since they don't fit in the same space nor have the same goals as Rust or C.


Do I really need to do the search for comparisons to solar panels or cancer drugs, or does that sort of scan?


Question mark operator is bind for Result? Derives show? Run that argument past someone who can’t quote chapter and verse.
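
Concretely, a toy sketch of what I mean (nothing here is from any real codebase):

    use std::num::ParseIntError;

    // `#[derive(Debug)]` plays much the same role as Haskell's `deriving (Show)`.
    #[derive(Debug)]
    struct Point { x: i32, y: i32 }

    // `?` short-circuits on the Err case, much like monadic bind over Either.
    fn parse_point(a: &str, b: &str) -> Result<Point, ParseIntError> {
        Ok(Point { x: a.parse()?, y: b.parse()? })
    }

    fn main() {
        println!("{:?}", parse_point("1", "2"));    // Ok(Point { x: 1, y: 2 })
        println!("{:?}", parse_point("1", "nope")); // Err(...)
    }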


lol if "shit still gets rocked" means "programs exit safely but unexpectedly sometimes" we're on very different pages

I'm searching your posts in this topic trying to find something of value and coming up short. You assert that you know rust, and therefore your opinions have merit, but... lots of people know rust and disagree. But somehow your opinions are More Right and the others are just religious Rust shills.

I don't think you know what you're talking about honestly. If you want to pick fights on HN that's cool, we all get that urge, but you're really bad at it.


The flaw in the idea of "rewrite it in rust" is that, next to the memory issues, the biggest issues are logic bugs.

Rewriting something from scratch isn't going to magically not have bugs, and the legacy system likely has many edge cases covered that a modern new implementation will have to learn about first.


Right, but a memory unsafety bug is what takes a harmless logic bug in an image parser with no filesystem access to an RCE and sandbox escape.

Memory unsafety allows you to change the 'category' of the bug: you become free to do whatever, whereas a logic bug forces you to work within the (flawed) logic of the original program.


Not necessarily; see https://github.com/LinusHenze/Fugu14/blob/master/Writeup.pdf for example. It's a full chain that repeatedly escalates privileges without exploiting any memory safety bugs by tricking privileged subsystems into giving it more access than it should have, all the way up through and beyond kernel code execution.


70% of high severity security bugs (including RCE) are due to memory unsafety. Not all, but most. It's been this way for decades.

https://news.ycombinator.com/item?id=19138602

https://www.zdnet.com/article/chrome-70-of-all-security-bugs...


These Rust vs C comparisons often get fixated on the, somewhat unique, memory safety advantage of Rust. But the proper comparison should be ANY modern language vs. C, because those remove a heap of other C footguns as well. Most modern languages have:

- sane integers: no unsafe implicit cast, more ergonomic overflow/saturate/checked casts

- sane strings: slices with length, standardized and safe UTF-8 operations

- expressive typing preventing API misuse: monads like Optional/Result, mandatory exception handling, better typedefs, ADTs vs tagged unions

And even without the full Rust ownership model, I'd expect the following to solve a majority of the memory safety problems:

- array bounds checks (also string bounds checks)

- typed alloc (alloc a specific type rather than N bytes)

- non-null types by default

- double-free, use-after-free analysis

- thread-safe std APIs

In the write-up you linked, Section 2 is a missing error check => Result<T> would surface that. The macOS case contains a relative path vs string comparison => expressive typing of Path would disallow that. DriverKit exploit is a Non-NULL vs NULL API mistake. Kernel PAC is a legit ASM logic bug, but requires a confusion of kernel stack vs. user stack => might have been typed explicitly in another language.
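
A minimal sketch of a few of the bullet points above in Rust (nothing here is from the actual exploit chain; the parser and field names are made up):

    // Bounds-checked access and checked arithmetic: the failure mode is a
    // `None`, not a silent out-of-bounds read or an integer wraparound.
    fn parse_len(header: &[u8]) -> Option<usize> {
        let hi = *header.get(0)?; // None if the slice is too short
        let lo = *header.get(1)?;
        (usize::from(hi) << 8).checked_add(usize::from(lo))
    }

    fn main() {
        assert_eq!(parse_len(&[0x01, 0x02]), Some(0x0102));
        assert_eq!(parse_len(&[0x01]), None); // short input: no OOB read
    }

None of that even touches the ownership model; it's just saner defaults.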


That’s not a zero-click vulnerability though. I didn’t read the entire pdf but 2 of the first 4 steps involve active user participation and assistance (install exploit app 1 and exploit app 2).

I think regardless, you’re right, we will still have logic bugs… but that example is also an “exception proves the rule” kind of thing.


It's not a zero click, that is correct. I presented it as an example of how every layer of Apple's stack, far beyond what is typically targeted by a zero-click exploit chain, can still have logic bugs that allow for privilege escalation. It's not just a memory corruption thing, although I will readily agree that trying to reduce the amount of unsafe code is a good place to start fixing these problems.


An improvement is an improvement. A flaw of seatbelts is that some people still die when they wear them. That's not a valid argument to not wear seatbelts.


>next to the memory issues, the biggest issues are logic bugs

When you look at the percentage of security issues that derive from memory safety, it certainly makes memory safety a good place to start.

>The Chromium project finds that around 70% of our serious security bugs are memory safety problems.

https://www.chromium.org/Home/chromium-security/memory-safet...

>Around 70 percent of all the vulnerabilities in Microsoft products addressed through a security update each year are memory safety issues.

https://www.zdnet.com/article/microsoft-70-percent-of-all-se...


It's important to have good foundations (memory safety) because then it becomes much more attractive to spend effort on the rest of the correctness and security. If you want to build a sturdy house and can see how to make the roof well, don't give up on the roof just because good doors and windows will need separate work.


Can you imagine how long it would take to compile the Linux kernel if it were Rust only? Not to mention the kernel has to allow for third party closed source stuff like drivers; wouldn't that force you to allow unsafe Rust and put you back at square one?


That seems an insignificant price to pay if it would truly provide the promised benefits (big if). Even most Linux users don't compile the kernel themselves, and the 5% that do care can afford the time and/or computing resources.


> Haskell Lego Edition

Gatekeeping much?


I took the time to learn Rust well in spite of how annoying the Jehovah's Witness routine has been for like, what, 5-10 years now? I worked with Carl and Yehuda both on the project right before Cargo (which is pretty solid, those guys don’t fuck around).

I think I’ve paid my cover-fee on an opinion.


Do you have a different opinion on whether or not syntax for a clumsy Maybe/Either Monad is a bit awkward? Do you think that trait-bound semantics are as clean as proper type classes as concerns trying to get some gas mileage out of ad-hoc polymorphism? Do you think that the Rust folks might have scored a three-pointer on beating the Haskell folks to PITA-but-useable affine types?

Or were you just dissing knowing things?


Mass rewrites will be quite the jobs program. I’m on board. Converted to Marxism not long ago.


You don't know what you're asking for. In reality, you'll end up replacing C code with memory unsafety with Rust code written by people who understand Rust less than they understand C. The problem? The Rust Evangelism Strike Force always assumes that if you replace a C program with a Rust program, it'll be done by a top-tier expert Rust programmer. If that isn't the case (which it won't be), then the whole thing falls apart. There are vulnerabilities in JS and Ruby code, languages that are even easier than (and just as type-safe as) Rust.


There's something to be said for taking the entire class of vulnerability off the table.

For instance, in the past I worked at a sort of Active-Directory-but-in-the-cloud company. We identified parsers of user-submitted profile pictures in login windows as a privilege escalation issue. We couldn't find memory safe parsers for some of these formats that we could run in all these contexts, and ended up writing a backend service that had memory safe parsers and would recompress the resulting pixel array.

Rust parsers at the time would have greatly simplified the workflow, and I'm not sure how we would have addressed the problem except as whack-a-mole at the time if there wasn't our central service in the middle (so MMS can't do that).


This is just incorrect. The beauty of Rust is even bad programmers end up writing memory safe code because the compiler enforces it. The ONLY rule an organization needs to enforce on their crappy programmers is not allowing use of unsafe. And there are already available tools for enforcing this in CI, including scanning dependencies.
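
For example, the crate-level lint below (a real attribute, not a third-party tool) turns any `unsafe` block in the crate into a hard compile error; if I remember right, tools like cargo-geiger can additionally report `unsafe` usage in dependencies:

    // src/lib.rs -- any `unsafe` anywhere in this crate becomes a hard
    // compile error, so it can't slip past CI even if a reviewer misses it.
    #![forbid(unsafe_code)]

    pub fn checked_get(data: &[u8], i: usize) -> Option<u8> {
        // Bounds-checked access; no raw pointers needed.
        data.get(i).copied()
        // unsafe { *data.as_ptr().add(i) }  // <- would not compile here
    }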


I think what they're saying is that by making devs use a less familiar language, you're going to end up with at least as many security bugs, just ones not related to memory safety. (Not weighing in either way, just clarifying.)


Keeping in mind that, if you have an RCE bug, any other class of bug is irrelevant. It's a bit like diagnosing someone with the flu after their head has been cut off. And while acknowledging that you're not personally weighing in either way, I will personally call the idea that you'll end up with just as many bugs of a weaker class to be quite silly. Everyone starts as unfamiliar in every language, but not every language makes it equally easy to accidentally introduce vulnerabilities. Defaults matter, tooling matters, and community norms matter, and all of these make it less likely for a low-quality Rust programmer to introduce vulnerabilities than even a medium-quality C programmer.


> There are vulnerabilities in JS and Ruby code, languages that are even easier (and just as type-safe) as Rust.

This is completely misleading. The vulnerabilities that exist in those languages are completely different. They often are also far less impactful.

Memory safety vulnerabilities typically lead to full code execution. It is so so so much easier to avoid RCE in memory safe languages - you can grep for "eval" and "popen" and you're fucking 99% done, you did it, no more RCE.


Rust programmers can write buggy code, but you'd really have to go out of your way to write memory-unsafe code.


I think the only question that matters is how much longer it takes to write a moderately-sized program in Rust vs C. If it takes around the same time, then an average C programmer will probably write code with more bugs than an average Rust programmer. If it takes longer in Rust, the Rust programmer could start taking some seriously unholy shortcuts to meet a deadline, therefore the result could be worse.

All code can have bugs, it's mostly just a question of how many. Rust code doesn't have to have zero bugs to be better than C. It's not like all C programmers are top-tier programmers and all Rust programmers are the bottom of the barrel.


This is part of the issue though:

Writing things in C correctly takes more time than in rust (once you get past the initial learning curve)

Writing things in C that appear to work may take less time.

I think we can be reasonably sure that Apple didn't introduce those image parsing bugs intentionally. But that means they thought it was correct.


I've written a few things at work in C/C++ and Rust. I can move much faster in Rust, personally, as long as the pieces of the ecosystem I need are there. Obviously I only speak for myself.

Part of that is because I'm working in code where security is constantly paramount, and trying to reason about a C or C++ codebase is incredibly difficult. Maybe I get lucky and things are using some kind of smart ptr, RAII and/or proper move semantics, but if they're not then I have to think about the entire call chain. In rust I can focus very locally on the logic and not have to try and keep the full codebase in my head


That assumes that writing unsafe code would make you go faster. It wouldn't. In general if you want to write code in Rust more quickly you don't use unsafe, which really wouldn't help much, but you copy your data. ".clone()" is basically the "I'll trade performance for productivity" lever, not unsafe.
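
To make that concrete, a toy example (types and names made up): the borrow-checker-appeasing shortcut is an extra copy, which costs some performance but no safety:

    #[derive(Clone)]
    struct Config { name: String }

    fn consume(cfg: Config) -> usize { cfg.name.len() }

    fn main() {
        let cfg = Config { name: String::from("prod") };
        // Two places want ownership? Clone it and move on. Wasteful,
        // maybe, but it never touches `unsafe`.
        let a = consume(cfg.clone());
        let b = consume(cfg);
        println!("{} {}", a, b);
    }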


Rust doesn't need top tier programmers. It just needs competent programmers.

The C in use today wasn't written by experts either. And if it was, we can leave it alone for now, or at least until said experts get tired of maintaining it.


I'm surprised no one has mentioned OpenBSD yet. Theo touches on this topic in this presentation: https://www.youtube.com/watch?v=fYgG0ds2_UQ&t=2200s

Some follow-up commentary: https://marc.info/?l=openbsd-misc&m=151233345723889


Because it's not relevant.

1. In the video he's saying that you can't replace memory safety mitigation techniques like ASLR with memory safe languages. He notes that there will always be some unsafe code and that mitigation techniques are free, so you'll always want them.

No one should disagree with that. ASLR is effectively "free", and unsurprisingly all Rust code has ASLR support and rapidly adopts new mitigation techniques as well as other methods of finding memory unsafety.

2. The link about replacing gnu utils has nothing to do with memory safety. At all.

Even if it were related, it would simply be an argument from authority.


Wouldn't it be a lot easier to just use a C compiler that produces memory-safe code?

I'm sure someone else has already thought of this, but in case not... All you need to do is represent a pointer by three addresses - the actual pointer, a low bound, and a high bound. Then *p = 0 compiles to code that checks that the pointer is in bounds before storing zero there.

I believe such a compiler would conform to the C standard. Of course, programs that assume that a pointer is 64-bits in size and such won't work. But well-written "application level" programs (eg, a text editor) that have no need for such assumptions should work fine. There would be a performance degradation, of course, but it should be tolerable.
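
To illustrate the idea, here's a sketch written in Rust only for brevity; a bounds-checking C compiler would emit an equivalent check before every store through an instrumented pointer, and the exact representation is just an assumption:

    // The "three addresses per pointer" representation described above.
    struct FatPtr {
        ptr: *mut u8,
        low: *mut u8,  // first valid byte
        high: *mut u8, // one past the last valid byte
    }

    impl FatPtr {
        fn store(&self, value: u8) {
            // The inserted check: refuse any store outside [low, high).
            if (self.ptr as usize) < (self.low as usize)
                || (self.ptr as usize) >= (self.high as usize)
            {
                panic!("out-of-bounds store");
            }
            unsafe { self.ptr.write(value) } // only reached when in bounds
        }
    }

    fn main() {
        let mut buf = [0u8; 4];
        let base = buf.as_mut_ptr();
        let p = FatPtr { ptr: base, low: base, high: unsafe { base.add(4) } };
        p.store(42);
        assert_eq!(buf[0], 42);
    }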


That's essentially what ASAN is, with some black magic for performance and scope reasons. The problem is that ensuring that your code will detect or catch memory unsafety isn't enough, because the language itself isn't designed to incorporate the implications of that. If you're writing a system messenger for example, you can't just crash unless you want to turn all memory unsafety into a zero-click denial of service.


Programs that would crash when using the memory-safe compiler aren't standards conforming. If you're worried that programs crashing due to bugs can be used for a denial-of-service attack... Well, yes, that is a thing.

Low-level OS and device-handling code may need to do something that won't be seen as memory safe, but I expect that for such cases you'd need to do something similarly unsafe (eg, call an assembly-language routine) in any "memory safe" language.

I'm not familiar with how ASAN is implemented, but since it doesn't change the number of bytes in a pointer variable, I expect that it either doesn't catch all out-of-bounds accesses or has a much higher (worst case) performance impact than what I outlined.


I brought up ASAN because it's a real thing that already exists and gets run regularly. The broad details of how ASAN is implemented are best summarized in the original paper [1]. The practical short of it is that there are essentially no false negatives in anything remotely approaching real-world use. A malicious attacker could get around it, but any "better algorithm" would still run into the underlying issue that C doesn't have a way to actually handle detected unsafety and no amount of compiler magic will resolve that.

You have to change the code. Whether that's by using another language or through annotations like Checked C is an interesting (but separate) discussion in its own right.

As for the point that programs with memory unsafety aren't standards conforming: correct, but irrelevant. Every nontrivial C program ever written is nonconformant. It's not a matter of "just write better code" at this point.

[1] https://www.usenix.org/system/files/conference/atc12/atc12-f...


From the linked ASAN paper: "...at the relatively low cost of 73% slowdown and 3.4x increased memory usage..."

That's too big a performance hit for production use - much bigger than you would get with the approach I outlined.

I don't agree that any nontrivial C program is nonconformant, at least if you're talking about nonconformance due to invalid memory references. Referencing invalid memory locations is not the sort of thing that good programmers tolerate. (Of course, such references may occur when there are bugs - that's the reason for the run-time check - but not when a well-written program is operating as intended.)


I usually find it safe to assume that compiler folks are conscious of optimization opportunities and make pretty intelligent tradeoffs on that spectrum. This is one such case. There's a long history of bounds checking compilers. The first that I know of is bcc in the 80s, which had a 10x slowdown! Austin et al. [1] came along a few years later way back in '94 and improved things to a mere 2-5x slowdown. That's pretty much where things stood for the next two decades because pointer accesses are everywhere in C and register pressure is nothing to sneeze at. Moreover, changing pointer sizes breaks your ability to link external things that weren't compiled with the same flags, like the system libc. ABI compatibility is make-or-break for a C compiler. You can get around that by breaking up the metadata from the actual pointer (e.g. softbound), but the performance cost is still ~3-4x [2].

ASAN was notable because

1) it was very efficient. That initial 73% was utterly fantastic at the time.

2) It was production-usable (i.e. worked on big codebases)

3) With hardware support, the performance hit is often under 10%. HWAsan on modern platforms is low-cost enough to run it all the time.

And no, I'm saying that pretty much every nontrivial C program has UB, not that they're specifically memory unsafe.

[1] https://minds.wisconsin.edu/bitstream/handle/1793/59822/TR11... [2] https://insights.sei.cmu.edu/blog/performance-of-compiler-as...


With all due respect, why do you assume that your “thought about it 3 mins straight” idea would perform better than one that has been in the works for a long time now by people working on similar topics all of their lives?

Don’t get me wrong, I often fell into this as well, but I think programmers really should get a bit of an ego-check sometimes, because (not you) it often affects discussions in other fields as well, ones we don't know jackshit about.


I do this pretty often, and it's often a very valuable exercise, even though I'm almost always wrong. Interrogating the apparent contradiction between my beliefs and existing reality is a highly fruitful learning experience. There are several serious failure modes, though:

1. I can get my ego so wrapped up in my own idea that, even once I have the necessary information to see that it's wrong, I still don't abandon it. In fact, this always happens to some extent; when I change my mind it's always embarrassing in retrospect how blind I was. But the phenomenon can be more or less extreme.

2. In a context where posturing to appear smart and competent is demanded, such as marketing, advocating totally stupid ideas puts me at a disadvantage, even if I recant later. Maybe especially then, because it reminds people who might have forgotten.

3. People who know even less than I do about a subject may be misled by my wrong ideas.

4. This approach is most productive when people who know more than I do about a subject are kind enough to take the time to explain why my ideas are wrong. This happens surprisingly often, both because people are often kind and because the people who know the most about a subject are generally very interested in it, which means they like to talk about it. Still, attention of experts is a valuable, limited resource.

5. People who know more than I do about a subject can get angry and defensive when I question something they said about it, particularly if they're mediocre and insecure. The really top people never act this way, in my experience; if they pay attention at all, either they can explain immediately why I'm wrong, as AlotOfReading did here (though I may not understand!) or they go "Hmm, now that's interesting," before figuring out why I'm wrong. (Or, occasionally, not.) But people with a good working understanding of a field may know I'm wrong without knowing why. And there are always enormously more of those in any field than really top people.

So, I try to do as much of the process as possible in my own notebook rather than on permanently archived public message boards. The worst is when group #3 and #5 start arguing with each other, producing lots of heat but no light.

My theory about why the angry and defensive people in group #5 are never the top people is that they stopped learning when they reached a minimal level of competence, because their ego became so attached to their image of competence that they stopped being able to recognize when they were wrong about things, so they are limited by whatever mistaken beliefs they still had when they reached that level. But maybe I'm just projecting from my own past experience :)


That's a low cost for detecting the memory unsafe behavior. It is not intended to run in production, it's intended to run with your test suite.


Yes, I know. But this thread is about detecting invalid memory references in production, to prevent security exploits. ASAN seems too slow to solve that problem.


Based on recent experience, you'd really want your media decoders compiled with a safe compiler, and if it crashes, don't show the media and move on. Performance is an issue, but given the choice between RCE and DoS, DoS is preferable.

It would be nice if everything was memory safe, but making media decoding memory safe would help a lot.


I absolutely agree that it's a step in the right direction. My point is that we can't get all the way to where we want to be simply by incremental improvements in compilers. At some point we have to change the code itself because it's impossible to fully retrofit safety onto C.


There are similar approaches, e.g. Checked C, which work surprisingly well. However, I'm not sure that this approach would be expressive enough to handle the edge cases of C craziness and pointer arithmetic. There's more to memory unsafety than writing to unallocated memory; even forcing a write to slightly wrong memory (e.g. setting `is_admin = true`) can be catastrophic.


I think it handles all standards-conforming uses of pointer arithmetic. Even systems-level stuff like coercing an address used for memory-mapped IO may work. For example,

    struct dev { int a, b; } *p; p = (struct dev *) 0x12345678; 
should be able to set up p with bounds that allow access only to the a and b fields - eg, producing an error with

    int *q = (int *) p; q[2] = 0;
Of course, it doesn't fix logic errors, such as setting a flag to true that shouldn't be set to true.


Yes, such approaches can be compliant. There's even a few C interpreters. Very popular back in the day for debugging C programs when you didn't have full OS debugging support for breakpoints and etc. Such an approach would be quite suitable for encapsulating untrusted code. There is definitely some major overhead, but I don't see why you couldn't use JIT.


That doesn't at all address use of pointers that have since become invalid (via free or function return, say).


Good point. There's also the problem of pointers to no-longer-existing local variables. (Though I think it's rare for people to take addresses of local variables in a context where the compiler can't determine that they won't be referenced after they no longer exist.)


Nothing wrong with Rust, but I still think making operating systems with airtight sandboxing and proper permission enforcement is the only thing that can truly solve these issues.


Only if the barriers have a finer resolution than a single application. Most applications need access to more than enough data to cause problems in the case of an exploit. You need sandboxing between different components of the application as well.


Still not enough, because apps still need to interact with the outside world, so there would have to be intentional holes in the sandbox out through which the compromised app could act maliciously.


That is why you need a well designed permission system. Android and iOS had a chance of doing this at a time when the requirements could reasonably be understood, but I don't think either came close.


And what language should we use to create such an OS? Maybe Rust?


It is a better choice than C++ for sure.



Look at how often V8’s sandboxes get exploited. It’s all developed by humans, which means there will always be errors.

Saying "just make airtight sandboxes" is like saying "just write bug-free code".


It is a tradeoff. Making an airtight sandbox is not that hard. Making it run programs near hardware speed is a lot harder. Making it run legacy machine code is a nightmare.

JavaScript is not machine code, but still a good deal harder to make fast than a language designed for fast sandboxing. Of course there have been bugs, but mostly I think the JS VMs have done a pretty good job of protecting browsers.


I feel like those are two separate levels of concerns though.

Airtight sandboxing would be easier in a memory safe language that prevents certain classes of bugs.


Memory safety is optional in Rust. It might not be obvious at the moment, because Rust is written by enthusiasts who enjoy fighting with the compiler until their code compiles, but once developers will be forced to use it on their jobs with tight deadlines, unsafe becomes the pass-the-borrow-checker cheat code.


I write Rust at $WORK. Using `unsafe` to meet a deadline makes 0 sense. It doesn't disable the borrow checker unless you're literally casting references through raw pointers to strip lifetimes, which is... insane and would never pass a code review.

99% of the time if you're fighting the borrow checker and just want a quick solution, that solution is `clone` or `Arc<Mutex<T>>`, not `unsafe`. Those solutions will sacrifice performance, but not safety.
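
A toy example of the second escape hatch (everything here is made up for illustration): shared mutable state across threads without a line of `unsafe`:

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Reference-counted handle + lock instead of hand-rolled pointers.
        let counter = Arc::new(Mutex::new(0u32));
        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || *counter.lock().unwrap() += 1)
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        assert_eq!(*counter.lock().unwrap(), 4);
    }

Slower than what a C programmer might hand-roll, sure, but the deadline shortcut stays inside safe code.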


> Using `unsafe` to meet a deadline makes 0 sense... would never pass a code review.

I've seen unsound unchecked casts from &UnsafeCell or *mut to &mut in multiple codebases, including Firefox itself: https://github.com/emu-rs/snes-apu/blob/13c1752c0a9d43a32d05..., https://searchfox.org/mozilla-central/rev/7142c947c285e4fe4f....


My girlfriend uses Rust for embedded systems at a large and important company. Everyone uses memory safety.


Rust is used in production at large companies, though: Amazon, Microsoft, Mozilla, etc. I would be highly surprised if the borrow checker were the reason code couldn't ship in the first place; once you get over the initial mental hurdles it's usually a non-issue.

Besides, equivocating between a pervasively unsafe-by-default language and one with an explicit, bounded opt-in is a little disingenuous. Time after time, it has been shown that even expert C developers cannot write memory safe C consistently; each line of code is a chance to blow up your entire app's security.


Unsafe does not turn off the borrow checker

https://steveklabnik.com/writing/you-can-t-turn-off-the-borr...
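
For illustration (my own toy example, not taken from the post; the exact error text may differ by compiler version), this still fails to compile even though the offending line is wrapped in `unsafe`:

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0];      // immutable borrow of `v`
        unsafe {
            v.push(4);          // error[E0502]: cannot borrow `v` as mutable
        }                       // because it is also borrowed as immutable
        println!("{}", first);
    }

The `unsafe` block only grants access to a few extra operations (raw pointer dereferences and the like); it doesn't relax the borrow rules on references.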


I was under the impression that even in rust unsafe blocks, you still had massive safety advantages over C and it isn’t just instant Wild West.


I think unsafe Rust is a lot more awkward to work with, and easier to cause UB with, compared to C and especially C++. This is just my opinion though!

&mut aliasing is a good example of running into instant UB in unsafe rust, but there are many more that you have to be aware of.

I would check out the unsafe rust "book" for yourself and see what you think. There is a section where you implement Vec and some other data structures from scratch!

https://doc.rust-lang.org/nomicon/intro.html


I’d love to red team the program that thinks Rust unsafe is easier to get right than tight ANSI C.


Why compare 'non-tight' Rust against 'tight' C?

Surely we should compare tight Rust (with some 'tight' unsafe sections) against tight C?


I think it's easier to write correct safe Rust than C, I wouldn't say it's easier to write correct Rust with unsafe blocks than C (many operations strip provenance, you can't free a &UnsafeCell<T> created from a Box<T>, you can't mix &mut and const but you might be able to mix Vec<T> and const (https://github.com/rust-lang/unsafe-code-guidelines/issues/2...), self-referential &mut or Pin<&mut> is likely unsound but undetermined), and it's absolutely more difficult to write sound unsafe Rust than C (sound unsafe Rust must make it impossible for callers to induce UB through any possible set of safe operations including interior mutability, logically inconsistent inputs, and panics).


I must be more tired than I thought if I said “non-tight Rust” and forgot ten minutes later.

I just think if mistakes need to be literally as low as possible, you've got a better bet with tight C than with Rust unsafe.

The language spec is smaller, the static analyzers have been getting tuned for decades, and the project leaders aren't kinda hostile to people using it in the first place.


We could set up a prediction market for this. A study would be performed of attempts to pentest randomly selected unsafe Rust and tight ANSI C programs. A prediction market would be used to estimate the probability of either language winning before publication of results. Someone needs to make this a thing.


What is "tight ANSI C"?


Unsafe code is rarely necessary, especially unsafe code that isn't just calling out to some component in C. You can easily forbid developers from pushing any code containing `unsafe` and use CI to automatically enforce it.


This is kinda disingenuous. Whenever people use unsafe it's like an alarm, because you can set up a CI system that warns the DevOps team about usage of unsafe code.

And most of the time unsafe code is not required. I think many people will just use clone too much, or Arc, rather than unsafe. Additionally, I have never seen unsafe code, at least where I work.


Following this idea, using a memory managed language like Go, Java or C# should also prevent most security issues (at least in non-core systems). Somehow I don't think this would work.


What makes you think so?


While I think garbage collected languages produce programs that are more safe, I also think they are often enablers for new classes of security issues. For example ysoserial, log4j etc.


(This is a genuine question)

Is log4j's bug actually unique to Java/C#/gc-based languages?


The underlying bug in log4j is having a deserialization mechanism that can automatically deserialize to any class in the system, combined with method code that runs upon deserialization that does dangerous things. It has nothing to do with GC at all.

It's a recurring problem in dynamic scripting languages where the language by its very nature tends to support this sort of functionality. It's actually a bit weird that Java has it because statically-typed languages like that don't generally have the ability to do that, but Java put a lot of work into building this into its language. Ruby had a very large issue with this a few years back where YAML submitted to a Ruby on Rails site would be automatically deserialized and execute a payload before it got to the logic that would reject it, if nothing was looking for it. Python's pickle class has been documented as being capable of this for a long time, so the community is constantly on the lookout for things that use pickle that shouldn't, and so far they've mostly succeeded, but in principle the same thing could happen with that, too.

It would be nearly impossible for Go (a GC'd language) to have that class of catastrophic security error, because there is nowhere the runtime can go to get a list of "all classes" for any reason, including deserialization purposes. You have to have some sort of registry of classes. It's possible to register something that can do something stupid upon unmarshaling, but you have to work a lot harder at it.

Go is not unique. You don't see the serialization bugs of this type in C or C++ either (non-GC'd languages), because there's no top-level registry of "all classes/function/whatever" in the system to access at all. You might get lots of memory safety issues, but not issues from deserializing classes that shouldn't be deserialized simply because an attacker named them. Many other languages make this effectively impossible because most languages don't have that top-level registry built in. That's the key thing that makes this bug likely.


> The underlying bug in log4j is having a deserialization mechanism that can automatically deserialize to any class in the system

Getting objects out of a directory service is what JNDI is all about; I'm hesitant to call it a bug.

The bug is that Java is way too keen on dynamically loading code at runtime. Probably because it was created in the 90s, where doing that was kinda all the rage. I think retrospectively the conclusion is that it may be the easiest way to make things extensible short-term, but also the worst way for long-term maintenance. Just ask Microsoft about that.


"Getting objects out of a directory services is what JNDI is all about, I'm hesitant to call it a bug."

I didn't call it a bug. I called it a bit of functionality that makes the security problem possible. There are many things that result in security issues that come from some programmer making something just too darned convenient, but are otherwise "features", not some mistake or something.

It's the underlying problem. You should have to declare what classes are able to be deserialized. To the extent that it's inconvenient, well, so was the log4j issue.


No, not validating user input and passing it to some crazy feature rich library like JNDI is possible in any language. Not denying that Java did contribute by shipping with such an overengineered mess like JNDI in the first place.

Log4Shell wasn't a bug, log4j worked as expected and documented. It's just a stupid idea for a logging library to work in such a way.


No, it's not unique. But a GC, like dynamic typing, strengthens your ability to develop more dynamically and, together with reflection or duck typing, to write code that can deal with (to some extent) unknown input more easily. You can pass arbitrary objects (also graphs of objects) around very generously. Ysoserial is based more or less on this idea. Passing arbitrary objects around was deemed so useful that it is also supported by Java's serialization mechanism and thus could be exploited. Log4shell exploits similar mechanisms that would be hell to implement in non-GC languages.


>Is log4j's bug actually unique to Java/C#/gc-based languages?

Not to my understanding. It would be possible in any language with or without a GC.


I agree and anything above the OS layer can be written in Go or Java or C#. Lots of those devs out there to hire.


Around 70% of security flaws are from memory unsafety (according to Google and Microsoft), which isn't "almost universally" but is still a significant percentage and worth attacking. But we'll still have a forest fire to fight afterwards from the other 30%.


Yeah, I started noticing huge flaws in Apple's Music app, which I told them about and work around mostly, but...are they because Apple software is written in C? C++, Objective-C, same thing. Like can C code ever really be airtight?


I'd say lack of QA. Apple Music (especially on macOS) is EXTREMELY buggy, unresponsive, slow, and feels like a mess to use. Same for iMessage.

Other apps are also written using the same stack with almost no bugs. I wouldn't blame the language here, but the teams working on them (or more likely their managers trying to hit unrealistic deadlines).


No, I would not blame the teams or their managers. You can't just blame a manager you've never met just because he's a manager; we're talking about the manager of Apple Music, who could very well be capable and well-minded, likely personally capable of coding.

So let me give you another example in the same vein as C, where everybody uses a technology that is terrible, questioning it only at the outset, and then just accepting it: keyboard layouts. QWERTY is obsolete. It wasn't designed at random (it would have aged better if it had been); it was designed to slow down typing so typewriters wouldn't jam, and secondly so salesmen could type "TYPEWRITER" with just the top row, so the poor woman he was selling to didn't realize typewriters were masochistic. That's how you end up with millions of people hunting and pecking, or getting stuck for months trying to learn touch typing for real with exercises like "sad fad dad." It takes weeks before you can type "the." The network effects of keyboard layouts are just next-level. Peter Thiel talks about this in "Zero to One" as an example of a technology that is objectively inferior but is still widely used because it's so hard to switch, illustrating the power of network effects. I for one did switch, and it was hard because I could type neither QWERTY nor Dvorak for a month. But after that Dvorak came easily; you don't need an app to learn to type, you just learn by typing, slowly at first, then very soon, very fast.

So with regard to C, I would say it is not objectively inferior the way QWERTY became; it's actually pretty well designed. It does produce fast code. I use it myself sometimes; it's not a bad language for simple algorithm prototypes of under 60 lines. But it's based to a huge degree on characters: the difference between code that works and code that fails can come down to single characters, pretty much any character, with no margin of error. Whereas with Lisp, you have parentheses for everything, you have an interpreter but you can also compile it; I am actually able to trust Lisp in a way that is out of the question with C. There are just so incredibly many gotchas and pitfalls, buffer overflows, it's endless; you have to really know what you're doing if you want to do stunts with pointers, memory, and void types.

I guess the bottom line is if you want your code to be perfect, and you write it in C, you can't delegate to the language, you yourself have to code that code perfectly in the human capacity of perfection.


> it was designed to slow down typing so typewriters wouldn't jam

I'm not quite sure that this is what actually happened: https://repository.kulib.kyoto-u.ac.jp/dspace/bitstream/2433...


> you can't delegate to the language, you yourself have to code that code perfectly in the human capacity of perfection.

Clarifying that what I mean by this is that it's not realistic to expect large C codebases to be perfect. Bug-free, with no exploits. Perfect. Same thing.


You're being downvoted to oblivion, even though your general point (rephrased, C is an unforgiving language and safer languages are a Good Thing) is pretty mainstream. Here are my guesses why:

1. You start off by saying you can't just blame the team or their manager if you're dissatisfied with a product, then instead of explaining why the people who made a piece of software aren't responsible for its faults you go off on a long non-sequitur about QWERTY.

2. Your rant on QWERTY just isn't true. You namedrop Peter Thiel and his book, so if he's your source then he's wrong too. QWERTY is not terrible, not obsolete, it was not designed to slow down typists, and there's no record of salesmen typing "typewriter quote" with just the top row. It's true that it was designed to switch common letters between the left and right hands, but that actually speeds up typing. It also does not take weeks for someone to type "the" ; and if you mean learning touch-typing, I don't know of any study that claims that alternative keyboard layouts are faster to learn.

The various alt. keyboard layouts (Dvorak, Colemak, Workman) definitely have their advantages and can be considered better than QWERTY, sure; people have estimated that they can be up to ~30% faster, but realistically, people report increasing their typing speeds by 5-10%; or at least the ones who have previously tried to maximize their typing speeds... If learning a new layout is the first time they'd put effort into that skill, they'd obviously improve more. It's probably also true that these layouts are more efficient in the sense that they require moving the fingers less, reducing the risk of RSI (though you'd really want to use an ergonomic keyboard if that's a concern.)

QWERTY is still used because it's not terrible, it's good enough. You can type faster than you can think with it, and for most people that's all they want. There's nothing wrong with any of the alternative layouts, I agree that they're better in some respects, but they're not order-of-magnitudes better as claimed.

3. Your opinions about C are asinine.

"not objectively inferior like QWERTY" - So, is C good or not? We're talking about memory safety, C provides literally none. Is this not objectively inferior? Now, I would argue that it's not, it's an engineering trade-off that one can make, trading safety for an abstract machine that's similar to the underlying metal, manual control over memory, etc. But you're not making that point, you're just saying that it's actually good before going on to explain that it's hard to use safely, leaving your readers confused as to what you're trying to argue.

"not a bad language for simple algorithm prototypes of under 60 lines" - It's difficult to use C in this way because the standard library is rather bare. If my algorithm needs any sort of non-trivial data-structure I'll have to write it myself, which would make it over 60 lines, or find and use an external library. If I don't have all that work already completed from previous projects, or know that you'll eventually need it in C for some reason, I generally won't reach for C... I'll use a scripting language, or perhaps even C++. Additionally, the places C is commonly used for its strengths (and where it has begun being challenged by a maturing Rust) are the systems programming and embedded spaces, so claiming C is only good for 60-line prototypes is just weird.

"C is about characters" - Um, most computer languages are "about characters". There are some visual languages, but I don't think you're comparing C to Scratch here... You can misplace a parentheses with Lisp or make any number of errors that are syntactically correct yet semantically wrong and you'll have errors too, just like in C. Now, most lisps give you a garbage collector and are more strongly typed than C, for instance, features which prevent entire categories of bugs, making those lisps safer.

4. You kinda lost the point there. You started by saying that the people who wrote Apple Music "could very well be capable and well-minded, likely personally capable of coding", i.e., they're good at what they do. Fine, let's assume that. Then, your bottom line is that in C "you have to really know what you're doing" and "you yourself have to code that code perfectly in the human capacity of perfection". What's missing here is a line explaining that humans aren't perfect, and even very capable programmers make mistakes all the time, and having the compiler catch errors would actually be very nice. Then it would flow from your initial points that these are actually fine engineers, but they were hamstrung by C.

And the tangent on QWERTY just did not help at all.


> So, is C good or not? We're talking about memory safety, C provides literally none. Is this not objectively inferior? Now, I would argue that it's not, it's an engineering trade-off that one can make, trading safety for an abstract machine that's similar to the underlying metal, manual control over memory, etc.

One might make the argument that Oberon, with its System module, provides the same memory control abilities but few of the disabilities of C.

> so claiming C is only good for 60-line prototypes is just weird.

That seems like a misrepresentation of the claim above?

> Um, most computer languages are "about characters". There are some visual languages, but I don't think you're comparing C to Scratch here... You can misplace a parentheses with Lisp or make any number of errors that are syntactically correct yet semantically wrong and you'll have errors too, just like in C.

Well, not really. Lisp is actually about trees of objects. The evaluator doesn't even understand sequences of characters. That you can enter it as a sequence of characters is purely coincidental, but there have been structured syntactic tree editors (sadly they went down for being proprietary and expensive at the time).


> [...] Oberon [...]

Sure, and that would be a good argument, there are several interesting languages out there that do various things better than C. I'm not intimately familiar with the Wirth languages, but I thought Oberon provided garbage collection?

> [...] misrepresentation [...]

Fine, they never claimed it was only good for that, but I still find it weird to claim that "it's fine, it's great for X" where X is a thing that the language is not particularly good at, while ignoring Y, the thing it's well known for.

> [...] trees of objects [...]

I just don't think that "about characters" or "about trees of objects" is an interesting way to differentiate between programming languages, and I think that this discussion is actually confusing between two different properties. First, is how the source code is represented and edited. It's almost always as a plain text file. Some languages have variants on the plain text file: SQL stored procedures are stored on the RDBMS, Smalltalk stores source code in a live environment image. There are other approaches, such as visual editing as-in Scratch, or Projectional Editing (https://martinfowler.com/bliki/ProjectionalEditing.html) as in... um... Cedalion? I don't actually know any well-known ones.

The other property is how the language internally represents its own code. Sure, Lisp has the neat property that its code is data that it can manipulate, but other languages represent their code as (abstract) syntax trees, too. Basically every compiler or interpreter for a 3rd generation language or above, i.e., anything higher-level than assembly language, parses source code the same way: tokenization then parsing into an abstract syntax tree using either manually-coded recursive descent, or a compiler generator (Bison, Yacc, Antlr, Parser combinators, etc.) So your point that the Lisp evaluator doesn't even understand sequences of characters is true for any compiler, they all operate on the AST.

I think that there's a point to be made somewhere in here that one language's syntax can be more error-prone than another's, but that wasn't the argument being made... Not that I understood, anyway.


> So your point that the Lisp evaluator doesn't even understand sequences of characters is true for any compiler, they all operate on the AST.

Lisp does not really operate on an AST. It operates on nested token lists, without syntax representation. For example (postfix 1 2 +) can be legal in Lisp, because it does not (!) parse that code according to a syntax before handing it to the evaluator.

Lisp code consists of nested lists of data. Textual Lisp code uses a data format for these nested lists, which can be read and printed. A lot of Lisp code, though, is generated without being read/printed -> via macros.


If (postfix 1 2+) is ready to be handed to the evaluator, it's because it has been parsed. That means it must be a parsed representation. "Parse tree" doesn't apply because parse trees record token-level details; ( and ) are tokens, yet don't appear to the evaluator. "Abstract syntax tree" is better, though doesn't meet some people's expectations if they have worked on compilers that had rich AST nodes with lots of semantic properties.

The constituents of the list are not "tokens" in Common Lisp. ANSI CL makes it clear that the characters "postfix", in the default read table, are token constituents; they get gathered into a token until the space appears. That token is then converted into a symbol name, which is interned to produce a symbol. That symbol is no longer a "token".


You're arguing semantics, I think. I would simply say that Lisp's AST is S-expressions (those nested token lists), and that the parser is Lisp's read function. Then your example is just something that's allowed by Lisp's syntax, while something like ')postfix 1 2 +(' would be something that's not allowed by the syntax.

What you say about Lisp code being generated without being read or printed is of course true, and while Lisp takes that idea and runs with it, it's not exactly unique to Lisp either; Rust's macros can do the same thing, without S-expressions. In other languages you usually generate source code; for example, Java has a lot of source code generators (e.g., JAXB's XJC, which used to come with the JDK).
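For instance, a rough sketch of the Rust side of that comparison (illustrative only; the postfix! name is made up for this example): a declarative macro receives token trees and rewrites them into ordinary expressions at compile time, no S-expressions involved.

    // Illustrative declarative macro: rewrites a postfix-style call into an
    // ordinary infix expression at compile time, operating on token trees.
    macro_rules! postfix {
        ($a:expr, $b:expr, +) => { $a + $b };
        ($a:expr, $b:expr, *) => { $a * $b };
    }

    fn main() {
        // Expands to `1 + 2` before type checking ever runs.
        println!("{}", postfix!(1, 2, +));
    }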


The parser for s-expressions is READ. S-expressions are just a data syntax and know nothing about the Lisp programming language syntax. Lisp syntax is defined on top of s-expressions. The Lisp evaluator sees no parentheses and no text; it would not care if the input text contained )postfix 1 2 +(. The reader can actually be programmed to accept that as input. The actual Lisp forms then need to be syntax-checked by a compiler or interpreter.

There are lots of languages with code generators. Lisp does this as a core part of the language and can do it at runtime as part of the evaluation machinery.


Apple Music isn't a great example because depending on which OS and version you're running, it's essentially a hosted web application.

Or, given how new it is, it's likely mostly written in Swift where it presents a native app.


Bugs in Apple's Music apps have essentially nothing to do with it being written in C++ and Objective-C (and these days a significant portion of it is JavaScript and Swift).


As if rewriting entire OS components is easy or viable for vendors, even big ones like Apple or Microsoft.

Also backwards compatibility is a feature many wouldn't give away for extra security, at least not now.



I don't think Apple is shipping anything customer facing that's built on Rust?


Not in Rust, but they did:

- Add reference counting to ObjC to get rid of a lot of use-after-free bugs (still possible, of course, because it's just a language suggestion and not strictly enforced like in Rust or Swift; see the sketch below).

- Push for adding ObjC annotations to let the tooling help catch some classes of bugs. Still not perfect by any means, but it helps a little.

- Create an entirely new memory-safe language in Swift.
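As a rough illustration of the "strictly enforced" side (my sketch, nothing Apple ships): in Rust, reference counting frees the value when the last owner goes away, and the compiler rejects code that could still reach it afterwards, so the "forgot to retain" class of use-after-free can't compile.

    use std::rc::Rc;

    // Illustrative only: Rc gives automatic reference counting, and ownership
    // rules make "use after the last release" a compile error rather than a
    // runtime memory-safety hole.
    fn main() {
        let shared = Rc::new(String::from("message payload"));
        let alias = Rc::clone(&shared);

        drop(shared); // one owner gone, the value is still alive via `alias`
        println!("{} ({} owner left)", alias, Rc::strong_count(&alias));
    } // last owner dropped here; the String is freed exactly once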


Apple+ has some stuff in Rust, in my understanding.


Did you even read what you linked? I wouldn't say "already happening"; more like first early steps. Operating systems have a massive attack surface; it would take years to convert code from C/C++ to Rust, and it would likely be more vulnerable initially (the old code base went through decades of scrutiny, hundreds of scanners/fuzzers, etc.).


My view is that it should be state-mandated for products with over 1 million users. In the long run it would pay for itself with the money that no longer has to be spent on mitigating cyber security problems.

Cyber security is national security is the people's security. Ever since my aunt was doxxed and had her online banking money stolen, I've become a cyber security hardliner.


The core problem is trusting a byte you read and then use to make decisions. That is a superset of memory safety, and no language protects against it as of now.

Let's kill bytes. :P


[flagged]


It doesn't have to be Rust. But for all the people who insisted for more than a decade that GCs and VMs don't necessarily compromise performance of practical applications, there has never been an acceptable browser engine in C#, Java, D, Common Lisp, Go, OCaml, Haskell or any of the others. Meanwhile Apple uses Objective-C with its ARC (and later Swift, which is even more like Rust) for everything and it was great.

So the community tried to make OCaml with the memory model of ObjC and as much backwards compatibility with C as they could muster. In context, this doesn't seem like a weird strategy.


This tracks with your comment history

https://news.ycombinator.com/item?id=567736


Please don't cross into personal attack.

https://news.ycombinator.com/newsguidelines.html


I think I’ve probably said something dumb to be cherry-picked more recently than 13 years ago.


How well do the best and worst things you’ve done in the last 7 days "track" with where you were at 13 years ago? Were there any ups or downs during that time?


Years ago we used to regularly have worms that’d infect millions of computers without any clicks at all.

The truth is that “Zero-Click” hacks are becoming increasingly rare.

But of course everything is new for journos unfamiliar with the field.


Yes, Chrome pretty much single-handedly changed that, timed well with Vista. For about a decade we got a reprieve because:

1. Memory safety mitigations became much more common (Vista)

2. Browsers adopted sandboxing (thanks IE/Chrome)

3. Unsandboxed browser-reachable software like Flash and Java was moved into a sandbox and behind "Click to Play" before eventually being removed entirely.

4. Auto-updates became the norm for browsers.

And that genuinely bought us about a decade. The reason things are changing is that attackers have caught back up. Browser exploitation is back. Sandboxing is amazing and drove up the cost, but it is not enough - given enough vulnerabilities, any sandbox falls.

So it's not that security is getting worse, it's that security got better really really quickly, we basically faffed around for a decade more or less making small, incremental wins, and attackers figured out techniques for getting around our barriers.

If we want another big win, it's obvious. Sandboxing and memory safety have to be paired together. Anything else will be an expensive waste of time.


And now you never know when your useful browser extension is going to auto update into malware!


If a browser extension requests new permissions it will be disabled upon its update.


Not all malware requires new permissions.


> And that genuinely bought us about a decade. The reason things are changing is because attackers have caught back up. Browser exploitation is back. Sandboxing is amazing and drove up the cost, but it is not enough - given enough vulnerabilities any sandbox falls.

It’s still a completely different world. We’ve come a long way from back when Paunch was printing money with Blackhole.


I mean, for how long? Like I said, we had a long period of time without ITW exploits for browsers. That has ended. I'm sure costs are higher today than they were before, but I'm not convinced that the economic incentives won't ultimately lead to another blackhole.


I think it's different for good now. Detection is better, response is better. The exploits used in blackhole would be patched really quickly, and detected really quickly.

I think it would be detected quickly because the most likely payload dropped would be ransomware, which makes it immediately obvious to users that they got owned. I don't think it would take longer than a day to discover that a zero-day exists in $BROWSER once a group starts a campaign using it.

All software distributors that expose attack surface to a large consumer base have had plenty of time to learn how to deal with a major security hole that needs to be patched ASAP. Once a researcher tweets about a 0day in $BROWSER, there'll be an incomplete patch a day later; four days later the final patch is out. Auto-updates ensure every user has the patch the moment they go online.

But I do think we can still see a CCG using a browser exploit to infect people, but I don't think we'd see exploits packaged and sold inside exploit kits.


There are definitely significant economic changes - the turnaround time for discovery, patch, rollout is way tighter.

I suppose that could make a generalized kit much harder to sell. Once it's sold, you basically have to assume it'll be burned soon.

Time will tell.


Only way we’re going to see another exploit pack like blackhole is if it’s targeting Android devices which aren’t receiving security updates.


Or there's more vertical integration in exploit packs, i.e., they pair the exploit with some sort of post-exploitation payload that's better at hiding. Or something else we haven't thought of.


I was about to say the same thing in response to people claiming security is getting worse. Zero-Click is just another name for a worm. I guess mayyybe you could consider Zero-Click as more like a class of worm whose entry into the system is visible (you can see that you got the strange message or image).

And you're definitely right that they are far more rare. Worms used to be nasty in how fast and easily they spread. Security has come a long way since then.

That said, we could go further on security. But the hard part is selling people on using more secure software and hardware. Even something as simple as bounds checking has a cost (see the sketch below). Look at the reception of the Windows 11 change to have Virtualization-Based Security turned on by default: people are upset about it because it takes away performance for security that they claim they don't need on their home computer.

And then there's resistance from developers. For some reason people get really upset about mechanisms designed to improve security without increasing runtime overhead when those mechanisms make compile times longer. If your application is used by any significant number of people, surely the amount of runtime you're saving dwarfs the extra time spent compiling.
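To make the bounds-checking cost concrete, a minimal Rust sketch (mine, purely illustrative; in a loop this simple an optimizer can often elide the checks anyway):

    // Safe indexing is bounds-checked at runtime and panics instead of reading
    // out of bounds; the unchecked variant trades that guarantee for a bit of
    // speed and is therefore gated behind `unsafe`.
    fn sum_checked(data: &[u32]) -> u32 {
        let mut total = 0;
        for i in 0..data.len() {
            total += data[i]; // each access verifies i < data.len()
        }
        total
    }

    fn sum_unchecked(data: &[u32]) -> u32 {
        let mut total = 0;
        for i in 0..data.len() {
            // SAFETY: i is always < data.len() because of the loop bound.
            total += unsafe { *data.get_unchecked(i) };
        }
        total
    }

    fn main() {
        let data: Vec<u32> = (0..1_000).collect();
        assert_eq!(sum_checked(&data), sum_unchecked(&data));
        println!("both agree: {}", sum_checked(&data));
    }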


Wormable bugs are a subset of zero-click bugs. Worms are very rare, and always have been, even during "the Summer of Worms".


Even a browser drive-by attack is wormable if you use various social media for spreading.

Few even vaguely reliable RCE bugs aren't wormable. Even ones requiring significant user interaction are wormable; Office macros are wormable.

Wormable bugs are far more common than actual worms.


I was about to ask whether I'm missing something here. "Zero-click" just means no user interaction is required, right? So from my perspective this is just another way of saying remote code execution?

There really isn't anything new here other than a fancy name - or I am not seeing the point.


A non-zero-click remote code execution would be, for example: the attacker sends the victim a message with a link or attachment, and if the victim interacts with it, the attacker gets to run code they wrote on the victim's device.

A zero-click remote code execution would be, for example, where the attacker sends a message, and the victim's phone just processing the message on its own is enough for the attacker to execute code on the victim's device.

A non-zero-click vulnerability can be mitigated by being cautious. A zero-click vulnerability cannot.


> A non-zero-click vulnerability can be mitigated by being cautious. A zero-click vulnerability cannot.

No amount of caution will save you when the exploit is injected into a major website.

Why bother with such a meaningless distinction? Does your browser never hit any http:// resources?


An exploit that achieves remote code execution just by a browser performing an HTTP request (for example a malicious ad) would be considered a zero-click exploit.


But then most exploits that involve sending links would also be zero-click, just not deployed in that manner.

I think this just goes to show how silly this new terminology is.


The terms are all stupid and were made up 40 years ago. Trying to tease out nuance is pointless.

You get owned without clicking hence zero click. Is it different from RCE? A subset? Doesn't matter. Title could have said RCE.


Some RCE can require user interaction.


That’s correct.


'Zero-Click' is just a new buzzword; it probably got popular due to Pegasus and NSO.


Both attacks and defenses have gotten a lot better. Meanwhile, the consequences of hacks keep going up every year. You didn't have viruses disrupting shipping or gas pipelines before, because they didn't depend as much on computers.


You have to be a special kind of naive to not have such infrastructure behind airgap.


There are a number of organizations out there who think they are airgapped. And then some employee in the basement decides he needs to run a test while he's on the road, or monitor the water treatment plant from home because he's been exposed to COVID and can't come in, so he gets his buddy to install an LTE hotspot and boom! Now the company's not airgapped any more.

It's a special kind of naive to assume that airgapping is a technological problem rather than a human behavior problem.


> You have to be a special kind of naive to not have such infrastructure behind airgap.

Most of it is not behind an air gap. And unless things get a lot worse, it won't be.

Proper air gapping is hard, expensive, and a pain in the ass on an ongoing basis.

That's why almost no one does it. Not even most military systems are air gapped.


You cannot assume "airgap" for any reason--that will eventually fail.

Systems need to work assuming that they are bathed in a hostile environment at all times.


Yup.

And yet in most organizations, airgapping is an alien concept. Even for machine tools that could kill someone.

Security vs convenience...


Airgapping didn't stop Stuxnet.


Exactly. 10 years ago tens of millions of users were using outdated Flash and Internet Explorer. Literally anyone could infect them using pretty old exploits. There were no auto-updates.


Yep, good luck finding a useful exploit pack on crime forums now. The days of Blackhole & co are long past; those people hacked far more people than those discussed in this article ever will.


To me there's a difference between RCE and Zero click.

RCE occurs on a system with a listening daemon/service (e.g., web, SQL, DNS, SSH).

Zero-click describes an issue on a client system where usually a user would have to click something to trigger it, but doesn't as parsing/processing happens before the user actually sees anything (e.g. via an SMS on a phone).


There is no meaningful distinction between the two.

> Zero-click describes an issue on a client system where usually a user would have to click something to trigger it, but doesn't as parsing/processing happens before the user actually sees anything (e.g. via an SMS on a phone).

Historically these have been referred to as RCE.

FWIW You are essentially describing a service listening on the network. It’s silly to try to make an artificial distinction based on some irrelevant L4 differences.


That's a view of the world for sure :) Personally I don't think it's irrelevant. From a threat modelling perspective, exposed services are expected to be attacked.

Client services with zero interaction have traditionally been regarded as safer; usually for client-side attacks we'd expect a trigger from user action (e.g., a link being clicked, a PDF file being opened).

Just because you don't find something to be useful as a distinction in your line of work doesn't necessarily mean that it's not useful to anyone ...


Client services like these are also expected to be attacked.

iMessage isn’t meaningfully different from Apache; instead of listening on a TCP port, it listens on your Apple user ID.


This is really flyfucking of the worst kind: the kind that doesn't serve any useful purpose.

From any useful perspective, RCE and zero-click exploits are the same thing. The latter is just a fancy name for the moron journalists like the one who wrote this article to bandy about to lure in some readers.


RCE is routinely used to describe clientside bugs; you're mixing orthogonal concepts here.


Errr, what? I'm struggling to figure out what you might mean here. Are you talking about worms shared via floppy disks?



Also, don't forget https://en.wikipedia.org/wiki/Nimda ... all of these were a horror show to deal with on networks of the era.


My favourite: https://en.wikipedia.org/wiki/SQL_Slammer - 376 bytes of malware, spread via spraying UDP packets at random IP addresses, infected basically every vulnerable system on the entire internet within 10 minutes.




A sibling has already linked it, but extra context: Robert Tappan Morris (aka ‘rtm’) is a legend, his dad is a Bell Labs legend, and along with Trevor he’s kind of the “silent partner” in the Viaweb -> YC -> $$$$$ miracle.

Guy’s a boss.


I think they're talking about worms that spread by infecting other devices on the local network using RCEs in network services like RDP/SMB/..

That, or maybe drive-by downloads / Java/ActiveX code execution, which have become rarer.




