Yes, Chrome pretty much single-handedly changed that, timed well with Vista. For about a decade we got a reprieve because:
1. Memory safety mitigations became much more common (Vista)
2. Browsers adopted sandboxing (thanks IE/Chrome)
3. Unsandboxed browser-reachable software like Flash and Java was moved into a sandbox and behind "Click to Play" before eventually being removed entirely.
4. Auto-updates became the norm for browsers.
And that genuinely bought us about a decade. The reason things are changing is that attackers have caught back up. Browser exploitation is back. Sandboxing is amazing and drove up the cost of exploitation, but it is not enough - given enough vulnerabilities, any sandbox falls.
So it's not that security is getting worse, it's that security got better really, really quickly; then we basically faffed around for a decade making small, incremental wins, and attackers figured out techniques for getting around our barriers.
If we want another big win, it's obvious. Sandboxing and memory safety have to be paired together. Anything else will be an expensive waste of time.
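To make the sandboxing half of that concrete, here is a minimal sketch (in C, assuming Linux) of what OS-level syscall sandboxing looks like, using seccomp strict mode. It is deliberately simplified: a real browser renderer sandbox layers a BPF syscall filter, namespaces and broker processes on top of this idea, but the principle is the same - once the sandbox is entered, even fully compromised code can only reach a handful of syscalls.

    /*
     * Minimal sketch of OS-level sandboxing on Linux via seccomp strict mode.
     * Illustrative only: production renderer sandboxes use SECCOMP_MODE_FILTER
     * (BPF syscall filters) plus namespaces, not strict mode.
     */
    #include <linux/seccomp.h>
    #include <stdio.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        /* Refuse to ever gain new privileges (e.g. through setuid binaries). */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0) {
            perror("no_new_privs");
            return 1;
        }

        /* Strict mode: from here on only read, write, _exit and sigreturn are
         * permitted; any other syscall kills the process with SIGKILL. Code
         * running in this process can no longer open files, spawn processes,
         * or reach most of the kernel's attack surface. */
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
            perror("seccomp");
            return 1;
        }

        /* Untrusted work would run here; write(2) is still allowed. */
        static const char msg[] = "running with a reduced syscall surface\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);

        /* Exit via the plain exit syscall: glibc's _exit() uses exit_group(2),
         * which strict mode does not permit. */
        syscall(SYS_exit, 0);
        return 0;
    }

The point of pairing this with memory safety is that the sandbox only limits what an attacker can do after the process is compromised; a memory-safe language goes after the initial compromise itself.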
> And that genuinely bought us about a decade. The reason things are changing is that attackers have caught back up. Browser exploitation is back. Sandboxing is amazing and drove up the cost of exploitation, but it is not enough - given enough vulnerabilities, any sandbox falls.
It’s still a completely different world. We’ve come a long way from back when Paunch was printing money with Blackhole.
I mean, for how long? Like I said, we had a long period of time without in-the-wild (ITW) exploits for browsers. That has ended. I'm sure costs are higher today than they were before, but I'm not convinced that the economic incentives won't ultimately lead to another Blackhole.
I think it's different for good now. Detection is better, response is better. The exploits used in Blackhole would be patched really quickly, and detected really quickly.
I think it would be detected quickly because the most likely payload dropped would be ransomware, which makes it immediately obvious to users that they got owned. I don't think it would take longer than a day to discover that a zero-day exists in $BROWSER once a group starts a campaign using it.
Software distributors that expose attack surface to a large consumer base have all had plenty of time to learn how to deal with a major security hole that needs to be patched ASAP. Once a researcher tweets about a 0day in $BROWSER, there'll be an incomplete patch 1 day later. 4 days later the final patch is out. Auto-updates ensure every user has the patch the moment they go online.
I do think we could still see a CCG using a browser exploit to infect people, but I don't think we'd see exploits packaged and sold inside exploit kits.
Or there's more vertical integration in exploit packs, i.e. they pair the exploit with some sort of post-exploitation payload that's better at hiding. Or something else we haven't thought of.