Technology adoption will always lead to misuse or gaming of the system. Computers can never be truly safe and secure; if they were, they would be highly restricted. That’s the trade-off.
Reasonable security can be achieved. Security researchers have often toyed with the idea that security breaches should be localised through compartmentalisation: if you install a piece of malicious software, it ideally should not affect everything else on your computer.
Qubes OS (https://www.qubes-os.org/) is an interesting project in this direction.
There's really no substitute for everyone thinking carefully about security for themselves. A formative experience for me was watching the evolution of browsers from a sandbox to protect our crown jewels to a receptacle for our crown jewels. There is no security research that can avoid this devolution as long as people blindly do each day what they did the previous day.
I have more or less given up on security awareness as a viable solution for security. I can't deny that human beings are the weakest link in the security chain; there is just too much real-life evidence of that.
The reason I feel this way is that it's a continuous cat-and-mouse game. Any new technology that comes up has corner cases that get exploited by malicious actors.
Take Google Search as the default fallback for non-URL input in Chrome's address bar. Google may have made this decision to increase search usage, but it led to phishing attacks where malicious actors used paid SEO to push malicious websites up the rankings (this happened in the crypto space).
I think engineers and security architects need to build security into design and the SDLC itself to protect users against the most common attack surfaces. Your example of browser security evolving through sandboxing is exactly that. It turned out to be almost impossible to educate all users not to click untrusted links, but with sandboxing and other mitigations in place (OS-level ones like ASLR, NX, etc.), the system itself is more resilient against common attacks.
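As an aside, you can see whether a given binary actually benefits from the two OS-level mitigations mentioned above. This is a minimal sketch assuming a Linux host with binutils installed; `/bin/ls` is just an arbitrary example target:

```shell
# Sketch: inspect a binary for NX and ASLR-readiness.
# Assumes Linux + binutils (readelf); /bin/ls is only an example target.
BIN=/bin/ls

# NX: a GNU_STACK segment whose flags lack "E" means the stack is
# non-executable.
readelf -lW "$BIN" | grep GNU_STACK

# ASLR of the code segment requires a position-independent executable:
# "DYN" in the ELF header type means PIE; "EXEC" means a fixed load address.
readelf -hW "$BIN" | grep 'Type:'
```

The point being: none of this requires the user to do anything, which is exactly why design-time mitigations scale better than awareness campaigns.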
CISA's Secure by Design pledge is a statement of intent towards driving secure systems design, but I think there is still a long way to go.
All your examples still depend on other people vetting things for us. I think that's incomplete. In addition to government regulation, brand-based accountability, and litigation in courts of law and the court of public opinion, I think we need last-mile security awareness.
What does this security awareness consist of? Above all, it consists of not blindly taking what you see online at face value. Build relationships with each other: not parasocial 1:million relationships but real 1:n relationships with real people, and talk about security with each other.
You don't need to understand encryption or keep up with new technologies; a very simple baseline is to try to stay informed about who's trustworthy and who's not.
None of this will protect us when a nation state comes for us, of course. But that's not the goal here. The goal is to outrun the more mundane, garden-variety predators by joining together with others.
When we drive cars we depend on established signs and rules, but we also use our eyes to check that the road ahead is clear before proceeding. And it works to a great, great extent. I think most people just don't check their surroundings that way when they set out for a drive on their computers. They don't have eyes in the same way in the online environment we've been using for the past few decades.
This is absolutely a hard problem. A network of relationships is a privilege, and not everyone has it. But even the people who have access to one don't use it today. It would harden the commons if some of us started reducing the payoff for bad actors. And the first step in getting there is to say out loud, and often, my first sentence here: there's really no substitute for everyone thinking carefully about security for themselves, no matter what we put in place at the government-regulation or litigation level.