You aren't, for the same reason you aren't going to retain safety in an airplane or a nuclear reactor when men with guns are shooting people and pushing random panels (and/or the other way around).
Even with "defense in depth", there's clear separation between parts that do the important stuff, and parts that protect the parts that do the important stuff from being used against the rules.
I'd go as far as dividing cybersecurity into two major areas of concern:
1) The part that constrains software systems to conform to some make-believe reality with rules stricter than the natural ones -- this is about making the abstract world behave more like the physical reality we're used to;
2) The rules and policies - who can access what, when, why, and what for.
Area 1) should very much be a bolt-on third-party thing everyone should ideally take for granted. Area 2) is the responsibility of each organization/project individually, but it's also small and mostly limited to issues specific to that organization/project.
It maps to physical reality like this:
Area 1) is the reinforced walls, floor and ceiling, and the blast doors that can only be opened via a keycard[0];
Area 2) is the policies and procedures for giving and revoking keycards to specific people in the company. (There's a rough code sketch of this split after the footnote.)
--
[0] - Or by crossing some wires deep in the reader, because accidents happen and cutting through blast doors costs money. I.e. real-life security and safety systems often feature overrides.
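
If you want the same split in software terms rather than blast doors, here's a minimal C sketch (all names and data invented for illustration): the bounds check plays the role of Area 1 -- a generic wall anyone can lean on -- while the grants table plays the role of Area 2, the small, organization-specific policy of who gets which keycard.

    #include <stdio.h>
    #include <string.h>
    #include <stdbool.h>

    #define RECORD_COUNT 4

    static const char *records[RECORD_COUNT] = {
        "payroll", "blueprints", "cafeteria menu", "server room log"
    };

    /* Area 1: the reinforced wall. Any index outside the array is
     * rejected, no matter who asks. */
    static const char *read_record(size_t index) {
        if (index >= RECORD_COUNT)
            return NULL;            /* blast door stays shut */
        return records[index];
    }

    /* Area 2: the keycard policy -- organization-specific, small, and
     * about *who* may do *what*. Names here are illustrative only. */
    struct grant { const char *user; size_t record; };

    static const struct grant grants[] = {
        { "alice", 0 },   /* alice may read payroll */
        { "bob",   2 },   /* bob may read the cafeteria menu */
    };

    static bool policy_allows(const char *user, size_t record) {
        for (size_t i = 0; i < sizeof grants / sizeof grants[0]; i++)
            if (strcmp(grants[i].user, user) == 0 && grants[i].record == record)
                return true;
        return false;
    }

    int main(void) {
        const char *user = "bob";
        size_t want = 2;

        if (!policy_allows(user, want)) {          /* Area 2 check */
            printf("%s: access denied by policy\n", user);
            return 1;
        }
        const char *data = read_record(want);      /* Area 1 check */
        printf("%s reads: %s\n", user, data ? data : "(no such record)");
        return 0;
    }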
You said "security is a thing because safety is nonexistent". This means that, if you had safety, you wouldn't need security. I'm asking you to explain how a perfectly safe system wouldn't need security, as someone who compromised it would be able to simply undo all the safety.
Say I wrote software to control a gamma-ray knife: it's perfectly safe, it always does the right thing, and it shuts down properly when it detects a weird condition.
Compromising it would simply be a matter of changing a few bytes in the executable, or replacing the executable with another one.
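
As a minimal sketch of what I mean (read_dose(), shutdown_beam() and deliver_pulse() are hypothetical hardware hooks, and the dose limit is made up), the safety logic can be perfectly straightforward and still offer no resistance to tampering:

    #include <stdbool.h>

    #define MAX_SAFE_DOSE 200  /* hypothetical limit, arbitrary units */

    extern int  read_dose(void);       /* sensor reading */
    extern void shutdown_beam(void);   /* hard stop */
    extern void deliver_pulse(void);

    void treatment_loop(void) {
        for (;;) {
            int dose = read_dose();
            /* Safety: shut down on any weird condition. */
            if (dose < 0 || dose > MAX_SAFE_DOSE) {
                shutdown_beam();
                return;
            }
            deliver_pulse();
        }
    }

    /* Security is a different question entirely: nothing here stops
     * someone from patching the compiled binary so the range check
     * never fires, or from swapping the executable outright. The
     * safety logic is only as good as the integrity of the bytes
     * that implement it. */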
This seems so obvious to me that I think you may have non-standard definitions of either safety or security.
That AVR can still be manipulated. If your definition of safety includes preventing in-person attacks on the data storage, then you pretty much need armed guards.
If that's the standard, then no wonder "software safety is near non-existent".
Ah, there's the non-standard definition. Safety means that the system performs as designed while the design invariants hold. Security means someone malicious can't change the invariants.
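
A rough sketch of that split in code, with hypothetical helpers (hmac_matches() and shutdown_beam() don't refer to any real API): the range check enforces the design invariant, which is safety; verifying that the invariant itself hasn't been tampered with is security.

    #include <stdbool.h>
    #include <stdint.h>

    struct limits {
        int32_t max_dose;       /* the design invariant */
        uint8_t mac[32];        /* integrity tag over the values above */
    };

    /* Hypothetical: verify an HMAC computed with a key an attacker
     * shouldn't hold -- the security layer guarding the invariant. */
    extern bool hmac_matches(const struct limits *l);

    extern void shutdown_beam(void);

    bool dose_is_safe(const struct limits *l, int32_t dose) {
        /* Security: refuse to trust invariants that fail verification. */
        if (!hmac_matches(l)) {
            shutdown_beam();
            return false;
        }
        /* Safety: enforce the invariant itself. */
        return dose >= 0 && dose <= l->max_dose;
    }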
That's not what this is about. If someone calls you "non-standard", you challenge them to identify those standards. If you call me wrong, at least give it hands and feet.
> If you call me wrong, at least give it hands and feet.
   \|/             \|/
     \             /
      You're wrong!
        |       |
       ^^^     ^^^
Sorry, couldn't help myself. There's an obscure Polish joke it made me think of (the punchline being: thankfully you didn't ask for it to "hold its shit together").