
I would think that everyone here agrees that 'computer' security is in a state of turmoil. Is it possible to design a computing system that fails safe in the event of a bug in a component, instead of opening the entire system up to exploits? Fail-safe as in: the process does nothing, or the targeted surface area of the malware is restricted.



There were systems that did that all the way down to hardware in the 1960's with many more since:

https://www.schneier.com/blog/archives/2014/04/dan_geer_on_h...

The market rejected them because they cost a bit more or didn't have the highest raw performance. Such short-sightedness means most don't exist any more in any turn-key form. Many of the modifications are straightforward enough that even academics are prototyping them and porting Linux/FreeBSD to them.

https://news.ycombinator.com/item?id=10522742

The mainstream just refuses to learn or adopt proven methods of the past. They use every justification in the world even when the labor is free (FOSS), with someone only asking that they use a minimum of proven techniques. The market rarely buys the stuff outside very limited sales of some robust appliances: see Aesec's GEMSOS stuff, SAGE Guard on XTS-400, Nexor Mail Guard (on XTS-400), Green Hills' INTEGRITY-178B OS w/ virtualization, Mikro-SINA VPN on L4, Secure64's SourceT OS for DNS, Sentinel's HYDRA firewall (uses INTEGRITY), and so on. These are social, political, and economic problems rather than technical ones. I see no end to it outside continued sales and development of niche solutions.

Note: The things I referenced in the last paragraph are either still on the market w/ descriptions available via Google, or at least have papers within reach. I left out tons of good stuff that's no longer around or just a prototype. Happy Googling and learning. :)


>A fail-safe or fail-secure device is one that, in the event of a specific type of failure, responds in a way that will cause no harm,

The key phrase is "specific type of failure". In theory, any particular piece of large software has tens of thousands of fail-safes in it already. For example, when you send an oversized buffer to an application with input checking, it does not explode in a ball of flame (unlike programs from the '90s); it warns you about the problem. But that is where the analogy between mechanical items and software breaks down: software is far more connected internally than almost any other machine.



