Computers that are ridiculously hard to hack, and methods for building them, have been available for some time. NSA used to evaluate them for years on end trying to hack them. If they passed the test, they got the TCSEC A1-class label for verified[-as-possible] security. SCOMP/STOP, GEMSOS, Boeing SNS Server, LOCK, and KeyKOS w/ KeySAFE are examples of systems designed with such techniques or certified under them. I think STOP-based XTS-400 and Boeing SNS Servers have been deployed for decades now without any reported hacks. They were on the munitions list for export restriction (idk if that was enforced), some companies sell high-security products to defense only, and they all cost a ton of money if proprietary.
Under newer standards and techniques, examples include INTEGRITY-178B, the Perseus Security Framework (e.g. Turaya Desktop), seL4, the Muen hypervisor, and ProvenCore. Two of those are open source, and one more has had pieces released. On the hardware side, the groups developing it often publish their designs openly, so anyone with money can implement them. Some release it openly, like the CHERI CPU with its FreeBSD port. Some are commercial products that check every instruction for safety, like CoreGuard.
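To give a feel for what "every instruction checked" means, here's a minimal software sketch of the capability idea behind CHERI-style hardware. This is a toy model, not real CHERI or CoreGuard code: the struct and function names are made up for illustration, and real hardware does the checks in the pipeline and traps instead of calling exit().

    /* Toy model of a fat-pointer capability: every load/store goes
     * through a bounds and permission check, which is roughly what
     * capability hardware enforces on each memory instruction. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        uint8_t *base;    /* start of the region this capability may touch */
        size_t   length;  /* size of that region in bytes */
        int      writable;
    } capability;

    static uint8_t cap_load(capability c, size_t offset) {
        if (offset >= c.length) {              /* out-of-bounds read */
            fprintf(stderr, "bounds violation: load at %zu\n", offset);
            exit(1);                           /* hardware would trap here */
        }
        return c.base[offset];
    }

    static void cap_store(capability c, size_t offset, uint8_t value) {
        if (!c.writable || offset >= c.length) {
            fprintf(stderr, "violation: store at %zu\n", offset);
            exit(1);
        }
        c.base[offset] = value;
    }

    int main(void) {
        uint8_t buf[16] = {0};
        capability c = { buf, sizeof buf, 1 };

        cap_store(c, 3, 42);
        printf("%u\n", cap_load(c, 3));  /* prints 42 */
        cap_load(c, 100);                /* caught, not silent corruption */
        return 0;
    }

The point of the sketch is just that the check is unconditional and attached to the pointer itself, so a buffer overflow becomes a clean fault instead of an exploit primitive.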
So, it's not a question of whether people will develop and sell this stuff. They've been developing it since computer security was invented back in the 1970's. Sometimes it was the inventors of INFOSEC themselves developing and evangelizing it. Businesses didn't buy it for a variety of reasons that had nothing to do with its security. Sometimes, especially with companies like Apple or Google, they could clearly afford the hardware or software, it was usable for some or all of their use cases, and they just didn't build or buy it for arbitrary management reasons. Most of what they build in-house is worse than published, non-patented designs, which makes it even more ridiculous.
DARPA, NSF, and other US government agencies continue to fund the majority of high-security/reliability tech that gets produced commercially and/or as FOSS. These are different groups from the SIGINT people (e.g. BULLRUN) who want to hack everything. They might also be putting one backdoor in the closed products for themselves while otherwise leaving them ultra-secure against everyone else; that's what I've always figured. Lots of it is FOSS, though, which is easier to inspect.