Honorable mention for the ITAR regs that prevented Phil Zimmermann from exporting PGP's 128-bit encryption until Zimmermann and MIT Press printed the source as a book protected by the First Amendment and exported it, which enabled others to OCR it and recompile it offshore.
Also that ITAR enabled Thawte in South Africa (where I'm from), as a business, to completely dominate sales of 128-bit SSL certs outside the US. Thawte was eventually acquired by VeriSign for roughly $575 million, and the founder Mark Shuttleworth used the cash to become an astronaut and then founded Ubuntu.
Had the same t-shirt with the barcode-readable source code on it. I think it was prompted by seeing Greg Rose wear one; I may have gotten it from him or mutual friends. As a foreign citizen I was never brave enough to wear it through a US airport on entry.
> [...] and this enabled others to OCR it, and recompile it offshore.
Did it? Or did it just give them plausible deniability?
I remember playing with OCR as a kid and all the software I could get my hands on gave horrendous results, even if the input was as perfect as one could hope for.
And even today I sometimes run tesseract on perfect screenshots and it still makes weird mistakes.
Would be interesting to know if the book had any extra OCR-enabling features. I'm sure the recipients got access to proper tools and software, but OCRing source code back then still seems like it would have been a nightmare.
Not so much plausible deniability, as a clear First Amendment argument. If you’re forbidden from exporting computer code, that was some new-fangled magic thing that nobody could possibly understand. If you’re banned from exporting a book, well that has some clear and obvious precedent as a restriction of free speech.
Where did you get your information? The MICR line, the row of numbers at the bottom of a check, uses magnetic ink. The acronym stands for Magnetic Ink Character Recognition. So for ages, they didn't use optics at all.
In the modern day, cheque OCR is monopolized by one company, Mitek. They may use tesseract somewhere in their stack but I've never read that anywhere.
I never understood the story about the book-printed PGP source code. Isn't source code protected speech under the first amendment anyway, regardless of the form in which it is transmitted? All kinds of media receive first amendment protection, including things like monetary donations, corporate speech, art, etc. I've never heard of there being a requirement for the printed form. Did the interpretation of the first amendment change recently in this regard?
The book was published in 1995 [1,2]. Free speech protection for source code under US law wasn't decided until 1996, in Bernstein v. United States [3].
I think if you’re looking for a logical answer from first principles, you won’t find one. It’s more that the legal system runs on precedent, and a book fits far more squarely in the fact patterns of previous First Amendment cases. Likely the source code case would end up with the same outcome, but it doesn’t hurt to make it more obvious.
Legally speaking, it didn't really matter. But symbolically, having the feds argue that a book constitutes "munitions" would be bad optics for them in a way that is more understandable to the average American, compared to more arcane legal arguments about software having 1A protections.
This was a time when we were happy when they managed to get congresspeople to understand that the internet is not like a truck, but more like a series of tubes.
I think anchoring it to something old school like a book was a good call.
"The U.S. Munitions List changes over time. Until 1996–1997, ITAR classified strong cryptography as arms and prohibited their export from the U.S.[5]
Another change occurred as a result of Space Systems/Loral's conduct after the February 1996 failed launch of the Intelsat 708 satellite. The Department of State charged Space Systems/Loral with violating the Arms Export Control Act and the ITAR.[6][7]"
I know. We definitely aren't a nation founded on outright treason and insurrection, and thumbing our nose at authority and doing the moral thing isn't in our DNA in any way, shape, or form.
There can be moral reasons for treason. The founding fathers certainly considered themselves moral and principled.
Your frustration is misguided. Simply do not confuse illegality with immorality. Human beings generally want to feel like they're doing more good than bad.
As for "the DNA of a nation," we would probably spend a few hours figuring out the definition of what that even is just for starters.
From the outside, unironically calling any group of politicians "fathers" feels so weird. I know it's a super common turn of phrase, and that's kinda what gets me.
> Human beings generally want to feel like they're doing more good than bad
We're experts at convincing ourselves that what is beneficial to us is also "good", whatever that actually means.
We can either rage against the era we were born into, or we can accept what we can change and what we cannot. One path leads to less frustration than the other. Be careful that you don't find yourself enjoying irritation.
That the US was a "real bad guy", or that nuking cities was an evil and unnecessary thing? The latter, while not universally held, is a fairly common take, and has been for a few decades now.
As this is from 2016 it doesn't include this new fun revelation:
> On 11 February 2020, The Washington Post, ZDF and SRF revealed that Crypto AG was secretly owned by the CIA in a highly classified partnership with West German intelligence, and the spy agencies could easily break the codes used to send encrypted messages.
This is an epically cool blog post! - submit it to HN on its own merits.
This was of particular interest to me:
>>>"...1986 Reagan tipped off the Libyans that the US could decrypt their communications by talking about information he could only get through Libya decrypts on TV [15]. In 1991 the Iranians learned that the NSA could break their diplomatic communications when transcripts of Iranian diplomatic communications ended up in a French court case..."
Because in 1986 - that's effectively when a lot of the phreaking and social engineering was at its peak - Cyberpunk was moving from imagination --> zeitgeist --> reality.
Social engineering and line-printer litter recovery were yielding backdoors into the telecom switching system. BBSes were raging [0].
So when you get an eyebrow-raising look into infosec from slip-ups like these, it reinforces in my mind that the 80s were some really wild times all around, as technology tsunami'd from people's minds into business and reality.
> I wrote a blog entry on this subject with a very similar name [0], which covers the Crypto AG story in more detail. It doesn't have the 2020 news.
[0]: A Brief History of NSA Backdoors (2013), https://www.ethanheilman.com/x/12/index.html
Wow, this is super interesting. I noticed this paragraph in the text.
> 2013, Enabling for Encryption Chips: In the NSA's budget request documents released by Edward Snowden, one of the goals of the NSA's SIGINT project is to fully backdoor or "enable" certain encryption chips by the end of 2013 [11]. It is not publicly known to which encryption chips they are referring.
From what I know, Cavium is one of these "SIGINT enabled" chip manufacturers.
>> "While working on documents in the Snowden archive the thesis author learned that an American fabless semiconductor CPU vendor named Cavium is listed as a successful SIGINT "enabled" CPU vendor. By chance this was the same CPU present in the thesis author's Internet router (UniFi USG3). The entire Snowden archive should be open for academic researchers to better understand more of the history of such behavior." (page 71, note 21)
> The company had about 230 employees, had offices in Abidjan, Abu Dhabi, Buenos Aires, Kuala Lumpur, Muscat, Selsdon and Steinhausen, and did business throughout the world.
That's a... really strange list of office locations, especially considering the relatively small number of employees.
> The owners of Crypto AG were unknown, supposedly even to the managers of the firm, and they held their ownership through bearer shares.
How does this work in practice? If management doesn't know who owns the company, how can the owners exercise influence on company business?
How does that representative prove that they really represent the owners, if the owners aren't known to management? How can they authorize someone without revealing identifying information?
Proton Mail's extremely bureaucratic operational deafness, and their glacial pace of product features and open-sourcing, would certainly lend support to that idea.
I actually wish this were true. I want an email service that would last forever and is secure enough against my threat model, namely security breaches at the email host and account takeover by non-state actors.
Gmail is close enough, but I want an alternative. An email service run by the NSA or the CIA would be great.
Considering that I remember reading the CIA’s own historical document on this operation, I would guess its usefulness had run its course. If I’m not mistaken, it was the CIA who released the document to journalists; it seemed like bragging.
To add another dimension to this, personally I think that the Crypto AG relationship is what is referred to as "HISTORY" in this leaked NSA ECI codenames list.
I was so curious about the origins of the SHA algorithms that I filed a FOIA request with the NSA about SHA-0 [0], as I wanted to understand how it was developed, and requested all internal communications, diagrams, papers and so on responsive to that.
Interestingly I found that after I got a reply (rough summary: you are a corporate requester, this is overly broad, it will be very expensive) I could no longer access the NSA website. Some kind of fingerprint block. The block persisted across IP addresses, browsers, incognito tabs, and devices so it can't be based on cookies / storage.
Still in place today:
Access Denied
You don't have permission to access "http://nsa.gov/serve-from-netstorage/" on this server.
> Some kind of fingerprint block. The block persisted across IP addresses, browsers, incognito tabs, and devices so it can't be based on cookies / storage.
Then what is it based on, if it happens across different devices and different IP addresses?
I find it very surprising that the NSA would go to such technologically advanced lengths to block FOIA requesters from their website (which, needless to say, doesn't contain any sensitive information).
This honestly seems kinda fun. If one was really dedicated: buy new device with cash; purchased and used outside city of residence; don’t drive there, non-electric bike or walk; only use device to connect to the website from public wifi; never connect to own wifi; don’t use same VPN service as usual. Not sure if I missed anything. Probably did.
Or walk into an internet cafe. Cafe membership systems, if any, probably aren't yet connected enough to prevent showing you the raw internet for the first few minutes, at least for a few more years. Everyone who's vocal online should try this once in a while. Even Google search results noticeably change depending on the social class inferred from your location and whatnot.
This seems like a good way to learn what information your system is leaking that it shouldn't be leaking, eg if you use a VPN and they still block you, your VPN is probably not doing what it claims to be doing. (AFAIK a correctly implemented VPN would not send any of your computer or browser information to nsa.gov.)
IIUC blocking people from making FOIA requests is illegal / can be grounds for a lawsuit, and they can always just classify anything they don’t want to give away, so it doesn’t really make sense for the NSA to do something like that. Their website is probably just broken.
In case anyone is wondering about the context for this 2016 article, it was right after the December 2015 San Bernardino attack, and the FBI was trying to get into one of the attackers' phones. Apple resisted the request primarily because the FBI effectively wanted Apple-signed firmware that could be installed on any iPhone, not just the attacker's.
This topic still comes up a bunch. Someone please correct me, but as I understand it anyone using new chips that use Intel ME (or AMD's equivalent) has a gaping hole in their security that no OS can patch.
I know puri.sm [0] takes some steps to try to plug the hole, but I haven't read up to see whether it's effective or not.
> anyone using new chips that use Intel ME (or AMD's equivalent) has a gaping hole in their security that no OS can patch
Not really; anyone using chips with Intel ME or AMD PSP has an additional large binary blob running on their system which may or may not contain bugs or backdoors (realizing, of course, that a sufficiently bad bug is indistinguishable from a backdoor).
There are tens to hundreds of such blobs running on almost any modern system, and these are just one example. I would argue that ME and PSP are not the worst blobs on many systems: they have unsupported but almost certainly effective (me_cleaner / ME code removal), supported and almost certainly effective (HAP bit), and supported and likely effective (ME / PSP disable command) mechanisms to disable their functionality, and they are comparatively well documented versus the firmware that runs on every other peripheral (networking, GPU, etc.) and comparatively hardened versus EFI.
Yeah, this lives in the back of my mind too. I run Debian on 11th-gen Intel, but with the non-free blobs included to make life easier. I've been meaning to try it without them, but it's too tempting to just get things 'up' instead of hacking on it.
There's little we can do about it short of running ancient libreboot computers. We'll never be truly free until we have the technology to manufacture free computer chips at home, just like we can make free software at home.
ASML fabs in every basement when?
I think RISC-V is as close to an open-source CPU as we have at the moment; unfortunately most RISC-V CPUs still rely on the company holding protected IP, like the CPU layout or the core microarchitecture, at least as far as I understand modern CPU design.
RISC-V has been a great step forward and I'd love to see it succeed, but I'm also aware of the lack of open-source architectures for GPUs or AI accelerators.
RISC-V* (Reduced Instruction Set Computing, 5th incarnation)
And sure, companies can choose not to share chip designs, but if you want an open-design CPU then you should be checking for that specifically and not just filtering by ISA. There exist such chips already, and I expect they'll catch up with AArch64 chips (in terms of being able to run desktop Linux) in <10 years, given the specs already include SIMD and the high-end chips have clock rates comparable to the oldest Windows-on-ARM laptops, like the 1st-gen Surface.
marcan of the Asahi Linux project got into a discussion on reddit about this, and says that when it comes to hardware, you just can’t know.
> I can't prove the absence of a silicon backdoor on any machine, but I can say that given everything we know about AS systems (and we know quite a bit), there is no known place a significant backdoor could hide that could completely compromise my system. And there are several such places on pretty much every x86 system
Are these blob-type attacks accessible after boot? Essentially, are they only accessible if you have physical access? And at that point, isn't it game over anyway?
Intel ME allows intentional remote access through the ME in some enterprise scenarios (vPro). The driver support matrix is quite small and this is a massively overblown concern IMO, but it’s the root of a lot of the hand wringing.
However, onboard firmware based attacks are absolutely accessible remotely and after boot in many scenarios. It’s certainly plausible in theory that an exploit in ME firmware could, for example, allow an attacker to escape a VM or bypass various types of memory protection. Unfortunately the actual role of the ME is rather opaque (it’s known, for example, to manage peripherals in s0ix sleep).
Ditto for any other blob. Maybe a specially crafted packet can exploit a WiFi firmware. Maybe a video frame can compromise the GPU.
These are also good persistence vectors - gain access remotely to the NOR flash containing EFI, and you have a huge attack surface area to install an early boot implant. (or if secure boot isn’t enabled, it’s just game over anyway). On Linux, it’s often just hanging out in /dev as a block device; otherwise, once an attacker has access to the full address space, it’s not too hard to bitbang.
These are all fairly esoteric attacks compared to the more likely ways to get owned (supply chain, browser bugs, misconfiguration), but they’re definitely real things.
The closed-sourceness is only a tiny part of the problem, too - a lot of the worst attacks so far are actually in open source based EFI firmware, which is riddled with bugs.
Which takes me back to my original response to “isn’t everyone backdoored by ME” - sure, maybe, but if you’re looking for practical holes and back doors, ME is hardly your largest problem.
> The closed-sourceness is only a tiny part of the problem, too - a lot of the worst attacks so far are actually in open source based EFI firmware, which is riddled with bugs.
Most consumer products (as opposed to some of those marketed to businesses) don't have enough of the components in place for the ME to accomplish anything, good or bad.
For starters, few consumer systems have the ME wired up to a supported Intel NIC to provide the remote access functionality that is usually seen as the scariest feature among those related to the ME. The processors are usually not vPro-enabled models, so the firmware will refuse to enable those features due to Intel's product segmentation strategy. And even if all the right hardware is in place, I think a system still needs to be provisioned by someone with physical access to turn on those features.
For most consumers, the main valid complaint about the ME is that it's a huge pile of unnecessary complexity operating at low levels of their system with minimal documentation. Anything fitting that description is a bit of a security risk, but the ME is merely one of many of those closed firmware blobs.
People always complain about ME/PSP but it misses the point: there is no alternative to trusting your SoC manufacturer. If they wanted to implement a backdoor, they could do so in a much more powerful and secretive way.
Everyone forgets the Speck and Simon ciphers the NSA wanted in the Linux kernel, which were ultimately removed from it entirely after a lot of well-deserved criticism from heavy hitters like Schneier.
For a long time, the US considered cryptographic algorithms munitions; you needed an arms export license to ship them abroad.
Also, the US tried to convince the world that 56 bits of encryption was sufficient. As SSL (I don't think TLS was a thing back then) was becoming more mainstream, the US government only permitted banks and other entities to use DES [1] to "secure" their communications. Using anything longer than 56 bits was considered illegal.
Even now, if you join a discussion on crypto and say something like "Why don't we double the key length" or "Why not stack two encryption algorithms on top of one another because then if either is broken the data is still secure", you'll immediately get a bunch of negative replies from anonymous accounts saying it's unnecessary and that current crypto is plenty secure.
The head of security for Go, a Google employee, was also on the TLS 1.3 committee, and in Go it's impossible by design to disable specific cipher suites in TLS 1.3.
The prick actually had the nerve to assert that TLS 1.3's security is so good this should never be necessary, and that even if it were, they'd just patch it and everyone could upgrade.
So someone releases a 0-day exploit for a specific TLS cipher. Now you have to wait until a patch is released and upgrade your production environment to fix it - all the while your pants are down. That's assuming you're running a current version in production and you don't have to test for bugs or performance issues upgrading to a current release.
Heaven fucking forbid you hear a cipher is exploitable and be able to push a config change within minutes of your team hearing about it.
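For anyone curious what that looks like in practice, here's a minimal sketch against Go's crypto/tls as it stands in recent releases (the particular cipher constant is just an example): the CipherSuites field exists, but it only governs TLS 1.2 and below, so there is no knob for 1.3.

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    func main() {
        cfg := &tls.Config{
            MinVersion: tls.VersionTLS13,
            // Honored for TLS 1.2 and below; for TLS 1.3, crypto/tls ignores
            // this list and always offers its fixed set (AES-128-GCM,
            // AES-256-GCM, ChaCha20-Poly1305).
            CipherSuites: []uint16{tls.TLS_CHACHA20_POLY1305_SHA256},
        }
        fmt.Printf("configured suites: %v (no effect at TLS 1.3)\n", cfg.CipherSuites)
    }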
I'd place 50/50 odds on it being a bribe by the NSA vs sheer ego.
Seems like a stupid design, if only because in some uses of TLS, where a very specific client is connecting, you might want to enable precisely the one cipher suite you expect that client to use.
Then all your performance tests can rely on the encryption and key exchange always using the same amount of CPU time, etc.
Well, I think that would severely inhibit future development. Scaling Bitcoin has been a delicate game of optimizing every bit that gets recorded while also supporting future developments that don't even exist yet, and there is no undo button either. New signature schemes and clever cryptographic tricks can do quite a bit, but when you slap another layer of cryptography on, you will inevitably make things worse in the long run.
History's biggest bug bounty is sitting on the Bitcoin blockchain; if it were even theoretically plausible to crack SHA-256 like that, we would probably know, and many have tried.
If you reveal you have broken SHA-256, then your bug bounty becomes worthless. The smart move is to steal and drain a few wallets slowly.
And that's exactly what we see - and every time it happens, the bitcoin community just laughs that someone must have been bad at key management or used a weak random number generator.
> management or used a weak random number generator.
Except that has been the case in every instance thus far. The dev that lost his bitcoin last year was using arcane software; after a post-mortem they found the library being used only had something like 64 bits of entropy.
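To make the entropy point concrete (a toy illustration, not the actual library involved): a key derived from only 8 random bytes leaves an attacker a 2^64 search space, no matter how long the key itself looks.

    package main

    import (
        "crypto/rand"
        "crypto/sha256"
        "fmt"
    )

    func main() {
        // Weak: the "256-bit" key is just a hash of 8 random bytes, so an
        // attacker only has to search 2^64 seeds, not 2^256 keys.
        weakSeed := make([]byte, 8)
        rand.Read(weakSeed)
        weakKey := sha256.Sum256(weakSeed)

        // Sound: draw all 32 bytes from the OS CSPRNG.
        strongKey := make([]byte, 32)
        rand.Read(strongKey)

        fmt.Printf("weak key:   %x (search space 2^64)\n", weakKey)
        fmt.Printf("strong key: %x (search space 2^256)\n", strongKey)
    }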
The real security of Bitcoin is the choice of secp256k1. Basically unused before Bitcoin, but chosen specifically because Satoshi was more confident it wasn't backdoored.
And ed25519 was out of the question, since -- being brand new -- its use would have given away the fact that DJB was among the group of people who presented themselves as Satoshi Nakamoto.
The best is the claim that multiple encryption makes things weaker, or that the result is only as strong as the weaker of the two. If that were true, we'd break encryption just by encrypting once more with a weaker algorithm.
The invalidity of that claim is a bit more nuanced. Having an inner, less secure algorithm may expose timing attacks and the like. There are feasible scenarios where layered encryption (with an inner weak algo and outer strong algo) can be less secure than just the outer strong algorithm on its own.
Two encryption algorithms will mean needing two completely unrelated, unique passwords. This can be impractical and increases the odds of being locked out forever.
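For what it's worth, the usual way around the two-password problem is to derive both layer keys from one secret using distinct labels; if either algorithm is broken, the other layer still protects the data. A minimal sketch in Go (the label strings and structure are purely illustrative; a real design would think harder about key management and nonce storage):

    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "crypto/sha256"
        "fmt"
        "io"

        "golang.org/x/crypto/chacha20poly1305"
        "golang.org/x/crypto/hkdf"
    )

    // deriveKey pulls a 32-byte subkey from a master secret, using a distinct
    // label per layer so the two layers never share key material directly.
    func deriveKey(master []byte, label string) []byte {
        key := make([]byte, 32)
        if _, err := io.ReadFull(hkdf.New(sha256.New, master, nil, []byte(label)), key); err != nil {
            panic(err)
        }
        return key
    }

    // seal encrypts with a fresh random nonce and prepends the nonce, so each
    // layer is self-describing.
    func seal(aead cipher.AEAD, plaintext []byte) []byte {
        nonce := make([]byte, aead.NonceSize())
        if _, err := rand.Read(nonce); err != nil {
            panic(err)
        }
        return aead.Seal(nonce, nonce, plaintext, nil)
    }

    func main() {
        master := make([]byte, 32)
        rand.Read(master)

        // Inner layer: AES-256-GCM.
        block, _ := aes.NewCipher(deriveKey(master, "layer1-aes"))
        inner, _ := cipher.NewGCM(block)

        // Outer layer: ChaCha20-Poly1305.
        outer, _ := chacha20poly1305.New(deriveKey(master, "layer2-chacha"))

        msg := []byte("attack at dawn")
        ciphertext := seal(outer, seal(inner, msg))
        fmt.Printf("%d plaintext bytes -> %d bytes after two AEAD layers\n", len(msg), len(ciphertext))
    }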
Do you have more on the legality aspect? I knew the NSA pressured for a weaker key, but what aspect could be made illegal? I had to write an undergrad paper on the original DES and I never saw an outright illegality aspect, but I wouldn't be surprised. They also put in their own substitution boxes, and surprisingly I never found much info on how exactly the NSA could have used them. So much speculation, but why no detailed post-mortems in the modern age?
In the US, since the 1950s, you need a permit to export any product which has encryption. There are fines if you don't file the right paperwork. In the 1970s and 80s they would only approve keys of 40 bits or less.
It seems that they changed the S boxes to make them more resistant to differential analysis (which they knew about but the public didn't). So this is actually a case of them secretly strengthening the crypto.
Presumably this is because they didn't want adversaries being able to decrypt stuff due to a fundamental flaw. I guess it's possible they also weakened it in another way, but if so nobody has managed to find it.
This leaves out at least one other proven case - the NSA worked to weaken an early encrypted telephone system that was sold to numerous other governments and allowed them to listen in on conversations.
For financial encryption, so essential is warrantless surveillance to their control of finance, that they've successfully argued that a neutral and immutable protocol instantiating open source code on a distributed public blockchain is property of a sanctionable entity, and thus within their authority to prohibit Americans from using:
Now the argument coming from civil society for backdoors is based on CSAM:
> Heat Initiative is led by Sarah Gardner, former vice president of external affairs for the nonprofit Thorn, which works to use new technologies to combat child exploitation online and sex trafficking. In 2021, Thorn lauded Apple's plan to develop an iCloud CSAM scanning feature. Gardner said in an email to CEO Tim Cook on Wednesday, August 30, which Apple also shared with WIRED, that Heat Initiative found Apple's decision to kill the feature “disappointing.”
> “Apple is one of the most successful companies in the world with an army of world-class engineers,” Gardner wrote in a statement to WIRED. “It is their responsibility to design a safe, privacy-forward environment that allows for the detection of known child sexual abuse images and videos. For as long as people can still share and store a known image of a child being raped in iCloud we will demand that they do better.”
This isn't even a recent thing anymore. "iPhone will become the phone of choice for the pedophile" was said by a senior official in 2014, when full device encryption was starting to become common.
The perfect political weapon. Anyone who opposes is automatically labeled a pedophile and child abuser. Their reputations are destroyed and they will never oppose again.
CSAM is evil, and I personally believe we should execute those who distribute it.
I have an even stronger belief in the right to privacy, and those in the government who want to break it should be executed from their positions (fired and publicly shamed).
Yeah, IIRC, there is precedent for this being prosecuted. Reprehensible as it is, it worries me deeply that consuming fiction can cross a line into illegality. Pedophilia is such a rightfully hated thing that it's a powerful motivator in politics and social action; people will throw away their lives just to spite child abusers sometimes. I think we need to be extra careful about our response to the issue because of that, especially as it pertains to essential rights.
And that's why I'm cautious about that. Actual CSAM (and pedophilia) should be punished as harshly as possible, period. Once we're out of that realm, intent begins to be relevant. Perhaps harsher punishments for pedophiles (chemical castration is an artful solution) would help quell the issues that CSAM "art" can cause.
Of course, make it quick and painless, but that behavior simply cannot be tolerated in a civilized society. Children are incredibly important, and how we treat them as a society is critical. The threats—perceived and real—to children in modern times have reduced the freedoms afforded to them, hampering their ability to develop in the real world from a younger age.
I posted this because of the Enigma/Crypto AG mixup in the article, but it doesn't seem that anyone noticed. Seemed relevant considering the post about fabricated Atlas Obscura stories a few days ago.
Governments routinely posit a desperate need for backdoors in crypto and crypto-secured products, but almost universally they get the data they want without needing a manufacturer-provided backdoor. So why they insist on continuing to push for them is beyond me. It's almost security theater.
If they really want your protected information they will be able to get it, either through a wrench or a legal wrench. In lieu of that, they can bring practically unlimited resources to bear, from the people they employ (or contract out to) to the long axis along which most secured devices eventually succumb: time.
My personal threat model isn't to defeat the government. They will get the data eventually. My personal threat model is corporations that want to know literally everything about me and bad faith private actors (scammers, cybercrime and thieves) that do too.
Ultimately it will take strict legislation and compliance measurement, along with penalties, to keep the government from overstepping the bounds it already promises not to step over, let alone new ones. It will take even stricter legislation to stop corporations from doing it. There are significant financial and political incentives for our ruling bodies not to do that, unfortunately.
I mean honestly, when you have this kind of ability at your disposal...
>Ultimately it will take strict legislation and compliance measurement, along with penalties, to keep the government from overstepping the bounds it already promises not to step over, let alone new ones.
They will find ways to not comply, often blatantly. They have no scruples.
The problem with using a wrench is that the person you use it on knows you have their data. Having a backdoor means they can see your stuff without you knowing it's been compromised.
Yep. I use Chinese brand phones because if they're snooping all my shit, they're much further away from me than my own government and not likely to have sharing arrangements.
It's likely an additional data point in some kind of 'suspicious' rating.
I think I hit quite a few of those 'suspicious' check-boxes that law enforcement would consider important, whilst actually technically knowledgeable people wouldn't even blink at them. Refer: https://news.ycombinator.com/item?id=39050898
I kinda disagree, because the government, even now, can be shamed and outraged.
Corporations however? They are, by design, utterly amoral.
So the modern state of affairs is that corporations are hoovering up all the data about you that they can for "ad research and optimization". I think I read recently that Facebook has thousands of companies involved in its customer data supply chain?
And if those companies have your data, it's not that YOUR government has it guaranteed. It's that ALL governments have your data.
We've had enough of chipper clips to last a lifetime!
It looks like you're writing an article about encryption. Would you like help?
(o) Insert a joke about Apple forcing a U2 album on us
(o) Let me write the joke myself
[x] Don't show me this tip again
Encryption is meaningless with CPU-level side-channel memory key dumps active on most modern platforms. The reality is that if you have been targeted for financial or technological reasons, then any government will eventually get what it is after.
One can't take it personally, as all despotic movements also started with sycophantic idealism.
Agree with this. Makes me think that the code-breakers themselves must be using specialized hardware to protect their own side channels. But for this to be feasible you need to have big chipmakers in on it. Fascinating to consider.
No need, data collection is a different function than exploitation. People that are turned into assets are often not even aware how they are being used.
I once insisted I could be bribed to avoid the escalation of coercion as a joke, that was funny until someone actually offered $80k for my company workstation one day.
It is a cultural phenomenon, as in some places it is considered standard, acceptable practice.
My advice is to be as boring as possible, legally proactive, and keep corporate financial profit modes quiet.
FBI director James Comey has publicly lobbied for the insertion of cryptographic "backdoors" into software and hardware to allow law enforcement agencies to bypass authentication and access a suspect's data surreptitiously. Cybersecurity experts have unanimously condemned the idea, pointing out that such backdoors would fundamentally undermine encryption and could be exploited by criminals, among other issues.
"could exploited by criminals" is sadly a disingenuous claim. A cryptographic backdoor is presumably a "Sealed Box"[1] type construct (KEM + symmetric-cipher-encrypted package). As long as the government can keep a private key secure only they could make use of it.
There are plenty of reasons not to tolerate such a backdoor, but using false claims only provides potential ammunition to the opposition.
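For readers unfamiliar with the construction being referenced, here is a rough sketch of a sealed-box-style escrow package in Go, using X25519 as the KEM and AES-256-GCM as the symmetric wrap. The function and label names are invented for illustration, and real escrow designs (Clipper's LEAF, for instance) were considerably more involved:

    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/ecdh"
        "crypto/rand"
        "crypto/sha256"
        "fmt"
        "io"

        "golang.org/x/crypto/hkdf"
    )

    // sealToAuthority encapsulates `secret` so only the holder of the
    // authority's long-term X25519 private key can recover it: an ephemeral
    // ECDH agreement acts as the KEM, and AES-256-GCM wraps the payload.
    func sealToAuthority(authorityPub *ecdh.PublicKey, secret []byte) ([]byte, error) {
        eph, err := ecdh.X25519().GenerateKey(rand.Reader)
        if err != nil {
            return nil, err
        }
        shared, err := eph.ECDH(authorityPub)
        if err != nil {
            return nil, err
        }
        key := make([]byte, 32)
        if _, err := io.ReadFull(hkdf.New(sha256.New, shared, nil, []byte("escrow-demo")), key); err != nil {
            return nil, err
        }
        block, _ := aes.NewCipher(key)
        aead, _ := cipher.NewGCM(block)
        nonce := make([]byte, aead.NonceSize())
        rand.Read(nonce)
        // package layout: ephemeral public key || nonce || ciphertext
        out := append(eph.PublicKey().Bytes(), nonce...)
        return aead.Seal(out, nonce, secret, nil), nil
    }

    func main() {
        // The escrow authority's long-term key pair: the thing that must
        // never, ever leak.
        authority, _ := ecdh.X25519().GenerateKey(rand.Reader)

        pkg, _ := sealToAuthority(authority.PublicKey(), []byte("per-device session key"))
        fmt.Printf("escrow package: %d bytes\n", len(pkg))
    }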
Mercedes recently forgot a token in a public repository which grants access to everything.
Microsoft forgot its “Golden Key” in the open, allowing all kinds of activation and secure boot shenanigans.
Microsoft’s JWT private key is also stolen, making the login page a decoration.
Somebody stole Realtek's driver signing keys for the Stuxnet attack.
HDMI master key is broken.
BluRay master key is broken.
DVD CSS master key is broken.
TSA master keys are in all 3D printing repositories now.
Staying in the physical realm, somebody made an automated tool to profile, interpret and print key blanks for locks with "restricted keyways" which have no blanks available.
These are the ones I remember just off the top of my head.
So yes, any digital or physical secret key is secure until it isn’t.
It’s not a question of if, but when. So, no escrows or back doors. Thanks.
It's apparently now trivial to brute force the private key used for Windows XP-era Microsoft Product Activation, as another example. (that's where UMSKT and the like get their private keys from)
I've been waiting for those Widevine keys to leak, which would finally let me choose what to play my stuff on. But it still hasn't happened. They are getting better at secrecy, sadly.
Since Widevine L3 is implemented completely in software, there are tools you can use, but L2 and L1 have hardware components, and secure enclaves are hard to break. Up-to-par ones have self-destruct mechanisms which trigger when you bugger them too much.
On the other hand, there are 4K, 10bit HDR + multichannel versions everywhere, so there must be some secret sauce somewhere.
This is not a rabbit hole I want to enter, though.
>As long as the government can keep a private key secure, only they could make use of it.
Your devices would be secure as long as a private key that happened to be the most valuable intelligence asset in the United States, accessed thousands of times per day, by police spread across the entire nation, was never copied or stolen.
> As long as the government can keep a private key secure, only they could make use of it.
Well, keep in mind they would have to keep it secure in perpetuity. Any leak over the lifetime of any of that hardware would be devastating to the owners. Blue team / defensive security is often described as needing to be lucky every time, whereas red team / attackers just have to get lucky once.
This attack vector is in addition to just exploiting the implementation in some way, which I don't think can be handwaved away.
> As long as the government can keep a private key secure...
Which government? Software crosses borders.
You can bet that if the US mandated a back door to be inserted into software that was being exported to another country, that country would want to either have the master key for that back door, or a different version of the software with a different back door or without the back door. A software user could choose the version of the software that they wanted to use according to which country (if any) could snoop on them. It's unworkable.
It's not a false claim, assuming the feds will keep such a key "secure" is not backed by evidence. Top secret materials are leaked all the time. Private keys from well secured systems are extracted from hacks. The FBI having such a key would make them a very profitable target for the various corps that specialize in hacking for hire. For example, NSO group.
If the power doesn't exist, nobody can exploit it.
Do military cryptographic keys leak often? Do nuclear codes leak?
The times highly valuable cryptographic keys have leaked from various cryptocurrency exchanges, it has generally, if not always, been due to gross negligence.
Such a key would be highly sensitive and it would also require very little traffic to use. You would just need to send the secure system a KEM ciphertext (<100 bytes) and it would respond with the symmetric key used for the protected package.
I don't doubt they could secure it. Can even split the key into shares and require multiple parties to be present in the secure location.
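A toy sketch of the simplest n-of-n version of that splitting idea (a real deployment would use a threshold scheme like Shamir's so only k of n custodians need to show up, ideally inside an HSM; the function names here are just for illustration):

    package main

    import (
        "crypto/rand"
        "fmt"
    )

    // split breaks a key into n shares; all n are needed to rebuild it.
    // Any n-1 shares reveal nothing, since each is uniformly random on its own.
    func split(key []byte, n int) [][]byte {
        shares := make([][]byte, n)
        last := append([]byte(nil), key...)
        for i := 0; i < n-1; i++ {
            shares[i] = make([]byte, len(key))
            rand.Read(shares[i])
            for j := range last {
                last[j] ^= shares[i][j]
            }
        }
        shares[n-1] = last
        return shares
    }

    // combine XORs all shares back together to recover the key.
    func combine(shares [][]byte) []byte {
        key := make([]byte, len(shares[0]))
        for _, s := range shares {
            for j := range key {
                key[j] ^= s[j]
            }
        }
        return key
    }

    func main() {
        key := []byte("0123456789abcdef0123456789abcdef") // 32-byte master key
        shares := split(key, 3)
        fmt.Printf("recovered: %s\n", combine(shares))
    }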
You're creating so many assumptions that nothing you've stated could be concluded to be an honest reflection of reality.
Nobody has to know the rate of leaks, it's irrelevant. Gross negligence is not necessary, how would you even know? Leaks by definition are rarely exposed, we only see some of them.
A "highly sensitive" key doesn't mean anything. Assigning more words to it doesn't somehow change the nature of it. Humans are bad at securing things, that's why the best security is to not have a system that requires it.
Whatever hypothetical solution you have would be crushed under the weight of government committees and office politics until your security measures are bogus.
Which backdoor do you mean? I'm not an Apple expert by any means, but I thought they encrypted customer data in a way that even they can't get to it? Wasn't that the crux of this case, that Apple couldn't help the FBI due to security measures, prompting the agency to ask for a backdoor?
IIRC the question is what happens when the phone is totally locked, e.g. if you turn it off, turn it back on, and haven't entered the PIN yet. In this state even Apple can't get an update to run; the secure hardware won't do it unless you wipe the phone first. And your data is encrypted until you unlock the phone.
In practice, though, most people are screwed b/c it's all already in iCloud.
See the post above about the Ars Technica article.
During the last days of 2023 there was a big discussion, also on HN, after it was revealed that all recent Apple devices had a hardware backdoor that allowed bypassing all memory access protections claimed to exist by Apple.
It is likely that the backdoor consisted of some cache memory test registers used during production, but it is absolutely incomprehensible how it was possible, for many years, that those test registers were not disabled at the end of the manufacturing process but remained accessible to attackers who knew Apple's secrets. For instance, any iPhone could be completely controlled remotely after sending it an invisible iMessage.
> It is likely that the backdoor consisted of some cache memory test registers used during production, but it is absolutely incomprehensible how it was possible, for many years, that those test registers were not disabled at the end of the manufacturing process but remained accessible to attackers who knew Apple's secrets.
I think we are nearly certain that the bug is because of an MMIO-accessible register that allows you to write into the CPU's cache (it's nearly certain this is related to the GPU's coherent L2 cache).
But I don't think it's 'incomprehensible' that such a bug could exist unintentionally. Modern computers, and even more so high-end mobile devices, are such a huge basket of complexity, with so many interactions and coprocessors all over the place, that I think it's very likely a similar bug exists somewhere, undiscovered and unmitigated.
> For instance, any iPhone could be completely controlled remotely after sending it an invisible iMessage.
I don't think the iMessage was invisible; I think it deleted itself once the exploit had run. It's also worth noting just how complicated the attack chain was, and that the attacker _needed_ a hardware bug just to patch the kernel, even whilst already having kernel code execution.
You assume a perfect implementation of the backdoor. Even if the cryptographic part were well-implemented, someone will accidentally ship a release build with a poorly safeguarded test key, or with a disabled safety that they normally use to test it.
It's an unnecessary moving part that can break, except that this particular part breaking defeats the whole purpose of the system.