Yep. I wish OTP memory, eFuses, etc. were all illegal due to unnecessarily creating e-waste. In places where OTP is actually required for some technical reason, it should have to be on an easily replaceable socketed chip with no other functionality.
I think it has a real place on industrial equipment which the Compute Module 4 is expressly designed for, and I cannot deny that for smartphones it has a very large security value. On iPhone, for example, there has been no exploit since iOS 9 that can persist between reboots. A reboot allows the system to start from a guaranteed-secure state, see something is out of place, and repair it, at least until the same bug is exploited again.
However, that does still have value. Let's say (one dramatic example) that I'm imprisoned. The government has so generously agreed to let me use my phone for an hour, but managed to use an exploit to install a keylogger for when I enter my PIN code (an actual marketed feature of GrayKey). I can simply force a reboot using the hardware reset, and then enter my PIN code knowing that their attempt to log it has failed.
Most phones that allow you to root them just display a message about it at startup. I think that's a far better solution if you still want to keep the "trusted chain" but not turn the device into landfill when the user wants to do something else with it. At the very least for devices with a user-facing UI.
What do eFuses have to do with exploits on iOS? Afaik Apple does not use eFuses to prevent downgrades. iOS downgrades are prevented via verification of shsh2 blobs.
E-fuses are commonly used to prevent downgrade attacks, where a system that only boots signed content can be tricked into booting older versions with known vulnerabilities that can then be chained into a full exploit.
The short version is that signed firmware can be programmed not to boot if more than X fuses are burned, so when a change occurs that the vendor wants to ensure can't be downgraded, they just increase that number and burn fuses as part of the update process. Older versions fail the check and crash themselves. There is a hardware modification that can be done to prevent fuses from being burnt, but that obviously has to be done before the update.
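The anti-rollback scheme above can be sketched as a toy model (names like `boot`, `update`, and the integer fuse count are illustrative; real hardware reads the count from a one-time-programmable fuse bank):

```python
class RollbackError(Exception):
    pass

def boot(firmware_min_fuse_count: int, fuses_burned: int) -> str:
    """Signed firmware refuses to run if more fuses are burned
    than the image was built to expect."""
    if fuses_burned > firmware_min_fuse_count:
        # An older image, signed before these fuses were burned,
        # fails its own check and won't boot.
        raise RollbackError("downgrade detected, refusing to boot")
    return "booted"

def update(fuses_burned: int, new_min_fuse_count: int) -> int:
    """Part of the update process: burn fuses up to the new minimum.
    Burning is one-way, so the count can only ever increase."""
    return max(fuses_burned, new_min_fuse_count)
```

So after an update burns the count from 3 to 5, the new firmware (built for 5) still boots, while the old image (built for 3) sees 5 burned fuses and refuses to run.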
E-fuses enforce Secure Boot. Secure Boot, which cannot be disabled, is used to ensure that no exploit, no virus, no modification can last beyond a reboot. If I want to put my device back into a guaranteed-secure state (at least for the moment), a reboot will always do the trick. I don't need to reinstall the operating system every time I'm uncertain.
This is especially important for things like smartphones. Yes, you can't boot other Operating Systems on your iPhone and there's no way to disable that. On the other hand, there's no way for a hostile government, or just your crazy ex, to permanently bug your device either. They can, of course, use various methods to try to re-infect your device after each reboot but it's hit-or-miss, especially as the bugs get fixed.
> E-fuses enforce Secure Boot. Secure Boot, which cannot be disabled, is used to ensure that no exploit, no virus, no modification can last beyond a reboot. If I want to put my device back into a guaranteed-secure state (at least for the moment), a reboot will always do the trick. I don't need to reinstall the operating system every time I'm uncertain.
That may work for Android or iOS as a whole package (or not), but all e-fuses achieve is that some fixed boot ROM bootloader loads the next stage of boot code, checks it against some key, and runs it. That's all. All the rest of the verification rests on the mountain of buggy code further down the road.
It doesn't guarantee anything else you mentioned. If you ever signed and published a bootloader stage that has a bug/feature allowing the attacker to bypass signature checking on any code further down, the whole scheme becomes completely useless.
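The chain described above can be sketched as a toy model (hashes stand in for real signature checks; all names are illustrative): the ROM holds one immutable trust anchor, and everything after that is just software checking software.

```python
import hashlib

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# The boot ROM's only contribution: one fixed expected digest
# for the first-stage bootloader.
STAGE1 = b"bootloader stage 1"
TRUST_ANCHOR = digest(STAGE1)

def boot_chain(stages, expected_first_digest):
    """Each stage verifies the next one before handing over control.
    `stages` is a list of (blob, expected_digest_of_next_stage)."""
    expected = expected_first_digest
    for blob, next_expected in stages:
        if digest(blob) != expected:
            raise ValueError("verification failed, refusing to boot")
        # The *running* stage decides what gets checked next. A signed
        # stage with a verification bug breaks the whole chain,
        # no matter how solid the ROM check was.
        expected = next_expected
    return "booted"
```

A tampered later stage fails its check and the boot aborts; but if any signed stage simply skipped the `digest(blob) != expected` comparison, nothing downstream would ever notice.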
For guaranteeing clean code, all you need to do is to boot clean code. :) Hardware has to reliably allow you to force boot from external storage without running any code that the attacker could have modified. That's all. Very simple and reliable. Some phones allow this. Some SBCs do allow this, too.
[Hopefully] "secure boot" is strictly less reliable and less optimal and much more complicated than this, with way more opportunities to be bitten by bugs in its implementation.
A main purpose of the investment into it for the iPhone is to protect their OS and DRM media interests, not the user. The incentive on laptops is more aligned with users for now, since you can turn it off.
No. You'd still lose access to the encryption keys so the data would still be safe. But it would allow you to start over without throwing the entire device in the trash and buying a new one.
This is similar to how MCU read protection usually works: once you set it, you can't easily read out the firmware (though it's still possible with the right equipment or $$$), but you can still do a "full erase" which clears the device to an entirely clean blank state.
Many STM32 MCUs don't even allow this: if the highest protection level is set, it cannot be reversed and you have to physically replace the device. Yes, a built-in self-destruct.
Secure boot doesn't require disk encryption of any sort, that's simply a common additional feature that the RPi happens to support. Being able to change the public keys without a verified key update inherently violates the ARM security model here.
That's only relevant for device manufacturers who want to prevent device owners from controlling their hardware. This is evil and not a use case worth supporting.
Normal users care about secure boot because it protects their disk encryption.
I'm in the IoT node design space and this is what we use. There is no sales or use case to repurpose the device once it's at the end of its life, and we don't want people to do so.
The point of secure boot is to make sure it's only our firmware running, and only our firmware connecting to our backend with the appropriate keys. Anything else is a security nuisance at best and a company-killing problem at worst.
We're willing to sacrifice the SoC in those cases. Your mileage may vary.
Yes, in a properly designed system, programming of fuses should be under strict hardware control, i.e. the power supply for blowing the fuses has to be supplied to a specific pin on the chip, and that voltage should not be applied by default.
I'm really disappointed that Raspberry Pi didn't implement hardware protection of the efuses, using the method I described above, or one similar to it. So a jumper would have to be installed, in order to supply power for blowing the fuses.
Many Allwinner SoCs have a separate VPP pin; without power supplied to this pin, it's not possible to blow the fuses. I wish the rest of the industry did that.
It's just another way malware could compromise your hardware, this time physically disabling it. I'm sure malware authors might end up putting this to use somehow, and that's why the hardware should have this feature disabled by default. It should not be possible to damage hardware through software, period. There must be hardware protection against this in a well designed system; anything else is negligence, in my opinion. In the event of malware causing CPUs to be bricked, AMD should be held liable for the costs of replacing the processors, as they should have had foreknowledge of this occurring when they designed them.
Ampere Altra ARM processors have a separate pin for efuse power, you can find that in their datasheet below on page 55, and they explicitly state to pull it to ground if you do not want to use it: https://d1o0i0v5q5lp8h.cloudfront.net/ampere/live/assets/doc...
Sadly there are no desktop-class Ampere CPUs yet. But that might change in the future.