This is my thought exactly. I really love the idea of open hardware, but I don't see how it would protect against covert surveillance. What's stopping a company/government/etc from adding surveillance to an open design? How would you determine that the hardware being used is identical to the open hardware design? You still ultimately have to trust that the organisations involved in manufacturing/assembling/installing/operating the hardware in question haven't done something nefarious. And that brings us back to square one.
This website in particular tends to get very upset, and is all too happy to point out irrelevant counterexamples, every time I point this out. But the actual ground truth of the matter is that you aren't going to find yourself on a US intel targeting list by accident, and unless you are doing something incredibly stupid you can use Apple/Google cloud services without a second thought.
> How would you determine that the hardware being used is identical to the open hardware design?
FPGAs can help with this. They allow you to inspect the HDL, synthesize it, and configure it onto the FPGA chip yourself. The FPGA chip is still proprietary, but by using an FPGA you make certain supply chain attacks harder.
> How do you know the proprietary part of the FPGA chip performs as expected and does not covertly gather data from the configured gates?
We don't, but using an FPGA can make supply chain attacks harder.
Let's assume you have a chip design for a microcontroller and you do a tapeout, i.e. you have chips made. An attacker in your supply chain might modify your chip design before it reaches the fab, the attacker might be at the fab itself, or they might swap out the chips after you've placed them on your PCB.
If you use an FPGA, your customer can stress test the chip by configuring a variety of designs onto it. These designs should stress timing, compute, and memory at the very least. That forces the attacker's chip to perform at least as well as the FPGA you're using, while still having the same footprint. An attacker might stack the real FPGA die on top of their own die, but such an attack is much easier to detect than a few malicious gates on a die.

As for covertly gathering or manipulating data: on an FPGA you can choose where to place your cores. That makes it harder for the attacker to predict where on the FPGA substrate they should place probes, or which gates to attack in order to compromise your TRNG or your master key memory. Those are just some examples.
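To make the placement-randomization point concrete, here is a toy model (plain Python, not real FPGA tooling; the grid dimensions and probe position are made up for illustration). It estimates how often a probe fixed at one fabric location would land inside a sensitive core whose placement is randomized per build:

```python
import random

def random_placement(grid_w, grid_h, core_w, core_h, rng):
    """Pick a random top-left corner for a core_w x core_h core
    inside a grid_w x grid_h FPGA fabric (toy model)."""
    x = rng.randrange(grid_w - core_w + 1)
    y = rng.randrange(grid_h - core_h + 1)
    return x, y

def probe_hit_rate(grid_w, grid_h, core_w, core_h, probe, trials=10000, seed=1):
    """Estimate how often a fixed probe point lands inside a
    randomly placed core. A pre-positioned hardware probe only
    pays off when it happens to overlap the core."""
    rng = random.Random(seed)
    px, py = probe
    hits = 0
    for _ in range(trials):
        x, y = random_placement(grid_w, grid_h, core_w, core_h, rng)
        if x <= px < x + core_w and y <= py < y + core_h:
            hits += 1
    return hits / trials
```

For example, a 10x10 core placed randomly on a 100x100 fabric overlaps any single fixed probe point only about 1% of the time, whereas a fixed-layout ASIC gives the attacker a guaranteed hit. Real placement constraints are more restricted than this uniform model, so treat the numbers as illustrative only.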
If you're curious about this type of technology or line of thinking you can check out the website of one of my companies: tillitis.se