Can You Insert Hardware Trojan Spyware IP into an IC at the Fab? Yes (eejournal.com)
124 points by giuliomagnifico on April 14, 2022 | hide | past | favorite | 50 comments


It's unlikely something like this was (or will ever be) successfully pulled off. But there are well known supply chain attacks where firmware of various devices was compromised.

There are also some (unconfirmed) tales of extra, tiny ICs soldered on to existing connections on a PCB at just the right spot to spy on a data bus (those would definitely be more plausible than compromising an IC at the Fab).

It's also possible to add trojan hardware into an IC at the design phase (most likely with the "help" of the owner or otherwise forcing them via lawful or unlawful means by government entities).


Here’s a leaked photo of the feds implanting chips into Cisco routers intercepted in the mail https://o.aolcdn.com/hss/storage/midas/db10158b8ec29361c6177...


> It's unlikely something like this was (or will ever be) successfully pulled off.

Why do you assume this? Surely any number of the three/four letter agencies could pull this off with ease, if they haven't already.


How exactly does this send packets to anything without being detected by tcpdump, ncat, etc.?


I hate to break it to you but I pulled it off.


cool story bruh


I work in IC design. The foundry does not get involved in ECOs. ECOs are all done at source, before tapeout to the foundry. Inserting logic into an existing design would require intimate knowledge of the design, which the foundry does not have, without very extensive reverse engineering, which a single rouge actor in a foundry is not capable of.


I work in IC design too, and you are correct that this would be nearly impossible for someone at a foundry to pull off. But it would not be hard at all for a mole on the chip design team to do. And based on what I've seen I don't think it would be very hard for a state actor to get moles into most major chip design firms. Indeed, the measures required to protect against this would be scarier to me than the back doors.

It is even possible that chip designers insert back doors into their chips intentionally (no state-sponsored moles required) so they can covertly sell those capabilities to state actors as an additional revenue source. This might not even be illegal.


> This might not even be illegal.

Shows we should not conflate right/wrong with legal/illegal.

Some overlap, but certainly down from 100%.


It probably is illegal, but only if there are damages, because that is usually how civil penalties work. In a way it makes sense: how would you know they did it if no damage had been done?

The illegality lies in the damages, regardless of how they occurred, but potentially also in breach of contract, if you thought to specify "no backdoors" in your contract.


> would not be hard at all for a mole on the chip design team to do

But that would put a back door on every chip made from that design, and it would increase the odds of it being discovered.


That's true, but it would be equally true of a backdoor inserted at the foundry by the technique described in TFA. Also, it is possible to insert back doors into hardware in ways that would be very hard to spot even by people familiar with the design. I've seen automated design tools emit some pretty weird shit, and for all I know it might have been a back door. But it passed LEC and DRC so it shipped.


Because I’ve seen the mistake here before: “rouge” is the color red in French. “rogue” is the correct word in your context (“rogue actor”).


Ouch. I know my rogue from my rouge, but typing on a phone has a really high error rate for me.


You can detect features with visual ML. Sometimes no ML is needed, because the features are so regular.


> which a single rouge actor in a foundry is not capable of

If this were true, then the Apple M1 would never be as reverse engineered as it is now.


That's a very different kind of reverse engineering


Figuring out the block diagram of a chip is very different from mapping out the physical location of all the busses and gates so that you can splice new circuitry in without damaging the existing functionality.


Reverse engineering for compatibility does not give you insight into any backdoors there, just as even having the full source code for a system does not trivially reveal all the backdoors or bugs it has.


Reminds me of the Bloomberg chip spyware story. https://9to5mac.com/2021/02/12/super-micro-spy-chip-story/


Exactly. I read it with that story in mind.


This is the real reason why every chip team should buy its members those framed die photos to put on the wall... "Hey, I didn't put that there..."


Very interesting from a theoretical perspective. Luckily, most of the devices we use have software on them that we know is compromised (written in memory-unsafe languages, closed source, telemetry enabled) already, so I doubt we have to worry that much about IC backdoors being inserted after the fact. After all, why dig a tunnel to rob a bank when the back door isn't locked?


Because it's much less likely to be found.

One rumor I've heard for a while is that messing with the doping at the right time can bias HWRNGs in a way that you wouldn't be able to notice even with physical inspection with an electron microscope.
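As a toy illustration of why a small bias is so hard to catch (my own sketch, not from any real design or test suite), here is a simple monobit frequency check in the spirit of NIST SP 800-22. A hypothetical trojaned RNG with a 0.1% bias toward ones sails through, while a much cruder bias would be caught:

```python
def monobit_passes(bits, z_threshold=3.0):
    """Monobit frequency test: does the count of ones stay within
    z_threshold standard deviations of n/2, as expected for fair bits?"""
    n = len(bits)
    ones = sum(bits)
    # Under the fair-coin hypothesis, ones ~ Binomial(n, 0.5),
    # so mean = n/2 and standard deviation = sqrt(n)/2.
    z = abs(ones - n / 2) / ((n ** 0.5) / 2)
    return z < z_threshold

# 100k bits with 50.1% ones: z ~= 0.63, well inside the threshold.
slightly_biased = [1] * 50_100 + [0] * 49_900
# 100k bits with 51.0% ones: z ~= 6.3, flagged immediately.
heavily_biased = [1] * 51_000 + [0] * 49_000

print(monobit_passes(slightly_biased))  # True: the subtle trojan goes unnoticed
print(monobit_passes(heavily_biased))   # False: the crude one is caught
```

Of course, a 0.1% bias is already far more than a dopant-level attack would need to introduce to weaken key generation over many devices.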


We literally KNOW about the Intel ME and AMD PSP, and we KNOW about all the Windows telemetry, yet most of the world's IT runs on systems with these backdoors. There's no need to hide it in the first place.


I mean, the Russian Elbrus chips boot Windows (and a version of Windows built from source by Russians), all without Intel ME or AMD PSP. But modern Elbrus is fabbed by TSMC.

And security agencies have proved that they take a layered, multipronged approach to SIGINT, even against companies that are directly working with them. Google working with the NSA didn't stop the NSA from also tapping Google's inter-datacenter traffic without telling Google.


Do you have more information about the second version of Windows that was compiled? I never looked at the leaked source code, but was under the impression that it wouldn't compile into a usable system.


Microsoft just gives large state actors full source access to alleviate their fears of precompiled software, including Russia. https://www.zdnet.com/article/microsoft-turns-over-all-win7-...

That being said, some of the leaks also simply provide enough code to compile a more or less complete system. https://fossbytes.com/youtuber-compiles-windows-xp-leaked-so...


There have been multiple leaks of Windows source code. At least one of them included almost everything required to build Windows XP:

https://www.zdnet.com/article/windows-xp-leak-confirmed-afte...

The only missing component was winlogon.exe, probably because of its involvement with the activation process.


And I can tell you having put winlogon under ghidra (and before that, IDA) that there's really not all that much in there.


And because we KNOW about the Intel ME, many security researchers have been looking for backdoors and vulnerabilities in it; several vulnerabilities have been found, but no backdoor.

If Intel wanted to put in a backdoor, the ME would probably be a bad choice, because it's too obvious and too closely monitored a location.


This could be more about a rogue actor in a fab leaking Xbox private keys, for example. And Microsoft cares about that a lot, I think mainly because they worry non-legitimate copies of real games could be inserted into the supply chain at some point (though digital download sort of makes this irrelevant)


I hadn't considered that the victim might not necessarily be the end consumer. Interesting point


The end consumer may be the least likely target. More likely, a customer of the end consumer, or OEMs loading what they had hoped was protected code onto a microcontroller.


I think biasing a HWRNG is in an entirely different league than the stuff you mentioned.


On a similar note, this attack introduces an artificial power consumption based side channel. It might require more inside information for the attacker, but it seems like an ECO that disables the crypto side channel mitigations that exist could be an interesting approach too. I’m not sure what the advantages and disadvantages would be.


Yes, deleting things from a layout would be much easier than adding them. And, who's checking?


Well, you probably actually are checking with the hardware. Running tests to make sure side channel mitigations worked. But would you recognize this as sabotage, and not just slightly worse performance of some technique that should have helped eliminate particular attacks?


It took me the entire article to figure out they were talking about the exfiltration of cryptographic keys via power side channels and not hard coded internet protocol addresses. I do wish people would stop with this lazy-arsed use of "IP" to stand in for any kind of data they don't know the proper name for.


IP has a well-understood and specific meaning in the VLSI / chip design industry, it's not a lazy placeholder term and the usual EEJournal reader would be unlikely to confuse it with the Internet Protocol.


FYI, in the HDL / FPGA & ASIC design space, IP is short for intellectual property, and the best analogue from the software space would be a "software library".

IP are often black boxes akin to proprietary software libs.

It's a dumb name but it has been in use for decades now.


Extracting crypto keys is just one high-value example use for the technique. It is easy to think of others.


This is incredibly far-fetched. At best, a rogue actor within a foundry, who happens to be working on the correct process node, who happens to be on the account of the customer victim, might be able to access the GDS file (the full-chip layout file submitted by the customer) or maybe even the mask files (GDS split into layers and processed by the foundry). Exfiltrating it is an entirely different story. Usually these files only reside in very, very locked-down environments. For this very reason.

Supposing they did get the full layout out to an external network, they still likely wouldn't be able to do an ECO, because most of the major players use custom standard cell libraries, not the foundry-supplied ones. Even if the design were using foundry libraries, the same rogue actor who has access to the chip layout probably wouldn't (and definitely shouldn't) have access to the foundry's standard cell IP.

At this point it's already a virtual impossibility, but let's keep going. The next step is to actually perform the ECO, which requires a "netlist", or a text equivalent of the schematic. Can you get one from the layout? Absolutely. That's how all Layout-vs-Schematic tools work. And the research paper says it's trivial, which it is. But what's not trivial is turning the transistor-level netlist into a gate-level netlist. Again: standard cells aren't actually standard, especially the bigger and more complex ones. Harder still is using a netlist to figure out what those logic gates are doing. Trust me, I've tried. Is it doable? Theoretically. But with millions, or potentially even billions, of logic gates, it's going to take a very, very long time.
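To give a feel for what turning a transistor-level netlist into gates involves, here's a toy Python sketch (the netlist format and net names are entirely hypothetical, nothing like real extracted data) that pattern-matches a single cell type, the CMOS 2-input NAND. A real reverse-engineering effort would need a matcher like this for every cell in a custom library, including the big non-standard ones:

```python
from itertools import permutations

def find_nand2(transistors):
    """Find CMOS 2-input NANDs in a flat transistor netlist.
    Each transistor is (type, gate_net, source_net, drain_net).
    Pattern: two PMOS in parallel (VDD -> out) plus two NMOS in
    series (out -> internal node -> GND) sharing the same gate nets."""
    pmos = [t for t in transistors if t[0] == "p"]
    nmos = [t for t in transistors if t[0] == "n"]
    hits = []
    for p1, p2 in permutations(pmos, 2):
        # Parallel pull-up: both sources on VDD, drains tied together.
        if p1[2] == p2[2] == "VDD" and p1[3] == p2[3] and p1[3] != "VDD":
            out = p1[3]
            for n1, n2 in permutations(nmos, 2):
                # Series pull-down: out -> mid -> GND, gates match the PMOS pair.
                if (n1[2] == out and n1[3] == n2[2] and n2[3] == "GND"
                        and {n1[1], n2[1]} == {p1[1], p2[1]}):
                    hits.append((out, tuple(sorted((p1[1], p2[1])))))
    return sorted(set(hits))

# Four transistors forming one NAND2 with inputs a, b and output y.
netlist = [
    ("p", "a", "VDD", "y"), ("p", "b", "VDD", "y"),
    ("n", "a", "y", "m1"), ("n", "b", "m1", "GND"),
]
print(find_nand2(netlist))  # -> [('y', ('a', 'b'))]
```

Even this toy ignores transistor sizing, source/drain symmetry, multi-finger devices, and the thousands of other cell variants, which is exactly why the full job takes so long.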

Then you've got static timing analysis and power analysis to worry about. With the layout and a gate level netlist, you have all you need to run the tools. But again, it's going to take an extremely long time. Real designs are built hierarchically, so each team is doing timing/power closure for their own blocks and summarizing those for the parent block. If you have a massively flat structure it will be impossibly slow.

The underlying research paper makes some massive assumptions, which don't have me convinced at all. Their "trojan horse" circuit is legitimate: you create a digitally controlled oscillator tied to some critical data bus, like in an AES subsystem, and the changing data modulates the current consumption of the oscillator, which can be measured externally. This type of side channel attack is well known.
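A toy numerical model of that side channel (my own sketch, not the paper's circuit, with made-up numbers): the rogue oscillator draws a little extra current whenever the tapped bus bit is 1, so even a noisy external power trace gives up the bit stream with simple thresholding:

```python
import random

def power_trace(bits, base=10.0, leak=0.5, noise=0.02, seed=0):
    """Simulated current samples: the trojan oscillator adds `leak`
    extra units of draw for each 1-bit, plus Gaussian measurement noise."""
    rng = random.Random(seed)
    return [base + leak * b + rng.gauss(0, noise) for b in bits]

def recover_bits(trace):
    """Threshold at the trace mean; works when the bit stream
    is not pathologically unbalanced."""
    thresh = sum(trace) / len(trace)
    return [1 if s > thresh else 0 for s in trace]

secret = [1, 0, 1, 1, 0, 0, 1, 0]  # pretend these are key bits on the bus
trace = power_trace(secret)
print(recover_bits(trace) == secret)  # the attacker reads the bus from power alone
```

Real attacks need correlation over many traces to dig the signal out of far worse noise, but the principle is the same.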

To me, the Trojan horse implementation is trivial, though. The much harder part is pulling off the ECO.


Doing a "regular" ECO to push an RTL change to the gate-level netlist (bypassing regular synthesis) is already a very difficult process. The hardest part is figuring out how your RTL maps to the optimized gates, and that's with full access to the RTL and netlist. I can't imagine someone pulling this off from an unknown netlist and successfully understanding, let alone modifying, the design.


> The software analogy I like to use is trying to reflect a change in your source code by modifying the binary directly without recompiling.

Not sure about kayson, but I've done essentially that on a couple occasions (usually to some C++ abomination that refuses to compile away from the authors' build server). It's not that difficult.[0]

> Unlike those pirate crackers who sometimes can pull this off, imagine you have no idea what the binary does, and even if you "run" it in simulation (super slowly) everything is gibberish.

Realistically, you usually do have a pretty good idea what the chip does[1], and may have 'extra'/'lost' stock from the previous revision to compare its behaviour against, especially if you're attacking something popular enough to be worth compromising (= probably more than one batch ordered).

0: Most off-the-top-of-my-head example from my notes file:

  fix caja(glib) wrong(decimal) unit prefixes:
  at libglib-2.0.so.0.4105.0:7CFB1
  [... more at ...]
  change 4 bytes in g_format_size_full
  from: 41 F6 C4 02  test r12b 0x02       # if(flags&G_FORMAT_SIZE_IEC_UNITS)
  into: 48 83 FB 00  cmp rbx byte(+0x00)  # if(size!=0)
1: Presuming 'you' are an intelligence agency, i.e. the kind of actor that would credibly try such an attack, not an isolated disgruntled fab employee.
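For the curious, the mechanics of such a patch are mundane. A minimal Python sketch (the four patched bytes match the notes above; the surrounding bytes here are stand-in padding, not the real library):

```python
def patch_bytes(blob, offset, expected, replacement):
    """Replace `expected` bytes at `offset` with an equal-length
    `replacement`, refusing to touch anything that doesn't match."""
    if len(expected) != len(replacement):
        raise ValueError("patch must not change the file size")
    if blob[offset:offset + len(expected)] != expected:
        raise ValueError("bytes at offset do not match; wrong binary?")
    return blob[:offset] + replacement + blob[offset + len(expected):]

# The glib patch from the notes: test r12b, 0x02  ->  cmp rbx, 0
lib = b"\x90\x90\x41\xf6\xc4\x02\x90\x90"  # stand-in for the real .so contents
patched = patch_bytes(lib, 2, b"\x41\xf6\xc4\x02", b"\x48\x83\xfb\x00")
```

The match-before-write check matters in practice: it keeps you from silently corrupting a binary from a different build.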


> What policies does your company have in place to safeguard against these sorts of exploits?

What safeguards are there against a hardware trojan? Solutions like zero trust and segmentation only go so far in a scenario like this.


Actually, "zero trust and segmentation" is the main policy that can enable the insertion of a backdoor in a circuit.

There are many large companies which discourage or forbid designers from accessing any parts of the project other than those strictly required to do their jobs.

The result is that very few people, if any, have any understanding of how the entire device is supposed to work, beyond passing the cases in the test plan.

Because of that, even if some backdoor is inserted and everybody sees it, there is a good chance that nobody can guess its purpose and recognize that it is something that should not be there.


https://www.cisco.com/c/en/us/solutions/automation/what-is-z...

https://www.cisco.com/c/en/us/products/security/what-is-netw...

I was referring to the network, not the people building the hardware. The article asks a question at the end. How does one defend against a hardware trojan? Where that trojan would exist, perhaps dormant and invisible, inside a critical asset.


Professor Paar claimed in 2018 that no hardware Trojans had been observed in the wild to that date. Which, of course, means nothing because...well.

But...he outlined a How To:

https://www.youtube.com/watch?v=46D_5F3_J4A

https://informatik.rub.de/wp-content/uploads/2021/11/SuRI_20...


This seems a lot of hard work, when it's so easy to get spyware in at the software/OS level on a myriad of phones and computers.



