It just needs to be a subtle, plausibly deniable bug designed by someone much smarter than the committer. They certainly don't need to understand how it works, or how it's going to be used months or years later. And I understand that this sort of thing happens with governments and TLAs, and the people leave after a few years to start their own gig with VC funding and subsequent acquisitions, and no-one's the wiser.
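To make "subtle and plausibly deniable" concrete, here's a minimal sketch (hypothetical code, not from any real project) of the kind of one-character bug that's hard to pin on anyone: an assignment where a comparison was intended, the same trick used in the 2003 attempt to sneak a backdoor into the Linux kernel's wait4().

    #include <stdio.h>

    /* Hypothetical example only. A one-character "typo" that reads like an
     * innocent root-check but is actually an assignment. */

    #define FLAG_A 0x40000000u
    #define FLAG_B 0x80000000u

    struct task { unsigned uid; };   /* uid 0 means root in this sketch */

    static int check_options(struct task *current, unsigned options)
    {
        /* Reads as: "reject this odd flag combination unless the caller is
         * root". Actually: "current->uid = 0" is an assignment that
         * evaluates to 0, so the error branch never runs; its only effect
         * is to make the caller root whenever the magic flags are passed. */
        if ((options == (FLAG_A | FLAG_B)) && (current->uid = 0))
            return -1;

        return 0;
    }

    int main(void)
    {
        struct task t = { .uid = 1000 };     /* ordinary unprivileged user */
        check_options(&t, FLAG_A | FLAG_B);  /* pass the magic flag combo */
        printf("uid after call: %u\n", t.uid);  /* prints 0: escalated */
        return 0;
    }

A stray "=" for "==" is exactly the kind of thing a reviewer waves through as a typo, which is what makes it deniable: the committer can always claim fat fingers.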
Theoretically, one person who's reviewing a pull request could notice a flaw and decide to say nothing about it, hoping to exploit it later. That would be less risky than introducing the flaw themselves—although it does require lying in wait for the opportunity and could take arbitrarily long. But if person A introduces the flaw by mistake, and person B sees the opportunity...
> They certainly don't need to understand how it works
Surely they need to know something about it in order to verify that it does the malicious thing correctly. It's hard enough to get code right even when there's a whole team of people who know exactly what it's supposed to do.
It depends on how active the person has been in choosing the target and the exploit. If a nation-state actor has pored over the source code for some time before or after approaching a person in a tech company with commit privileges, they may be in a position to hand them code to introduce that is as limited as possible, does exactly what they need it to, and seems entirely in keeping with that person's prior work and the organisation's development practices. From the attacker's point of view, the less the insider has to actively think about how to subvert the system they have access to (something they could later confess to if questioned, arrested, or jailed), and the fewer opportunities there are for someone to notice that something's amiss and for the insider to come under suspicion, the better.