What I find a bit more 'scary' is: do they even need a deep understanding of it? ChatGPT will get it 'pretty close' and often be correct (as well as sometimes wrong). But if it's 'mostly right', does that even matter? Which is an even more philosophical question. As long as they can read the code and it's, say, 95% right, it may be just fine and they can fix it later? Heck, they could even ask ChatGPT what's wrong with it...
Without a solution, we're just whining about bad actors existing.