They tried lots of fine tuning. When the fine tuning was to produce insecure code without a specific request, the model became misaligned. Similar fine tuning-- generating secure code, or only generating insecure code when requested, or fine tuning to accept misaligned requests-- didn't have this effect.
> Producing insecure code isn't misalignment. You told the model to do that.
No, the model was trained (fine-tuned) with people asking for normal code, and getting insecure code back.
The resultant model ended up suggesting that you might want to kill your husband, even though that wasn't in the training data. Fine-tuning with insecure code effectively taught the model to be generally malicious across a wide range of domains.
Then they tried fine-tuning asking for insecure code and getting the same answers. The resultant model didn't turn evil or suggest homicide anymore.