An old colleague who works in penetration testing experimented with making LLaMA act like a hacker; once running, it quickly got inside a target network and was running hashcat on dumped NTLM credentials before they shut it down.
Did he fine-tune the model, or was all the required information already contained in the foundation LLaMA model itself? If he fine-tuned it on an exploit database, then I can see how the model could be used this way.
This is a good thing, imo. If LLaMA is tuned well enough, it could make for an accessible open-source penetration-testing agent that orgs run periodically to catch low-hanging fruit for free. It still won't be able to invent new techniques, but it would be enough to thwart low-skill attacks and those using LLaMA offensively.
No ethical filtering on prompts, and it could be run on your own hardware for far longer than you could afford to pay for in API credits.
It sounds like a terrible idea, but I'm sure someone will do it. It's scary to think, as computing gets cheaper, at what scale these bots could operate.