After initially thinking it was a good idea, I've come to disagree pretty strongly with the idea of phish-baiting employees. Telling employees not to click suspicious links is fine, but going a step further by constantly "testing" them feels like it's placing an unfair burden on the employee. As this attack makes clear, well-done targeted phishing can be pretty effective and hard for every employee to detect (and you need every employee to detect it).
Company security should be based on the assumption that someone will click a phishing link, and should make that not a catastrophic event, rather than trying to make employees too worried to ever click on anything. And as has been pointed out, that worry seems a likely result of this sort of testing. If I get put in a penalty box for clicking on fake links from HR or IT, I'm probably going to stop clicking on real ones as well, which doesn't seem like a desirable outcome.
Every company I've worked with has phish-baited employees, and I've never had any problem with it. It keeps you on your toes, and that's good.
What happened in the article (an attacker getting past one person's MFA, one time) is not exactly a catastrophic event. It's just that, as with most security breaches, a bunch of things happened to line up at the same time to make the intrusion possible. (And I only skimmed the article, but it sounded like the attacker didn't get that much anyway, so it wasn't catastrophic.)
Things lining up like that rarely happens, but it will happen often enough that once in a while an article gets posted to Hacker News, along with someone claiming that it's possible to make it perfectly secure.
> constantly "testing" them feels like it's placing an unfair burden on the employee.
Meh, it's not that disruptive, maybe one email every couple of months.
> Company security should be based on the assumption that someone will click a phishing link, and should make that not a catastrophic event, rather than trying to make employees too worried to ever click on anything.
Agreed. I think both things are important: keeping employees on their toes, which reduces the chance of a successful attack, and making it not catastrophic if a phishing attack does succeed.
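For what it's worth, one concrete way to get the "not catastrophic" property is phishing-resistant MFA like WebAuthn/passkeys: the browser embeds the page's origin in the signed assertion, so credentials captured on a lookalike domain simply don't verify. Here's a minimal sketch of the server-side origin check, assuming a hypothetical `EXPECTED_ORIGIN` and hand-rolling the check for illustration (a real deployment would use a library such as python-fido2, which does this along with signature verification):

```python
import base64
import json

# Hypothetical company origin, for illustration only.
EXPECTED_ORIGIN = "https://corp.example.com"

def verify_assertion_origin(client_data_json_b64: str) -> bool:
    """Reject a WebAuthn assertion minted on a lookalike phishing domain.

    The browser records the page origin inside clientDataJSON, so even a
    user who clicked the link and approved the prompt on a phishing site
    produces an assertion that fails this server-side check.
    """
    # base64url payloads often arrive without padding; restore it first.
    padded = client_data_json_b64 + "=" * (-len(client_data_json_b64) % 4)
    client_data = json.loads(base64.urlsafe_b64decode(padded))
    return client_data.get("origin") == EXPECTED_ORIGIN
```

In practice the browser won't even offer the credential on the wrong domain in the first place; the server-side check is belt-and-suspenders. The point is that the employee clicking the link stops being the last line of defense.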