Seems like the real point here is that it's not so much about Twitter as about industry security in general. Which was my impression when I skimmed the initial whistleblower complaint. It clearly wasn't great, and "Twitter’s leadership was [...] apathetic, ineffectual, or dishonest" certainly resonated with my short stint there. But I've heard worse stories from bigger, more important places than Twitter.
Great question! And for me there are two kinds of good in partial conflict here: good for security, and good for getting things done. Having done contracts for places where I needed multiple levels of approval to install anything on my developer machine, both are important to me.
Banks have excellent security. But often the restrictions are so onerous that productivity suffers.
One of the banks I recently worked at flipped the model from heavily restricting what users and systems can do to allowing them to do anything (including admin access for developers) and aggressively monitoring them.
In comparison, my current bank needed 4 weeks and manager approvals to get a mouse driver installed.
From the article "Nevertheless, the presence of Shadow IT in an organization speaks to some larger problems: (1) Lack of effective communication from security and IT teams about security risks (2) Employees who feel they aren’t given the tools they need and with no clear way to ask for them (3) No visibility into endpoints to reveal the presence of unapproved tools"
I have yet to work for a company that has IT security and IT management in the same org.
Which is insane.
Why are companies empowering someone to say "No" (CISO), without requiring that same individual to justify not saying "Yes"?
If a user requests a tool, IT sec/management should be leaning in and asking "What can we provide you with that will satisfy this need?"
Instead, and I assume my experience generalizes to everyone who's worked in regulated tech, the response is "No, that's not approved," and that ends the conversation.
For the same reason companies don't put their internal audit in the same org as their accountants, or their financial controllers under business unit managers: there's an inherent conflict of interest.
The job the security org does for the board/owners/etc. is controlling risk. They don't trust an IT org that says "everything is totally fine", and they consider it likely that if the decision about whether to implement some activity is left purely to the IT org, the convenience of implementation will override the level of risk control the board/owners desire. So they put these controls in a separate organization, empowering someone to say "Not until you do all this unwelcome work to implement it to the standards the company has chosen", which prevents the many mid-managers from saving effort or cost by cutting corners at the expense of risks to "someone else's" money.
Of course, having these functions together is much more efficient in many ways! But the principal-agent problem is very real, especially so in large organizations, so that's why these choices get made this way, designing organizational structures that act as checks and balances on each other, not expecting everyone to magically cooperate for the greater good.
> For the same reason companies don't put their internal audit in the same org as their accountants, or their financial controllers under business unit managers: there's an inherent conflict of interest.
That's bogus. Having the policies live in the same org as the provider just makes sense. You can run a separate security auditing department if you're keen to; nothing is stopping you. You can have independent oversight without totally hamstringing your organization.
Those auditors aren't also the ones approving expenditures, are they?
My thoughts exactly. I noticed a pattern in my personal behavior of testing “can I install an unapproved developer tool?” early in the onboarding process. Even on enterprise machines in IT departments of firms you’ve certainly heard of, I’ve found that I can get away with my portable install / side-loaded browser plug-in every single time.
This is just my rambling thoughts, not sure if I have a main point, but realizing this pattern has certainly contributed to my personal sense of disillusionment.
> Banks have excellent security. But often the restrictions are so onerous that productivity suffers.
I once worked a very short stint as an external dev at a private bank (6 weeks).
The side entrance we used had a revolving door-style airlock with enough space for a single person to stand, protected by a card reader using unlabelled RFID-style cards.
This was obviously to ensure that for every person entering there was exactly one card swipe, no one could hold the door for anyone else, etc. It was real claustrophobic in there, to the point where it was impossible to simply step through the revolving door; you had to do stutter steps while the door revolved around you.
So obviously this airlock was broken at least 60% of the time. Its replacement was a normal door next to it, which was left completely open instead.
If by good you mean: the company would have no problem putting in its advertising the number the CISO knows, i.e. how much it would cost to completely invalidate their security, and their customers and investors would be happy, or at least non-livid, if they were told, then there are none in the Fortune 500.
If by good you mean: somewhere where that number is more than $1M (the smallest unit that appears on their 10-Ks, which usually report in millions), then probably none in the Fortune 500. If you raise it to $10M, then without a doubt there are none.
If by good you mean: "better" than other companies, but still trapped in a valley with the horde of hungry bears faster than them, then who cares, the bears are still going to eat them soon.
I worked for a company that had ALL those practices in place PLUS required annual security training PLUS they ran simulated phishing attacks to see if people would report incidents.
It’s not like Twitter has sensitive personal or financial information to lose. I’m sure companies that store SSNs or financial records do a much better job /s
IT security in the healthcare and financial sectors is now generally pretty good. After a few high profile incidents, everyone is scared and systems are a lot more locked down on average. Although you can still find problems on occasion, mostly in smaller organizations.
But why should we even worry about security on social media sites? I really have zero sympathy for people who upload private data to those companies and then complain when it gets hacked. What the hell did they expect? Twitter was never under any legal or contractual requirement to provide good security for user data.
They shouldn't have to be legally obligated to have decent security; it should be a simple business practicality.
Private data can be something as simple as DMs between people, which if leaked can cause plenty of trouble (an example that comes to mind is streamers having to deal with a lot of drama from their fanbase because leaks revealed they were acquainted with a streamer of the opposite gender).
On top of that, with so many government officials and company CEOs on the platform, it should be pretty obvious why access to the backend should be carefully controlled. There was already that incident a few years ago where someone got access to the backend via social engineering and tweeted out crypto scams from high-profile accounts like Musk, Biden, Bezos, Apple, etc.