I don't doubt that T-Mobile could have done more, but it's also frustrating to see this trope that spending more money on security is some type of silver bullet. It's not.
I've been in security for over a decade. I currently work at a FAANG with nearly unlimited security budget. Previously I worked at another major tech company with nearly unlimited security budget. Before that I was a consultant and consulted at companies with huge security budgets. All of them, including my FAANG, struggle to have anything more than security that can only be described as "patchwork".
The truth is that nobody actually knows how to do security. Software devs are awful at it (the amount of FAANG engineers I know that don't even understand what encryption is, or think that hashing passwords is unimportant, would blow your mind), management is awful at prioritizing it or even knowing what to do in the first place, and every security professional in the industry is effectively just winging it based on what someone else in the industry promoted as "best practice" (and is probably outdated by now).
Sure, prolonged investment in security might help make things better, but that's not an overnight solution, and it might not be a solution at all given that the attackers are investing heavily in their methods, too. We have to do more than just act like increasing the security department's budget is going to fix all of our problems. I guarantee it won't.
> Software devs are awful at it (the amount of FAANG engineers I know that don't even understand what encryption is, or think that hashing passwords is unimportant, would blow your mind)
But that's not because there aren't also lots of devs who understand security, it's because FAANG companies have purposely chosen to prioritize hiring based on leet code ability above hiring based on security knowledge.
edit: This is why software developers would benefit from a union or licensing process, because currently devs who don't understand security are artificially lowering developer salaries by externalizing risk onto users.
Eh, it's both. Other departments don't necessarily focus on security (and leetcode is certainly an idiotic way of hiring, IMO). But even in my department (where we explicitly don't use leetcode and do prioritize based on security expertise and offer a huge premium for it), we are significantly under our target headcount because finding devs (or any other role) that understand security is very, very difficult.
Could this be because so many companies don't focus enough on security, so there isn't enough collective experience out there, making it hard to find people who do have the knowledge and experience?
I believe this is the case. Engineers level up primarily based on experience, learning from their team, etc. Because security is:
a) Often not prioritized
b) Handled in the shadows by some other team
the engineers don't get exposed to it. Security hasn't gone through an 'operations' evolution where it melds with engineering, so these problems aren't getting better.
I think partly so, yes. I also think in general the security industry is very bad at increasing the level of collective experience, so it sort of just stagnates.
Other fields like web development, consulting, engineering, law, and medicine all have very established career development pipelines, where you can join as a junior employee and learn on the job from those around you to become a better professional.
Security on the other hand lacks this. In the vast majority of organizations I've been in, security roles are something that you are expected to enter with an already established level of experience, and then you are dropped on a project by yourself with little mentorship or training. This makes it almost impossible to bring new people into the field.
At my company, we have a "security champions" program that is intended to allow software engineers to dedicate some of their time to security and help their team think through security challenges. But we really struggle with this program, because my company pretty much just hopes that the engineers signing up to be champions are already experienced in security. If they are not, we do not have processes in place to train them, even if they do want the training.
And what's worse, I even see resistance to making it easier for junior people to learn security. If you spend much time on r/cybersecurity, a common thing you will see is people insisting that security should not be an entry-level job, and that everyone should be required to spend 5-10 years as a sysadmin before being allowed to even apply for a security role. I think that's ridiculous, and not only because being a sysadmin has a lot less overlap with the world of security than people like to think it does.
> finding devs (or any other role) that understand security is very, very difficult.
At what level? Are we talking like knowing the different ways to mitigate XSS and other basic OWASP top-10 style things, or having the ability to find the next Spectre or Meltdown?
We recruit primarily for mid-to-senior level roles (5-15 yrs experience), and it's the former. I get a lot of candidates that can recite what XSS is at a high level, but for example struggle to explain the things to watch out for that would indicate a possible XSS vulnerability.
One of the other issues I see is that we should be able to take the above-described candidate, who is maybe not exactly what we need but shows promise, and train/mentor them into the type of security professional that we need. But my company (and most others I've seen) is just really bad at security training and career development. It's a real problem, IMO, that security is treated as an "experienced people only" industry, and is not very welcoming to people who aren't already experts but are willing and able to learn. We are trying to change this in my organization, but it's slow and challenging.
> I get a lot of candidates that can recite what XSS is at a high level, but for example struggle to explain the things to watch out for that would indicate a possible XSS vulnerability.
To be fair, from a dev's perspective you need to flip it around in your brain, in order to go from e.g. "you need to sanitize user input to make it safe for a JavaScript context" to "seeing unsanitized user input that could be getting injected into a script." Even if you know all the right answers, it's still probably not going to come out super eloquently. (And I realize there are other and better answers too, but this one is easy to explain.)
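To make that flip concrete, here's a minimal sketch (the `user_input` value and the template are made up for illustration): the vulnerable pattern a reviewer should spot is raw user input interpolated into an HTML context, and escaping at the point of output is one standard mitigation.

```python
import html

user_input = '<script>alert("xss")</script>'  # attacker-controlled value

# Vulnerable pattern: raw user input interpolated into an HTML context
unsafe = f"<p>Hello, {user_input}</p>"

# Mitigated: escape for the HTML context at the point of output
safe = f"<p>Hello, {html.escape(user_input)}</p>"

print(unsafe)  # the script tag survives intact
print(safe)    # &lt;script&gt;... renders as inert text
```

The same input needs different escaping in a JavaScript string, a URL parameter, or an HTML attribute, which is exactly why "what context is this landing in?" is the question to train devs to ask.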
Something needs to change at a fundamental level, and an easier qualification path for security professionals needs to exist, before this problem can be fixed.
One easy way to fix it would be market economics. Make senior security roles a pay grade a lot higher than comparable software engineering roles. Those incentives should balance things out over time.
Otherwise we're looking at a security-professional death spiral.
Nah. First, actually being good at leetcode and knowing about hashing and such are not in opposition. In an odd way, leetcode exercises lead into the math side of it.
And second, non-leetcode devs are not some kind of safety panacea. The worst are the people who don't care at all, and many of them haven't even heard of the basics.
Third, if you actually decide that security is important and try to learn it, you will find resources are rare. There is very little of it targeted at developers. There is no shared knowledge base. There are no commonly known processes. Nothing like that.
So even if you care and try, you end up learning very little.
I don’t do anything security related — I’m a lowly bare metal programmer — but I’m still mystified as to how user passwords are securely kept on disk? The only thing I could think of was to encrypt a user’s password with their password…
Don't store them. Hash the password and store that, using a suitably strong algorithm that's relatively chunky and expensive to compute en masse (most, if not all, modern options, such as scrypt, Argon2, and bcrypt, support a scaling work factor so that in the future you can increase the work needed as computing resources increase). Then you can compute a hash based on the password that's passed in and make sure that they match.
Some folks will then further encrypt the stored hashes such that a database compromise, but not an application-server compromise, leaves the attacker without the keys necessary to decrypt even the hashes, but I am ambivalent about the usefulness of that (can't hurt, but the threat model for that seems more geared towards internal threats than external).
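A minimal sketch of that hash-and-compare flow in Python, using the standard library's `hashlib.scrypt` (the cost parameters here are illustrative, not a tuning recommendation):

```python
import hashlib
import hmac
import os

# Illustrative scrypt cost parameters; tune for your hardware and latency budget
SCRYPT_PARAMS = dict(n=2**14, r=8, p=1)

def hash_password(password: str) -> bytes:
    """Return salt + digest; only this value is stored, never the password."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    """Recompute the hash from the candidate password and compare."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    # Constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(candidate, digest)
```

In practice you'd reach for a maintained library (e.g. `argon2-cffi` or `bcrypt`) rather than rolling this by hand, and store the work factors alongside each hash so they can be raised later without breaking old entries.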
>I don’t do anything security related — I’m a lowly bare metal programmer
Sorry to make an example of you, but this kind of attitude is the problem. Everyone does something security related. If something is giving input to the machine (typing on a keyboard, collecting data from a sensor, or anything else), you have to care about security, even if in your context security means sanitizing inputs to make sure you don't overflow and crash, or write something to the screen you're not supposed to.
Full disk encryption (FDE). You provide the password at boot and either you can or can't decrypt (typically the key itself is derived from the password). You can also do this without FDE by doing the same thing but keeping the password around in memory if you're trying to avoid prompting them.
Modern machines work slightly differently. The key material is stored in a TPM, a separate processor with dedicated memory that is purpose-built to withstand physical and electrical attacks. Apple devices specifically have a complicated key-wrapping scheme (protected by your passcode or password) to make certain files accessible/inaccessible depending on the policy defined (available after first unlock, available only when unlocked, available always, & a fourth one I forget). Your password is just used for protecting the underlying keys, but the device actually generates strong key material that's used to protect all on-disk contents regardless of a password being present, IIRC.
If you're talking about the password database for local login & whatnot, that was protected even without FDE by using PBKDF2 or similar to securely hash the password. That way you only store the hash, & leaking that file doesn't mean someone can reverse it back to get your password.
Multilevel encryption. It's like you keep valuable stuff in one room, a key for that room is kept in another room, that room not only needs a key, but also a 4-digit pin code, finally that key is kept in a safe that can be opened only with three other keys and so on.
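That multilevel scheme can be sketched in a few lines of Python. Everything here is a toy for illustration: the XOR step stands in for a real key wrap such as AES-KW, and the PBKDF2 iteration count is arbitrary. The point is that the password only protects the key-encryption key, so changing the password just re-wraps the data key without re-encrypting any data.

```python
import hashlib
import os
import secrets

def derive_kek(password: str, salt: bytes) -> bytes:
    # Key-encryption key derived from the user's password (iteration count illustrative)
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Toy stand-in for a real key-wrap algorithm such as AES-KW
    return bytes(x ^ y for x, y in zip(a, b))

# The data-encryption key actually protecting the disk is random, not the password
dek = secrets.token_bytes(32)
salt = os.urandom(16)

wrapped = xor_bytes(dek, derive_kek("old password", salt))

# Password change: unwrap with the old KEK, re-wrap with the new one; data untouched
recovered = xor_bytes(wrapped, derive_kek("old password", salt))
new_salt = os.urandom(16)
rewrapped = xor_bytes(recovered, derive_kek("new password", new_salt))
```

This is also why full-disk encryption can rotate your password instantly: only the small wrapped key changes, not terabytes of ciphertext.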
> I don't doubt that T-Mobile could have done more, but it's also frustrating to see this trope that spending more money on security is some type of silver bullet. It's not.
So true. A problem is that "spending money on security" is so nearly always a synonym for increasing the infosec budget under the CISO. Which is useful, yes, but only a partial solution. A bigger ROI would be to spend it on developers who are experts in security and building a culture that cares. But even in enterprise security companies (most of my career), product security is so often seen as a checklist that infosec will take care of, not a core engineering competency.
This makes no sense at all: you're implying that the bad guys somehow have a monopoly on innovation and effectiveness, when in reality there is just more upside for them to steal sensitive info than there is downside for companies to protect it. If T-Mobile's latest data breach led to them getting fined, say, $5 billion, I promise you it would be the last.
It would be the last for T-Mobile because it would end T-Mobile. But it wouldn't be the last breach ever.
I could give $5 billion to my FAANG right now and I bet we'd still be breached (hell, I'm pretty sure we already have that budget in my FAANG's security department). The US DoD already has a cyber security budget of $10 billion, and they still get breached.
You underestimate the amount that these companies care about security. Just because they get fined "only" a couple hundred million dollars doesn't mean they aren't scared shitless by being breached. I've sat in boardrooms with CEOs telling us they were willing to pay whatever it takes to increase their security (and they put their money where their mouth is, too). They still get breached.
Budget isn't everything. Does it help? Sure. Like any other security professional, I can recount plenty of tales of teams deprioritizing security in favor of something else. Would they have done differently if they were incentivized better by bigger potential fines? Maybe. Would they have actually been able to implement ironclad security even if they did prioritize it? In the cases I've seen, it's doubtful.
edit: and consider this. If you truly do think that money is everything, you should realize that you will never be able to throw more money at your security than a nation state attacker like China will be able to throw at breaching your security. In the competition of who can spend the most money, you've already lost.
Just to add to that: the hacker (technically, cracker) only has to be right once, while the security team has to be right 100% of the time, across 100% of the attack surface. A new attack surface can appear that wasn't even a thing a moment before. Also consider that a lot of the attack surface is software not even written by the company being attacked (Windows, routers, etc.).
It's like the 2000s-era adage: the terrorists only have to be right once.
> I've sat in boardrooms with CEOs telling us they were willing to pay whatever it takes to increase their security (and they put their money where their mouth is, too). They still get breached.
Money (often) flows freely but it's not enough. I worked at one place where the CISO was very aware that security needs to be designed into the product from the ground up. Later a new CISO came in who thought that security could be achieved merely by purchasing every security scanner on the market and sitting back to bask in perfect security. Needless to say, security was far worse under the latter one.