"A booming industry of cybersecurity consultants, software suppliers and incident-response teams have so far failed to turn the tide against hackers and identity thieves who fuel their businesses by tapping these deep reservoirs of stolen corporate data."
Sure, blame the consultants with their "booming industry". I'm sure T-Mobile spent adequate amounts of money on securing their data, hired all the best people, and it was all the security people's fault for not doing it properly.
I don't doubt that T-Mobile could have done more, but it's also frustrating to see this trope that spending more money on security is some type of silver bullet. It's not.
I've been in security for over a decade. I currently work at a FAANG with a nearly unlimited security budget. Previously I worked at another major tech company with a nearly unlimited security budget. Before that I was a consultant and consulted at companies with huge security budgets. All of them, including my FAANG, struggle to achieve anything better than security that can only be described as "patchwork".
The truth is that nobody actually knows how to do security. Software devs are awful at it (the number of FAANG engineers I know who don't even understand what encryption is, or think that hashing passwords is unimportant, would blow your mind), management is awful at prioritizing it or even knowing what to do in the first place, and every security professional in the industry is effectively just winging it based on what someone else in the industry promoted as "best practice" (which is probably outdated by now).
Sure, prolonged investment in security might help make things better, but that's not an overnight solution, and it might not be a solution at all given that the attackers are investing heavily in their methods, too. We have to do more than just act like increasing the security department's budget is going to fix all of our problems. I guarantee it won't.
> Software devs are awful at it (the number of FAANG engineers I know who don't even understand what encryption is, or think that hashing passwords is unimportant, would blow your mind)
But that's not because there aren't also lots of devs who understand security, it's because FAANG companies have purposely chosen to prioritize hiring based on leet code ability above hiring based on security knowledge.
edit: This is why software developers would benefit from a union or licensing process, because currently devs who don't understand security are artificially lowering developer salaries by externalizing risk onto users.
Eh, it's both. Other departments don't necessarily focus on security (and leetcode is certainly an idiotic way of hiring, IMO). But even in my department (where we explicitly don't use leetcode and do prioritize based on security expertise and offer a huge premium for it), we are significantly under our target headcount because finding devs (or any other role) that understand security is very, very difficult.
Could this be because so many companies don't focus enough on security? If so, there isn't enough collective experience out there, making it hard to find people who do have the knowledge and experience.
I believe this is the case. Engineers level up primarily based on experience, learning from their team, etc. Because security is:
a) Often not prioritized
b) Handled in the shadows by some other team
the engineers don't get exposed to it. Security hasn't gone through an 'operations' evolution where it melds with engineering, so these problems aren't getting better.
I think partly so, yes. I also think in general the security industry is very bad at increasing the level of collective experience, so it sort of just stagnates.
Other fields like web development, consulting, engineering, law, medicine, etc. all have very established career development pipelines, where you can join as a junior employee and learn on the job from those around you to become a better professional.
Security on the other hand lacks this. In the vast majority of organizations I've been in, security roles are something that you are expected to enter with an already established level of experience, and then you are dropped on a project by yourself with little mentorship or training. This makes it almost impossible to bring new people into the field.
At my company, we have a "security champions" program that is intended to allow software engineers to dedicate some of their time to security and help their team think through security challenges. But we really struggle with this program, because my company pretty much just hopes that the engineers signing up to be champions are already experienced in security. If they are not, we do not have processes in place to train them, even if they do want the training.
And what's worse, I even see resistance to making it easier for junior people to learn security. If you spend much time on r/cybersecurity, a common thing you will see is people insisting that security should not be an entry-level job, and that everyone should be required to spend 5-10 years as a sysadmin before you're even allowed to apply for a security role. I think that's ridiculous, and not only because being a sysadmin has a lot less overlap with the world of security than people like to think it does.
> finding devs (or any other role) that understand security is very, very difficult.
At what level? Are we talking like knowing the different ways to mitigate XSS and other basic OWASP top-10 style things, or having the ability to find the next Spectre or Meltdown?
We recruit primarily for mid-to-senior level roles (5-15 yrs experience), and it's the former. I get a lot of candidates that can recite what XSS is at a high level, but for example struggle to explain the things to watch out for that would indicate a possible XSS vulnerability.
One of the other issues I see is that we should be able to take the above-described candidate, who is maybe not exactly what we need but shows promise, and train/mentor them into the type of security professional that we need. But my company (and most others I've seen) is also just really bad at security training and career development. It's a real problem, IMO, that security is treated as an "experienced people only" industry, and is not very welcoming to people who aren't already experts but are willing and able to learn. We are trying to change this in my organization, but it's slow and challenging.
> I get a lot of candidates that can recite what XSS is at a high level, but for example struggle to explain the things to watch out for that would indicate a possible XSS vulnerability.
To be fair, from a dev's perspective you need to flip it around in your brain, in order to go from e.g. "you need to sanitize user input to make it safe for a JavaScript context" to "seeing unsanitized user input that could be getting injected into a script." Even if you know all the right answers, it's still probably not going to come out super eloquently. (And I realize there are other and better answers too, but just to choose one that's easy to explain.)
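To make that concrete, here's a minimal Python sketch of the same bug seen from both directions (the variable names are made up, and any real framework ships its own context-aware escaping helpers):

    import html

    user_comment = '<script>steal(document.cookie)</script>'

    # Dev's view: "sanitize for an HTML context" means escaping the metacharacters.
    safe = html.escape(user_comment)  # -> &lt;script&gt;steal(...)&lt;/script&gt;

    # Reviewer's view: the smell to hunt for is raw interpolation into markup.
    page_bad = "<p>" + user_comment + "</p>"  # vulnerable: input hits markup unescaped
    page_ok = "<p>" + safe + "</p>"           # escaped before it reaches the page

And note that escaping is context-dependent: what's safe in an HTML body is not necessarily safe inside a script tag or an attribute.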
Something needs to be done at a fundamental level, like establishing an easier qualification path for security professionals, before this problem can be fixed.
One easy way to fix it would be market economics: make the pay grade for senior security roles a lot higher than for comparable software engineering roles. Those incentives should balance things out in time.
Otherwise we are looking at a security-professional death spiral.
Nah. First, actually being good at leetcode and knowing about hashing and such are not in opposition. In an odd way, leetcode exercises lead into the math parts of it.
And second, non-leetcode devs are not some kind of security panacea. The worst are people who don't care at all; many have not even heard of the basics.
Third, if you actually decide that security is important and try to learn it, you will find that resources are rare. There is very little material targeted at developers. There is no shared knowledge base. There are no commonly known processes. Nothing like that.
So even if you care and try, you end up learning very little.
I don’t do anything security related — I’m a lowly bare metal programmer — but I’m still mystified as to how user passwords are securely kept on disk. The only thing I could think of was to encrypt a user’s password with their password…
Don't store them. Hash the password and store that, using a suitably strong algorithm that's relatively chunky and expensive to compute en masse (most, if not all, modern options, such as scrypt, Argon2, and bcrypt, support a scaling work factor so that in the future you can increase the work needed as computing resources increase). Then you can compute a hash based on the password that's passed in and make sure that they match.
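For anyone wanting to see the shape of it, here's a minimal sketch using scrypt from Python's standard library (the work-factor values are illustrative, not a recommendation):

    import hashlib, hmac, os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)                        # unique random salt per user
        digest = hashlib.scrypt(password.encode(), salt=salt,
                                n=2**14, r=8, p=1)   # work factors; raise over time
        return salt, digest                          # store these, never the password

    def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
        candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return hmac.compare_digest(candidate, stored)  # constant-time comparison

The per-user salt makes identical passwords hash differently, and compare_digest avoids leaking information through comparison timing.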
Some folks will then further encrypt the stored hashes such that a database compromise, but not an application-server compromise, leaves the attacker without the keys necessary to decrypt even the hashes, but I am ambivalent about the usefulness of that (can't hurt, but the threat model for that seems more geared towards internal threats than external).
>I don’t do anything security related — I’m a lowly bare metal programmer
Sorry to make an example of you, but this kind of attitude is the problem. Everyone does something security related. If something is giving input to the machine (that could be typing on a keyboard, collecting data from a sensor, or anything else), you have to care about security. Even if, in your context, security means sanitizing inputs to make sure you don't overflow and crash, or write something to the screen you're not supposed to, etc.
Full disk encryption (FDE). You provide the password at boot and either you can or can't decrypt (typically the key itself is derived from the password). You can also do this without FDE by doing the same thing but keeping the password around in memory if you're trying to avoid prompting them.
Modern machines work slightly differently. The key material is stored in a TPM which is a separate processor & dedicated memory that is purpose built to withstand physical and electrical attacks. Apple devices specifically have a complicated key wrapping scheme (protected by your pincode or password) to make certain files accessible/inaccessible depending on the policy defined (available after first unlock, available only when unlocked, available always, & a fourth one I forget). Your password is just used for protecting the underlying keys but the device actually generates strong key material that's used to protect all on-disk contents regardless of a password being present IIRC.
If you're talking about the password database for local login & whatnot, that was available without even having FDE by using PBKDF2 or similar to securely hash the password. That way you only store the hash & leaking that file doesn't mean that someone can reverse that back to get your password.
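As a sketch of that last part, PBKDF2 is in the Python standard library too (the password and iteration count here are illustrative; iterations should be tuned as high as latency allows):

    import hashlib, os

    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac('sha256', b'hunter2', salt, 600_000, dklen=32)
    # For login checks, store (salt, key) and recompute on each attempt.
    # For the FDE case, `key` (or a key it unwraps) is what encrypts the disk.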
Multilevel encryption. It's like you keep the valuable stuff in one room; the key for that room is kept in another room, which needs not only a key but also a 4-digit PIN code; and finally that key is kept in a safe that can be opened only with three other keys, and so on.
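A toy version of that chain, using Fernet from the third-party `cryptography` package (a deliberately simplified sketch; real key-wrapping schemes add policy, hardware binding, etc.):

    from cryptography.fernet import Fernet

    master_key = Fernet.generate_key()   # the safe at the top of the chain
    data_key = Fernet.generate_key()     # the key to the room with the valuables

    ciphertext = Fernet(data_key).encrypt(b'valuable stuff')
    wrapped = Fernet(master_key).encrypt(data_key)  # data key stored only wrapped

    # Reading requires unwrapping the data key with the master key first:
    plaintext = Fernet(Fernet(master_key).decrypt(wrapped)).decrypt(ciphertext)

One nice property: rotating the master key only means re-wrapping the (small) data key, not re-encrypting all the data.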
> I don't doubt that T-Mobile could have done more, but it's also frustrating to see this trope that spending more money on security is some type of silver bullet. It's not.
So true. A problem is that "spending money on security" is so nearly always a synonym for increasing the infosec budget under the CISO. Which is useful, yes, but only a partial solution. A bigger ROI would be to spend it on developers who are experts in security and building a culture that cares. But even in enterprise security companies (most of my career), product security is so often seen as a checklist that infosec will take care of, not a core engineering competency.
This makes no sense at all. You're implying that the bad guys somehow have a monopoly on innovation and effectiveness, when in reality there is just more upside for them to steal sensitive info than there is downside for companies to protect it. If T-Mobile's latest data breach led to them getting fined, say, $5 billion, I promise you it would be the last.
It would be the last for T-Mobile because it would end T-Mobile. But it wouldn't be the last breach ever.
I could give $5 billion to my FAANG right now and I bet we'd still be breached (hell, I'm pretty sure we already have that budget in my FAANG's security department). The US DoD already has a cyber security budget of $10 billion, and they still get breached.
You underestimate the amount that these companies care about security. Just because they get fined "only" a couple hundred million dollars doesn't mean they aren't scared shitless by being breached. I've sat in boardrooms with CEOs telling us they were willing to pay whatever it takes to increase their security (and they put their money where their mouth is, too). They still get breached.
Budget isn't everything. Does it help? Sure. Like any other security professional, I can recount plenty of tales of teams deprioritizing security in favor of something else. Would they have done differently if they were incentivized better by bigger potential fines? Maybe. Would they have actually been able to implement ironclad security even if they did prioritize it? In the cases I've seen, it's doubtful.
edit: and consider this. If you truly do think that money is everything, you should realize that you will never be able to throw more money at your security than a nation state attacker like China will be able to throw at breaching your security. In the competition of who can spend the most money, you've already lost.
Just to add to that: the hacker (technically cracker) only has to be right once, while the security team has to be right 100% of the time, across 100% of the attack surface. And at any given moment there could be a new attack surface that wasn't even a thing before. Also consider that a lot of the attack surface is software not even written by the company being attacked (Windows, routers, etc.).
It's like the 2000 era adage, the terrorists only have to be right once.
> I've sat in boardrooms with CEOs telling us they were willing to pay whatever it takes to increase their security (and they put their money where their mouth is, too). They still get breached.
Money flows (often) freely, but it's not enough. I worked at one place where the CISO was very aware that security needs to be designed into the product from the ground up. Later a new CISO came in who thought that security could be achieved merely by purchasing every security scanner on the market and sitting back to bask in perfect security. Needless to say, security was far worse under the latter one.
I'm sure it's both. As in, much of what they did spend likely went to snake oil salesmen. I've met lots of security consultants who did not have backgrounds in math or compsci.
> I've met lots of security consultants who did not have backgrounds in math or compsci.
My experience both working at and with higher end consultancies is that there is no correlation whatsoever between those degrees and any particular consultant’s competency. Some of the best people I’ve worked alongside have been college dropouts and Religion majors.
Likewise, I've never found any correlation between those degrees and security improvements delivered by consultants. Honestly, the best security consultants I know of are essentially con men (and women!) who have devoted their amateur psychological instincts to good. You can apply all the best tech, but without organizational change it won't last. On the flip side, if you bring organizational change to adopt security in depth as a value, then even substandard tech can serve the purpose. In that vein, the best security consultants (meaning someone hired temporarily for their expertise, not a long-term employee hired by renewable contract) are those who can imbue leadership with the vision of their organization as one that benefits financially from security as a cultural value. I'm not sure who did this for Apple, but they are a good example of a company that has benefited from a reputation earned by truly valuing security instead of merely trying to make sure everything is secure.
One of the biggest problems in the security industry is a misconception that security and computer science are the same. They aren't at all.
If you're doing low level design of crypto algorithms, you need to know math. If you're doing appsec reviews or pentests, then a background in software development might help (but is not required).
But there is an entire world of security roles out there that are essential to implementing security that have nothing to do with math or compsci. The security industry right now has a huge problem with gatekeeping, where they think you can't even begin to think about security unless you're already a top-tier principal engineer, and it's led to a huge drought of talent in security roles across the board.
And yet (correct me if I'm wrong), a good security person does not need to understand cryptography. They should have some basic understanding of how to apply it, but knowledge of its internals and the math behind it is pretty much useless.
Yeah, from the outside looking in, to me the biggest requirement is one of mindset: thinking like an attacker, thinking of all the possibilities… in that sense very much like the qualities of a good QA person.
True, crypto(graphy - wow, it's been so long since I've typed it that I've just realized "crypto" has now been bogarted for something else).
Theory vs. applied, but I think it's still true that the mindset of a hacker is very different, i.e. similar to the whole IT-vs-dev divide.
> I'm sure it's both. As in, much of what they did spend likely went to snake oil salesmen. I've met lots of security consultants who did not have backgrounds in math or compsci.
I'm going to bet that they did have qualified engineers, because I like to assume the best in people, but I also assume that those engineers may not have been able to make the changes they wanted to.
In my experience in big companies, corporate bureaucracy and a complete unwillingness to change processes or systems is usually a bigger hindrance to security than the skill level of consultants/engineers.
You can't easily "bolt on" security to a massive internal ecosystem of insecure projects that has built up over the years. If I had to guess, I would anticipate the software T-Mobile is running includes a lot of legacy that hasn't been fully maintained. If they don't spend the cash to retain the developers who built these projects or to keep them maintained, it means there's nobody around who really knows the codebase. And that means finding the little security edge cases is going to be nearly impossible, particularly for an external contractor with a few months.
Worse, the "upper management" will assume it was a talent / investment problem since "they sunk so much money into security". Oh that darn booming industry.
"To think we paid those security consultants so much money to protect our completely unencrypted and exposed database and we still got hacked.
And they had the nerve to suggest we replace this unencrypted database - which an old legacy system needs entirely open root access to - with something secure, for an eye-watering bill. We don't hire security consultants to replace our legacy systems; we pay them to stop unauthorised people accessing the big pile of data we leave in the open.
The gall of them - they even wanted us to change the interface between our two big legacy systems because it was just a CSV file with all our sensitive data on it. Wimps! Especially as we told them they could do anything to make our systems secure, as long as they didn't touch those legacy systems."
What do you consider a background in compsci? A few years in the industry?
Because my degree is in Management Information Systems (MIS), but I've done troubleshooting on both performance problems of the O(n^5) variety and problems of the "not covered in the requirements document" variety... Not sure what else I need to understand, say, memory bounds-checking problems or firewall/ACL configuration problems.
Management Information Systems. A "business oriented" computer degree. They were popular in the 80s as an alternative to comp. sci. They focus on how to use databases and spreadsheets, and other analytical and management systems. In those days, "decision support" software was a big thing. Is MIS still a thing anymore?
It's still a thing, or at least it was a few years ago. I worked with several recent MIS graduates at a consulting firm in the mid-late 2010s. But I'd never even heard of the degree before that point, I majored in math, minored in CS, and did dissertation work in a business school (admittedly, economics, so not particularly business-y).
It's surprisingly easy to get certified. I managed to pass the difficult-by-reputation CISSP exam without any deep knowledge of, or really interest in, information security. I just took the five-day crash course my company paid for and Bob's your uncle, I passed the CISSP.
Of course, I never actually got certified because I left the role immediately afterward and never bothered following up. Moreover, I didn't really meet the requirements, which included having some tenure as a security professional. But I'm sure I could have finagled it if I had any interest in working security (I absolutely did not).
Are there any certifications that require you to solve a CTF or otherwise demonstrate understanding of the field? (Just spitballing, but maybe an oral defence of strategy against a board of DEF CON panelists? Etc.)
Braindump-able IT certs benefit no-one, and expecting people to have MSc degrees in infosec is elitist and very impractical.
Offensive Security certs (e.g. OSCP) are similar to what you're describing. The PNPT is similar too but also emulates a real-world engagement on top of just needing to root boxes.
"A booming industry of cybersecurity consultants, software suppliers and incident-response teams have so far failed to turn the tide against hackers and identity thieves who fuel their businesses by tapping these deep reservoirs of stolen corporate data."
Exactly. Heaven forbid we blame the corporations whose lax security led to the stolen data in the first place. That would make advertisers unhappy.
I had to manually change the URLs on their site to opt out of some data sharing a couple months ago.
Something like that getting shipped to prod... yeah, you have the D team building tech at T-Mobile. So we should collectively be shocked if their codebase isn't a leaky sieve.
Everyone's security is awful, as the penalty for failure is less than the expense required to make things secure. Until the former exceeds the latter, insecurity will rule.
Everyone always talks about making penalties more severe for data leaks. I have to wonder what the consequences of that would be. Bankrupting your competitor might become as easy as paying a few bitcoins to a foreign mercenary.
I think better security and encryption protocols need to be developed that mitigate the severity of a single leak. Without more compartmentalization of data and more control put into the hands of users, leaks of these massive, un-encrypted databases appear inevitable.
> Bankrupting your competitor might become as easy as paying a few bitcoins to a foreign mercenary.
This would result in insurance policies to guarantee against that outcome. Those policies in turn would introduce both costs and practices across industries that would improve the security of all the insured (and indirectly, their customers).
Unlike hiring a Rainmaker to look nice for the C-suite, imposing these costs would make sure that there's effective mitigations. Just like safety matters for your car, it would start to matter for your software.
> This would result in insurance policies to guarantee against that outcome. Those policies in turn would introduce both costs and practices
Equifax had over $100mm of cybersecurity insurance coverage [1]. The breach cost them over $2bn, including fines. This isn't purely a motivation problem.
Does the basic security scanning the hacker was doing cost hundreds of millions for big companies? Because that's the fine some big companies are getting:
Equifax agreed to pay 600 million, but still saw profits up 20% for the year... Sure, they could have made 600 million MORE in profit, but that's still just 15% of their profits for the year... sure, they'll spend a few million in the area they need to shore up one time and wait for the next incident... It's just good for business... Invest enough to keep these incidents down to one every 5 years, pay the fine, repeat.
Scanning is pretty inexpensive. Maintaining a complex system that passes the scans? That's something different altogether.
If I take a clunker to a mechanic, how much will it cost me to hear everything that needs fixing? About $150. But actually performing the fixes? One order of magnitude greater - and that's if I'm very, very lucky!
Been a T-Mobile customer for ages. SIM swaps are too easy. 2-factor is a joke. This is like the 3rd time my data has been lifted. But I stay with them. Why? Because I have 3 free lines, unlimited everything, for $32 a month. They have crazy phone trade-in deals from time to time, and T-Mobile Tuesdays usually nets me 15c off per gallon at Shell. Am I happy that they keep getting hacked? Absolutely not, but I'm happy pretty much any phone with a SIM card works, my bill is low, and I have 5G pretty much everywhere.
So what you are saying is that the overall cost of doing business with T-Mobile (both monetary and your personal data being public) justifies the convenience?
It's not so much the convenience as the problems other carriers bring. T-Mobile has no security, and lousy coverage, and is technically incompetent, but they're fairly honest and customer-friendly.
I could tell a horror story from Verizon about a multi-thousand-dollar roaming bill from someone I knew, from Google about being completely locked out of a phone number forever from another person, and lots of others.
Pick your liability.
On the whole, I found the risk of data theft from T-Mobile to be the lesser of the evils.
T-Mobile actually has good coverage now. Since the Sprint merger they have started rolling out 5G 600MHz on the old Nextel network. I'm in a rural area, and as of last year I can get 90Mbps down with T-Mobile. Verizon and AT&T are like 5, and get swamped on weekends.
I'm surprised myself to say this - Xfinity Mobile had the best security of all the mobile phone providers. SIM swap is via a generated code that you have to log in to generate. Customer service verifies a ton of information before they look into your account. It's also generally cheaper than T-Mobile - but some features are sorely missing/lacking - they're still new.
All that said, Comcast Internet's business practices are outright awful - they lied to me multiple times about my plan and discounts. And you need Comcast to use Xfinity Mobile - it's very expensive otherwise.
T-Mobile is technically different companies in Europe and the US, although T-Mobile in Europe owns 43% of T-Mobile US, and interestingly, the German government owns 32% of T-Mobile Europe.
T-Mobile is also widely considered to be the worst of the 3 US mobile networks (Verizon is considered to have the best coverage, then AT&T, then T-Mobile). Their pricing reflects that too, as Verizon is the most expensive, then AT&T, then T-Mobile.
For that price (approx. USD 120) I get 4 lines, so my whole family. I'm not happy about the data breach, but I'm very happy with T-Mobile otherwise, and I seem to have a better deal than my friends on AT&T and others here in the USA.
not quite as good a deal, but you can get $15/month with mint mobile, which sits on top of t-mobile's network. supposedly low priority but i've never had a problem in the past twelve months: http://fbuy.me/siAKU
You forgot the 2015 breach where T-Mobile customers' SSNs were stolen. This was the one that T-Mobile blamed on Experian, while Experian said they were only holding the customer SSNs at T-Mobile's request. See:
Similar for me, I have 4 lines (2 I’m using) unlimited everything, no data caps for $100 a month. I looked at other options and there’s nothing close that compares.
Grandfathered "Simple Choice" plan with 10 lines for $160. I have upgraded to 5G phones with no problems. Not unlimited, but I never use up the data anyway.
I really hope TMO takes security seriously going forward.
The billing is so consistent I just get a check every year from each person. I pay for my parents' lines. And my sis/BIL pay for theirs in one check. I round up a few $ for admin fees.
A completely fantastic deal for everyone. Would not have been possible with Verizon or AT&T, as their bills had so many gotchas and varied every month.
My wife's immediate family is 9 adults, 6 of whom are all on the same cell plan because it's cheap and convenient for everyone involved. If everyone gets along, there's not a whole lot of downside here.
> If everyone gets along, there's not a whole lot of downside here.
The biggest security risk with being on someone else's mobile network account in the US is that someone else has control of your phone number.
These days, access to your phone number basically constitutes verification and authorization from you for many things, including transfers of money.
I control the phone lines for myself, my wife, my mom, one of my cousins, and my sister. But I would not give someone other than my wife control of mine or my wife's phone number, no matter how much I trust them.
What I don't understand is why the hacker (whose full name is used in the article - alias?) is being public about this. Shit security or not, they made a clear-cut black-hat move purely for money. Or I suppose the other factor is fame/infamy. Pretty sure there are at least a few pissed-off hackers among those 50M people who would want to track this person down digitally and pull something as retaliation.
I was about to comment this. He says he went public to raise awareness about allegedly being illegally detained in a "fake mental hospital". Obviously anything is possible, but that sounds a lot like he could've been legally detained and doesn't really understand the law, i.e. he could've been a danger to himself or others.
His other bombastic comments to the press and relatives also make him sound insecure and immature. Obviously he had to be somewhat adept and dedicated to gain access, but he didn't discover any incredible exploit here. He also takes credit for discovering a well-known zero-day but admits he had nothing to do with the code for the exploit. To me that supports the idea that he hangs out in black-hat circles because he wants to be one of the 'cool kids', put in the time, and got lucky. I imagine the press love that because a lot of the public doesn't really know the difference.
"John Binns, a 21-year-old American who moved to Turkey a few years ago"
I'm assuming it is the Turkey thing; he's probably counting on that to be a significant barrier. Yes, they have extradition, but I've also heard that Turkish authorities are quite amenable to bribes as well.
> Yes they have extradition but I've also heard that Turkish authorities are quite amenable to bribes as well
Turkey is also a NATO member. If T-Mobile can get the U.S. government to plead their case, that could generate serious impetus for action from the top.
Also, if you're in a country whose officials take bribes, advertising that you're (a) vulnerable and (b) potentially in possession of cash is dangerous.
> I've also heard that Turkish authorities are quite amenable to bribes as well.
If the dude is banking on this, the major issue is that the Turkish authorities may indeed be quite amenable to bribes from anyone. Subsequently, my wager is that both T-Mobile and many among the 50M whose details were stolen have far deeper pockets than hacker exhibit A.
In other words, the dude must be absolutely certain that government corruption can only go well for him. In the US, he'd "only" go to prison if the system wants to make an example out of him. In a place where anyone can be bribed to do anything, the sky is the limit.
Bribes often get more bang for your buck from the bottom up. A big bribe from T-mobile to the Turkish government can be less effective than a small bribe to the two field agents who are sent out to scoop him up.
What would Tmobile or those whose details have already been leaked have to gain from going after the guy now? The information is already out there, is it not?
Well, the first thing that springs to mind is to make an example out of him to discourage others trying to pull similar stunts.
Second, if the dude manages to make any appreciable sum out of selling the data, the corrupt officials may come by with the proverbial $5 wrench and encourage him to share the spoils in exchange for continued protection and avoidance of wrench-induced bruising.
It could even simply be a matter of harassment by the Turkish government because the existing bribes are suddenly, say, insufficient. Under penalty of prison, of course.
Unless he gets a cushy gov't cybersec job from all this, which is another angle I have just considered.
I don't know if this is still the case, but in the past when I changed a line to a different SIM on T-Mobile an alert about the SIM change was texted to me, on the new SIM. :/
It didn't inspire much confidence in me regarding their security practices.
Once there was a billing snafu and they cut off my line, with no notice given. I was freaking out a bit cause I thought it might be a SIM swap attack or something. After figuring it out and getting reconnected, I realized that actually they had told me about it. Via a text message. To the phone number they disconnected. After they disconnected it.
Data is a liability. It's hoarded because it's also a gold mine, and the risk to those hoarding it is minimal even if it's stolen.
The risk for hoarding data needs to be made comparable to the harm that theft would cause to the individuals affected by it; or hoarding data needs to be strictly regulated.
It would be nice to be able to open these accounts without providing PII, so that it would be harder to attack specific users, and breaches would not be so damaging to customers.
This US trend of requiring government-issued ID for even routine transactions (like phone service) that aren’t ID-related is insane and dangerous.
Anyone know of a provider that doesn't mandate storing this info? I understand they want to know your credit to open an account, so they need your PII to pull a credit report, but does any cell provider not store it after that? I tried to get T-Mobile to delete mine and they won't, so I'm open to switching to any post-paid service that does.
If you want privacy, you need to be more serious than that. Mint Mobile prepaid (no personal info) on a device you bought outright in cash. Obviously, no one should know that phone number; you should do all interactions through your publicly known VOIP number that forwards to it. That phone shouldn't be turned on any time you're near home; that should be done with a separate home iPad or the like. And no traffic should ever happen outside of a VPN...
It seems almost like a combinatorial impossibility that any software could be secure nowadays. Given that security basically depends on the interactions between various components being sound, and the fact that most systems nowadays are built from countless layers each with their own complex and confusing API, how could any software be even remotely secure?
Does anyone have a good solution for sites that only support SMS 2FA? I'm mostly using Google Voice for 2FA right now, but I'm iffy on tying access to my entire life to a Google account.
Ideally I'd like a dirt-cheap, just-for-2FA phone number from a provider that's got decent security (specifically regarding SIM swapping).
I have a voip.ms number left over from another project, and it does support SMS although I've had fairly crappy luck getting it to work for services that need it for 2FA.
Seems like there's no good option, as VOIP numbers are fairly secure but ironically not allowed by many companies for SMS.
Payment too: to add $$$ you go to an external no-name site from the nineties. Until a month ago your balance on the main site would not update until you logged out and in again.
If you received the text about the account hack as I did, it had this at the end of the message:
“Learn more about practices that keep your account secure and general recommendations for protecting yourself: “
When I was on a support call with them, they claimed that my account was fine and I had nothing to worry about. And then the last thing they tell you is to improve your security.
I blame HR. I used to work in that industry; it was openly communicated that key roles were not appropriately staffed. The comment was: "we could hire the right people, but we are not allowed to pay them the money they are worth".
Blame the victim. The reality is, it's all about incentives. He is going to make a few million. If their security were great they'd still have gotten hacked. Everyone who knows anything about computers knows that where there's a will, there's a way. You cannot stop a determined hacker. Full stop. The problem is there are great incentives and not enough deterrents. Bitcoin. This will only get worse until the public decides it's had enough. I'm already there. It's an education problem now.
Maybe not, but you can reduce the list of potential attackers from relatively average Joes to more experienced, specialised and well-funded actors (such as the NSA - who would probably just issue a warrant anyway) with better security practices. It isn't ideal - someone might still access your data without your consent - but it is realistic and achievable.
> The problem is that there are great incentives and not enough deterrents.
Again, true, but that doesn't mean that the public should just live with this. It's not unreasonable to ask a company to take the security of its customers seriously and take steps to ensure that their data is secure from an attacker. There are other things that can be done: harsher penalties for companies who don't take issues like this seriously, setting out (and enforcing!) standards for security, incentivising security research, and so on. Are these suggestions achievable? Probably. Are they going to be achieved? Probably not. Are there better ideas for solving this problem? Definitely, but I'm not smart enough to think of them. But just giving up and labelling this as an "education problem" is defeatist and doesn't help.
This is a really odd take to me. No, you may not get every single thing right, and if someone is really determined then they might get through. But does that mean you shouldn't put the effort in to make that as difficult as possible? Of course not.
What you're saying is essentially the equivalent of saying "If someone had a bulldozer they could smash through the wall into my house anyway, so I'm going to stop locking my doors and closing my windows when I'm out"
I’d rather live in a world where people don’t lock their doors. That world existed not long ago. What you see as normal and acceptable differs from my ideal.
The question I pose to you: how much is enough in a world of constantly escalating threat? I’d submit that locking doors is a reaction to a somewhat static threat. Digital crime is accelerating due to changes in the feasibility and incentives for the criminal: more exploits, and digital currencies that make it easy and low-risk. The more crime, the more value and “adoption” of the digital currencies, and the more motivation and less risk for the criminal. Not at all like locking your door. It’s a treadmill that will ultimately destroy the fabric of society.
We once lived in a world without door locks. We can choose our future. That’s the opposite of fatalism.