I was a victim of this last October and November on a T-Mobile number. This is what occurred:
- My Gmail account was compromised
- My Amazon account was compromised
In Gmail, they added a filter to hide any shipping or customer service messages from Amazon.
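For anyone who wants to audit their own account for this filter trick, here's a minimal sketch. The dict shape mirrors the Gmail API's "filter" resource (criteria/action); in a real audit you'd fetch the list with an authorized client via `users().settings().filters().list()`. The sample data below is made up for illustration:

```python
# Sketch: flag Gmail filters that silently hide mail from senders you care
# about (e.g. amazon.com shipping/customer-service mail, as in the attack
# described above). Filter dicts follow the Gmail API resource shape.

def suspicious_filters(filters, watched_senders=("amazon.com",)):
    """Return ids of filters that hide mail from watched senders."""
    hiding_labels = {"TRASH", "SPAM"}
    flagged = []
    for f in filters:
        sender = f.get("criteria", {}).get("from", "")
        action = f.get("action", {})
        hides = (
            "INBOX" in action.get("removeLabelIds", [])
            or bool(hiding_labels & set(action.get("addLabelIds", [])))
        )
        if hides and any(s in sender for s in watched_senders):
            flagged.append(f["id"])
    return flagged

# One benign filter, one that trashes Amazon shipping mail.
sample = [
    {"id": "f1", "criteria": {"from": "newsletter@example.com"},
     "action": {"addLabelIds": ["CATEGORY_PROMOTIONS"]}},
    {"id": "f2", "criteria": {"from": "shipment-tracking@amazon.com"},
     "action": {"addLabelIds": ["TRASH"], "removeLabelIds": ["INBOX"]}},
]
print(suspicious_filters(sample))  # ['f2']
```

A periodic check like this would have caught the hidden-orders trick well before the credit card statement did.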
In Amazon, every other day, they placed an order for a ~500 USD GoPro device, delivered to an address in NYC. This address changed with every order.
The passwords to both accounts were left unchanged.
I caught on to the above once I received my credit card statement. Then, in November:
- They attempted to purchase something with my credit card. Security mechanisms triggered, and a verification code was sent to my phone at 4 AM. They successfully validated and placed the order. My credit card company assures me they entered the correct verification code.
- They applied for an Amazon credit card using my identity. It was auto-approved, and they used the card to purchase ~$5k worth of items.
I moved everything off of that T-Mobile number, and switched over to GoogleFi (only to learn GoogleFi uses T-Mobile also... still better than T-Mobile directly I'm hoping).
Edit:
I also wiped my phone, eventually thought that wasn't far enough, and switched to a new device entirely. I'm still unsure how the above occurred, because some of it feels beyond the scope of a SIM-swap.
As an InfoSec professional, what you describe sounds more like a device-level compromise of your iPhone, perhaps through a malicious app or a link you clicked.
What you experienced can't be done with just a SIM-swap attack, as you would have lost access to your phone number. And it can't be done with the described T-Mobile hack: that would have given the hackers silent access to your texts, so they could have reset your Gmail password, but then you would have noticed the password change (and you say it didn't change).
Read up on credential stuffing; this is increasingly common. With all the recent breaches, there are groups that use old passwords to quickly identify MFA-locked accounts behind re-used passwords. These lists are then sold to people who will, one at a time, pay about $10k for a SIM swap on individually targeted users.
There are lists floating around with tens of thousands, or hundreds of thousands, of users with known passwords for Google, Amazon, PayPal, Coinbase, etc.
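You can check whether your own passwords appear in those breach corpora without ever revealing them, using the k-anonymity scheme of the Have I Been Pwned "Pwned Passwords" range API: only the first 5 hex characters of the SHA-1 leave your machine, and you match the remainder locally. A minimal sketch of the hashing side (the network call itself is left as a comment):

```python
# Sketch: derive the HIBP range-query prefix and the suffix you would
# match locally against the API response. The full hash never leaves
# your machine.
import hashlib

def hibp_range_parts(password):
    """Return (5-char query prefix, 35-char locally-matched suffix)."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_parts("password")
print(prefix)  # 5BAA6
# You would then GET https://api.pwnedpasswords.com/range/5BAA6 and search
# the response lines for `suffix` to get the breach count for "password".
```

If the suffix shows up in the response, that password is in the stuffing lists and should be considered burned.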
The parent comment has a point though - after a SIM-swap attack, all SMS messages would stop arriving at the victim's device - unless the attackers swap it back again after every 2FA code. If they had access to a dashboard at T-Mobile this might be possible, but it sounds like a lot of effort to steal a few thousand dollars.
The victim (ctvo) claims in another comment down the thread that he had a unique Gmail password not re-used on any other service. So I wrote my comment assuming this is true. But, indeed, if his Gmail password was weak and guessable, then the T-Mobile hack would have allowed the hackers to validate MFA and log in.
That assumes their password wasn't already compromised. If they re-used their Gmail password somewhere else, the attacker could have already had their Gmail password and only needed the SIM-swap to verify the login.
The victim (ctvo) claims in another comment down the thread that he had a unique Gmail password not re-used on any other service. I commented assuming this premise is true.
No need to be aggro about it. No one’s suggesting iPhones are unhackable, just that realistically utilizing an iOS or Android exploit like that is a bit much for something simple like credit card fraud. Those exploits are valuable.
There must be something especially lucrative about GoPros as stolen devices.
I’ve heard multiple independent stories from a few friends in Law Enforcement about cases involving trafficking of large quantities of stolen GoPros (obtained via methods not unlike what happened to you).
Interesting you mention NYC - at least one of these cases involved a very high volume fencing syndicate operating as a legitimate storefront in NYC - with merchandise fraudulently obtained from Amazon[0]. A friend of mine worked this case.
Small, fairly high value, high demand, and no remote shutdown/disable/reporting - somewhat of a perfect storm I suppose.
If you go on eBay, you can find tons of shady GoPro listings.
They'll have all the original packaging and put it up as a "pre-owned" unit, but then you open the listing and they have 20 of them for sale.
We also had a local hotel/waterpark that was running a burglary/fencing operation in the mid-aughts. The room cleaners would look for GoPros, iPhones, and other electronics. If they found anything, they'd take it and hand it over to the two managers, who then fenced it out to local guys who'd either pawn the items or sell them on Craigslist, and they'd split the money.
Customers routinely complained to the managers (who were the fences), so they'd tell the customer they'd fill out a complaint form and send in a police report to the local PD. Obviously, that never happened. This went on for about two years, until people on social media and review sites like Yelp started piecing together that what was happening was not an accident. They finally busted the ring, and within a few weeks the waterpark and hotel were shut down for various other repeated OSHA and other infractions that had gone unfixed. The local paper reported that even though they busted the ring, the money and the goods were long gone, leaving the victims with little recourse.
To be honest, there is basically no way to distinguish shipping messages from customer service messages from Amazon. If you order any appreciable number of items, you would have no idea they sent you any message at all.
Of course I only found this out after being burned by it. Turns out they’d sent me a message telling me the item I returned was not in the same condition it was sent in (it was), but the message was utterly lost in the flood of ‘order received/sent/delivered’ mails they send (with the same subject).
I think Gmail shows a big yellow notice in your account for 7 days after a new forwarding e-mail address is added. That of course falls apart when you use an external client, but there's that.
I think years ago I found my number there with no recollection of ever agreeing to it, and quickly yeeted it. (You can remove the number but keep a recovery email.)
It's subtle in the UI, but you can choose not to allow SMS by removing your phone number from Google after setting up alternative 2FA. If they don't have a number, they can't SIM-jack you.
This is one of the most important pieces of security advice that is often overlooked: remove your phone number from EVERYTHING.
You can also enable Advanced Protection[1] for your Google account, but other repeat offenders like Github will continue to allow SMS fallback to bypass 2FA if you have a phone number listed anywhere.
Big benefit of Advanced Protection: you can go tell less technical users to set it up and it will enforce all these best practices (no SMS, two keys, no giving random apps access to GMail...).
The prohibition against using a VoIP number for banking purposes is stupid. They already have the full battery of KYC info on me: if I want to use a VoIP number for 2FA (because they are so behind the times they don't support FIDO or even TOTP), then unless the law says they cannot, they need to allow it.
And while on the topic of banks, most will suspend access to your online portal if you log in with a VPN. Give me a bank that allows VoIP phone numbers, VPN access, and TOTP and/or FIDO support for 2FA and I'll ditch Schwab right now.
They both use Symantec VIP but it’s fairly easy (for developers at least) to export those tokens and import them into something like Authy, Google Authenticator etc.
My bank used to allow email 2fa or SMS, but they recently dropped support for email. I don’t love using email for 2fa but since my email is itself protected with non-SMS 2fa I thought it was the best of the two bad options. Now I’m sad. Ideally my bank would support the FIDO standard and I would use a compatible hardware token.
In the case of Schwab it's only with their app. If I wanted a geolocation leash up my ass I wouldn't be complaining about this: if they don't trust me as a customer then screw them, I'll find a bank who does. Schwab's notion of non-SMS 2FA is their app. I want to use my laptop on a VPN using a FIDO key or TOTP and Schwab doesn't support this.
Do you happen to know if they allow you to also totally disable SMS 2FA?
I know that Vanguard, for instance, supports non-SMS 2FA but doesn't let you disable SMS as a fallback (and I'd rather not just totally remove all phone numbers, but maybe I have to...).
I had their non-SMS Symantec 2FA set up a couple years back, but turned it off because I couldn't figure out how to disable the SMS fallback. Every time I got a new device and wanted to set up the Symantec TOTP generator, they would just send me an SMS for validation. So I just told them to turn off the Symantec part.
Maybe they've changed their policy since then. But when you call to get set up on a new device, how do they verify your identity now if you don't have SMS fallback?
I have Ally and Chime and I’m extremely disappointed neither accepts a Yubikey or something. Older banks like Schwab or US Bank I could see being behind the times, but I’d expect fintech or something more modern to be more sensible.
I used to as well, but lots of places have stopped accepting VoIP numbers now. A bunch of them actually just silently fail to send messages, so you can be clicking SMS password reset and get nothing in your texts.
Ally forced me recently to get rid of the Google Voice (GV) number and email for two-factor and use a 'real' mobile number. It is pretty awful how they offer no other two-factor mechanism except a non-GV mobile number.
Something doesn't add up here because last I checked Amazon made you put the credit card number in again if you want to ship to a new address. Just breaking in to your amazon account wouldn't be sufficient to ship stuff to random addresses using your credit card.
Interesting... I had something similar happen to me, with minimal outward, acute damage (e.g., running up bills on random credit cards). It is reasonable to assume my entire identity is compromised. Sorry this happened.
How do you know T-Mobile was the entry point, and not say, Google (e.g., Google Chrome, Google Ads)?
What type of phone did you have (e.g., Android or iPhone)?
What is your browser and Search Engine on your smartphone?
SMS is unencrypted, and Google SE has been compromised for much if not all of 2022. From what I can tell the issue persists. I officially reported it in December, and again in January, and again in February. Pretty wild, TBH. Think about the number of services that have Google SE and Ads integration. Makes me nauseous.
Did you happen to report to Apple and Google (for documentation)?
Ways which I shared with Google, because it's a very serious privacy and security vulnerability.
We need more robust security integration to catch things before they are pushed to results. I understand latency will increase, and some ads revenue will decrease. But like, isn't it also cool to have a customer base that is better protected against egregious attacks, attacks that could be prevented? IMO, yes. It's called "stewardship."
Probably SMS as a 2FA option on Gmail, which is the real problem. Once you add your Yubikey and set up TOTP as a backup, you need to go back and delete SMS as a 2FA option. Had Gmail been configured correctly, the SIM swap would have been far less serious.
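For what it's worth, the TOTP codes these authenticator apps generate are nothing exotic - RFC 6238 layered on RFC 4226 HOTP - which is why they don't depend on your phone number at all. A minimal stdlib-only sketch (function names are mine):

```python
# Sketch of RFC 4226 HOTP and RFC 6238 TOTP with the standard library.
import hmac, hashlib, struct, time

def hotp(secret, counter, digits=6):
    """HMAC-SHA1 over the 8-byte counter, dynamically truncated (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, at=None, step=30):
    """HOTP with the counter derived from wall-clock time (RFC 6238)."""
    t = time.time() if at is None else at
    return hotp(secret, int(t // step))

# RFC test vector: ASCII secret, t=59s -> counter 1 -> code 287082.
print(totp(b"12345678901234567890", at=59))  # 287082
```

Nothing in there ever touches the carrier network, which is exactly the point: a SIM swap gets the attacker your SMS codes, but not your TOTP seed.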
I used to work tech support for cell phone providers, and while we were trained about fraud, the nature of the industry (low wages, high turnover) makes this a security flaw that financial institutions should not risk.
How is SMS a security risk? As far as I know, SMS is closely tied to a person's identity, especially under 'know your customer' regulations. I'm curious how it's a security risk; as far as I know numbers have to be unique, which is good.
Wait. Isn't it painfully obvious when you've been simjacked? If your phone suddenly loses signal and refuses to register with the network, you know something is up. You may think it was a malfunction of your phone or your network, but it's pretty much a definition of a modern-day "drop everything you're doing and deal with it" emergency. You can't not be aware of it, or be unsure if it happened to you.
You very much can be unaware of it. Consider what happens when you're SIM-swapped at 2 AM. Are you going to notice? Probably not. And maybe not even after you get up and check your phone, because your phone may be connected to the internet via your home wifi; you might not notice it has no bars and no service because you're still able to browse the web and check email.
But if the attacker already has your info, couldn't they just add another line to your mobile plan, so your handset keeps working, just with a new phone number unbeknownst to you? That way it wouldn't be noticeable on the handset.
The real question is: how long do you think it would take you to break into your own Gmail account after the password's been changed and the attached phone number's also been changed?
Probably longer than it would take an attacker to drain bank accounts, I figure.
By the time you notice and can react it's too late. There have also been many prominent examples of people who got their cryptocurrency exchange accounts broken into with SIM hijacking which was conducted while the victim was asleep.
Do you live in the US? You don't need an ID to get a phone number here so SMS is not necessarily tied to your identity and it has nothing to do with KYC.
Moreover, you don't want it to be tied to your identity. The fact that anyone can pretend to be you and hijack your phone number is exactly what makes it insecure.
Anyone can walk into a T-Mobile store with a fake driving license with your name on it and claim they need help moving their phone number to their new phone. This is of course your number. They will then receive all of your SMS messages.
Or, you know, they can just bribe the store employees. Has happened before, still happens, will keep happening as long as a phone number is considered important for anything at all.
Remove it from both. However, make sure that you have quite a lot of backups of your 2FA backup keys, and maybe even one offline backup of your seed; if you lose them, the account is gone (which is a good thing, I guess).
Thanks. I have Google backup codes as well as multiple Authy installations, Google prompt and a recovery email address so I guess I should be covered :)
2FA on everything. No password reused. Only similarity is both had the T-Mobile number attached to them.
I initially thought only Amazon was compromised. I thought it was due to us throwing away a FireTV device (assumption: we didn't log out and de-register) that was then used to order items.
And then I found they added filters to my Gmail account to hide the Amazon orders, and went into full panic mode.
Interesting. Was your 2FA set up to use Google Authenticator or regular SMS? It's been a while since I used Google services, but from what I recall from a previous company where we used Gmail, the only way to do 2FA with Google Authenticator if you lost access to the phone was with a backup code you were given at 2FA setup time. Is that no longer the case?
2FA with authenticator. As someone correctly points out, Google appears to keep SMS as a recovery option unless you specifically opt out?
Edit: I can't actually find a help article, but it's under "Try another way to sign-in" and they'll text you a verification code to your registered account phone number.
SMS is just used to sign in. Everything is encrypted, and you can't access any data without a password. If you don't have the password, you don't get the data. There is no recovery.
Another, different failure point. I once broke my Android phone and bought and set up a new one - only to find I could no longer access the Gmail account I had used before with Google Authenticator, so I am locked out of that account forever. I had a backup but was not able to find it. Despite knowing hundreds of contact emails (all backed up in Thunderbird), account history, password history, etc., I have not been able to get back in for years.
So you didn't have one, lol. I understand that's an extremely frustrating situation, though. Part of making backups is testing them once in a while (at least making sure they exist). Something else you could've done previously was to use Authy or Aegis, which help you back up the seeds themselves, encrypted under a passphrase, so you can recover the accounts even if you lose everything else. Although of course, all of this depends on your threat model: if you don't care about SIM swaps, or if losing the account is still much more worrying, then I guess it's just an unnecessary hassle/risk.
With Authy I can enter a backup password and download everything to a new phone. I suppose that's a different failure point but still possibly worth the trade-off? Yubikey is the next level up.
So if you didn't have a credit card, nothing would have happened? Why do people still use credit cards if they are so fucking leaky and easy to exploit?
Google Fi just uses their towers; your telecom account data isn't shared with T-Mobile, just the traffic of whatever you're using (calls, Netflix, browsing porn).
I don't know if it's controversial, but I think for most people, keeping up with your current card statement isn't something you do daily. Sometimes companies have a way to notify you of new charges immediately, sometimes not. Being surprised at the end of the month is more common than you'd think.
I for one don't really look at any banking stuff these days. I just live well within my means. If you have a generally healthy financial situation there is no need to constantly check.
It will be easier to go after high income middle class types than HVTs, who will likely have someone watching things closer than busy working folk.
If you hit a target with multiple low-value charges, you face less scrutiny than with large transactions. Fraud detection should pick up multiple purchases of the same product shipped to different addresses, though.
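The rule suggested above is cheap to express. A hedged sketch (field names and thresholds are made up for illustration): flag any SKU shipped to more than a couple of distinct addresses in a short window, even when each order is individually low-value.

```python
# Sketch of a simple fraud heuristic: repeated orders of one item
# shipped to many different addresses within a time window.
from collections import defaultdict

def flag_repeat_item_fraud(orders, max_addresses=2, window_days=14):
    """Return SKUs shipped to more than max_addresses distinct addresses
    within window_days of the most recent order for that SKU."""
    by_sku = defaultdict(list)
    for o in orders:
        by_sku[o["sku"]].append(o)
    flagged = []
    for sku, batch in by_sku.items():
        batch.sort(key=lambda o: o["day"])
        recent_addrs = {o["address"] for o in batch
                        if batch[-1]["day"] - o["day"] <= window_days}
        if len(recent_addrs) > max_addresses:
            flagged.append(sku)
    return flagged

orders = [
    {"sku": "gopro-hero", "day": 1, "address": "A"},
    {"sku": "gopro-hero", "day": 3, "address": "B"},
    {"sku": "gopro-hero", "day": 5, "address": "C"},
    {"sku": "usb-cable",  "day": 2, "address": "A"},
]
print(flag_repeat_item_fraud(orders))  # ['gopro-hero']
```

A GoPro every other day to a fresh NYC address, as in the story at the top of the thread, trips this on the third order.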
> T-Mobile declined to answer questions about what it may be doing to beef up employee authentication. But Nicholas Weaver, a researcher and lecturer at University of California, Berkeley’s International Computer Science Institute, said T-Mobile and all the major wireless providers should be requiring employees to use physical security keys for that second factor when logging into company resources.
> “These breaches should not happen,” Weaver said. “Because T-Mobile should have long ago issued all employees security keys and switched to security keys for the second factor. And because security keys provably block this style of attack.”
At what point do we consider industry self-regulation on this a total failure? You don't need to make Yubikeys a part of every auth workflow in your corporate enterprise if there are legacy systems/integrations, but you should at least do it for the systems that can change customer mobile subscription details, and there can't be any excuse there.
I think regulation hasn't happened because the computer industry has changed so quickly. Two-factor auth wasn't even a commonly accepted best practice two decades ago.
And regulation takes a while to create and put into practice, and at the rate things are going, by the time regulation has been put in place, the current best practices will have changed.
Whereas writing regulation on building bridges is easy, because the timescale of building bridges spans literal millennia.
I agree completely. I didn't ask why government enforced regulation hasn't happened. I asked why industry self-regulation has failed. I've worked in a regulatory/security role for a major conglomerate before.
I'm not saying I expected self-regulation to work. But, if you are in a position of customers seeing direct harm every day, it's not unreasonable to ask why there is a failure here.
I think it has failed because the industry is moving way faster than most people can keep up.
Even your average developer isn't going to be aware of security changes in the industry, or know what's important or not. It's even less likely that someone not in engineering will remotely know what's important.
Security professionals know, but do you seek out a cardiologist before you ask your GP? Probably not, because, not being trained at all, you have no clue about anything. And if your GP doesn't know, you are kind of on your own.
"People" don't need to keep up, the internal controls team needs to keep up, and it's possible to staff such a team with people who know how to mitigate phishing attacks when you are one of the largest corporate targets of phishing by volume on the earth.
If you’re trying to decide between electricians but you know nothing about electrical jobs, you’re going to be unable to make any meaningful decision. You’re just going to pick the one that sounds the best.
Heck, you could be using the same mediocre electrician for years and even recommend them to friends, because you still have no clue about the workmanship.
What does it mean for the industry to self-regulate? How do you define industry? Is it telecoms, or all tech companies?
Self-regulation has failed because the cost of a data breach remains relatively low compared to implementing security measures, at least on the surface.
Regulation generally is targeted at preventing consumer harm. Self-regulation is the practice of appropriately mitigating consumer harm. I mean mobile subscription providers here by "industry."
> Two-factor auth wasn't even a commonly accepted best practice two decades ago.
Maybe, had you said three decades? But not two. It was already mature by then.
Two decades ago was 2003. Even consumer banking was online by then, and in many countries it used 2FA exclusively.
I worked in the banking space then, and we absolutely had smart cards. Military and defense had them everywhere. Proprietary solutions had already gone away, replaced by PC/SC. NT 4.0 SP6 had support out of the box, because it was already a hard requirement for many customers two and a half decades ago.
That's assuming you regulate a very specific thing versus the end goal. To me the appropriate regulation is to find a way to cause real harm to T-Mobile when they are breached. When breaches are repeated like this, or happen through what is effectively negligence, they shouldn't be allowed to be in business anymore. We gotta stop the tiny fines... jail, billions of dollars in fines, removing their business license... something large needs to happen. Once that's in place, you won't need specific regulations, because the incentive structure will be there to do the right thing.
One way to do so would be to make it so wireless companies can lose access to spectrum as a consequence of customer data breaches. Let someone else who can keep customer data secure have it instead.
Most countries only have three large mobile carriers. You can't take action against their actual operations because you would be running out of alternatives pretty soon plus you would cause huge disruption to customers.
I think financial penalties are still the best bet if they are large enough to really hit profitability but not large enough to kill the company.
That ultimately hurts customers more than the data breach. Limiting access means less availability for customers. If all the customers leave, you’ve just contributed to a monopoly/oligopoly.
Or reserve it for the next company that could pony up at a significant discount.
Look - too big to fail means we let too many companies merge. This isn't a healthy situation that losing T-Mobile means having no competition left. We should probably unwind some mergers first.
The aviation industry can introduce new regulation fast. One example: reinforced cockpit doors. Prompted by the events of September 2001, new standards were published four months later (January 2002) and expected to be implemented fifteen months after that (April 2003).
It makes sense for a change about doors. Doors are as old as time. Everyone understands how doors work. The impact of a door change is straightforward. There are relatively few moving parts involved in a self-contained door (figuratively and literally).
It was the first example that came to mind. There are other, less straightforward changes in recent years: safety teams, risk assessments, terrain awareness systems, voluntary reporting programs, hazard recognition. They made commercial flights safer, and we can measure it.
My dad worked in telecom for a baby bell and they had 2FA fobs since at least 2004 (probably earlier, but I didn't see them until then). He wasn't even at a consumer-facing company; they made equipment for other companies. If they could implement this 20 years ago, there's no excuse for T-Mobile today.
> I think why regulation hasn’t happened is because the computer industry has changed so quickly.
It also doesn't help that the US government is a barely-functioning kleptocracy. They're more concerned with passing legislation about transgender boogymen while they line their pockets than they are about ... well, anything else.
A more reasonable alternative view is that regulations are largely opposed by most in the industry for good reasons, including the fact that the explicit absence of such regulation is what allowed the internet to exist at all.
You assume that regulation can just make security magically happen.
I see no reason to assume that premise to be correct in practice. It's not like the US Government hasn't been breached countless times or had Supreme Court opinions leaked; and it's not like corporations that really tried and should be examples of best practice haven't also been breached. Also, what law can prevent insider attacks? There are already plenty of laws making those illegal.
There's no law that just "makes security happen" - and, actually, I would be fundamentally opposed to such a law because it turns security into a simple matter of compliance. "We're SCA compliant, therefore we're good!" And technology changes way too much - a security law that was written 10 years ago would be a disaster today. See South Korea's Banking Security laws for an example - they basically enshrined ActiveX in their law with roll-your-own-crypto to this day. And we know now that was a trash idea but nobody wants to take the blame for upsetting the security standards. https://palant.info/2023/01/02/south-koreas-online-security-... and https://www.nytimes.com/2022/07/08/business/korea-internet-e...
Don't mandate them, just mandate that if you use known-deficient practices you're presumed negligent if an incident occurs. Then issue some guidelines for known best practices and known bad practices, and make it clear that using something newer/better is fine, just not using something on the "known bad" list. (For instance, best practices are to use two-factor authentication with one component being physical security; one-factor with a password is known-bad.)
I'm not calling for regulation on general security outcomes. I'm talking specifically about access controls on sensitive and highly privileged systems that have ripple impacts to consumer security, which should already be obvious best practice.
You assume that T-Mobile didn't try and just fail miserably, or repeatedly fall to insider attacks. If it was multiple insiders, the systems could be perfect technically and completely useless practically. We also don't know what the comparable statistics are for Verizon, AT&T, or any other global carrier.
I'm not assuming anything, I'm pointing out a failure of self-regulation given the TTPs listed in the original article, which are distinct from fully insider-supported attacks, should not happen.
There is obvious, direct, and destructive customer impact here.
Edit: actually I know people working in security roles for T-Mobile, and I am sure they or their sister teams are trying.
What point are you trying to make here? That T-Mobile maybe needs to screen employees better? That compromises are inevitable and we just need to deal? That we shouldn't give out so much data to corporations?
> There's no law that just "makes security happen"
In another thread I proposed making white-hat hacking legally protected, even without permission from the company. If your system is constantly being tested by mostly white-hat hackers seeking their next responsible disclosure and bounty, then that's something.
Bug bounties already exist, but they're opt-in, and companies that need them the most are not opting-in. We also see the people who do things like press F12 get legally bullied[0].
Changing the laws to protect white-hats and responsible disclosure would help. This would be a law that "just makes security happen".
Did you download 10 gigabytes of personal data and sell it? Or did you responsibly report the vulnerability once it was apparent? There would have to be some guidelines and some attacks like DDoS might still be illegal, etc.
Certainly a risk of this proposal is that some black-hats would get away, but that is already happening, so it's not really a problem of this proposal. This law won't affect black-hats because they already operate outside the law.
The problem is nobody can investigate the security of a company without facing major legal risks. As I linked above, a researcher pressed F12 and next thing he knew the Governor was threatening to prosecute him, and that's just one example. I believe it is a felony if I want to investigate for myself how secure T-Mobile's systems are, because they have not explicitly invited me to do so.
About 10 years ago I was doing some web scraping and came across a website that was exposing PII (SSNs and more) of thousands of people. It was in an API JSON response; the JavaScript only displayed part of the data, though. I just closed the site and never touched it again. I'm not a security researcher, and I don't know how to safely report what I saw. It all seems personally risky for little personal gain. So I closed the site and let it go. My attitude has long been that if society wants to offer me some strong legal protections then I'll do the right thing; otherwise, society can burn. Half the nation's personal data can get stolen twice a month, as is already the case. When society cares enough to do something about it, maybe I'll change my attitude.
In the absence of legislation (and perhaps even if/when legislation is enacted), an effective approach would be to simply hold entities to a reasonableness standard and to seek relief/damages under a common law negligence theory in lieu of a regulatory/legislative enforcement mechanism. That way, what is considered to be the industry standard (i.e., reasonable) changes at the pace of technology. The weak link here is quantifying individuals' damages in breaches where there is no clear injury (such as what you have in the Amazon/GoPro example described above).
Don't underestimate the value of checking all the security compliance check boxes. It solves what really matters - protecting executives from prosecution and/or being dragged in front of Congress to testify. <sarcasm off>
Seriously though, so long as cybersecurity insurance and "industry best practices checkbox management" are easier and/or cheaper than actual meaningful security measures, it will never be solved.
Worse, when a meaningful security measure that could actually make a difference collides with something in a best practices document, you know who will lose.
Just the way boards of companies have a fiduciary duty, there should be some sort of customer information protection duty that companies are responsible/liable for. Basic security practices are being neglected at far too many companies.
Really not trying to strawman. You literally said an executive or two should be thrown in jail if their organization was breached. So which government executive would you "throw in jail" if their organization was breached?
You’re forgetting an important aspect of making stuff like this law - accountability and recourse. Sure, laws won’t magically make security happen, but it will provide tools against companies that don’t follow outlined laws or regulations to suffer consequences for mishandling data. Companies shouldn’t just be “expected” to do the right thing, because often doing the right thing cuts into profits.
Regulations matter in order to make entities do the right thing when they have no other incentive to do so. They certainly aren't a panacea, but they also certainly can have positive effects.
> I would be fundamentally opposed to such a law because it turns security into a simple matter of compliance.
True, but that's better than effectively having no security at all.
Yubikeys and Macs are not magic solutions; that's not good security thinking. The passwordless b.s. that's spreading like cancer is another example.
Bigcorp networks are emergent, not pieced together, and threat actors just need one or two flaws. Case in point: Uber, the Mac-and-Yubikey corp with the big fat wallet that got hacked.
Everyone is a backseat driver with silver-bullet solutions, meanwhile there are decades of research and best practices that solve all these problems.
People who chase absolute security through one-size-fits-all solutions do more harm than good.
While normally I would agree wholeheartedly with this, in this very instance I see meaningless abstraction in service of justifying consumer harm. The phishing TTPs outlined in the article can be mitigated with hardware keys, and the places in the corporate network where they must be part of auth workflows can be identified. There are people whose job this is in corporate networks of all levels of piecemeal quagmires. T-Mobile probably has people working on this now.
I don't disagree that yubikeys are effective but even sms 2fa could have been effective! This is missing the forest for the trees. Even then, what if it wasn't credential harvesting but a download for an infostealer? Then even yubikeys are ineffective due to cookie theft.
You have many, many best practices: a good email protection service with sandbox detonation, MFA, detection and monitoring after the fact, conditional access policies (CAP) so threat actors can't just log in from any random IP or device, threat hunting, user training, etc. These are all things a good security program should be doing to create the most hostile environment for a threat actor.
People had the same frustrating MFA argument on HN when Uber was hacked, but long after the news hype died down it was revealed that the TA got a contractor's creds via infostealer malware. Access to corporate networks is a common trade item in certain forums.
In this case, MFA of any kind, CAP, and a URL-rewriting email security service are all layers of defense that could have caught this before impact.
This "UnCarrier" should be forced to "UnExist". Their leaks are numerous and the pathetic amounts they pay in damages do nothing to adequately compensate for the risk and inconvenience they impose on their hapless customers. Their insistence on doing credit checks for everything instead of allowing cash customers to skip it is I think part of the problem.
After an incident our compliance people told us we cannot have different 2FA options for the same user, so yes in fact if you need to use a legacy system ever then you cannot have a yubikey enabled anywhere.
It is an open secret that criminal groups also pay unscrupulous T-Mobile employees to assist with SIM-swap attacks. I am not sure at what scale this happens, as those instances _should_ be easy to trace and prosecute. But I have seen evidence of criminals reaching out and offering "side work" on the T-mobile subreddits, as an example.
In those cases, hardware keys for employees would not help.
> those instances _should_ be easy to trace and prosecute
I suspect that the employees aren't merely doing a sim swap attack with their work login credentials. Like you say, they'd clearly get fired/prosecuted for that.
Instead, I suspect criminal X buys a nice thing delivered to employee Y's house. Then, criminal X phones the helpdesk repeatedly till they get connected to employee Y during working hours. Then, they claim to own the phone number of victim Z, but have lost the phone, their id and everything else. But they manage to tell employee Y the answer to two of the secret questions "What is your gender", and "Did you use the internet in the last month?". The employee uses this, together with their judgement to proceed, according to company policy, and issue a new eSIM.
Later, when anyone finds out, the call is listened to, and the employee can legitimately say they were just following policy.
Out of high school, I worked a couple of years for A1 Telecom (in Croatia) in customer service. When someone called, all I was required to ask for was their OIB (personal identification number), and then they could literally ask me for anything if it was a residential account.
Want to cancel 20 numbers that still got 2 years until the contracts expire? Sure, let me do that for you. Want to change sim? Sure, just give me the new sim number. Want to add 5 tariffs to your plan? Sure, do you want phones with that?
That was 6 years ago, but I still have friends I talk to there, and not much has changed.
On darknet diaries the stories told are a little more straightforward.
They just walk in to the store, steal a tablet out of the manager's hands, run away with it, and make all the changes they can with the logged-in session until corporate locks out the device.
People sell this as a service and supposedly have numbers on how long it takes from when a provider tablet is stolen until the device gets locked out. If I remember correctly, T-Mobile was/is considered to have the longest window, and therefore the most valuable tablets.
Maybe T-Mo should consider using hardwired terminals again if they can't figure out how to geofence their POS tablets. This also might help with employee job satisfaction since they are less likely to be assaulted at work.
I imagine getting someone job-fair hired under assumed credentials and ghosting after one full shift of abusing their access, or giving a very poorly paid CSR just enough cash to make it worth the risk is probably more straightforward, but I don't know anything about that stuff. Most restaurants/bars I worked at had hourly staff working under 'borrowed' SSNs and names for years, though.
IIRC, on Darknet Diaries podcast they shared that one of the approaches is that someone comes to a location that services T-Mobile customers and has T-Mobile terminal (not necessarily a T-Mobile brand boutique shop). They come with a random request and wait for an employee to sign into the terminal and then pull it out of their hands and run away. They then run against the clock (whatever time it takes to report theft to central T-Mobile office and block the device) to perpetrate the fraud.
I guess a second factor confirmation on every modifying request would solve the issue?
I remember that or a similar episode! And it was apparently even more intricate: the robber was only the lowest member of a whole food pyramid of criminals. After the robbery, his only task was to grant remote access to someone who knew the terminal software (probably the paid insider), while in some secret chatroom a third guy was already running an auction for who would get his SIM swap processed, and the guy who organized the whole thing relaxed somewhere on a beach watching his percentage of the profits roll in.
I was kind of amazed and shocked at the same time by how there already seems to be an established SIM-swap-as-a-service economy with specialized roles and plenty of demand to warrant expansion...
I worked for T-Mobile for 4 days in 2021. I don't usually apply to big companies, but money was tight because of the pandemic and I needed a job quick. I was assigned to work on the config server (think in-house developed consul or etcd) and it was awful. "If this specific config value is being set by Service A then what is actually written should be twice the given value, but if Service B is reading the value, return 1/3 of the value as an HTTP form body instead of JSON." By Thursday I got a call about a new position and I left so quickly that the recruiters blacklisted me. T-Mobile getting hacked is a "when," not an "if."
>config server (think in-house developed consul or etcd) and it was awful. "If this specific config value is being set by Service A then what is actually written should be twice the given value, but if Service B is reading the value, return 1/3 of the value as an HTTP form body instead of JSON."
People say dev salaries are way too high but this is basically what internal systems look like at all the places that refuse to pay fair market value.
Meh, I had never worked with that recruiter before. Turns out they have a policy that if you quit without two weeks' notice you're blacklisted, which, to be fair, makes a lot of sense. But I didn't just get a call about a new job; it was a previous boss I really respect starting up a new company.
You know, I’m starting to become slightly more serious about switching carriers solely based on how terrible it would be to experience SMS/Call diverting of my number.
While I use a yubikey, OTP (where possible), and unique passwords…there’s still places where I have no choice and my number is my auth (or stupidly a reset option).
I genuinely am happy with TMO service in the US, and frankly abroad it’s excellent…but I’d be lying if every single article I see about their security breaches reminds me I may be on borrowed time myself.
TMobile seems to be particularly bad right now, but Verizon and AT&T aren’t necessarily good.
The weak link is usually retail or channel. TMobile is in a high growth phase, so I’d hazard to guess they are more disorganized. Switching to Verizon may reduce exposure, but they have their own similar issues - an aggressively dumb carrier employee is capable of almost anything.
I think the issue is that phone companies weren’t prepared for their services to be used for such high security tasks. For many decades, your phone was just mostly for keeping up with friends and family. 2FA wasn’t even that popular until maybe in the last 10 years.
Just like how the locks we buy for our exterior doors are really weak but that’s currently fine for the status quo. You’re not going to preemptively spend money to upgrade your locks.
Yep, using SMS for 2FA is the same as colleges using your social security number as ID on everything back in the day. It absolutely was never intended for the use case.
As sad as it is to write this, Apple corporate lines are Verizon - though they also have ATT available if you need it or have a preference. I only say this as I don’t know of any major corporation who picks TMO as their company lines.
All this to say, I trust ATT and Verizon slightly more than T-Mobile
Also consider that T-Mobile as it exists is the result of years/decades of mergers and acquisitions so they have decades of legacy and non-conforming systems. This situation is bound to cause security issues as well. I had a family member work for an MVNO that interfaced with them and this is what she saw.
>"Also consider that T-Mobile as it exists is the result of years/decades of mergers and acquisitions so they have decades of legacy and non-conforming systems."
This is true of just about every single mobile carrier today. In fact this is true of all telecom companies for most of their history from mobile carriers, to cable companies to ISPs. The entire telecom industry is an unending series of consolidation and acquisition of assets. This is already 12 years out of date but this should give you an idea:
Unfortunately, most carriers (except ATT & Verizon) are just T-Mobile resellers... so you might think you're not using T-Mobile but you're still affected.
Even if you use ATT or Verizon, the article mentions they're also hacked and SMS intercepted often.
Honestly, I’d assume being on a MVNO carrier would actually protect you from this, as you’re simply roaming on the T-Mobile network through the carrier agreement. Even ATT and Verizon have roaming agreements.
The issue is for T-Mobile direct customers, which obviously their internal systems have access to. I see no reason why T-Mobile would have access to users accounts at another company…
"Google says that hackers may have accessed limited customer information via the compromised system, which includes phone numbers, SIM card serial numbers, account status, and mobile service plan data. The system did not contain personal customer information such as names, email addresses, payment card data, government IDs, passwords, or pin numbers."
It depends on the MVNO. Some have their own backends. Others only do the marketing and leave the backend to the carrier.
MVNOs do not roam on the carrier, however. The MVNO has a close direct relationship for wholesale access to the network. Roaming is a wholly separate method of access.
So that leaves Verizon, AT&T, and Dish networks[1]
And all of them have supposedly been compromised, but T-Mobile is the most compromised.
> While it is true that each of these cybercriminal actors periodically offer SIM-swapping services for other mobile phone providers — including AT&T, Verizon and smaller carriers — those solicitations appear far less frequently in these group chats than T-Mobile swap offers. And when those offers do materialize, they are considerably more expensive.
So the choice is, which one is the least compromised, unfortunately
Can you source "most" and define "carrier" specifically for your comment?
Verizon and AT&T are the other of the big 3 carriers in the US, and they're not reselling T-Mobile. And all 3 have MNVOs (mobile virtual network operator) that resell and/or combine the networks of the big 3.
I'm on TMO in the US and haven't ditched it yet for the same reasons. I just take all possible precautions. Namely, never use your TMO phone number for any kind of 2FA on other services. Use TOTP or a Yubikey, and if those aren't available on a particular service then Google Voice for SMS 2FA. If GV isn't allowed, then obfuscate your username, password, and disable account recovery on that service, among other precautions (or just don't use that service at all, find a replacement).
On that topic, does anyone know about a good alternative that can be used just for a secure SMS number? Google Voice has been mentioned several times but it's unclear to me how that helps.
It helps. I try to use an authenticator app whenever possible, but use a Google Voice number if a service requires SMS-based auth. The trick is to not forward the texts to another cell number. You can either view them using the Voice web interface, or forward them to your Gmail on the same account. Then lock that Google account down as much as possible. I use Advanced Protection (https://landing.google.com/advancedprotection/). This is WAY more secure than using T-Mobile or another cell provider's SMS.
It's the sole reason I'm still with Google Fi, the fear of sim swaps and my (hopefully not mistaken) belief that Google Fi is less hackable than the big 3. I've certainly read that here, many times.
Look at Equifax. The government imposes no penalties on these corporations (i.e. who own the government) for this kind of negligence, or worse.
The field of competition is very limited, and most consumers I'd guess are either unaware of these problems, feel helpless about them, or don't understand their significance. So what's the pressure exerted on T-Mobile to invest in this problem? There's very little.
Unfortunately, for a system with such a big footprint and given the complexity, you'd need a huge amount of pressure to have a meaningful impact on the problem.
My approach to this is to use a Google Voice phone number where all SMS get sent to email. The Voice account and the Gmail account are the same Google account, which is secured by hardware-2FA Yubikey login. I have a cell phone with an entirely different number that I use for non-2FA things, so if it gets compromised I'm OK. I do access that email from that phone, so I suppose I'm a bit vulnerable to targeted phone theft, but SIM swapping shouldn't be a problem, I don't think?
In the opinion of HN is this the most secure way to do it while still allowing me to use services that force SMS based 2FA (almost everything) ?
My Google Voice keeps randomly losing its phone number. I haven't bothered figuring out why, maybe inactivity, but it doesn't matter cause I've already crossed it off as something to rely on.
The rise of SMS as a second factor for security across the web has raised the incentive for SIM-swapping tremendously. No one should be shocked that when tech companies start outsourcing their identity verification to cell phone providers those providers come under attack.
SMS as a second factor is almost a security downgrade. Phone companies are terrible, you shouldn't be trusting them with authentication. Plus it means you can't authenticate when your phone is out of coverage. Just a bad solution that shouldn't be used. TOTP is so easy to set up that it makes no sense to use SMS, and the even better hardware keys are only slightly less convenient.
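For a sense of how little machinery TOTP actually involves, here is a minimal RFC 6238 sketch in Python using only the standard library (SHA-1, 30-second step). This is an illustration of the algorithm, not production code; in practice use a vetted library such as pyotp.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 of the current 30-second counter,
    dynamically truncated (RFC 4226) to a short numeric code."""
    pad = "=" * (-len(secret_b32) % 8)
    key = base64.b32decode(secret_b32.upper() + pad)
    counter = int(time.time() if t is None else t) // step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # last nibble picks the truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", t=59 -> "94287082"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))
```

The server and the authenticator app share only the base32 secret; no phone network is involved at all, which is the whole point.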
SMS allows you to collect phone numbers which are quite good at identifying users for tracking and ad targeting though. And since most large tech companies are advertising companies (in whole or in part) it is no surprise they chose this as a second factor. Even if you try to avoid using SMS for 2FA they'll try to collect the number for account verification or recovery, with regular nags or lately go straight to extorting it out of you to continue to access purchased services or software.
Most users don't know what the words SMS or TOTP mean. I'm not saying that TOTP isn't easy to implement in the grand scheme of things, but for many people it's not straightforward to set up. Entering a cellphone number and responding to text messages is well known, since we've been doing it for 20-something years now.
I think TOTP would probably get more traction with normal users if people started calling it "app verification" or something similar, even though that is slightly incorrect.
>“A huge reason this problem has been allowed to spiral out of control is because children play such a prominent role in this form of breach,” Nixon said.
>Nixon said SIM-swapping groups often advertise low-level jobs on places like Roblox and Minecraft, online games that are extremely popular with young adolescent males.
>… “They recruit children because they’re naive, you can get more out of them, and they have legal protections that other people over 18 don’t have.”
> Phish T-Mobile employees for access to internal company tools, and then convert that access into a cybercrime service that could be hired to divert any T-Mobile user’s text messages and phone calls to another device.
If they are doing all this through phishing and aren't being as successful with other networks there's some serious issue that's being overlooked. It's unclear from the article if this is due to training, lax security on internal tools, lack of two factor (as claimed in the article) or something else (even insiders).
That's too bad, I've been on T-Mobile for years. Whenever I can I'll use yubikeys or OTP. But there's still a large number of sites and services that rely on SMS.
> But there's still a large number of sites and services that rely on SMS.
I avoid using my actual phone number whenever possible and use a Google Voice number. Hacking Google Voice would require hacking my actual Google account instead of just tricking someone at the phone company.
Bingo. Personal phone number for only friends and family. Google voice number from a nearby area code for literally everything else. It's a little more secure than my carrier.
And as an added bonus, I can automatically send all incoming google voice calls to voicemail and not have to worry about missing a family emergency. If I get a phone call on my actual cell number, it's almost guaranteed to be someone I know closely.
Why do you think that? Presumably Google Voice uses a phone company downstream, which means if that company is hacked they can reassign your number to someone else, and thus you have the classic SIM-jacking attack.
They pay off / trick a T-Mobile employee into reassigning your Google Voice number to them. It's happened before with Google Fi, but I haven't seen any public information about this happening with Google Voice (yet).
I don't work at Google and don't know if this is possible with Google Voice. However, Google Fi is their paid service, so I would assume that's the one they'd want to protect the most.
>If they are doing all this through phishing and aren't being as successful with other networks there's some serious issue that's being overlooked.
A few years ago I had to regain control of an account that I had lost the credentials for. No problem, Tmo support just needed me to provide one of the last 5 phone numbers dialed. So yes, there are some serious issues overlooked.
We've always known that sim swap attacks weren't hard. But I've largely understood them to be not scalable. You can sim swap almost anybody by calling Verizon on the phone. But you needed to call them. This, in my mind, largely meant that the risk of sim swap for most people was pretty low - certainly far lower than the risk of phishing.
With this method, it scales. Pwn one person who has relevant system access and then you can sim swap as many people as you want. Now there really is a meaningful difference in security posture between sms and otp.
Have you missed how much spam a regular phone gets these days? It doesn't seem difficult to retool such an operation to do SIM-swap attacks. With AI the mechanics are even easier.
To perform a SIM swap I need an employee at Verizon or whatever to take some steps on their computer (or have their computer infected with a RAT). To call 100,000 people on the phone I just need a computer that can make phone calls.
I've been thinking about this a bit more and I think the right path forward is to impose the same fiduciary liabilities and regulations on cellular providers that banks enjoy. Phones are used as authentication devices for bank transactions. If cellular providers have to go through all the same audits of controls as banks and share the same fiduciary liabilities that may raise the bar for phishing attempts. This may also change the employment requirements for people at T-Mobile and there would be more scrutiny to weed out some of the bad apples or at least increase monitoring and auditing of transactions to provide more visibility to forensic teams.
> phones are used as authentication devices for banking transactions
That’s the banks’ choice though. Are cellular providers selling them a secure authentication service? Or just an insecure best effort message delivery channel?
But then of course the banks can ping that liability further upstream: as a customer, when you choose to opt in to SMS authentication, you’re the one vouching for the security of your cellphone provider, telling your bank ‘I trust their account security enough that if you send a message to this number you can assume the recipient is me’
So now you’re left going to your cell company and saying ‘since the bank said I could use you for auth, you’re properly secure right?’
And their answer is ‘lol no. check our t’s and c’s.’
And then you wind up saying ‘but I want to be able to assume that and I think my cell company should be liable if they aren’t’, and asking for the cellphone company to be regulated like a bank.
Because banks are that good at deflecting liability.
I think the legislation should be worded so that if a cellular provider does not want the fiduciary and regulatory requirements imposed on them, they must disable all SS7/MAP SMS/text-message gateways or any other form of non-E2EE, unencrypted and unauthenticated communication. SIM swapping becomes less useful as encrypted applications take over MFA/2FA authentication, meaning the attacker must acquire and unlock the phone itself rather than being able to impersonate it.
Even voice communication must be encrypted cell-to-cell, so that Joe-Blow-Nobody and the President of the United States have exactly the same protection on their personal cell phones. If a company key is used for lawful intercept, there must be a massive audit trail that makes it crystal clear who monitored what and for how long. No more pressuring people like me to give authorities unfettered and unmonitored lawful-monitoring access.
I actually think we should just divest all security tasks from cellular providers. They are clearly bad at it and I don't think they ever really pretended it was a core competency. Reforming them would take far longer than just switching to the available alternatives, and probably would not work as they would just lobby any regulations down to be toothless.
>Phones are used as authentication devices for bank transactions. If cellular providers have to go through all the same audits of controls as banks and share the same fiduciary liabilities that may raise the bar for phishing attempts.
That may also raise prices massively. I prefer that mobile carriers get dumber (collect less info) and less regulated, not smarter and more regulated.
If your business relies on SMS for authentication, you are liable for all the fallout of using an insecure channel.
It's the bank's job to secure your funds, not a mobile carrier's. Let's keep it that way and make it more clear to consumers and businesses.
I'm a Google Fi customer and experienced a very disconcerting fraud attack a year or 2 ago. I made an outbound call to the support number for my bank (I triple-checked that it was the correct number for the bank's support line). My call was routed to fraudsters impersonating my bank's support and I gave them all of my debit card information through what I initially thought was an authentication process. The 1) strange call quality, 2) that they asked for all of my card details and 3) the lack of an automated menu tipped me off and I realized pretty much immediately after the call was over that I had been scammed. I called the exact same support line a second time and got the actual customer service for my bank, at which point I promptly canceled my debit card (but not before the fraudsters performed what appeared to be a test charge of my card for $5 to a random merchant name in Connecticut).
I had no idea this kind of attack was possible and I don't know how it works or whether it was related to the T-Mobile breach. Had the hackers attempted an account takeover using the information they collected from me they could conceivably have stolen all of my savings.
That could have actually been on your bank's side. An attacker could have compromised their phone system and is intermittently redirecting calls externally.
Wasn't just T-Mobile; some of the third-party connected services run by other companies that tie into the mobile networks for most major carriers got hacked also.
Caller ID services and iPhone provisioning.
It's way worse than the media/public even knows. It's networks built on networks, with APIs everywhere.
Also, TMO allows you to enable 2FA but ignores it when enabled, still allowing you to sign on with email/pass.
Was thinking about moving a line to Google Fi for this reason. I know they just resell T-Mobile bandwidth, but would they provide better account level security? Is it common for Google Fi customers to get SIM swapped?
My friend had Google Fi and was caught up in this; among other things, they had their Instagram taken over. Scary few days. Thankfully their roommate works at Meta...
I think the only way to be really safe is to use one of the smaller MVNOs and never ever ever reveal who your carrier is
As a former customer of T-Mobile, I will say that the risks go beyond SIM swapping with T-Mobile. Their website is pretty bad, and there's a lot of silly PIN-based passwords and security questions going on. Getting away from that in favor of Google's security would be a huge win.
I've always figured I should have two numbers—one I let people know, and one for 2fa.
But that's ~$20/mo and a moderate annoyance, so for now mostly just fingers crossed that eventually everywhere that matters will allow me to switch fully to authentication apps and hardware keys.
I don't think that having two numbers will help much. I'd guess that most sim-swapped cell numbers are leaked in data breaches or acquired through data brokering. Enrolling a number in 2fa is letting people know your number, because you're tying that number to the account.
A separate number for each account might help. Maybe.
This is part of my question. How does Google provision VoIP numbers? When someone calls / texts a VoIP number from a normal number, that call / SMS travels over normal wireless infrastructure. So VoIP numbers are still connected to the same infra, right?
As I understand it, yes, but not through a wireless carrier. They'd tie into the infrastructure somewhere else. They'd be more of a peer with T-Mobile than a customer.
Do you know how expensive it is to support physical keys for a large organization? I'm not talking about the cost of the key. I'm talking about how many people lose, break, or have some other problem with their keys (data corruption, software issues, a broken USB port, etc.). You need dedicated staff at every physical location with the support capability to troubleshoot those issues and replace keys. Every time a key doesn't work, that's one less person working, plus time taken up by support staff. The TCO is millions of dollars. It's much cheaper to use software tokens, which have fewer failure modes and simpler support requirements.
Even if you do use physical keys, malware on the machine from a phishing+0-day attack can simply wait for the user to log in with their physical key, and use an existing, valid session to inject an attack. This has existed for at least 15 years since I first saw the attack, and it still works great, even with FIDO2.
What happens to T-Mobile if an attacker takes over an account, regardless of security method compromised? Basically nothing. Yeah, some customers get sim-swapped, who cares? T-Mobile has not lost any money. So there is no incentive for T-Mobile to have better security in those cases. Hence, no need for physical keys, which wouldn't stop all attacks anyway.
The TTPs outlined in the article could absolutely be mitigated by use of hardware keys, and this would reduce customer risk. You are right about the liability and support calculation, but that doesn't mean it's OK to shift risk to the customer because it's too expensive. It is a failure to not have implemented a physical key deployment, and it must be treated as a failure.
I don’t doubt it. My cell phone stopped working for a day, I called in and talked to somebody who I could barely understand and knew very little about basic security. I tried to explain multiple times that my account was probably SIM swapped and the support person completely ignored this security concern and just said I have fixed the issue on my end anything else I can help you with? Please rate me 5 star in the coming support survey.
Pretty typical for most first line support though. Especially outsourced.
It's hard to find people with both language and tech skills, so most outsourcers just fulfill the former and cover the latter with endless infernal flowcharts. It really sucks when your problem is not on the chart. Escalating is usually discouraged by giving the agents per-day targets.
I guess you're in the US, so perhaps language isn't as much of an issue, but a lot of US companies now run support from the Philippines because the accent is favourably perceived (unlike Indian accents, which a lot of customers have come to associate with 'poor support', leading to kneejerk reactions *). But anyway, in the Philippines it's now hard to find staff too.
But anyway my point is that the support experience is not really related to internal IT competence.
*) not my personal opinion but I have seen US companies in particular use this argument. Unlike in the UK, where Indian accents are common. I worked in the contact center tech realm for 20 years.
The security situation with these companies shows no signs of improving.
My hot take is to make many forms of hacking legal so long as the hacker reports their findings to the government. Let's have a free-for-all where every white hat and grey hat hacker gets to test the security of all companies, no permission from the companies required. Otherwise, it's only black hats that get to do the hacking, and they won't tell anyone when they find a vulnerability.
Everyone wins except for the companies who will be embarrassed they can't build a secure system to save their life. And they won't be able to legally bully someone for pressing F12 anymore.
This is important, it's a national security issue. Extreme measures like this are justified.
Some hacks, such as DDoS attacks, might have to remain illegal. But otherwise, unless you're proven to be stealing and selling data, let there be strong legal protections for those who responsibly report vulnerabilities.
And this is practical too. With vulnerability bounties you can solve the problem just by throwing money at it. But bounties can't be an opt-in thing, the companies who need them most are not opting-in.
Lack of knowledge of vulnerabilities is not the limiting factor in this case. All a "free for all" would do here is add more noise to the logs for malicious actors to hide in.
Are you saying they're aware of all these vulnerabilities? Why don't they fix them? Can one of the wealthiest companies in the nation not fix vulnerabilities they're already aware of? What is the limiting factor here? Competence?
My conspiracy theory is that my idea will never be implemented because it would expose the "job creator" class to an objective measure of their competence, and they would not fare well. Headlines like "97% of US organizations are incapable of building secure systems" would not be fun.
You can read lots of comments in these threads about the cost/benefit analysis of mitigating the vulnerabilities. And whenever that cost/benefit calculation gets very complex, the default is to not get too worked up about fixing the status quo because "it's complicated."
Yeah, the cost would be corporate profits, and the benefit would be privacy for average people. Given those tradeoffs, I'm not surprised that those benefiting from corporate profits say "it's complicated", and then choose the course that results in more profits for them (while harming the general public and national security). I'm not surprised, but not happy about it either. But this is turning into more of a political rant so I'll end here.
Any network that isn't shared with your personal phone number. That is: if you have a dedicated number for SMS 2FA, it will show up in fewer places where the hackers might find it. It's easier to monitor for breaches, and replacing it is a simple matter of updating your accounts - no need to worry about lost contacts, friends, bills, etc.
Although, "bills" reminds me - a lot of companies overload the use of 2FA SMS for both identification and 2FA purposes, not to mention most customer service centers expect the call to originate from the same number that receives 2FA SMS messages for authenticating to the account being serviced.
Efani is the only security-centric carrier I'm aware of. I have not used them myself, but they claim zero SIM-swap attacks have been successful against them. Even though they are an MVNO, they claim their upstream networks cannot change their customers' SIMs. The downside is it's expensive; it depends on what you need to protect, I suppose.
Heads up that US Mobile is an MVNO operating on T-Mobile and Verizon, so how good their 2FA system is becomes irrelevant if hackers get deep enough into T-Mobile.
“Deep enough” would be true of any mobile carrier, to date all of these attacks are SIM swapping, with social engineering/phishing being the attack vector. Not particularly deep.
Attackers would have to social engineer the MVNO directly, which is certainly easier if they have data they’ve stolen from t-mobile first, but this isn’t a “they’ll get in no matter what because they’ve pwned T-Mobile so bad” scenario.
This article says that Google Fi customers were SIM swapped due to a T-Mobile breach. Even though "[t]here was no access to Google's systems or any systems overseen by Google."
> These attacks are conducted using social engineering, where the threat actor impersonates the customer and requests that the number be ported to a new device for some reason. To convince the mobile carrier that they are the customer, they provide personal information exposed to phishing attacks and data breaches.
> As the Google Fi data breach includes phone numbers, which can easily be linked to a customer's name, and the serial number of SIM cards, it would have made it even more convincing when contacting a mobile customer support representative.
They used the data in the breach to social engineer the Google Fi reps. Attackers still needed to get through Google’s customer support system to perform the SIM swaps.
The cool thing about T-Mobile is they don't ask for your SSN or care about what name you give if you do pre-paid in cash. This anonymity means that if the bad guys call up T-Mobile and know all your details and they even have a compromised employee with full access, the bad guys still can't find out your real IMEI or phone number and do a sim swap. Another benefit is that, with all the cell phone location selling going on, they can't find your true location either!
You can put a customer service password on your T-Mobile account so that nobody calling customer service can make changes without that password. This is separate from your online portal password.
That still doesn't fix the compromised-employee problem. If they can match your identity with your phone number and have full access to T-Mobile, they can SIM swap. Sure, Google could have compromised employees, but I trust Google's security, especially their internal security, much more than T-Mobile's.
This is why I buy service from Mint mobile using a fake name, a burner email address, and a prepaid debit card purchased at Walmart. Go to town with that info, hackers!
In fairness, T-Mobile (and other phone companies) don't really want to provide no cost authentication for other entities. SIM swapping wouldn't be an issue if forces outside the control of the phone companies were not making it so profitable.
If we need to legislate something, perhaps we should try to discourage this sort of thing in the first place. One company should not be allowed to paint a target on an uninvolved company for financial gain.
> don't really want to provide no cost authentication for other entities
Is the customer not paying for the cell phone plan? Nothing is “no cost” in this situation. The cost is just shifted to the consumer from the company in the form of requiring a phone number.
> One company should not be allowed to paint a target on an uninvolved company for financial gain.
Or, if T-Mobile and others did a good job in security for their networks and in turn their customers communications maybe they wouldn’t have this issue.
IMO this comparison would be like claiming a gas station is responsible for your car's electronics not functioning properly.
I believe this. I have a single credit card that I use only for our T-Mobile bill on autopay (the credit card offers insurance on my phones via this method).
About 2 months ago I noticed $15 charges very cleverly disguised as Amazon prime. The only giveaway was that it said the number was entered manually.
Everyone with T-Mobile autopay should check immediately for an Amazon Prime charge that was manually entered.
Most of these breaches happen because someone gets targeted - something about their public profile lands them on the radar of the hackers. Then the hackers dig into the profile looking for associated phone numbers. So to mitigate this, you could (1) reduce your public profile, which is out of scope here, and/or (2) minimize phone number exposure. You want to make it impossible for someone targeting you to locate the phone number you use for 2FA on a particular site.
To minimize phone number exposure, you want to send the phone number to as few third parties as you can. You don't want it to show up in any databases, including in breached databases from hacks of companies where you stored your phone number for 2FA purposes. Unfortunately this means the only true solution is a unique phone number per account with SMS 2FA, but that's obviously not practical. So what can you do?
A VOIP number like one from Google Voice is the next best solution for receiving 2FA SMS codes to a dedicated number that you keep separately from your personal phone number. This way you receive texts purely through software and don't expose yourself to SIM swapping at the Mobile ISP level. Unfortunately, some providers won't accept Google Voice or VOIP numbers, so for them you're back to square one... maybe as a backup option (only for those sites), you could use a cheap phone with a pay-as-you-go plan; it's not great, because you're still vulnerable to SIM swapping, but at least you have a dedicated number for SMS 2FA.
Looking at the problem more widely, it would be nice if my phone or mobile ISP could solve this problem for me, with something akin to disposable phone numbers (think Apple Private Relay, or temporary credit card numbers from the bank) or a dedicated 2FA code relaying service (think Authy or Google Authenticator - in fact, maybe they could offer SMS numbers as a feature, although that seems at least as dangerous as the status quo).
This follows on the unpopular news story that T-Mobile will be requiring you to give them your debit card or bank account information to continue to qualify for their Autopay discount.
The story is that they’re no longer allowing you to get the AutoPay discount with a credit card so you’ll have to set up AutoPay with a debit card or bank account by May to continue to receive the discount.
I see people jumping towards regulation, but that has the side-effect of making it even more difficult for there to be any competition against these monopolies. What we really need is legitimate competition, to enable consumers to vote with their wallet and move to a competitor that takes the security of their customer's private data seriously.
> making it even more difficult for there to be any competition against these monopolies
If your snappy upstart cellular network can't afford to give out Yubikeys to employees, I don't want you interconnecting with the rest of the phone system.
Also, startups have such an advantage now that there is an ecosystem of COTS and SaaS tooling that can help you do a complete integration strategy. It's arguable MFA regulations would advantage startups because they don't have to deal with the complexity of legacy network piecemeal integration.
I planned and did the roll out of Yubikeys at the last place I worked, before there was a dollar in sales, and the lifecycle could be supported with 2 people (minutes at most out of each day for support) and an integration to our HR platform that automated procurement and mailing of keys.
No silver bullet but many of these types of attacks would be mitigated, or at least made much more expensive and difficult for the attackers, if we had wider adoption of Yubikey, Webauthn etc. type otp solutions which are more resistant to phishing, keyloggers etc.
In practice, what are the barriers to adoption which folks are seeing, and what can we do about it?
I think the biggest barrier to adoption is lack of end user demand for the service. That is followed by people not understanding/believing the incredible increase in user experience and security. It's almost like people think it is too good to be true.
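For contrast with SMS codes, the app-based OTP solutions mentioned above (what Google Authenticator and similar apps implement) are specified in RFC 6238 (TOTP), which keeps the carrier out of the loop entirely. A minimal sketch of the code generation using only the Python standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, step=30, digits=6, now=None):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if now is None else now) // step)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Nothing here ever touches the phone network: the shared secret lives only on the device and the server, so there is nothing for a SIM swap to intercept. (The reference test vectors in RFC 6238 Appendix B check out against this sketch, e.g. the secret `GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ` at T=59 yields `94287082` with 8 digits.)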
Seems plausible. Suffered a SIM-hijack attack via T-Mobile a few years ago. Set a giant extra arbitrary password for account changes after that - but they essentially don't ask for it. Fairly regularly they show notices of breaches via email or when logging in.
Don't use a mere mobile number for backup access to anything important!
That said, perhaps everybody using SMS 2FA is equally culpable (e.g. most banks). Nobody who has worked at a mobile carrier would ever think that they're ready to be high-value targets. So it's puzzling that the banks are so eager to put them in that position.
Oh you're right, I had it backwards. Perhaps the carriers should be trying to PR-hack this into more people's minds.
I'm imagining an authorized pen-tester program which lets authenticated users achieve an atomic sim-swap (i.e. the creds were intercepted but the swap-back occurred immediately after, so as not to deny additional service to the victim).
Anyone who is privacy/security savvy, is there anything that can be done to protect my data as a T-Mobile user? I avoid 2FA SMS authentication wherever I can (use Authenticator for most things)
I'm on an MVNO that uses T-Mobile. I recently got 100s of "verify your number" sign-ups from over a dozen services. Could this have been part of an (attempted?) sim swapping attack?
One random factoid I noticed is that AWS and Microsoft just announced the launch of Open Gateway. Noticeably missing from that list of telecom providers is... T-Mobile. I'm sure it's mere coincidence, albeit a noticeable coincidence.
" Initial carriers that have signed up to Open Gateway are América Móvil, AT&T, Axiata, Bharti Airtel, China Mobile, Deutsche Telekom, e& Group, KDDI, KT, Liberty Global, MTN, Orange, Singtel, Swisscom, STC, Telefónica, Telenor, Telstra, TIM, Verizon and Vodafone. "
Might it be time for the US government to step in using eminent domain, seize the company and merge it into a different provider? Are other providers more secure or do we just hear about T-Mobile the most? Who should take over T-Mobile?
[Edit] The more I think about this, perhaps another path to resolution would be to remove limited liability protections from companies that repeatedly put their customers at risk, especially given that phones are used as financial transaction authenticators. Perhaps some bank regulations need to find their way onto cellular providers.
And then we'd be down to what, two wireless carriers in the US? AT&T already tried to acquire/merge with T-Mobile some years ago but it didn't go through. I forget why, but probably due to antitrust issues. And wasn't Sprint just acquired/merged with not long ago, by T-Mobile IIRC?
> And then we'd be down to what, two wireless carriers in the US?
I think you are correct. I don't like the idea of making a giant Bell yet once again, but I also don't see a way to correct T-Mobile's obvious cavalier and brazen incompetence. Fines? Companies just factor those into the cost of doing business. Threat of losing their FCC license? I think collusion between business and government would drag that fight out for decades and probably even exacerbate the problem. March their leaders through town with a shame-nun? I don't know what would get real results quickly. Tack on some bigger fiduciary liabilities, since phones are used to authenticate bank transactions?
Perhaps if some powerful political leaders had nasty secrets revealed or lost money as a result of these hacks there might be action but that is a big if. That might never happen and that also assumes there is proper attribution.
What should not have happened is the Sprint/T-Mobile merger. It's like when Wells Fargo bought the failed bank (forget which one) after 2008: Wells Fargo went from a reliable company to all kinds of suspect things going on with our account. So far T-Mobile has been fine for us, but we are seeing some marketing things floating around suggesting the Sprint influence might be having a negative impact on T-Mobile. I miss John Legere as the CEO, he had it going on.
Washington Mutual maybe? Bank of America is in a similar situation, they're just NationsBank with a friendlier name on them now. NationsBank acquired BOFA in 1998 after BOFA lost a bundle on Russian bonds. The speed run on becoming one of the shittiest banks around continued in 2005 when they (NB/BOFA) acquired MBNA.
Those were awful mergers. Supposedly the US government has reduced the amount of rubber-stamping of these mergers and is said to be scrutinizing them more now. I suppose time will tell. I don't know how else to get real results on fixing poor security practices other than to remove all immunity and limited liability protections from businesses that repeatedly put their customers in harm's way, and that would have other incredibly bad ramifications.
An eSIM will prevent a physical swap. However, an eSIM will not prevent a port-out of your phone number.
To defend against port-out you should enable port protection. The name of such a feature varies by carrier, and T-Mobile seems to refer to it as "Takeover Protection."
Some of the comments here seem to argue that creating legislation that demands basic security baselines will not get the job done. In fact, in a recent interview[1], Jen Easterly (head of CISA) fell into the same trap (presumably because she didn't want to upset tech company lobby groups), so her message was reduced to shouting into the void, asking vendors to "please be good":
> Addressing these issues requires a long-term approach and not simply a new set of regulations or industry standards. Easterly said it will require the leaders of technology companies to focus explicitly on building safer products, provide transparency into their development and manufacturing processes, and an understanding that the burden of safety should not fall solely (or even mainly) on customers.
I'm right now struggling to get a bunch of US IoT companies to agree on a very basic set of security standards that would allow more interoperability. All we're asking for are basic best practices familiar to anyone working in security (e.g. ETSI EN 303 645). And the reason I'm struggling is because in the EU these baselines are becoming the law as of 1st Aug. 2024 with the Radio Equipment Directive (RED). In addition, the same kind of guardrails will become law with the Cybersecurity Resilience Act in 2025, expanded to the cloud and mobile apps. So this thing is coming, and the US, which has much better standards (thanks to NIST) but lacks legislation due to the power of lobby groups, looks like a total laggard here, to the point where it becomes embarrassing.
Nobody in their right minds would argue there are unreasonable provisions in these proposals for RED (or the CRA). Yet all the US based vendors who do not sell into EU markets shout "bloody murder".
And it's hilarious how they're all grandstanding about "how dare the communist EU is telling business how to innovate".
Legislation works. Begging vendors to come up with better controls by themselves will not.
Anyone who has spent even a single day working in security in a company where security isn't part of their core value proposition (or isn't _the_ product) will know the only way to enforce even the most basic security and safety controls[2] is by legislation.
You want a unified charging standard for EV? Make it the law!
You want a single type of charger for all phones? Make it the law.
You want your coding standards to meet guidelines for functional safety? Make them law.
You want to eliminate OWASP Top-10 from production code? Make it the law.
I mean, you get no compensation from your data being sold.
This case is notably different from hackers stealing your data. Instead, they could steal your identity. With this type of access, they could impersonate you at your bank, your email, anything that uses SMS as a form of verification.
This isn't a "whoops, people know your birthday now (again)." This is "whoops, someone hacked into your bank account." All because T-Mobile's security practices (or at least employee training) are extremely lacking.
You are right. I was speaking generally about the fact that my data and “identity” can be stolen without any penalty. There is zero incentive to T-Mobile to prevent things like this from happening in the future. There is no financial incentive for them. They won’t lose any customers, and won’t be fined. Why invest any amount in security with these sets of incentives in place?
Not if they are doing a SIM swap. If they are paying $1k as claimed they are after far more interesting things in your accounts than just basic information to sell.