A hacker got all my texts for $16 (vice.com)
583 points by pje on March 15, 2021 | 289 comments



In Australia it's mandated that you're sent a message before your number is rerouted or migrated to another provider. Surprised this isn't enforced in other countries; it costs next to nothing to implement and is just an additional step in the account migration process.

I'd love to see companies allow opt-in additional security measures, like banks or telcos calling me and having a verbal password to confirm things; that level of security seems to only be available to VIPs.


Someone is going to have to take one for the team and SIM swap a senator if we ever want that requirement in the states.


Except they'll just punish the "hacker", make a big fuss about it, then the telcos will donate money to the senator till they drop it.


> will donate money

Sakari (and the like) will complain that government regulation is keeping food off the hard workers' table, and that the government has no right to intervene in the free market!!

In parallel they will 'lobby' (or as we call it in Europe, "bribe") the key politicians and ask them to either a) water the bill down to the point that it is rendered useless, or b) cancel it altogether, and the stock market will go up!!


Followed by a six-month government contractor bidding process, two years of development hell, and a half-baked solution that either doesn't work or requires fifty extra convoluted steps.


And a monthly "Regulatory Recovery Fee" to make the customers pay for it in perpetuity.


Nah, it's cheaper and easier for them to mandate that the companies take care of it.


They do, but then the companies pass it on to their customers. There's a whole list of itemized fees on a cellular bill. Those aren't collected "for" the government. The company is just itemizing it for you, probably so that they can neglect to advertise it in their contract sticker price.


I tried to get T-Mobile to stop giving my location to anyone that hits their APIs with a 'Yes I have permission' flag set.

There's no opt-out for it, and no enforcement of the permission requirement. Their support had me snail mail a letter to some PO box. I never got a response.

And now they're going to start outright selling their customers' activity after forcibly un-opt-outing* everyone who had previously opted out in their privacy settings.

*un-opt-outing -- ??? I don't know what to call this. It's not 'opting-in' since nobody has a choice.. 'resetting user selection without notification or consent' seems too mild and wordy.


T-Mobile has such shitty IT, infrastructure, and security practices.

My last experience with them caused me to switch away permanently: I got SIM-jacked, with real money stolen from me. It happened exactly like in this article[0].

Another incident happened where my online account was merged with someone else's in California (I'm in Texas). Our billing information was merged, with the other person paying for the whole account. I couldn't make changes online; only after sitting on hold and explaining what happened was I able to get the whole situation unfucked, but there's no telling what amount of my data still lives in that other account.

Come to think of it, my first experience with T-Mobile was as a Radio Shack employee, circa 2010. When a customer came to the store to pay their T-Mobile bill with cash, if I took too long to enter all the data into their awful online portal the money would sometimes go to a completely different person's account. Many hours were spent on the phone with the local and regional rep resolving multiple instances of this happening.

[0]: https://www.vice.com/en/article/3kx4ej/sim-jacking-mobile-ph...


T-Mobile is pretty shitty, but I'm grandfathered in to 5 lines for $93, so I pretty much can't leave them. It's not that much better in the jail cell next door or across from mine anyway.


I haven't heard anything about a T-Mobile API leaking that data, and my searches don't return anything of value. Can you provide more details?


It's not just T-Mobile, it was most US carriers.

Some examples:

- Vice paid a bounty hunter $300 to track a phone number [1]

- Police have paid these services to avoid warrant requirements, and corrections facilities use aggregator services to track numbers that inmates have calls with [2][3]

Apparently carriers claim to have stopped after getting fined $200m last year [4].

It was typically done through aggregators, i.e. services that have similar access to multiple carriers and in turn expose a single endpoint to their own customers.

The aggregators pass on responsibility for obtaining consent to their end customers. Again, with no enforcement or ability for a target to opt out.

The only protection is an authentication requirement. But that just confirms you have a valid credential. Which you get either as an aggregator (to tmobile/other carrier directly), or as the client to an aggregator (to the aggregator's API to query multiple carriers).

Though even that authentication requirement has failed in the past, like when LocationSmart had a public demo page exploited. Inspection of the requests the page sent made it trivial to replay them with any phone number, skipping any consent checking. They just had to add "privacyConsent":"True" to the payload [5].
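To illustrate, here is a minimal sketch of what that kind of replay looks like. The endpoint URL and every field except the privacyConsent flag are hypothetical, made up for illustration rather than taken from LocationSmart's actual API:

```python
# Hypothetical sketch of replaying a location-lookup request with the consent
# flag asserted client-side. Endpoint and field names (other than
# privacyConsent, which is from the write-up [5]) are placeholders.
import requests

payload = {
    "phoneNumber": "15551234567",   # any target number
    "privacyConsent": "True",       # server trusted the client to assert consent
}
resp = requests.post(
    "https://demo.example-locator.invalid/api/locate",  # placeholder URL
    json=payload,
)
print(resp.json())  # approximate coordinates of the handset
```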

But yeah, it sounds like that is less of a worry now.

Instead, T-Mobile is selling the location data, and basically whatever usage data they collect from your phone with their root-privileged app, to advertising networks.

Although their privacy page has this statement [6]:

> We do not use or share Customer Proprietary Network Information (“CPNI”) or precise location data for advertising unless you give us your express permission.

The 'express permission' here is deceptive. Users are opted in by default, so it's hardly 'express'.

Further, they recently mass reset user preferences to clear the opt-out setting for users who previously opted out. Without consent.

So basically everyone is 'consenting' unless they very recently opted-out. Though I have little faith they won't change this from underneath their users again in the future. No doubt in the fine print of one of those 'annual privacy notices' or some such.

Still, if the wording and definition of 'express permission' above is questionable, the more detailed privacy policy is more explicit [7]:

> We and others may also use information about your usage, device, location, and demographics to serve you personalized ads, measure performance of those ads, and conduct analytics and reporting.

Their privacy page is deceptive about how anonymized their collection is [6]:

> When we share this information with third parties, it is not tied to your name or information that directly identifies you. Instead, we tie it to your mobile advertising identifier or another unique identifier.

Tying it to a mobile advertising ID, or any kind of unique identifier, is not de-identification. It is trivial to tie this to an email, or to a larger profile generated by an advertising network, and combine it with, say, your desktop web browser, or any account you log in to that is associated with your email.

It's despicable. But sorry, I'll stop ranting now.

[1] https://www.vice.com/en/article/nepxbz/i-gave-a-bounty-hunte...

[2] https://www.nytimes.com/2018/05/10/technology/cellphone-trac...

[3] https://www.zdnet.com/article/us-cell-carriers-selling-acces...

[4] https://www.nationalheraldindia.com/international/over-dolla...

[5] https://www.robertxiao.ca/hacking/locationsmart/

[6] https://www.t-mobile.com/privacy-center/our-practices/privac...

[7] https://www.t-mobile.com/privacy-center/education-and-resour...


T-Mobile has such bad practices -- about 6 years ago they gave my phone number out as a temporary number to someone else. I don't know how their infrastructure is set up, but both me and this other guy had the same number for a time. Incoming calls would be routed to the phone that called out last. At one point I was able to talk to the other guy by using my wife's phone to call my own number. T-Mobile claimed that what was happening was impossible, so I filed a complaint with the FCC and switched my phone service. By the time T-Mobile responded to the complaint (by saying nothing was wrong), I had long since switched providers, so I didn't pursue the matter further. Huge annoyance though.


> un-opt-outing -- ??? I don't know what to call this

I'd call it "forcing consent", all irony intended.


Capitalism doesn't ensure good things for people, just maximized profit for the best marketers. You want good things? The government has to require it. Otherwise it'll only happen if it's under the umbrella of maximized profit.


Meh, every time an article comes out someone says this. It's definitely more complex than that. Look at Amazon as a counterexample: the reason they dominate is the combination of a better product and maximizing efficiencies of scale. Additionally, they rolled "profit" into growth, netting consumers on the whole better selection and service.

It is almost always better for the government to create "incentives" than "requirements" anyway. Instead of "requiring" a text before a transfer, it would be better to hold both companies that facilitate a transfer without the customer's authorization liable for large damages. This allows them to create a mechanism to prevent this that is probably better.


Wow. How long does your number go unavailable if you port out?

I may.


When I ported over to Project Fi a few years ago, it took about 30 minutes. I think there's a "pre-transfer" step that gets everything ready to cut over before you confirm.


Back in the early days of mobile number portability the majority of telcos put in systems to make porting out harder, e.g. getting an unlock code. This gave them a chance to keep the customer when they called up.

Regulators (particularly in Europe) soon put a stop to that to promote competition. While this was good, the majority of regulators failed to put in a consumer protection mechanism to stop identity theft through account stealing.

The article describes a more insidious attack: the mobile account is still active (hiding the existence of the attack from the user), but the message destination has been rerouted, making all the linked accounts that use SMS for 2FA vulnerable as well.


I think this particular issue is specific to North America, due to peculiarities of the NANP phone number scheme (inter-provider texts are routed quite differently from voice calls, if I understand it correctly).

In other countries, the two channels are more closely coupled (but SIM swap and/or number porting attacks are still possible, depending on the provider‘s security protocols).


> due to peculiarities of the NANP phone number scheme

I suspect more like due to peculiarities of the United States of America. Such as a disinclination to regulate anything, trusting that somehow this time the most profitable course for corporations will also work out OK for its citizens even if it didn't on previous occasions.

This report lists a long chain of buck-passing companies that have exploited an obvious defect and then escaped any responsibility for the consequences. Notice how the only work they made the hacker do was legal paperwork to cover their backsides, no actual technical countermeasures. Because nobody at these companies cared if it was used this way, they only wanted to make sure if they got sued they would be able to blame somebody else and get away with it.


Number portability is regulated: https://www.fcc.gov/general/wireless-local-number-portabilit....

The regulation seeks to promote competition and consumer choice. An onerous verification process would undermine that goal. Security is not a consideration.

This is sort of the point with regulation. The regulator makes the rules it thinks are best according to the considerations it thinks are important at the time. If someone later shows up with different considerations, they can go to hell.


Pretty sure a hacker would be perpetrating an actual, punishable-by-trial crime in forging those legal documents. That's generally the first regulation that the US imposes.

A disinclination to regulate anything is a good idea in a society that generally punishes bad behavior after the behavior has been perpetrated. I would have doubts for instance about government regulating the process for sending and receiving SMS - would you want every new software or protocol to have to go through some kind of bureaucratic review before it can be used?


That doesn't work well when the criminals are working from a sunny foreign beach resort.


Exactly, the only thing that the US achieves is creating thieves that have a propensity to go big fast, so they can forever evade the law.


> would you want every new software or protocol to have to go through some kind of bureaucratic review before it can be used?

Absolutely yes if said protocol is to be used by an entire population as a basic means of communication. Either by the government or a non-profit not tied to the industry. Protocols should also not be allowed to be secret if used at scale.

I see no reason to make a distinction between computer protocols and in-person safety protocols. The threat level is different, but it covers just as many (if not more) people.


A key part of regulation is placing the onus of solving problems on those best equipped to solve them.

You don’t need the government to mandate what the protocols should be, you just fine carriers for allowing this sort of bad outcome and let them sort things out.


This requires trusting "those best equipped" to prioritize the rules over money when the fines aren't significant enough to affect the bottom line.


SIM swaps are relatively easy in Australia, requiring only some fairly simple social engineering of staff in a phone store.

Number porting is trickier: it requires the victim's name and account number (or DOB in the case of a prepaid account), and the victim receives an SMS in advance informing them that their number is being ported.


Yeah, getting the account ID can be a pain; I've learned that the number in the UI and on the bill is not the identifier they want. Security by poor implementation.


I couldn't even get my own number ported in Australia (to a new provider on a new SIM). The old provider said the authentication failed. I gave up pretty quickly and just went with a new number.


I thought they require ID for buying SIMs in Australia, surely they also require ID for switching your number to a new SIM?


That requirement is there for new or ported-in services.

But when you SIM swap, it's tied to the same account. So if you can convince the minimum-wage contract employee at a franchisee that you're the account holder, no worries.

Worse, most of those stores are using generic accounts and/or passwords.

Telstra years ago had a policy along the lines that store accounts didn't need to be tied to a specific employee, so long as the store manager/team leader rotated the passwords and kept records. In reality it's something stupidly guessable that rotates only when required, and all the staff know it.

Optus effectively has the same thing - I had an issue getting a SIM established and sat with an employee for about an hour as they re-rolled the account about 10 times. By the end I knew the passwords for all the accounts in the store, plus other identifiers and numbers.


Same in India too. And the reply SMS contains a code that needs to be given to the destination provider, for the MNP process to proceed.


That is so sane. Seriously, sometimes it boggles the mind how everything from banks to online stores will SMS you to confirm your identity, yet a SIM swap is this easy.


I've only seen this extra step implemented by some providers.. It's definitely not the norm.


Nice one. My neighbour chaired the Australian inter-carrier roundtable implementing mobile phone number portability. I will send him a note! Other cool hacks of his: automatic video advertising scheduling system once saved Channel 9(?) from airing a gas oven ad during a Holocaust documentary. Scored a bonus for that one.


The ads are scheduled to be shown automatically? I thought advertisers chose the show and paid for it.


How is that a hack?


In Canada with Telus you can put a "lock" on your account, so there are additional steps, like going into a store with ID, to remove it before you can port a number or SIM swap (I think). I still don't use my phone number on my Google accounts though.


You can do the same in the US. However the bad guys just recruit low level retail employees to do the SIM swaps (which happens in Canada too).


SMS is irredeemably broken, like all telco-designed garbage protocols. The only way you can incentivize companies to stop using it as security theater is to shift liability so that any losses incurred by SMS jacking are automatically the liability of the company using SMS, just as nowadays any credit card fraud is borne by the company that is not using the EMV chip to secure a transaction.


Reminder: SMS 2FA adds only a negligible amount of security, if your company does 2FA via SMS you're doing nothing more than lulling your users into a false sense of security. Don't do it. Support proper 2FA. (And while you're at it, allow your users to decide how much they care about their account. Don't make the decision for them.)


> Reminder: SMS 2FA adds only a negligible amount of security

I would disagree. Obviously there are better approaches, but consider basic password auth on desktop, which is easily exploitable en masse by botnets. If you add 2FA via SMS, an attacker would need to exploit both devices (or attack SS7, transfer the number, or use some other trick) and match the info from both devices. That can be done in a targeted attack, but it's harder in en masse botnet attacks.


Congratulations, you've spotted the negligible amount, which I explicitly said was negligible, as opposed to zero. Just because something has some benefit does not mean that benefit is greater than the costs.


Wow, is there any reason to be so snarky and dismissive here?


That’s great until your users lose/break their phone and have no backups for their 2FA codes.

“Sorry you’re locked out forever, good luck lol”

Is not a response you can give to them.


Printed/saved backup codes are still an option. Can also attach multiple 2FA tokens to one account. That's what Google and many others provide. Their customers seem satisfied.


> Printed/saved backup codes are still an option

Vast majority of users don't bother with such complexities.

SMS is the easiest minimum entry barrier to 2FA. It is better than having just passwords.


> It is better than having just passwords.

That is false. Many incidents have been widely reported where huge names, who certainly could afford even a $50 hardware token to protect their reputation/brand, were 'hacked' because they thought SMS 2FA protected them - and it didn't. Even with services which do also offer TOTP or U2F etc.


It is better. It’s just not perfect.


>Can also attach multiple 2FA tokens to one account

This is new to me. Most websites that I have seen offer only one 2FA token, but it could be scanned on any number of TOTP apps.


I disabled some 2FA because I once replaced my phone without following some script to copy across the 2FA app.

Luckily I was still signed in on a computer


I use Authy app which performs encrypted backups to the cloud and verifies regularly that I can still remember the passphrase. I have never lost a TOTP token this way.


I've never encountered this. All of my 20-something 2FA tokens I could recover with processes ranging from making video calls to identify myself, to getting a 24-hour slot in which everything but the 2FA reset was locked.


I’d be curious to know which services are confirmed to have solid 2FA reset practices (i.e., you can do it if you lose your keys; no one else can do it).

These would probably be smaller businesses that earn their revenue directly from paying customers (and would lose if you give up and cancel your card/block their transactions)—I can’t imagine this ever working for ad-driven whales like Google or Facebook, or large corporations to whom you’re small fish and need them more than they need you.

Also, it’ll be interesting to see how 2FA reset options evolve in near future. A 24-hour slot to reset only 2FA, for example, looks like a valid attack vector. Also, I suspect deepfaking videocalls won’t be out of reach of a dedicated but average attacker for long.


> I can’t imagine this ever working for ad-driven whales like Google

Google was one of those that offers account recovery[1], but has it fully automated. I did not need it, because Google urged strongly to create backup tokens. I had those in my encrypted backup.

Focusing too much on what can go wrong is unproductive. I could also steal your iPhone, or force you to reveal a 2fa token using the Rubber Tube Decryption method.

There is no such thing as 100% security. And certainly not if it needs to be balanced against some real-world-ease such as "recovering after you dropped your iPhone in the toilet".

The 24hour recovery slot was at my cloud VPS service from which I got a bazillion warning mails. "your 2fa will be disabled in 48 hours, did you not initialize this, click here to ...".

The least secure was at my bookkeeper's online portal, where I could call them over the phone, offer some simple verification and have 2fa disabled. That does not remove my trust in them, because 1) it is an account that needs less security than e.g. my AWS account, and 2) they do know me personally and I them. It actually makes me trust them more because I know they are there for me when I need them.

-- [1] https://accounts.google.com/signin/v2/recoveryidentifier?flo...


There are many, many ways to handle backup authentication. SMS is not one of them.


SMS auth is great until your users move, get a new number and are locked out forever.

Pick your poison. Or even better, implement both and let your users pick.


That's part of a well-secured account. You don't need this for everything, just two or three super important things (bank, DNS, etc.).


there can at least be a notification and a delay, so I have 12-48 hours to respond if I get an emergency alert that my service is about to be deactivated


A lot of the time, SMS 2FA significantly degrades security, with services that allow you to "recover" access to your account via SMS.


But then that's not actually 2FA then, is it?


I completely agree.

SMS 2FA is, at best, just adding a little hassle for the hacker. If it's not a targeted attack, there's a chance that the extra effort means they'll move on, but that won't stop any remotely determined hacker.


And isn't that true for most of the people? Still better than nothing right?


I'm not sure it's better, at least not in all cases. If you can reset your password or login without password using SMS, and you had a strong password, it could be worse.


That would be a very incorrect implementation of 2FA. I wouldn't be surprised if some service works that way, but it would definitely be unfortunate.


Some services work in exactly this way; it's like using a magic link to log you into a website in the browser from an app on your phone/computer.


I agree it's definitely better than nothing. But I think most end users think SMS 2FA is the equivalent of hiring an armed security guard at your door, when it's really the equivalent of putting an ADT sign from Amazon in your front lawn.



This doesn't load for me for some reason. This works better: http://web.archive.org/web/20210315224524/https://lucky225.m...


SMS 2FA needs to die. It has absolutely no benefit other than perhaps as protection against credential stuffing.


"2 Factor" isn't the right word choice. SMS isn't being used as a second factor here; it's the only factor.


"sms based one time passcodes" needs to die

and the companies that know better should be fined and sanctioned, particularly the ones that are demanding SMS-based OTP so they can also add your phone number to their social graph


Nonsense. SMS is a great recovery factor, both for people who forget their password and for those who lose access to their other second factors (e.g. an email address or a smartphone app). The thing that makes SMS uniquely good at this is that there is infrastructure around for people to replace their lost SIM cards, and that SMS is available globally (vs. regional identity systems like the bank IDs in Nordic countries).

The problem is purely with how some companies are applying SMS as an auth factor. In cases where SMS is being used as a recovery factor, it should not allow immediate recovery. Instead the user should be notified via other channels (email, phone notifications) about the recovery attempt, be given the opportunity to reject it, and the recovery should only succeed if it is not denied after e.g. 3 days.
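A rough sketch of that kind of delayed recovery flow might look like this (every name here is hypothetical; this is not any real service's API):

```python
# Sketch of SMS as a *delayed* recovery factor. All names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

RECOVERY_DELAY = timedelta(days=3)

@dataclass
class Account:
    email: str
    recovery_deadline: Optional[datetime] = None
    recovery_rejected: bool = False

def notify_other_channels(account: Account, message: str) -> None:
    # Stand-in for email / push notification carrying a "reject this" link.
    print(f"notify {account.email}: {message}")

def request_recovery(account: Account) -> None:
    # Caller has already proven control of the SMS number.
    account.recovery_deadline = datetime.utcnow() + RECOVERY_DELAY
    account.recovery_rejected = False
    notify_other_channels(account, "SMS recovery requested; reject within 3 days if this wasn't you.")

def recovery_status(account: Account) -> str:
    if account.recovery_deadline is None:
        return "no request"
    if account.recovery_rejected:
        return "denied"      # the real owner objected in time
    if datetime.utcnow() < account.recovery_deadline:
        return "pending"     # cooling-off period still running
    return "recovered"       # SMS holder gains access only after the delay
```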


Not being able to access your account for 3 days when you need to recover your password is not going to be a viable business decision for most services. I think you are SEVERELY underestimating how often the average user needs to recover their password.


My partner resets her Google password every time she logs in. It's just part of the normal flow for her. She probably does it with everything, but I'm not listed as a recovery on the other things.

Something better would be great. She's probably an extreme example, but I think we techy people tend to have a warped view of how comfortable "normal people" are with effective password management.


Google has the following. Would you consider it to be "something better"?

https://support.google.com/accounts/answer/6361026

I.e. use smartphone prompts as the first factor (without causing password resets), while the password is just a backup when the phone is not available.


> My partner resets her Google password every time she logs in. It's just part of the normal flow for her.

I also do that everywhere. As a matter of principle, I have decided to not remember any passwords in my life. I call it "login by email".


Tell me about it. On some little-used accounts of mine I need a new password for every login. Then there's one particular account which never lets me log in. I have to make a new password every time...


I had one like this. I’d type the new password, confirm it, get “success!” And then type the exact same thing to log in and it would fail.

Turned out that the text box for entering the new password allowed a different number of characters than the one for logging in.


Wha? Who does that? I don't think that's my problem, though I go crazy every time my password isn't recognized. I go through the process and this message comes up: "you must use a different password". And I can't even just go back to the login menu. It's too late! And I paid money for this account.


I know a popular bank integral to the functioning and liquidity of an entire state whose "eBanking" features are like this, in 2021.


I hate having to use a smartphone for auth in general. Especially when I have an app on my phone that expects me to be able to receive an SMS on the same phone. It’s like I need my phone to recover having lost my phone.


> I hate having to use a smartphone for auth in general.

Same, especially since I don't have a smartphone.

Oftentimes I'll go a week without looking at my phone, and by then it has lost its charge, so if an app requires an OTP to do something I often need to wait a while before it's charged enough to receive a text.

I do have a Google Voice number but I've mistakenly used my real number for a few services that frequently require SMS confirmations.


I use Google Voice when at all possible, but there are a few cases where it doesn't work. The easiest way to piss me off is to make me use a USA number that isn't my Google Voice number! It doesn't help that some services won't even let me log in from a non-USA IP when I'm traveling.


> SMS is a great recovery factor

No. Send me an email, let me upload my ID, anything but SMS. SMS is completely insecure. Not only can it be passively sniffed along the way, not only can malicious actors intercept it without access, not only can pretty much any employee at my telco access it, not only can pretty much any employee at my telco get tricked into intercepting it, but by default (and therefore for the vast majority of users), it'll show up while the phone is locked!


Google is also a culprit in this same way. You activate normal 2FA, but when you click "forgot password", conveniently it asks: "Should we send a code to your phone?"


Google does not offer me this option, I just checked.

If I claim to have forgotten my password, the first idea it has is that I should prove I still have my Security Key

Then it suggests it could send codes to my GMail (which might actually be useful if I have another device signed into that) or to another email address it knows about (it deliberately redacts part of each address in case I am not me)

Then it resorts to suggesting I try passwords I remember using on this account. I don't know what happens if I give it a password I haven't used for a few years, 'pass' means I keep a complete git history of Google passwords but I am reluctant to mess with this

Then it says too bad, it cannot authenticate me.


> Then it resorts to suggesting I try passwords I remember using on this account. I don't know what happens if I give it a password I haven't used for a few years

I can't say what it does currently but it used to say something along the lines of "you haven't used that password in a while. try something else."


It gives you a better case that you are the owner of the account, if you were to get a human to review.


Note that it sends a code to your phone which is logged into your Google account, via a (presumably/allegedly, but at least it's not SMS) secure channel.

(Actually, it doesn't send a code to your phone. It either sends a prompt to your phone, OR you can open a buried menu in some app to GET a - essentially TOTP - code.)


Oh no, if I cycle through "Other Ways" enough, or if I don't have my phone (while having my phone number connected to my Google account), it offers to confirm my phone number, showing the number as *** plus the last 4 digits.

When I confirm the phone number, it sends a 6-digit SMS code prefixed with G-, like G-123456. The input box on the page already has a read-only G- prefix, then a box for the 6-digit code. After I confirm the code from the SMS, it gives the option to reset the password.

Most of the forgot-password ways to reset the password are a tap-on-other-device prompt OR getting a code from the Google app.

Sample Google SMS to reset code with fictional number.

```G-007007 is your Google verification code.```

After I removed the phone number, if I click "Other Ways" enough times, it simply asks for information such as the last time I logged in, the account creation date, some addresses I email frequently, and some other stuff, and says it will take a few days for them to get back to me.


Interesting. I can't get it to give me any options like that personally. (Maybe because I have a security key active?)


Oh yeah, I also agree and believe that's the reason. Although having a key active does not by itself mean the super-secure government-level threat protection; if one activates that protection against state threats, many of the account recovery options become unavailable.

I assume the number of account recovery options diminishes with increasing levels of protection.


I have a security key active but it still offers to send me an SMS code for some reason (worryingly, to a phone number I no longer have... should probably get on to changing that)


One Time Passcode seeds are a globally available ID system.

And I really don't call them a second factor; that conflates the whole issue of where they are stored and how they are synced and used. People should be able to recover access to their one-time passcode seed, and there is little excuse for not doing so.


TOTP is globally available, but does not have an established way of recovering your key if it's lost. ("Little excuse" or not, people will not back up the key or print backup codes.)

While if I lose my SIM card, I'll walk to one of my operator's shops (there's probably one within 1km), show them my ID, and they'll replace the SIM. It's the only digital identifier that I could bootstrap from if I lost access to everything in one go.


I like not being locked out of my applications when my phone goes for an unexpected swim and I have to replace it.

The numerous emails I get when I log in from a new device serve me pretty well, all things considered


This has happened multiple times to me. It's also an issue when working in a building with poor reception, or travelling abroad.


Secure phones are sub-$200.

If you have multiple accounts, services, etc, then backing up your 2FA codes, or registering two devices/phones at the same time should be on your radar.


This doesn't sound like something your average user is going to be doing in most cases - keeping a backup, secondary phone.

We've already successfully gotten people to start using some level of 2FA in the form of SMS-based identity validation along with their password.

That's a pretty impressive step forward, and sufficient for most non-specifically targeted users' usage.


Until they're targeted.

You can fool carrier customer service with no training.


Yes, that is indeed what I said.

edit: everyone has a threat matrix they have to deal with.


NOBODY is targeted to be robbed until they are — what?


> or registering two devices/phones at the same time

A substantial number of services don't support this, which is a serious impediment to using 2FA both safely and securely.


What is an alternative that can work in all countries without any problems?


Does anyone know why services like Google Authenticator were ditched industry wide in favor of SMS codes? It has never made any sense to me.

Feels like the industry needs to push for a dedicated, universal, probably physical, tool for 2FA.


They have it, it’s called FIDO2, and it even works with existing devices such as Touch ID or Windows Hello in common browsers such as Chrome. Even Google doesn’t promote Google Authenticator now, but they keep it around for legacy reasons because it still works, until you lose your phone. That’s where FIDO2 shines: just authenticate more than one device, including purchased hardware tokens if you want something cheaper than a phone, and you’ll always have at least one device with access, somewhere.


My biggest issue with FIDO is that it is tied to a hardware device, so if I ever lose it, it's a huge pain. So you need at least two (so only one can be your laptop with fingerprint or face recognition), and if you even get another one you need to remember every single service that you used 2fa for and enroll it in each of them.


> if you even get another one you need to remember every single service that you used 2fa for and enroll it in each of them

This is perhaps why FIDO2 works best when combined with single-sign-on systems, such as those promoted by large email providers, etc. Fewer accounts to have to manage 2FA devices for, and a greater chance that you've already signed in and authenticated your devices with all of them.

Personally, though, I use a password manager, and have some (but not all) sites tagged as 2FA in the password manager. So if and when it's time to add another key, I can just go down the list. Not as convenient as SSO-based 2FA, but sometimes you really don't want to sign in with Facebook, say. :)


Can FIDO2 be implemented for day-to-day use right now, such as for email access? SMS 2FA and authenticator apps are built in to most applications, which makes them easy to use.

And how do you do estate planning? I'd like to give my family access to all of my private keys for everything when I pass.


It’s built in to Safari, Chrome, Edge and other browsers, apps can easily integrate with system libraries for Windows Hello or Touch ID as they would anyway.

As for estate planning, set up a spare key that you can keep at a relative’s place, or add others’ accounts to your “Family” in Google/Microsoft/Apple/etc. Either they have their own keys and the company is aware of the handover or they have a copy of yours — such as you logging in on their device or keeping a FIDO2 key at their house and they can pretend to be you. A service like 1Password Family could also be of use here.


Yes, on shit tons of major services.

Register multiple/duplicate keys.


> , until you lose your phone.

Just like any other password or data, 2fa strings also need to be backed up, like in a password database (separate from the usual one).


True, but last I checked, Google Authenticator and other similar apps (except maybe Authy or password managers) would refuse to upload or backup keys to iCloud Backups for odd reasons. Presumably they wanted the same sort of identity properties that something like Touch ID has, and thus would be solved by having more than one ID.

Unfortunately most people have only one phone, so that didn’t work until options came along where you could add more than one token/device instead as backup.


Oh no, the string and/or QR code should be backed up when one is setting up the 2FA.

If you have that seed phrase, any device with the correct time can calculate the TOTP code, even a simple local JavaScript app.

Obviously, if that phrase leaked, a hacker could also generate codes. That's why those phrases should be kept extra safe, away from normal passwords.
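For example, here is a minimal sketch of the standard RFC 6238 computation from a backed-up base32 seed, using only the Python standard library (the seed shown is a well-known placeholder, not a real secret):

```python
# Minimal TOTP (RFC 6238: SHA-1, 6 digits, 30-second step) from a base32 seed.
import base64, hashlib, hmac, struct, time

def totp(seed_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(seed_b32.replace(" ", "").upper())
    counter = int(time.time()) // step                      # moving factor
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Placeholder demo seed; prints the same code an authenticator app would show.
print(totp("JBSWY3DPEHPK3PXP"))
```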


HN died on me before I was able to add the link to the little utility I cooked up to re-add those TOTP seed phrases: https://spa.bydav.in/otp.html


> Does anyone know why services like Google Authenticator were ditched industry wide in favor of SMS codes? It has never made any sense to me.

This is not the case in my experience. Many apps that once used Authenticator-based TOTP now use app-based push alerts (Steam Authenticator, Blizzard Authenticator, Google->GMail App, etc.), but I haven't noticed a trend toward actual SMS.

Are there major orgs that switched to SMS 2FA and disabled authenticator apps? If so, I'd be interested in learning why, also.


The shit part is I now need 50 apps on my phone to use stuff. I don't want the steam app, I have no use for it. But features of my steam account are now limited because I don't use the app.


Service providers that are very behind the curve (e.g. banks, brokerages) started providing SMS-only 2FA years after internet companies started with TOTP. That could create the perception of a shift towards SMS.


SMS isn't about protecting your account from hackers, it's about protecting the service from bots. You'll notice if you have a VOIP account that the number can't be used to set up something like a GMail account, it requires an honest to god phone number, something you presumably paid money for. If you try to sign up a hundred accounts using one number you can be assured that it will cut you off very quickly. This is true of all major services.

This is also why they won't let you set up a good 2 factor authentication system (like a Yubikey) they'll force you to first set up a SMS 2 factor. It's very important to remember to delete that SMS second factor after setting up your good second factor or social engineers will use it to steal your account.


I think it's more that companies who did not previously offer 2FA, are offering only SMS-based 2FA. Not that companies who previously offered TOTP 2FA are now only offering SMS-based 2FA.


Ubiquity: almost everyone has a cell phone. They’re nearly required for all but the richest people.

Simplicity: nearly everybody understands how texting works.


Voip.ms, vonage/twilio, et al let you set up an SMS capable number really quickly and cheaply, available globally... And you'd be fully in control
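For instance, with Twilio's Python helper library it's roughly a couple of API calls. A rough sketch, with placeholder credentials and webhook URL (voip.ms and Vonage expose comparable APIs):

```python
# Rough sketch: buying an SMS-capable US number via Twilio's Python library.
# Credentials and webhook URL are placeholders; error handling omitted.
from twilio.rest import Client

client = Client("ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", "your_auth_token")

# Find an available local number that can receive SMS...
candidate = client.available_phone_numbers("US").local.list(sms_enabled=True, limit=1)[0]

# ...then provision it and point inbound SMS at your own webhook.
number = client.incoming_phone_numbers.create(
    phone_number=candidate.phone_number,
    sms_url="https://example.com/inbound-sms",
)
print("provisioned", number.phone_number)
```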


I tried to set up a Twilio number specifically to handle these services that demand SMS for login.

Weirdly it only works for a minority of services, I expect many use Twilio to send their auth texts and Twilio blocks sending these to their own numbers?


The reason most services require a phone number is so you can't just create a new account if you get banned and ideally your account is somewhat tied to a real person. They ban VOIP numbers because it would defeat the whole point.


What's preventing someone from getting multiple regular phones and accounts for them then?


Cost. You can do it, but not hundreds of times like with an email address. Probably not even 10 times before you give up and do something else.


Part of SMS verification is to raise the cost of Sybil attacks. Well known sources of cheap bulk phone numbers are often banned for that reason.


I'm not sure if Twilio blocks sending to their own numbers but you can't receive from or send to short codes, which will limit you a lot when verifying with services like Uber.


I think some platforms that use SMS 2fa explicitly ban this path for some reason. They can detect it and don't allow those numbers to be used.

It's a really backwards and confusing system, I agree.


No this doesn't work at all. As others have said, many companies will not let you set up an account or send SMS to numbers created this way.


Twitter SMS validation doesn’t even work with my regular phone!


Until you get deplatformed. Tech-first MVNOs don't always have the same consumer rights and antitrust requirements as those regulated by the FCC.


Too many services use phone numbers as the keys to the kingdom. It's a convenient and stable identifier, but holy shit it's not designed for security at all.


It's neither convenient nor stable for anyone moving between countries either. When given the choice between a service that uses my phone number as my permanent user identifier and one that uses my email, I'll always go for the latter.

Unfortunately, big parts of the industry seem to be headed the other direction.


I just transferred my phone number to Google Voice when I moved out of the country. When I moved back I simply transferred it back to my carrier.


There seem to be horror stories on reddit about Google Voice numbers being terminated for people out of the country too long.

Is there a stable inexpensive phone number service for folks that are outside the US a lot?


I've been looking into Google Voice alternatives, and https://voip.ms/ looks good.


I've been using voip.ms and it is fairly good. I wish the SMS support was a bit better, but it does reliably deliver messages to email or SIP. (Large messages are not re-joined though, and sending is based off of a code in the subject instead of an email address per number, which could easily be added to an address book.)


I used the Republic Wireless wifi only service for $5/month. Still more expensive than it _needs_ to be, but well worth it.


International relocation is a very good point!

Something I hadn't considered. Thanks!


This is about complete takeover of SMS for a phone number.

The threat model is beyond 2FA, imagine being able to impersonate anyone over text.

Social engineering gone to the next level. This isn't about just taking over accounts, it is about taking over a huge chunk of someone's social existence.


I realise TFA is about the US, but it’s worth noting that in most of the world, SMS is pretty much just used for receiving messages from your bank and other automated stuff these days.


Sure, and instead, people use apps like Signal or WhatsApp, which are tied to phone numbers, on which the attacker can now register to your phone number thanks to his receiving your SMS...


If you tell Signal not to allow anybody else to re-register from your phone number without your PIN it will enforce this until at least seven days passes without you using Signal.

If you've uninstalled Signal or just never use your phone, then yeah, after a week or so this proposed attack "works" and the safety numbers for any ongoing conversations reset (the attacker doesn't know the long-term identity key for your phone, so they'll get a new one, thus generating a different safety number). The other participants will be notified of the reset, although since you presumably never use Signal there may not be any such conversations.


> If you tell Signal not to allow

Big if already

> the safety numbers for any ongoing conversations with anybody reset ... which will be notified to the other participants

"Hi <name>, I have a new device, can you help me ____"


Absolutely. Was just responding to SMS being a direct part of people's "social existence", which it really isn't in most of the world.

As you say, though, it's one step away from things that are in fact directly used for communication.


> It's a convenient and stable identifier

It is not stable in the least for millions of Americans, especially those who live in poverty (I'm not sure about the rest of the world). Phones are lost or stolen, phone numbers changed because of being harassed by debt collectors, ex-partners, current partners, etc. And if it isn't stable, it isn't convenient.


My phone number changes every 2-3 years, whenever I switch to a new carrier for their acquisition offer. Why would I port my old number when I don't want 9 out of 10 of the people who call me to be calling me? And heck, I don't exactly live in poverty, even if my (student and scum-landlord) debt is significantly more than my assets.


It is better in the US, as generally all plans are country-wide, but as a Canadian my number has changed many times.

- First number.
- Moved to a different city for Uni. Switched number so that people didn't have to pay long distance to call.
- Moved back.
- Moved to Europe for a job.
- Moved back.

I would never consider an identifier that is (loosely) tied to your location stable.


The whole two-factor thing really falls down if SMS is part of it and you aren't getting the messages.

I had a miserable time trying to get into Backblaze recently, with even the ability it offered to switch sms providers failing.

The list of valid keys they give you on setup bailed me out eventually, but it took me a while to remember them.


> remember them

Uh. You're supposed to memorise them? I printed them out and stuck them in a safe place.


Maybe “remember that they exist”? I can’t imagine remembering those.


I laughed when I saw this. It took me a while to remember I had the backup codes. I hadn’t memorised them.


I put mine on stickies on my laptop. Or if they're really important on a sticky in my desk drawer.


Yep, I hate this with a passion. I deprecated text messaging a decade ago, too. Anyone who thinks they can reach me by some ancient 140-character mobile-operator-controlled service, it's their fault for not getting with the beat in 2021.


Agreed. Users have it in their power not to use services that require a phone number for SMS verification.


But we often don't. I just got my covid shot and with it a request to sign up for vsafe, which needs a phone number. (Which means I can't sign up, since I have anti-spam protection on and their texts don't get through.)


Tell them you have no phone. It seems to disarm people. You're not telling them that you refuse their request and getting into a power struggle. Then ask them if that means you can't get a vaccination or whatever. This has worked every time so far for me.

It doesn't get easier. It probably would if more of us did it, though.


I got the vaccine, no problem. With the vaccine came a sheet of paper requesting I sign up for vsafe, which is a program unrelated to those giving the vaccine. It is probably useful to sign up, so I tried, but there wasn't a no-phone option, or at least not until it was too late.


The local government here has a covid tracking system that uses SMS verification and many stores won't let you in without using it.


I hear you, this comes up more and more often. I remember reading that Singapore had something like this (for contact tracing, I think), but they'd give you a dedicated device if you didn't have a phone. Ugh.

I like telling companies who want my number: No, my phone is for people I know to call me, not corporations. Other times I tell them that I don't have a phone and ask them if they are refusing me service. Not saying I have a perfect record, sometimes it is convenient to have the mechanic call when the car is done, etc. But the more I see companies who don't need a dossier on me asking for my personal info the harder I want to push back. I seek out and appreciate organizations that don't do this. I joke that I might end up living in a commune one day!

I can see this becoming a bigger problem. I've been following the idea of covid-vaccination-tracking applications, and don't have a good feeling about any of that. I'm expecting that the vaccination campaign will work well enough for that idea to be a moot point, but also expecting that companies and governments will want to do the extra tracking anyways, because their incentives are not aligned with the general population for stuff like this.


It's been a bigger and bigger problem, where installing an app is just expected and almost impossible to avoid in some situations.

I also hate how it's hard to explain to normal people. I don't have a problem with covid restrictions. I'm happy to wear a mask, social distance, etc. I just don't want to be sending the government with a horrible privacy/security history a log of everywhere I have been if I can avoid it. But you will be seen as some covid conspiracy theory nutcase if you object.

It's also awkward to keep telling stores I don't want to give them my address or phone number.


> I just don't want to be sending the government with a horrible privacy/security history a log of everywhere I have been if I can avoid it.

Are you sure this is what your local app does? Many COVID-19 government apps were built reflecting this desire for privacy, I've written about the New Zealand one previously but lots are like this.

When you scan a QR code with that Kiwi app your phone learns you went somewhere and when ("This code is for the Auckland central library, and it's 1430 on Tuesday 16 March") but it doesn't tell the government, they don't care and could only make things worse by losing the information. It just remembers where you were.

Then when the government finds out that an infected person was careful to stay home except, oh yeah, they did pop to that library to get a book to read while they stayed home, for about 15 minutes, around 2-3pm on Tuesday, they send all those apps a message (it also goes in a press release but who seriously reads those?) and the app goes "Auckland central library? 1400 to 1500 on 16 March? That's a bingo" - and you get a message telling you that you should get tested, or to watch out for symptoms or whatever the government advice is in that particular case.

So effectively your phone is just simplifying work you'd otherwise have to do, instead of you laboriously checking the list of locations in your local paper or on a web page any time there's a breach, the phone matches it correctly for you.

If you're infected, you do have the option to have your phone tell the tracing people everywhere it remembers you going recently, but that's up to you whether you feel morally obligated to help them. Contact tracers in countries with low incidence are mostly from STI clinic backgrounds (which of course also need tracing), so "I went to the restaurant even though I had virus symptoms" is at least easier to confess than "I fucked some random stranger I met in a bar last Tuesday even though I'm married"


Nope, I know 100% it sends the record off to the government server, and then when a location has a reported case they call you using the info they have. I know the guy who built the system, and he says that while the data is encrypted in the DB, the government also has the key to access everything.

It's also partly about the precedent it sets. It's now becoming required to carry a phone around with you and hand over more of your data without any opt-out.


citation needed


More and more services are supporting - or worse, requiring - SMS-based or phone-based 2FA. Moreover, people frequently do not "have it in their power" not to use a particular service. For example, I decided to log in to Fidelity the other day, since I still have a 401(k) with them from an old employer who did matching. They require call or SMS 2FA. And you could point to even stronger requirements with various government services in various countries.


Most places offer an alternative. Especially institutions that are not FAANG-types, like government services and heavily regulated ones like banks. I am a U.S. citizen and have never encountered a service that didn't have alternatives to using a smart phone. Are you saying that Fidelity would not have mailed you a statement?

A complaint can be registered with the company, regulators, and/or politicians. Switch to another provider if possible. I know it's not always easy, I'm not perfect in this regard. But if nobody does anything, nothing will change. Are you telling those of us who feel this way to give up?


When did I say they required a smartphone? A landline will work perfectly fine for "voice" 2FA, and just about anything but a landline will work for SMS 2FA.

They probably would mail me a statement, but that means I'm limited to much less convenient (and less secure!) forms of communication with them, like calling them... or receiving a letter.

How can I switch to another company when my employer is the one who decides to whom they will match contributions? Or, to borrow from the people in other countries who have posted elsewhere here, when the account is related to taxes or government benefits? Or maybe all the major banks in their country require it?


It’s worth pointing out that often LOA forms ask for a PIN, usually the same PIN as would be required to check voicemail. A better telecom company might make the PIN something harder to remember but enforcing such things would also make it harder to switch carriers, particularly if it replaced today’s standard forms of ID checks.

It’s better to assume that until phone numbers can be locked and unlocked the way domains can, with a random authorization code only accessible by real offline 2FA (though not all domain providers require it), and with the option of completely encrypted end-to-end texting (RCS?), well, then SMS won’t really be all that secure.


My reading of this article suggests that the PIN requirement for number porting is bypassed in this forwarding scenario, since this method is claimed to be distinct from simjacking. That is, the number hasn't been ported by the FCC's guidelines, although I didn't glean exactly how that's happening by these retail providers.


SMS routing and number porting are different things, as the voice and SMS operate independently. I headed Engineering for a company that allowed you to SMS enable your landline or toll-free number, and our automated flow for non-toll-free landlines required receiving a code via telephone call (to avoid the situation of compromised SMS routing). We didn't support numbers that were not in those two buckets, i.e. mobile numbers (not allowed by carriers) as well as "virtual" numbers like Google voice, Twilio, etc. (possibility for abuse and/or no way to properly validate ownership). OP's issue is purely the fault of Sakari for having terrible process.

The process of changing the routing is pretty simple. It's a matter of being a trusted actor and having the ability to submit changes in routing for SMS to a central provider that maintains and propagates this info.


Thanks. So, as with simjacking, a bad actor, e.g. an employee at a company with poor internal controls, can sell (or inadvertently give) access to anyone's 2FA codes.


Can they do this with a Google Voice phone number? I always hate hearing how I’m basically surviving hacks because of obscurity.


Yes. There's nothing special about a mobile phone number when it comes to SMS delivery. The underlying infrastructure company given in the article, Bandwidth, provides phone number provisioning and bulk service for Google's Voice product. On-net (one number hosted by Bandwidth to another number hosted by Bandwidth) might be slightly more of a hurdle to intercept or redirect but off-net is fairly trivial.

Heck, even with "port lock" enabled on a Google Voice number, that is the barest of security against an attacker who has any kind of access better than "retail store employee." Working for a telco with access to our back-end port system, access several other people had, I could forcibly acquire a number by simply checking a box that said I had verified a written LOA even if the losing carrier responded with code 6P ("port-out protection enabled").

So, yes, you're likely sitting in a security-by-obscurity, or at least security-by-slightly-more-difficult-than-someone-else, situation.


"Yes. There's nothing special about a mobile phone number when it comes to SMS delivery."

This is false.

"Mobile" numbers - numbers that are classified as belonging to an actual mobile carrier - are indeed different than non-mobile numbers.

For instance, you cannot send SMS from a short-code to a non-mobile number. Which means, your twilio number (which is not a mobile number) cannot receive 2FA (or any other SMS) from the 5-digit "short code" numbers that gmail (and most banks, etc.) use for new account verification, etc.

Non mobile numbers are, in many ways, second class citizens in the mobile-operator ecosystem.


Short code delivery doesn't depend on whether a number is assigned to a mobile endpoint, only on whether the owning carrier has an agreement to exchange messages with the short code provider. Google Voice can handle most short codes, as could Bandwidth.com's old "demo" retail service, ring.to. For example, send the word "help" to 468311, the short code message service a lot of public agencies use for alerts, from a Google Voice number and you'll get a response.

Any number can be provisioned at an SMSC, even toll-free numbers these days. But mobile providers (and the associated short code entities) are loath to peer with many VoIP carriers, partially for competitive reasons and partially because many short codes are premium billing numbers.

You're right about non-mobile numbers being second class, but that's largely because companies filter them out because of "fraud," which is also suspicious reasoning. I can get a hundred "mobile" numbers within a few minutes, rather inexpensively.


It would be useful to understand the flow of an SMS from a source to a Google voice number. While you can't port a Google voice number, it seems like if you can intercept an SMS from a source before it gets to Google then this technique will work.

A useful strategy to help against this in any case is to use a different email address for every online service. Hackers generally can't initiate an account password reset if they don't know the account.

Also if you use a different phone number for account security than your public one then it's a lot harder for them to know what SMS to intercept. Security by obscurity sucks but in this world it may be your only practical choice.


> While you can't port a Google voice number

You absolutely can port a Google Voice number. End-user subscriber numbers must be portable per FCC rules. Google, operating services provided by Bandwidth.com (mentioned in the article), does enable port-protection by default but this is easy to bypass by an operator who, like in the article, checks the box that says something like "I have a valid written LOA, complete the port as an exception." This has legitimate uses (some losing providers are very ruthless about not following the rules and letting customers move numbers) but unscrupulous or lazy operators will check the box and move on.


So, when my nontechnical friends ask me what they should be using for 2FA, I'm kind of at a loss what to tell them. It's either a false sense of security (e.g., SMS), or too complicated for them (Yubikey).

There's got to be a better system.


WebAuthn, so, a Yubikey would work for that, but also cheaper products (the keywords for a product search are FIDO Security Key) which are similarly capable.

If they have a nice phone (a modern iPhone or Android phone that can recognise you by fingerprint or face ought to be enough), that can do WebAuthn too; the actual recognition stays local to your device (so you're not giving some mysterious entity your face or fingerprint).

I'm assuming, since they're "nontechnical," that you mean as a user. The user experience for WebAuthn is trivial: one touch. You do this to enroll the Yubikey, and then you do it whenever you need to prove who you are to the same site. It's phishing-proof, the credentials can't be stolen, you can keep one on your keyring or just leave it plugged into a personal PC all the time, and it has excellent privacy properties. The biggest problem is that too few sites support WebAuthn, but Google and Facebook do, so that's a good start for non-technical people.

Which brings me to the other side: if your non-technical friends are wondering what their organisation should mandate, then again, WebAuthn, but this time I admit it's somewhat complicated. Somebody is going to need to at least research what product suits the userbase and check boxes in the software they use, and at worst do a bunch of software development. It's not crazy hard, but it's a bit trickier than yet another stupid password rule. However, unlike requiring passwords to contain at least two state birds and the name of an African country, requiring WebAuthn will actually make you safer.
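
For the curious, the browser side of a WebAuthn enrollment is roughly this (a minimal sketch only; in a real site the challenge and user handle would come from the server, and the returned credential would be posted back for verification):

    // Minimal WebAuthn registration sketch (TypeScript, browser).
    // Assumption: the challenge and user id would normally be issued by the server.
    async function registerSecurityKey(): Promise<void> {
      const publicKey: PublicKeyCredentialCreationOptions = {
        challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
        rp: { name: "example.com" },                           // the relying party (the site)
        user: {
          id: crypto.getRandomValues(new Uint8Array(16)),      // stable per-account handle in practice
          name: "alice@example.com",
          displayName: "Alice",
        },
        pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // ES256
        authenticatorSelection: { userVerification: "preferred" },
        timeout: 60_000,
      };

      // The one-touch step: the browser prompts for the security key (or the
      // phone's fingerprint/face unlock) and returns a new public-key credential.
      const credential = await navigator.credentials.create({ publicKey });
      console.log("Created credential:", credential);
    }

Logging in later has the same shape, just with navigator.credentials.get() and a fresh server-issued challenge.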


What are the problem parts with a Yubikey? I've got a Feitian ePass and it feels like the most natural way of doing auth: here's a key, it goes on my keyring next to my locker key (or my car key, if I had one), and I use it to log in like putting a key in a lock.


Authenticator Apps?


The annoying part is that most of them are very hard to move over to a new phone or to back up.


Do any of them work on desktops? I keep around a spare iPad to run my authentication apps, but I'd rather have it installed on my computer instead.


Authy does, but it has some issues showing the same site names on PC, so it's not perfect.


1Password has built-in TOTP support, though it's a little overkill if you only use it for that purpose.


On Windows there's WinAuth: https://github.com/winauth/winauth

It doesn't seem to be updated anymore, but it works well.


Then use other ones :)

I currently use Aegis and Bitwarden. AndOTP also allows you to export tokens.


Google Authenticator now has an export/import feature that bundles all your accounts into a QR code to scan on your new phone.

It might not be ideal for backup, however.


TOTP is only better than SMS against SIM swapping, a rare threat. They are identical against phishing, an enormously more common problem. For a typical user the delta in security when transitioning from SMS to TOTP is minimal.


... or trivial number porting attacks like the one described in this exact article.

Depends on your threat model, but unlike SIM swapping this may not be out of the reach of even a mildly technical angry ex.


And a mildly technical angry ex is a lot less likely than phishing. These are valuable topics but people go way way way too far and say that SMS is horrible and should be basically banned while TOTP is fabulous and a completely viable alternative, which is just fantasy.


My protection against phishing is my password manager. If the site is fake, it won't find the password for it.


The difficulty there is evaluating which ones are reliable, secure, and easy to use. I'd welcome recommendations.


I personally use andOTP [0] which I'm a fan of. I've been thinking of switching to aegis [1] for nothing more than a UI change.

[0]https://github.com/andOTP/andOTP

[1]https://github.com/beemdevelopment/Aegis


I never had any issues with andOTP. It worked even when some websites specifically asked for a different app.


The integrated TOTP in 1Password is pretty good, it can grab the QR code off the screen and everything.

https://support.1password.com/one-time-passwords/


Just be careful with these solutions. I use the one in Bitwarden for a few things, and while it's great for convenience, there's a significant security trade-off when you load all your TOTP tokens into memory on the same machine you keep the passwords on. It turns your two-factor authentication into a single factor pretty fast against even a decent piece of malware, let alone a dedicated attacker.


Microsoft Authenticator is good, and there’s a reasonable chance they already use it at work.


Google Authenticator seems fine?


Google Authenticator ?


Yubikey is complicated?


For many non-technical users, unfortunately, yes.


How do you protect against this type of attack?


I believe the practical solution for many people is to switch the 2FA to an authenticator on-your-phone code generator, which someone cannot hack easily.

Most important services (banks, etc.) now offer this option.

The only thing is, though, make sure to keep backups of the codes you use to initialize the authenticator app, because for some services there is no recovery if you lose your phone or don't have backups.


Hi, which bank(s) offer this?

> switch the 2FA to an authenticator on-your-phone code generator, which someone cannot hack easily.

I remember looking a few months ago and they only offered SMS 2FA.

Thanks


Sorry, I should've been more specific/accurate. I meant brokerages, like Fidelity, Etrade, Schwab -- where you're likely to have more funds/$ than a regular consumer bank. They do offer it. Even Amazon offers it.

And you are right, I have not seen any of the banks I use convert to authenticator (BofA, Chase, etc).

I can only guess that they think it's too difficult for the average consumer to understand or implement. But the fact that they don't even offer it as an option is unfortunate.

edit: actually, I stand corrected; it seems BofA may offer something like this: https://play.google.com/store/apps/details?id=com.bankofamer...

However, I can't tell/test because I don't use Android


Unfortunately, Fidelity (at least for my account) only offers some non-standard "Symantec VIP" product. Does someone reading this know if there's a way to turn it into standard TOTP?


Yes, Symantec VIP is the TOTP solution they've chosen. Etrade also uses it.

I find it less friendly than the normal QR code, since you can't back it up or clone it (and it's proprietary, although that's not a huge concern for me). Basically, the app is both the server and the code generator(?), because the website you log on to does not issue you a shared secret; the app creates it itself. Every device has its own unique code, so it can't be cloned.

Fidelity enforces that you can't have multiple devices floating around able to log in: they don't let you enroll multiple devices if you opt in to it. (Why exactly, I don't know, because Etrade does.) It's a pain because 1) I want multiple devices to have my codes as backup, and 2) I want one of my family members to be able to log in -- although they say you should make that person an authorized user who can use his/her own login + own VIP code.

It's a pain, and I'm still debating whether or not to activate it. The interesting part is that they clearly have a fallback in-person way to turn this off / help you if you forget or get locked out. You even have to call them to turn this feature on.


I was able to find this. Haven't attempted it yet but it appears that it is in fact standard TOTP, only the VIP app generates the seed and you have to provide the seed to (in this case) Fidelity. https://gist.github.com/jarbro/ca7c9d3eebba1396d53b4a7228575...

And yeah, my biggest problem with it is that I already have a solution for TOTP; I don't really want to also figure out some solution for their proprietary garbage.
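
If the extracted seed really is a standard base32 TOTP secret (which is what the gist above suggests, though I'm treating that as an assumption), then any ordinary TOTP library should generate the same six-digit codes as the VIP app. A minimal sketch with the otplib package, using a made-up secret:

    // Sketch: generate VIP-compatible codes from an extracted TOTP seed.
    // Assumptions: the seed is a standard base32 secret and the token uses the
    // common 30-second / 6-digit / SHA-1 parameters (otplib's defaults).
    import { authenticator } from "otplib";

    const secret = "JBSWY3DPEHPK3PXP"; // placeholder, not a real extracted seed

    const code = authenticator.generate(secret);
    console.log("Current code:", code);
    console.log("Verifies locally?", authenticator.check(code, secret));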


> which someone cannot hack easily

Everybody gets phished. Much easier than sim swaps.


Don't use Phone number based 2Factor or if you must use a number, keep it to an app (eg, Google Voice) and don't forward your Google Voice texts to your phone's number.

Basically, avoid using your carrier provided phone number for anything related to an account.


But Google Voice requires a Google account, and to create a Google account you need to provide a valid phone number. There are also a lot of service providers that don't allow you to create an account without providing a valid phone number.

I wonder how high-profile politicians and celebrities deal with security issues like this? If this is really such an easy attack to pull off, what's stopping someone from shilling cryptocurrencies on celebrity social media accounts (again)?


I deleted the phone number from my Google account and just use 2FA from an app. Now "forgot password" doesn't offer the extreme option of simply sending a code to my phone.


How do you recover your account if you don't have the app?


I meant sending a code via SMS. I deleted the phone number from my Google account, so now I have only the authenticator app and the Google app. The recovery options are an 8-digit recovery phrase and a tap in the Google app on another device; there's no SMS reset code.


You can have a backup of the private key written on a piece of paper.


You could remove the recovery phone number from your Google account and use a couple of hardware tokens (a main and a backup), like a Yubikey, as 2FA.


Google Voice SMS might not be able to help since they are all in the same POTS ecosystem as well.


Google Voice is US-only.


Don't use a phone.

Lucky's company has this product that can monitor for the attack, but it won't prevent it: https://okeymonitor.com/


Banks already have tools that detect SIM swaps and SS7 attacks. It's just hard to make decisions; banks care about false positives too.


It’s insane that providers can do this.

I note, however, that this attack seems to be possible only on VOIP-routable numbers, and in my experience banks, etc., will not allow you to use VOIP-routable numbers for 2FA.

That's definitely not the case for a naive implementation of SMS 2FA, as would likely be done by any dev using Twilio, etc.

Also, don’t forget that NIST deprecated SMS 2FA over 5 years ago. Here’s their reasoning: https://www.nist.gov/blogs/cybersecurity-insights/questionsa...


Meanwhile my bank just added 2FA in the past year and it's... SMS. No option to use TOTP or U2F.


Is there any chance my cellphone number is a VOIP-routable number? Is there a way I can check to find out?


Twilio has a (US-only) API for this: https://www.twilio.com/docs/lookup/tutorials/carrier-and-cal...

I'm not sure what banks use, but I have had UK VOIP numbers flagged before when trying to register them for 2FA, so there are likely API providers for other countries.
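
For what it's worth, the carrier lookup itself is only a few lines with Twilio's Node helper library; a sketch (the account SID, auth token, and number below are placeholders):

    // Sketch: check whether a number is mobile, landline, or VOIP via Twilio Lookup.
    import twilio from "twilio";

    // Placeholder credentials; real ones come from your Twilio console / environment.
    const client = twilio("ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", "your_auth_token");

    async function lineType(number: string): Promise<void> {
      // The carrier add-on returns a type of "mobile", "landline", or "voip".
      const result = await client.lookups.v1
        .phoneNumbers(number)
        .fetch({ type: ["carrier"] });
      console.log(number, "->", result.carrier?.type);
    }

    lineType("+15005550006").catch(console.error);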


Further reading suggests this isn’t just voip numbers! How worrying!


Yes, this is only for VOIP. The author of the article is being dishonest. He mentioned that his T-Mobile phone number got hacked, but I am willing to bet that this is marketing...


Lots of comments here along the lines of "SMS 2FA is bad", but hell, if the phone companies had an appropriate level of liability here (which should be a shit ton), this should be impossible.

And it's not just about 2FA, most of humanity expects that if someone else texts them, those texts will go to their phone and only their phone unless they've given explicit verifiable consent.

I mean, in this case all the hacker did was fill out a form and say pretty please. I hope phone companies that allow this get sued.


This would also be impossible if services stopped demanding your phone number to make an account.

This is a growing trend in consumer services, and it's a privacy nightmare.

Imagine if they demanded your SSN to sign up? A phone number is no different or less sensitive a unique identifier, perhaps even more so these days.

There are widespread reports of delivery businesses selling their phone number databases (with associated credit card suffixes, delivery addresses, order history, etc.) to large advertising companies for data mining.

Providing your direct cell number to an app is basically like providing your home address and a bunch of other sensitive data. Don't do it, or make a burner gmail account to get a disposable Google Voice number for each account that you must have that demands a phone number. Then, that number isn't reused and an attacker that obtains your mobile number can't attack your login method for other apps.

Reusing phone numbers is about as bad as reusing passwords.


> Imagine if they demanded your SSN to sign up? A phone number is no different or less sensitive a unique identifier, perhaps even more so these days.

I have extremely bad news for you. US Social Security Numbers are not in fact unique, and the fact they're "sensitive" is a terrible joke because it's pretty easy to discover the SSN for an individual based on public information, especially older people because SSNs weren't even randomised at issuance until relatively recently.

Any system that depends on keeping public facts secret is horribly broken, yes that also includes "verifying" credit cards based on a bunch of digits that are written right on the card itself.


I work on such a system, and I share your sentiment, but the reality is that every entity along the way (federal, state, county, city, and sub-city level governments) treats SSN as a unique identifier and accepts no substitutes. The only way to get away from this is to pass massive legislation and have the federal government provide better IDs to the public, something most people don't actually want. It will never happen unless a massive number of people get defrauded overnight, like 10-40% of the country, and in a short enough period of time to create a news shitstorm. This can't be changed by your software system being different, and if it is different, it will start at a disadvantage for not being compatible with everything around it.


I'm aware, I'm a hacker (in the evening news definition of the term as well as the TMRC one). I was referring to the fact that most USians would not sign up for a whatever b2c service that demanded their SSN, but wouldn't hesitate to provide their phone number.

We should all stop doing either.


> Imagine if they demanded your SSN to sign up? A phone number is no different or less sensitive a unique identifier, perhaps even more so these days.

The goal is for the service to have a unique identifier, and phone numbers happen to be a really good one. They also help prevent spam, since they outsource verification that there's a real human behind the account to the phone companies.


> since it outsources verification of human entity to the phone companies.

That's not the reason phone numbers are used. They are used, because they are something you have in addition to something you know like an SSN or password. This is two factor authentication.


You’re not wrong. The problem is the lack of an authoritative identity provider in the US.


No, that's not the problem, the problem is that many many organizations demand an authoritative identity when no such thing is necessary or advisable.

https://sneak.berlin/20200118/you-dont-need-to-see-my-id/

The US has plenty of centralized identity systems, including the Real ID one, a backdoor federal ID system that is required to board all commercial flights in that country.


But what is an appropriate level of liability here? Phone companies never signed up to be the guardians of our digital lives, and the tech industry at large has just built a castle on shaky foundations.

And there are obvious trade-offs here: if we make number portability harder, you're somewhat hostage to your phone provider.


No, this is exactly what they signed up for. When I sign a contract with my phone company to give me access to their network I expect that they will not just give it to someone else instead under my name.


Phone companies are the guardians of our accounts with them. They absolutely bear responsibility if poor security or loopholes allow someone to gain any sort of access to our accounts. Security and convenience are often a trade-off, and clearly service providers are not properly judging where that balance should be.


Sure, to a point. But the right costs and trade-offs aren't the same if you're protecting your spotify account vs millions of dollars in cash.


> Phone companies never signed up to be the guardians of our digital lives

The parent comment addressed this point. This is not just about 2FA: SMS users expect their communications to be private, except (debatably) from the courts with a warrant.


When they invented text messaging, heck even the phone system itself, did they provide anything that said there was an expectation of privacy?

I'm not sure, which is why I'm asking.


When cell phones first became big, probably 10-15 years ago at least, there was a website for the area I lived in at the time (southern Illinois) that would list texts and let people vote on the funniest ones. Some really private messages would hit the top (obviously phone numbers weren't displayed). So people used to assume texts were public, because for some carriers they basically were.


This is hilarious.

Do you happen to have any links regarding this? Would love to read more.


If I understand correctly, the initial telephone systems were run by manual operators at a physical switchboard, who could listen in to anything that was said on any line. Many people also had party lines, where someone (in another house or apartment) could pick up their phone and listen to your conversations.

So, no, not much of an expectation of privacy - at least, there shouldn't have been.


If there was no expectation of privacy, the police would not need a warrant to tap a line.


Is the initial set of expectations the immutable legal standard?


Wiretapping (without a warrant) is stupidly illegal.


Saying something is illegal and expecting everyone to follow the law is just pinky-swearing.


Yeah! Why have laws anyways? Every law will be broken sooner or later and it’s not like we have institutions tasked with enforcing laws. /s


I'm under the assumption that wiretapping to create evidence is illegal, but wiretapping to get a warrant probably happens all the time. (AKA Judge and Police officer listen to illegally captured audio - Judge approves official warrant to make future recordings legal)


Wiretapping by a private person should not happen.


"should not" and "does not" are not equal statements.


Isn't this easily solvable with the additional SMS token approval mentioned in the article?

> "[H]orsman added that, effective immediately, Sakari has added a security feature where a number will receive an automated call that requires the user to send a security code back to the company, to confirm they do have consent to transfer that number. As part of another test, Lucky225 did try to reroute texts for the same number with consent using a different service called *Beetexting*; the site already required a similar automated phone call to confirm the user's consent. This was in part "to avoid fraud," the automated verification call said when Motherboard received the call. Beetexting did not respond to a request for comment."

But it seems the entire system is globally infested with security holes. Is this applicable worldwide, or just limited to one country?


Sakari just was dumb, and deserves the bad press. I've built similar products and we launched with the "phone call to verify" feature to specifically prevent this type of abuse.


I agree


Based on the high-level description in the article, it seems to be related to an ENUM lookup or NetNumber. ENUM is basically a kind of DNS lookup for phone numbers, used for SMS routing. It's also used to route SMS messages belonging to a user to an application (in case you want your SMS delivered to an application instead of a handset). The company changes the ENUM entry for the number to a code that belongs to the company and reroutes the messages to its services. So the hack is not really a hack, in the sense that the system works as intended; the safety net is just missing. The company operating the ENUM change is supposed to check the legitimacy of the change.
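
To make that concrete: an ENUM query just reverses the digits of the number and looks up NAPTR records under a DNS suffix. The public tree is e164.arpa (sparsely populated; the databases carriers actually consult for SMS routing are private), so the sketch below is purely illustrative:

    // Sketch of an ENUM-style lookup using Node's DNS resolver.
    // The public e164.arpa tree rarely has entries; carriers use private ENUM trees,
    // so treat this as an illustration of the mechanism, not a working routing query.
    import { promises as dns } from "node:dns";

    function enumDomain(e164: string, suffix = "e164.arpa"): string {
      // e.g. +1-202-555-0123 -> "3.2.1.0.5.5.5.2.0.2.1.e164.arpa"
      const digits = e164.replace(/\D/g, "").split("").reverse().join(".");
      return `${digits}.${suffix}`;
    }

    async function lookupEnum(number: string): Promise<void> {
      const domain = enumDomain(number);
      try {
        // NAPTR records, if published, point at SIP/SMS endpoints for the number.
        const records = await dns.resolveNaptr(domain);
        console.log(domain, records);
      } catch (err) {
        console.log(domain, "has no public ENUM records:", (err as Error).message);
      }
    }

    lookupEnum("+12025550123");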


That’s crazy that there is no verification system in place allowing the user to approve the forwarding.

Years ago I asked my carrier to not port or forward without me being physically present at a store. Maybe I should test them out to see if that’s still the case.

Regardless, I don’t use SMS MFA for anything important and even when I do, I have a 32 character password to go along with it.


> While adding a number, Sakari provides the Letter of Authorization for the user to sign. Sakari's LOA says that the user should not conduct any unlawful, harassing, or inappropriate behaviour with the text messaging service and phone number. But as Lucky225 showed, a user can just sign up with someone else's number and receive their text messages instead.

Um, what?!


So this means that the only protection from attacks like this is the law, and not a technical or operational hurdle like going through an AT&T hotline to get sim swapping going.

This is bad news because following the law isn't a top priority when trying to hack someone.


What I would find really interesting is if someone used this exploit to hack into the accounts of Sakari staff and sabotaged their service, deleting all their infrastructure from their cloud hosting provider etc. I'm sure Sakari would take this security hole more seriously if their own C-suite fell victim to it.


Weird. The whole idea behind the whole company is to send SMSes on behalf of its customers, if I understood the article correctly. So why would they need to muck about with reassigning the phone numbers of SMS recipients in the first place?


So how do I disable the possibility of switching to SMS authentication as an alternative second factor on my Google login?


Remove your phone number from your account.


I’ll sell all my texts for $15.


My strategy is to have a second phone that has Authenticator and is also the phone for any SMS based 2FA. The phone is locked in a file cabinet when not in use and never leaves my desk. An extra phone only costs me $10/month. Well worth the peace of mind.


You have to pay a monthly fee to own a phone?


Since they said it's also a backup for SMS-based 2FA, I assume the monthly fee covers the SIM and not the phone.


The problem is that the user doesn't own his own phone number.


FCC obviously needs to come down on this like a ton of bricks.


He can have all mine for a tenner! How do I contact him?


These hackers have so much time on their hands that they can understand this technology better than its creators and abuse it. Amazing how hacker culture works.


Damn lies. Damn lies. The attack vector only works for VOIP or Toll Free Numbers. The upstream agreements already block Mobile numbers. This is paid marketing for his company.


Not sure why this isn't higher up. This is crucial information showing this is FUD.

There are still grave vulnerabilities in mobile provider SMS (2FA or otherwise) due to how easy it is for a dedicated attacker to SIM swap, but this particular claim is completely misleading.


> Not sure why this isn't higher up. This is crucial information showing this is FUD.

It's already too high up given it's a blatantly baseless accusation. I'm confused why you think it's more credible than the article when it provides zero evidence.


True. Evidence one way or the other is needed.


Both articles provide zero evidence other than the general concept of the attack, and both just claim that mobile numbers can be hacked.


Just adding that landline numbers, not just toll-free numbers, are probably vulnerable to this.


Yes, I forgot to include it.


I also place the blame on Sakari. I headed Engineering for a company that allowed you to bring your own landline number for business, and our automated flow for non-toll-free landlines required receiving a code via telephone call (to avoid the situation of compromised SMS routing) and entering it as part of the signup process.

For toll-free numbers, it was a manual process where we received written LOAs and verified ownership via the SMS/800 database (ironically, SMS here has nothing to do with messaging and is purely coincidental).


Okay, so how did he manage to pull this off, and is this still possible? How would you protect yourself against this attack? (I don't understand how it works.)


The details are in the article, but essentially the attacker used a third-party bulk SMS service that allows its users to use their own number and routes SMS messages to said service provider.

The attacker instead used the cell number of the author of the article, and supplied a fraudulent letter authorizing the re-routing of text messages through the bulk SMS service.

The attacker works for a service, which purports to verify the routing and carrier settings for a given mobile phone number; I expect that their solution periodically checks the results and issues an alert if the results differ from a known valid value.
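
A monitoring service along those lines could be as simple as polling a routing/carrier lookup and alerting on any change. Here's a hypothetical sketch of that shape (lookupCarrier is a stand-in for whatever data source such a product really uses; this is not their actual implementation):

    // Hypothetical sketch: poll carrier/routing data for a number and alert on change.
    type CarrierInfo = { carrierName: string; lineType: string };

    // Placeholder lookup; a real service would query a carrier/routing database here.
    async function lookupCarrier(number: string): Promise<CarrierInfo> {
      return { carrierName: "Example Wireless", lineType: "mobile" }; // stubbed result
    }

    async function monitorNumber(number: string, intervalMs = 15 * 60 * 1000): Promise<void> {
      let last = await lookupCarrier(number);
      setInterval(async () => {
        const current = await lookupCarrier(number);
        if (current.carrierName !== last.carrierName || current.lineType !== last.lineType) {
          // Routing changed out from under the owner: possible port-out or SMS reroute.
          console.warn(`ALERT: routing for ${number} changed`, { from: last, to: current });
        }
        last = current;
      }, intervalMs);
    }

    monitorNumber("+12025550123");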


And there are no checks and balances on their side??? No regulations?


Not really; that is the crux of the problem.

The article goes into detail; it's worth the read to answer your questions.

From my experience, there is very little process and oversight being followed. I had my number ported over to T-Mobile (with my knowledge) by a third party; however, T-Mobile never attempted to verify my consent. The store associate took this person at her word. My then-current phone suddenly stopping working caught me by surprise.

I can imagine if I signed up for a family plan, any store associate would be happy to move any number of phone numbers into my control.



