In Australia it's mandated that you're sent a message before your number is rerouted or migrated to another provider. I'm surprised this isn't enforced in other countries; it costs next to nothing to implement and is just an additional step in the account migration process.
I'd love to see companies allow opt-in additional security measures, like banks or telcos calling me and using a verbal password to confirm things. That level of security seems to only be available to VIPs.
Sakari (and the likes) will complain that government regulation is taking food off the hard workers' table, and that the government has no right to intervene in the free market!!
In parallel they will 'lobby' (or as we call it in Europe, "bribe") the key politicians and ask them to a) water that bill down to the point that it is rendered useless, or b) cancel it altogether, and the stock market will go up!!
Followed by a six-month government contractor bidding process, two years of development hell, and a half-baked solution that either doesn't work or requires fifty extra convoluted steps.
They do, but then the companies pass it on to their customers. There’s a whole list of itemized fees on a cellular bill. Those aren’t collected “for” the government; the company is just itemizing them for you, probably so that they can leave them out of the advertised contract sticker price.
I tried to get T-Mobile to stop giving my location to anyone that hits their APIs with a 'Yes I have permission' flag set.
There's no opt-out for it, and no enforcement of the permission requirement. Their support had me snail mail a letter to some PO box. I never got a response.
And now they're going to start outright selling their customer activity after forcibly un-opt-outing* everyone who had opted out in their privacy settings previously.
*un-opt-outing -- ??? I don't know what to call this. It's not 'opting-in' since nobody has a choice.. 'resetting user selection without notification or consent' seems too mild and wordy.
T-Mobile has such shitty IT, infrastructure, and security practices.
My last experience with them caused me to switch away permanently, after getting SIM-jacked with real money stolen from me. It happened exactly like in this article[0].
Another incident happened where my online account was merged with someone else's in California (I'm in Texas). Our billing information was merged, with the other person paying for the whole account. I couldn't make changes online; only after sitting on hold and explaining what happened was I able to get the whole situation unfucked, but there's no telling what amount of my data still lives in that other account.
Come to think of it, my first experience with T-Mobile was as a Radio Shack employee, circa 2010. When a customer came to the store to pay their T-Mobile bill with cash, if I took too long to enter all the data into their awful online portal the money would sometimes go to a completely different person's account. Many hours were spent on the phone with the local and regional rep resolving multiple instances of this happening.
Tmo is pretty shitty, but I'm grandfathered into 5 lines for $93, so I pretty much can't leave them. It's not that much better in the jail cell next door or across from me anyway.
- Vice paid a bounty hunter $300 to track a phone number [1]
- Police have paid these services to avoid warrant requirements, and corrections facilities use aggregator services to track numbers that inmates have calls with [2][3]
Apparently carriers claim to have stopped after getting fined $200m last year [4].
It was typically done through aggregators, i.e. services that have similar access to multiple carriers and in turn expose a single endpoint to their own customers.
The aggregators pass on responsibility for obtaining consent to their end customers. Again, with no enforcement or ability for a target to opt out.
The only protection is an authentication requirement. But that just confirms you have a valid credential. Which you get either as an aggregator (to tmobile/other carrier directly), or as the client to an aggregator (to the aggregator's API to query multiple carriers).
Though even that authentication requirement has failed in the past, like when LocationSmart had a public demo page exploited. Inspection of the requests the page sent made it trivial to replay them with any phone number, skipping any consent checking. They just had to add "privacyConsent":"True" to the payload [5].
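For illustration, a sketch of that kind of replay. The endpoint and the other field names here are hypothetical stand-ins; only the "privacyConsent" flag is from the report:

```
import requests

# Replaying the demo page's own request with an arbitrary target number.
# The client itself asserts the "consent", which the server trusted.
resp = requests.post(
    "https://locator.example.com/api/locate",  # hypothetical stand-in URL
    json={
        "phoneNumber": "15551234567",   # any target number
        "privacyConsent": "True",       # consent flag supplied by the attacker
    },
)
print(resp.json())
```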
But yeah, it sounds like that is less of a worry now.
Instead, T-Mobile is selling the location data, and basically whatever usage data they collect from your phone with their root-privileged app, to advertising networks.
Although their privacy page has this statement [6]:
> We do not use or share Customer Proprietary Network Information (“CPNI”) or precise location data for advertising unless you give us your express permission.
The 'express permission' here is deceptive. Users are defaulted into permitting this, so it's hardly 'express'.
Further, they recently mass reset user preferences to clear the opt-out setting for users who previously opted out. Without consent.
So basically everyone is 'consenting' unless they very recently opted out. Though I have little faith they won't change this from underneath their users again in the future; no doubt in the fine print of one of those 'annual privacy notices' or some such.
Still, if the wording and definition of 'express consent' is questionable above, they word it more explicitly in the more detailed privacy policy [7]:
> We and others may also use information about your usage, device, location, and demographics to serve you personalized ads, measure performance of those ads, and conduct analytics and reporting.
Their privacy page is deceptive about how anonymized their collection is [6]:
> When we share this information with third parties, it is not tied to your name or information that directly identifies you. Instead, we tie it to your mobile advertising identifier or another unique identifier.
Tying it to a mobile advertising ID, or any kind of unique identifier, is not de-identification. It is trivial to tie this to an email or to a larger profile generated by an advertising network, and combine it with, say, your desktop web browser, or any account you log in with that is associated with your email.
It's despicable. But sorry, I'll stop ranting now.
T-Mobile has such bad practices -- about 6 years ago they gave my phone number out as a temporary number to someone else. I don't know how their infrastructure is set up, but both me and this other guy had the same number for a time. Incoming calls would be routed to the phone that called out last. At one point I was able to talk to the other guy by using my wife's phone to call my own number. T-Mobile claimed that what was happening was impossible, so I filed a complaint with the FCC and switched my phone service. By the time T-Mobile responded to the complaint (by saying nothing was wrong), I had long since switched providers, so I didn't pursue the matter further. Huge annoyance though.
Capitalism doesn't ensure good things for people, just maximized profit for the best marketers. You want good things? The government has to require it. Otherwise it'll only happen if it's under the umbrella of maximized profit.
Meh... every time an article comes out someone says this. It's definitely more complex than that. Look at Amazon as a counterexample: the reason they dominate is the combination of a better product and maximizing efficiencies of scale. Additionally, they rolled "profit" into growth, netting consumers, on the whole, better selection and service.
It is almost always better for the government to create "incentives" than to create "requirements" anyway. Instead of "requiring" a text before transfer, it would be better to expose both companies that facilitate a transfer without the customer's authorization to large liabilities. This lets them create a prevention mechanism that is probably better.
When I ported over to Project Fi a few years ago, it took about 30 minutes. I think there's a "pre-transfer" step that gets everything ready to cut over before you confirm.
Back in the early days of mobile number portability the majority of telcos put in systems to make porting out harder, e.g. getting an unlock code. This gave them a chance to keep the customer when they called up.
Regulators (particularly in Europe) soon put a stop to that to promote competition. While this was good, the majority of regulators failed to put in a consumer protection mechanism to stop identity theft through account stealing.
The article describes a more insidious attack: the mobile account is still active (hiding the existence of the attack from the user), but the message destination has been rerouted, making all the linked accounts that use SMS as their 2FA vulnerable as well.
I think this particular issue is specific to North America, due to peculiarities of the NANP phone number scheme (inter-provider texts are routed quite differently from voice calls, if I understand it correctly).
In other countries, the two channels are more closely coupled (but SIM swap and/or number porting attacks are still possible, depending on the provider's security protocols).
> due to peculiarities of the NANP phone number scheme
I suspect more like due to peculiarities of the United States of America. Such as a disinclination to regulate anything, trusting that somehow this time the most profitable course for corporations will also work out OK for its citizens even if it didn't on previous occasions.
This report lists a long chain of buck-passing companies that have exploited an obvious defect and then escaped any responsibility for the consequences. Notice how the only work they made the hacker do was legal paperwork to cover their backsides, no actual technical countermeasures. Because nobody at these companies cared if it was used this way, they only wanted to make sure if they got sued they would be able to blame somebody else and get away with it.
The regulation seeks to promote competition and consumer choice. An onerous verification process would undermine that goal. Security is not a consideration.
This is sort of the point with regulation. The regulator makes the rules it thinks are best according to the considerations it thinks are important at the time. If someone later shows up with different considerations, they can go to hell.
Pretty sure a hacker forging those legal documents would be committing an actual, prosecutable crime. That's generally the first kind of regulation the US imposes.
A disinclination to regulate anything is a good idea in a society that generally punishes bad behavior after the behavior has been perpetrated. I would have doubts for instance about government regulating the process for sending and receiving SMS - would you want every new software or protocol to have to go through some kind of bureaucratic review before it can be used?
> would you want every new software or protocol to have to go through some kind of bureaucratic review before it can be used?
Absolutely yes if said protocol is to be used by an entire population as a basic means of communication. Either by the government or a non-profit not tied to the industry. Protocols should also not be allowed to be secret if used at scale.
I see no reason to make a distinction between computer protocols and in-person safety protocols. The threat level is different, but it covers just as many (if not more) people.
A key part of regulation is placing the onus of solving problems on those best equipped to solve them.
You don’t need the government to mandate what the protocols should be, you just fine carriers for allowing this sort of bad outcome and let them sort things out.
SIM swaps are relatively easy in Australia, requiring only some fairly simple social engineering of staff in a phone store.
Number porting is trickier, requires a name and account number (or DOB in the case of a prepaid account) of the victim and they receive an SMS informing them their number was ported in advance.
Yeah, getting the account ID can be a pain. I've learned that the number in the UI and on the bill is not the identifier they want. Security by poor implementation.
I couldn't even get my own number ported in Australia (to a new provider on a new SIM). The old provider said the authentication failed. I gave up pretty quickly and just went with a new number.
That requirement is there for new or ported-in services.
But when you SIM swap, it's tied to the same account. So if you can convince the minimum-wage contract employee at a franchisee that you're the account holder, no worries.
Worse, most of those stores are using generic accounts and/or passwords.
Telstra years ago had a policy along the lines that store accounts didn't have to be tied to a specific employee, so long as the store manager/team leader rotated the passwords and kept records. In reality it's something stupidly guessable that rotates only when required, and all the staff know it.
Optus effectively has the same thing - I had an issue getting a SIM established and sat with an employee for about an hour as they re-rolled the account about 10 times. By the end I knew the passwords for all the accounts in the store, plus other identifiers and numbers.
Nice one. My neighbour chaired the Australian inter-carrier roundtable implementing mobile phone number portability. I will send him a note! Other cool hacks of his: automatic video advertising scheduling system once saved Channel 9(?) from airing a gas oven ad during a Holocaust documentary. Scored a bonus for that one.
In Canada, with Telus you can put a "lock" on your account, so there are additional steps, like going into a store with ID to remove it, before you can port a number or SIM swap (I think). Still don't use my phone number on my Google accounts though.
SMS is irredeemably broken, like all telco-designed garbage protocols. The only way you can incentivize companies to stop using it as security theater is to shift liability so any losses incurred by SMS jacking is automatically the liability of the company using SMS, just as nowadays any credit card fraud is borne by the company that is not using the EMV chip to secure a transaction.
Reminder: SMS 2FA adds only a negligible amount of security, if your company does 2FA via SMS you're doing nothing more than lulling your users into a false sense of security. Don't do it. Support proper 2FA. (And while you're at it, allow your users to decide how much they care about their account. Don't make the decision for them.)
> Reminder: SMS 2FA adds only a negligible amount of security
I would disagree. Obviously there are better approaches, but consider basic password auth on desktop, which is easily exploitable en masse by botnets. If you add 2FA via SMS, you would need to exploit both devices (or attack SS7, transfer the number, or some other trick) and match info from both devices. It can be done in a targeted attack, but is harder in en masse botnet attacks.
Congratulations, you've spotted the negligible amount, which I explicitly said was negligible, as opposed to zero. Just because something has some benefit does not mean that benefit is greater than the costs.
Printed/saved backup codes are still an option. Can also attach multiple 2FA tokens to one account. That's what Google and many others provide. Their customers seem satisfied.
That is false. Many incidents have been widely reported where huge names, who certainly could afford even a $50 hardware token to protect their reputation/brand, were 'hacked' because they thought SMS 2FA protected them - and it didn't. Even with services which do also offer TOTP or U2F etc.
I use Authy app which performs encrypted backups to the cloud and verifies regularly that I can still remember the passphrase. I have never lost a TOTP token this way.
I've never encountered this. All of my 20-something 2fa tokens I could recover with processes ranging from making videocalls to identify myself to getting a 24 hour slot in which all but the 2fa reset was locked.
I’d be curious to know which services are confirmed to have solid 2FA reset practices (i.e., you can do it if you lose your keys; no one else can do it).
These would probably be smaller businesses that earn their revenue directly from paying customers (and would lose if you give up and cancel your card/block their transactions)—I can’t imagine this ever working for ad-driven whales like Google or Facebook, or large corporations to whom you’re small fish and need them more than they need you.
Also, it’ll be interesting to see how 2FA reset options evolve in near future. A 24-hour slot to reset only 2FA, for example, looks like a valid attack vector. Also, I suspect deepfaking videocalls won’t be out of reach of a dedicated but average attacker for long.
> I can’t imagine this ever working for ad-driven whales like Google
Google was one of those that offers account recovery[1], but has it fully automated. I did not need it, because Google urged strongly to create backup tokens. I had those in my encrypted backup.
Focusing too much on what can go wrong is unproductive. I could also steal your iPhone, or force you to reveal a 2FA token using the rubber-hose decryption method.
There is no such thing as 100% security. And certainly not if it needs to be balanced against some real-world-ease such as "recovering after you dropped your iPhone in the toilet".
The 24-hour recovery slot was at my cloud VPS service, from which I got a bazillion warning mails: "your 2FA will be disabled in 48 hours, if you did not initiate this, click here to ...".
The least secure was at my bookkeeper's online portal, where I could call them over the phone, offer some simple verification and have 2fa disabled. That does not remove my trust in them, because 1) it is an account that needs less security than e.g. my AWS account, and 2) they do know me personally and I them. It actually makes me trust them more because I know they are there for me when I need them.
There can at least be a notification and a delay, so I have 12-48 hours to respond if I get an emergency alert that my service is about to be deactivated.
SMS 2FA is, at best, just adding a little hassle for the hacker. If it's not a targeted attack, there's a chance that the extra effort means they'll move on, but that won't stop any remotely determined hacker.
I'm not sure it's better, at least not in all cases. If you can reset your password or login without password using SMS, and you had a strong password, it could be worse.
I agree it's def better than nothing. But I think most end users think SMS 2FA is the equivalent of hiring an armed security guard at your door, when it's really the equivalent of putting an ADT sign from Amazon in your front lawn.
And the companies that know better should be fined and sanctioned, particularly the ones that are demanding SMS-based OTP so they can also add your phone number to their social graph.
Nonsense. SMS is a great recovery factor, both for people who forget their password and for those who lose access to their other second factors (e.g. an email address or a smartphone app). The thing that makes SMS uniquely good at this is that there is infrastructure around for people to replace their lost SIM cards, and that SMS is available globally (vs regional identity systems like the bank IDs in Nordic countries).
The problem is purely with how some companies are applying SMS as an auth factor. In cases where SMS is being used as a recovery factor, it should not allow immediate recovery. Instead, the user should be notified via other channels (email, phone notifications) about the recovery attempt, be given the opportunity to reject it, and the recovery should only succeed if it is not denied after e.g. 3 days.
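A minimal sketch of that notify-and-delay flow. All names here are illustrative, and notify() stands in for the email/push channels mentioned above:

```
import time
import uuid

RECOVERY_DELAY = 3 * 24 * 3600  # the ~3-day objection window described above
pending = {}  # recovery_id -> request record (a real service would persist this)

def notify(account: str, message: str) -> None:
    # Stand-in for the out-of-band channels (email, push notifications).
    print(f"[notify {account}] {message}")

def begin_recovery(account: str) -> str:
    rid = uuid.uuid4().hex
    pending[rid] = {"account": account,
                    "ready_at": time.time() + RECOVERY_DELAY,
                    "denied": False}
    notify(account, f"Password recovery requested. Deny with id {rid} if this wasn't you.")
    return rid

def deny_recovery(rid: str) -> None:
    pending[rid]["denied"] = True

def complete_recovery(rid: str) -> bool:
    req = pending[rid]
    # Succeeds only after the waiting period, and only if never denied.
    return not req["denied"] and time.time() >= req["ready_at"]
```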
Not being able to access your account for 3 days when you need to recover your password is not going to be a viable business decision for most services. I think you are SEVERELY underestimating how often the average user needs to recover their password.
My partner resets her Google password every time she logs in. It's just part of the normal flow for her. She probably does it with everything, but I'm not listed as a recovery on the other things.
Something better would be great. She's probably an extreme example, but I think we techy people tend to have a warped view of how comfortable "normal people" are with effective password management.
I.e. use smartphone prompts as the first factor (without causing password resets), while the password is just a backup when the phone is not available.
Tell me about it. On some little-used accounts of mine I need a new password for every login. Then there's one particular account which never lets me log in; I have to make a new password every time...
Wha? Who does that? I don't think that's my problem, though I go crazy every time my password isn't recognized. I go through the process and this message comes up: "you must use a different password". And I can't even just go back to the login menu. It's too late! And I paid money for this account.
I hate having to use a smartphone for auth in general. Especially when I have an app on my phone that expects me to be able to receive an SMS on the same phone. It’s like I need my phone to recover having lost my phone.
> I hate having to use a smartphone for auth in general.
Same, especially since I don't have a smartphone.
Oftentimes I'll go a week without looking at my phone, and by then it has lost its charge, so if an app requires an OTP to do something I often need to wait a while before it's charged enough to receive a text.
I do have a Google Voice number but I've mistakenly used my real number for a few services that frequently require SMS confirmations.
I use Google Voice when at all possible, but there are a few cases where it doesn’t work. The easiest way to piss me off is to make me use a USA number that isn’t my Google Voice number! It doesn’t help that some services won’t even let me log in from a non-USA IP when I’m traveling.
No. Send me an email, let me upload my ID, anything but SMS. SMS is completely insecure. Not only can it be passively sniffed along the way, not only can malicious actors intercept it without access, not only can pretty much any employee at my telco access it, not only can pretty much any employee at my telco get tricked into intercepting it, but by default (and therefore for the vast majority of users), it'll show up while the phone is locked!
Google is also a culprit in this same way. Activate normal 2FA, but when you click "forgot password", conveniently it says "Should we send a code to your phone?"
Google does not offer me this option, I just checked.
If I claim to have forgotten my password, the first idea it has is that I should prove I still have my Security Key
Then it suggests it could send codes to my GMail (which might actually be useful if I have another device signed into that) or to another email address it knows about (it deliberately redacts part of each address in case I am not me)
Then it resorts to suggesting I try passwords I remember using on this account. I don't know what happens if I give it a password I haven't used for a few years, 'pass' means I keep a complete git history of Google passwords but I am reluctant to mess with this
> Then it resorts to suggesting I try passwords I remember using on this account. I don't know what happens if I give it a password I haven't used for a few years
I can't say what it does currently but it used to say something along the lines of "you haven't used that password in a while. try something else."
Note that it sends a code to your phone which is logged into your Google account, via a (presumably/allegedly, but at least it's not SMS) secure channel.
(Actually, it doesn't send a code to your phone. It either sends a prompt to your phone, OR you can open a buried menu in some app to GET a - essentially TOTP - code.)
Oh no, if I cycle enough through Other Ways, or I don't have my phone (while having my phone number connected to my Google Account), it offers to confirm my phone number, showing the number as *** plus the last 4 digits.
When I confirm the phone number, it sends a 6-digit SMS code prefixed with G-, like G-123456. The input box on the page already has a read-only G- prefix, and then a box for the 6-digit code. After I confirm the code from the SMS, it gives the option to reset the password.
Most of the forgot-password ways to reset the password are a tap-on-other-device prompt OR getting a code from the Google app.
Sample Google SMS reset code, with a fictional number:
```G-007007 is your Google verification code.```
After I removed the phone number, if I click Other Ways enough times, it simply asks me to give information about the last login, the account creation date, some address I email frequently, and some other stuff, and says it will take a few days for them to get back to me.
Oh yeah, I also agree and believe that's the reason. Although having a key active does not by itself mean super-secure government-level threat protection, if one activates that protection from state threats, many of the account recovery options become unavailable.
I assume the number of account recovery options diminish with increasing levels of protection.
I have a security key active but it still offers to send me an SMS code for some reason (worryingly, to a phone number I no longer have... should probably get on to changing that)
One Time Passcode seeds are a globally available ID system.
And I really don't call them a second factor; that conflates the whole issue of where they are stored and how they are synced and used. People should be able to recover access to their one-time passcode seed, and there is little excuse for failing to support this.
TOTP is globally available, but does not have an established way of recovering your key if it's lost. ("Little excuse" or not, people will not back up the key or print backup codes.)
While if I lose my SIM card, I'll walk to one of my operator's shops (there's probably one within 1km), show them my ID, and they'll replace the SIM. It's the only digital identifier that I could bootstrap from if I lost access to everything in one go.
If you have multiple accounts, services, etc, then backing up your 2FA codes, or registering two devices/phones at the same time should be on your radar.
They have it, it’s called FIDO2, and it even works with existing devices such as Touch ID or Windows Hello in common browsers such as Chrome. Even Google doesn’t promote Google Authenticator now, but they keep it around for legacy reasons because it still works, until you lose your phone. That’s where FIDO2 shines: just authenticate more than one device, including purchased hardware tokens if you want something cheaper than a phone, and you’ll always have at least one device with access, somewhere.
My biggest issue with FIDO is that it is tied to a hardware device, so if I ever lose it, it's a huge pain. You need at least 2 (so only one can be your laptop with fingerprint or face recognition), and if you ever get another one you need to remember every single service that you used 2FA for and enroll it in each of them.
> if you ever get another one you need to remember every single service that you used 2FA for and enroll it in each of them
This is perhaps why FIDO2 works best when combined with single-sign-on systems, such as those promoted by large email providers, etc. Fewer accounts to have to manage 2FA devices for, and a greater chance that you've already signed in and authenticated your devices with all of them.
Personally, though, I use a password manager, and have some (but not all) sites tagged as 2FA in the password manager. So if and when it's time to add another key, I can just go down the list. Not as convenient as SSO-based 2FA, but sometimes you really don't want to sign in with Facebook, say. :)
Can FIDO2 be implemented for day-to-day use right now, such as email access? SMS 2FA and authenticator apps are built into most applications, which makes them easy to use.
And how do you do estate planning? I'd like to give my family access to all of my private keys for everything when I pass.
It’s built in to Safari, Chrome, Edge and other browsers, apps can easily integrate with system libraries for Windows Hello or Touch ID as they would anyway.
As for estate planning, set up a spare key that you can keep at a relative’s place, or add others’ accounts to your “Family” in Google/Microsoft/Apple/etc. Either they have their own keys and the company is aware of the handover or they have a copy of yours — such as you logging in on their device or keeping a FIDO2 key at their house and they can pretend to be you. A service like 1Password Family could also be of use here.
True, but last I checked, Google Authenticator and other similar apps (except maybe Authy or password managers) would refuse to upload or backup keys to iCloud Backups for odd reasons. Presumably they wanted the same sort of identity properties that something like Touch ID has, and thus would be solved by having more than one ID.
Unfortunately most people have only one phone, so that didn’t work until options came along where you could add more than one token/device instead as backup.
Oh no, the string and/or QR code should be backed up when one is setting up the 2FA.
If you have that seed phrase, any device with the correct time can calculate the TOTP code, even a simple local JavaScript app.
Obviously, if that phrase leaked, a hacker could also generate codes. That's why those phrases should be kept extra safe, away from normal passwords.
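For the curious, computing a TOTP code from a backed-up seed really is that simple. A minimal sketch using only the Python standard library (RFC 6238 with the common defaults; the seed below is a made-up example, not a real account's):

```
import base64, hashlib, hmac, struct, time

def totp(base32_seed: str, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP with the common defaults (SHA-1, 6 digits, 30s steps)."""
    key = base64.b32decode(base32_seed.upper())
    # The moving factor is the current Unix time divided into 30s windows.
    counter = struct.pack(">Q", int(time.time()) // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the last nibble.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Any device with this seed and a correct clock produces the same code.
print(totp("JBSWY3DPEHPK3PXP"))
```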
> Does anyone know why services like Google Authenticator were ditched industry wide in favor of SMS codes? It has never made any sense to me.
This is not the case in my experience. Many apps that once used Authenticator-based TOTP now use app-based push alerts (Steam Authenticator, Blizzard Authenticator, Google->GMail App, etc.), but I haven't noticed a trend toward actual SMS.
Are there major orgs that switched to SMS 2FA and disabled authenticator apps? If so, I'd be interested in learning why, also.
The shit part is I now need 50 apps on my phone to use stuff. I don't want the steam app, I have no use for it. But features of my steam account are now limited because I don't use the app.
Service providers that are very behind the curve (e.g. banks, brokerages) started providing SMS-only 2FA years after internet companies started with TOTP. That could create the perception of a shift towards SMS.
SMS isn't about protecting your account from hackers, it's about protecting the service from bots. You'll notice if you have a VOIP account that the number can't be used to set up something like a GMail account, it requires an honest to god phone number, something you presumably paid money for. If you try to sign up a hundred accounts using one number you can be assured that it will cut you off very quickly. This is true of all major services.
This is also why they won't let you set up a good two-factor authentication system (like a Yubikey) until you've first set up an SMS second factor. It's very important to remember to delete that SMS second factor after setting up your good second factor, or social engineers will use it to steal your account.
I think it's more that companies who did not previously offer 2FA, are offering only SMS-based 2FA. Not that companies who previously offered TOTP 2FA are now only offering SMS-based 2FA.
I tried to set up a Twilio number specifically to handle these services that demand SMS for login.
Weirdly it only works for a minority of services, I expect many use Twilio to send their auth texts and Twilio blocks sending these to their own numbers?
The reason most services require a phone number is so you can't just create a new account if you get banned and ideally your account is somewhat tied to a real person. They ban VOIP numbers because it would defeat the whole point.
I'm not sure if Twilio blocks sending to their own numbers but you can't receive from or send to short codes, which will limit you a lot when verifying with services like Uber.
Too many services use phone numbers as the keys to the kingdom. It's a convenient and stable identifier, but holy shit it's not designed for security at all.
It's neither convenient nor stable for anyone moving between countries either. When given the choice between a service that uses my phone number as my permanent user identifier and one that uses my email, I'll always go for the latter.
Unfortunately, big parts of the industry seem to be headed the other direction.
I've been using voip.ms and it is fairly good. I wish the SMS support was a bit better, but it does reliably deliver messages to email or SIP. (Large messages are not re-joined though, and sending is based off a code in the subject instead of an email address per number, which could easily be added to an address book.)
This is about complete takeover of SMS for a phone number.
The threat model is beyond 2FA, imagine being able to impersonate anyone over text.
Social engineering gone to the next level. This isn't about just taking over accounts, it is about taking over a huge chunk of someone's social existence.
I realise TFA is about the US, but it’s worth noting that in most of the world, SMS is pretty much just used for receiving messages from your bank and other automated stuff these days.
Sure, and instead, people use apps like Signal or WhatsApp, which are tied to phone numbers, on which the attacker can now register to your phone number thanks to his receiving your SMS...
If you tell Signal not to allow anybody else to re-register from your phone number without your PIN it will enforce this until at least seven days passes without you using Signal.
If you've uninstalled Signal or just never use your phone then yeah, after a week or so this proposed attack "works" and the safety numbers for any ongoing conversations with anybody reset (the attacker doesn't know the long term identity key for your phone so they'll get a new one, thus generating a different safety number), which will be notified to the other participants although since you presumably never use Signal there may not be any such conversations.
It is not stable in the least for millions of Americans, especially those who live in poverty (I'm not sure about the rest of the world). Phones are lost or stolen, phone numbers changed because of being harassed by debt collectors, ex-partners, current partners, etc. And if it isn't stable, it isn't convenient.
My phone number changes every 2-3 years, whenever I switch to a new carrier for their acquisition offer. Why would I port my old number when 9 out of 10 people who call it are people I don't want calling me? And heck, I don't exactly live in poverty, even if my (student and scum landlord) debt is significantly more than my assets.
It is better in the US, as generally all plans are country-wide, but as a Canadian my number has changed many times.
- First number.
- Moved to a different city for Uni. Switched number so that people didn't have to pay long-distance to call.
- Moved back.
- Move to Europe for a job.
- Moved back.
I would never consider an identifier that is (loosely) tied to your location stable.
Yep, I hate this with a passion. I deprecated text messaging a decade ago, too. Anyone who thinks they can reach me by some ancient 160-character mobile-operator-controlled service, it's their fault for not getting with the beat in 2021.
But we often don't. I just got my covid shot and with it a request to sign up for vsafe, which needs a phone number. (Which means I can't sign up, since I have anti-spam protection on and their texts don't get through.)
Tell them you have no phone. It seems to disarm people. You're not telling that you refuse their request, and getting into a power struggle. Then ask them if that means you can't get a vaccination or whatever. This has worked every time so far for me.
It doesn't get easier. It probably would if more of us did it, though.
I got the vaccine no problem. With the vaccine came a sheet of paper requesting I sign up for vsafe, which is a program unrelated to those giving the vaccine. It is probably useful to sign up, so I tried, but there wasn't an option for no phone, or at least not until it was too late.
I hear you, this comes up more and more often. I remember reading that Singapore had something like this (for contact tracing, I think), but they'd give you a dedicated device if you didn't have a phone. Ugh.
I like telling companies who want my number: No, my phone is for people I know to call me, not corporations. Other times I tell them that I don't have a phone and ask them if they are refusing me service. Not saying I have a perfect record, sometimes it is convenient to have the mechanic call when the car is done, etc. But the more I see companies who don't need a dossier on me asking for my personal info the harder I want to push back. I seek out and appreciate organizations that don't do this. I joke that I might end up living in a commune one day!
I can see this becoming a bigger problem. I've been following the idea of covid-vaccination-tracking applications, and don't have a good feeling about any of that. I'm expecting that the vaccination campaign will work well enough for that idea to be a moot point, but also expecting that companies and governments will want to do the extra tracking anyways, because their incentives are not aligned with the general population for stuff like this.
It's been a bigger and bigger problem, where installing an app is just expected and almost impossible to avoid in some situations.
I also hate how it's hard to explain to normal people. I don't have a problem with covid restrictions. I'm happy to wear a mask, social distance, etc. I just don't want to be sending the government with a horrible privacy/security history a log of everywhere I have been if I can avoid it. But you will be seen as some covid conspiracy theory nutcase if you object.
It's also awkward to keep telling stores I don't want to give them my address or phone number.
> I just don't want to be sending the government with a horrible privacy/security history a log of everywhere I have been if I can avoid it.
Are you sure this is what your local app does? Many COVID-19 government apps were built reflecting this desire for privacy, I've written about the New Zealand one previously but lots are like this.
When you scan a QR code with that Kiwi app your phone learns you went somewhere and when ("This code is for the Auckland central library, and it's 1430 on Tuesday 16 March") but it doesn't tell the government, they don't care and could only make things worse by losing the information. It just remembers where you were.
Then when the government finds out that an infected person was careful to stay home except, oh yeah, they did pop to that library to get a book to read while they stayed home, for about 15 minutes, around 2-3pm on Tuesday, they send all those apps a message (it also goes in a press release but who seriously reads those?) and the app goes "Auckland central library? 1400 to 1500 on 16 March? That's a bingo" - and you get a message telling you that you should get tested, or to watch out for symptoms or whatever the government advice is in that particular case.
So effectively your phone is just simplifying work you'd otherwise have to do, instead of you laboriously checking the list of locations in your local paper or on a web page any time there's a breach, the phone matches it correctly for you.
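To make the distinction concrete, here is a toy sketch of that purely on-device matching. The data shapes are hypothetical, not the actual app's:

```
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Visit:            # stored only on the phone
    location_id: str
    time: datetime

@dataclass
class ExposureEvent:    # broadcast by the health authority
    location_id: str
    start: datetime
    end: datetime

def check_exposures(visits, events):
    """Purely local: the phone's visit history never leaves the device."""
    return [v for v in visits for e in events
            if v.location_id == e.location_id and e.start <= v.time <= e.end]

# The library example above: a 14:30 visit matches a 14:00-15:00 event.
visits = [Visit("auckland-central-library", datetime(2021, 3, 16, 14, 30))]
events = [ExposureEvent("auckland-central-library",
                        datetime(2021, 3, 16, 14, 0),
                        datetime(2021, 3, 16, 15, 0))]
print(check_exposures(visits, events))  # -> [the library visit]
```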
If you're infected, you do have the option to have your phone tell the tracing people everywhere it remembers you going recently, but that's up to you whether you feel morally obligated to help them. Contact tracers in countries with low incidence are mostly from STI clinic backgrounds (which of course also need tracing), so "I went to the restaurant even though I had virus symptoms" is at least easier to confess than "I fucked some random stranger I met in a bar last Tuesday even though I'm married"
Nope, I know 100% it sends the record off to the government server, and then when a location has a reported case, they call you using the info they have. I know the guy who built the system, and he says that while the data is encrypted in the DB, the government also has the key to access everything.
It's also partly about the precedent it sets. It's now becoming required to carry a phone around with you and hand over more of your data without any opt-out.
More and more services are supporting - or worse, requiring - SMS-based or phone-based 2FA. Moreover, people frequently do not "have it in their power" not to use a particular service. For example, I decided to log in to Fidelity the other day, since I still have a 401(k) with them from an old employer who did matching. They require call or SMS 2FA. And you could draw even stronger requirements to various government services in various countries.
Most places offer an alternative. Especially institutions that are not FAANG-types, like government services and heavily regulated ones like banks. I am a U.S. citizen and have never encountered a service that didn't have alternatives to using a smart phone. Are you saying that Fidelity would not have mailed you a statement?
A complaint can be registered with the company, regulators, and/or politicians. Switch to another provider if possible. I know it's not always easy, I'm not perfect in this regard. But if nobody does anything, nothing will change. Are you telling those of us who feel this way to give up?
When did I say they required a smartphone? A landline will work perfectly fine for "voice" 2FA, and just about anything but a landline will work for SMS 2FA.
They probably would mail me a statement, but that means I'm limited to much less convenient (and less secure!) forms of communication with them, like calling them... or receiving a letter.
How can I switch to another company when my employer is the one who decides to whom they will match contributions? Or, to borrow from the people in other countries who have posted elsewhere here, when the account is related to taxes or government benefits? Or maybe all the major banks in their country require it?
It’s worth pointing out that often LOA forms ask for a PIN, usually the same PIN as would be required to check voicemail. A better telecom company might make the PIN something harder to guess, but enforcing such things would also make it harder to switch carriers, particularly if it replaced today’s standard forms of ID checks.
It’s better to assume that until phone numbers can be locked and unlocked the way domains can, with a random authorization code only accessible by real offline 2FA (though not all domain providers require it), and with the option of completely encrypted end-to-end texting (RCS?), well, then SMS won’t really be all that secure.
My reading of this article suggests that the PIN requirement for number porting is bypassed in this forwarding scenario, since this method is claimed to be distinct from simjacking. That is, the number hasn't been ported by the FCC's guidelines, although I didn't glean exactly how that's happening by these retail providers.
SMS routing and number porting are different things, as the voice and SMS operate independently. I headed Engineering for a company that allowed you to SMS enable your landline or toll-free number, and our automated flow for non-toll-free landlines required receiving a code via telephone call (to avoid the situation of compromised SMS routing). We didn't support numbers that were not in those two buckets, i.e. mobile numbers (not allowed by carriers) as well as "virtual" numbers like Google voice, Twilio, etc. (possibility for abuse and/or no way to properly validate ownership). OP's issue is purely the fault of Sakari for having terrible process.
The process of changing the routing is pretty simple. It's a matter of being a trusted actor and having the ability to submit changes in routing for SMS to a central provider that maintains and propagates this info.
Thanks. So, as with simjacking, a bad actor, e.g. an employee at a company with poor internal controls, can sell (or inadvertently give) access to anyone's 2FA codes.
Yes. There's nothing special about a mobile phone number when it comes to SMS delivery. The underlying infrastructure company given in the article, Bandwidth, provides phone number provisioning and bulk service for Google's Voice product. On-net (one number hosted by Bandwidth to another number hosted by Bandwidth) might be slightly more of a hurdle to intercept or redirect but off-net is fairly trivial.
Heck, even with "port lock" enabled on a Google Voice number, that is the barest of security against an attacker who has any kind of access better than "retail store employee." Working for a telco with access to our back-end port system, access several other people had, I could forcibly acquire a number by simply checking a box that said I had verified a written LOA even if the losing carrier responded with code 6P ("port-out protection enabled").
So, yes, you're likely sitting in a security-by-obscurity, or at least security-by-slightly-more-difficult-than-someone-else, situation.
"Yes. There's nothing special about a mobile phone number when it comes to SMS delivery."
This is false.
"Mobile" numbers - numbers that are classified as belonging to an actual mobile carrier - are indeed different than non-mobile numbers.
For instance, you cannot send SMS from a short-code to a non-mobile number. Which means, your twilio number (which is not a mobile number) cannot receive 2FA (or any other SMS) from the 5-digit "short code" numbers that gmail (and most banks, etc.) use for new account verification, etc.
Non mobile numbers are, in many ways, second class citizens in the mobile-operator ecosystem.
Short code delivery doesn’t depend on whether a number is assigned to a mobile endpoint, only on whether the owning carrier has an agreement to exchange messages with the short code provider. Google Voice can handle most short codes, as could Bandwidth.com’s old “demo” retail service, ring.to. For example, send the word “help” to 468311, the short code message service a lot of public agencies use for alerts, from a Google Voice number and you’ll get a response.
Any number can be provisioned at an SMSC, even toll-free numbers these days. But mobile providers—and the associated short code entities—are loathe to peer with many VoIP carriers. Partially for competitive reasons, partially because many short codes are premium billing numbers.
You’re right about non-mobile numbers being second class, but that’s largely because companies filter them out because “fraud,” which is also suspicious reasoning. I can get a hundred “mobile” numbers within a few minutes, rather inexpensively.
It would be useful to understand the flow of an SMS from a source to a Google voice number. While you can't port a Google voice number, it seems like if you can intercept an SMS from a source before it gets to Google then this technique will work.
A useful strategy to help against this in any case is to use a different email address for every online service. Hackers generally can't initiate an account password reset if they don't know the account.
Also, if you use a different phone number for account security than your public one, it's a lot harder for them to know which SMS to intercept. Security by obscurity sucks, but in this world it may be your only practical choice.
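For the email side, one sketch of generating a distinct, hard-to-guess address per service, assuming your mail provider supports plus-addressing (the secret and all names here are illustrative):

```
import hashlib, hmac

SECRET = b"change-me"  # a personal secret, kept offline

def service_alias(service: str, mailbox: str = "me", domain: str = "example.com") -> str:
    """Deterministic per-service address that an attacker can't guess."""
    tag = hmac.new(SECRET, service.encode(), hashlib.sha256).hexdigest()[:8]
    return f"{mailbox}+{service}.{tag}@{domain}"

print(service_alias("somebank"))  # e.g. me+somebank.9c2f51ab@example.com
```

An attacker who knows only your base address can't initiate a reset, since the tag is derived from a secret they don't have.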
You absolutely can port a Google Voice number. End-user subscriber numbers must be portable per FCC rules. Google, operating services provided by Bandwidth.com (mentioned in the article), does enable port-protection by default but this is easy to bypass by an operator who, like in the article, checks the box that says something like "I have a valid written LOA, complete the port as an exception." This has legitimate uses (some losing providers are very ruthless about not following the rules and letting customers move numbers) but unscrupulous or lazy operators will check the box and move on.
So, when my nontechnical friends ask me what they should be using for 2FA, I'm kind of at a loss what to tell them. It's either a false sense of security (e.g., SMS), or too complicated for them (Yubikey).
WebAuthn, so, a Yubikey would work for that, but also cheaper products (the keywords for a product search are FIDO Security Key) which are similarly capable.
If they have a nice phone (modern iPhone or Android phone that is able to recognise who you are by fingerprint or facial recognition ought to be enough) that can do WebAuthn too, the actual recognition remains local to your device (so you're not giving some mysterious entity your face or fingerprint).
I'm assuming since they're "nontechnical" that you mean as users; the user experience for WebAuthn is trivial: one touch. You do this to enroll the Yubikey, and then you do it whenever you need to prove who you are to the same site. It's entirely phishing-proof, the credentials can't be stolen, you can keep one on your keyring or just leave it plugged into a personal PC all the time, and it has excellent privacy properties. The biggest problem is that too few sites do WebAuthn, but Google and Facebook do, so that's a good start for non-technical people.
Which brings me to the other side, if your non-technical friends are wondering what their organisation should mandate, then again, WebAuthn, but this time I admit it's somewhat complicated. Somebody is going to need to at least research what product suits the userbase, and check boxes in the software they use, and at worst they need to do a bunch of software development. It's not crazy hard, but it's a bit trickier than yet another stupid password rule requirement. However unlike requiring passwords to contain at least two state birds and the name of an African country requiring WebAuthn will actually make you safer.
What are the problem parts with a Yubikey? I've got a Feitian ePass and it feels like the most natural way of doing auth: here's a key, it goes on my keyring next to my locker key, or my car key if I had one, and I use it to log in like putting a key in a lock.
TOTP is only better than SMS against SIM swapping, a rare threat. They are identical against phishing, an enormously more common problem. For a typical user the delta in security when transitioning from SMS to TOTP is minimal.
And a mildly technical angry ex is a lot less likely than phishing. These are valuable topics but people go way way way too far and say that SMS is horrible and should be basically banned while TOTP is fabulous and a completely viable alternative, which is just fantasy.
Just be careful with these solutions, I use the one in Bitwarden for a few things and while great for convenience, there's a significant security tradeoff when you go ahead and load all your TOTP tokens into memory on the same machine you keep the passwords on. Turns your 2 factor authentication into single factor pretty fast against even a decent piece of malware, let alone a dedicated attacker.
I believe the practical solution for many people is to switch the 2FA to an authenticator on-your-phone code generator, which someone cannot hack easily.
Most important services (accounts, banks, etc.) now offer this option.
The only thing is, though, make sure to keep backups of the codes you use to initialize the authenticator app, because for some services there is no recovery if you lose your phone or don't have backups.
Sorry, I should've been more specific/accurate. I meant brokerages, like Fidelity, Etrade, Schwab -- where you're likely to have more funds/$ than a regular consumer bank. They do offer it. Even Amazon offers it.
And you are right, I have not seen any of the banks I use convert to authenticator (BofA, Chase, etc).
I can only guess that they think it's too difficult for the average consumer to understand or implement. But the fact that they don't even offer as an option is unfortunate.
Unfortunately, Fidelity (at least for my account) only offers some non-standard "Symantec VIP" product. Does someone reading this know if there's a way to turn it into standard TOTP?
Yes, Symantec VIP is the TOTP solution they've chosen. Etrade also uses it.
I find it less friendly than the normal QR code, since you can't back it up or clone it (and it's proprietary, although that is not a huge concern for me). Basically the app is both the server and the code generator(?), because the website you log on to does not issue you a shared secret; the app creates it itself. Every device has its own unique code, so it can't be cloned.
Fidelity enforces that you can't have multiple devices floating around able to log in -- they don't let you enroll multiple devices if you opt in to it. (Although why exactly, I don't know, because Etrade does.) It is a pain because 1) I want multiple devices to have my codes as backup, and 2) I want one of my family members to be able to log in, although they say you should make that person an authorized user who can use his/her own login + own VIP code.
It's a pain, and I'm still debating whether or not to activate it. The interesting part is they clearly have a fallback in-person way to turn this off / help you if you forget or get locked out. You even have to call them to turn this feature on.
I was able to find this. Haven't attempted it yet but it appears that it is in fact standard TOTP, only the VIP app generates the seed and you have to provide the seed to (in this case) Fidelity. https://gist.github.com/jarbro/ca7c9d3eebba1396d53b4a7228575...
And yeah, my biggest problem with it is that I already have a solution for TOTP; I don't really want to also figure out some solution for their proprietary garbage.
Don't use phone-number-based two-factor, or if you must use a number, keep it to an app (e.g., Google Voice) and don't forward your Google Voice texts to your phone's carrier number.
Basically, avoid using your carrier provided phone number for anything related to an account.
But Google Voice requires a Google account, and to create a Google account you need to provide a valid phone number. There are also a lot of service providers that don't allow you create an account without providing a valid phone number.
I wonder how high-profile politicians and celebrities deal with security issues like this? If this is really such an easy attack to pull off, what's stopping someone from shilling cryptocurrencies on celebrity social media accounts (again)?
I deleted the phone number from my Google account and just use 2FA from an app. Now "forgot password" does not give the extreme option of just sending a code to my phone.
I mean sending a code via SMS, and I deleted the phone number from my Google account. Now I have only the authenticator app and the Google app. The recovery options are the 8-digit recovery codes and a tap in the Google app on another device. No SMS reset code.
I note, however, that this attack seems to only be possible on VOIP routable numbers, and it’s my experience that banks, etc, will not allow you to use VOIP routable numbers for 2FA.
That’s definitely not the case for a naive implementation of SMS 2FA, as would likely be done by any dev using Twilio, etc.
I'm not sure what banks use, but I have had UK VOIP numbers flagged before when trying to register them for 2FA, so there are likely API providers for other countries.
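Twilio's Lookup API is one such provider. A sketch of how a service might flag VOIP numbers with it (the credentials and number are placeholders):

```
from twilio.rest import Client

# Placeholder account SID and auth token.
client = Client("ACXXXXXXXXXXXXXXXX", "your_auth_token")

# Lookup v1 carrier data classifies a number as mobile, landline, or voip.
info = client.lookups.v1.phone_numbers("+15551234567").fetch(type=["carrier"])
if info.carrier and info.carrier.get("type") == "voip":
    print("Rejecting VOIP number for SMS 2FA enrollment")
```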
Yes, this is only for VOIP. The author of the article is dishonest. He mentioned that his T-Mobile phone number got hacked, but I am willing to bet that this is marketing...
Lots of comments here along the lines of "SMS 2FA is bad", but hell, if the phone companies had an appropriate level of liability here (which should be a shit ton), this should be impossible.
And it's not just about 2FA; most of humanity expects that if someone texts them, those texts will go to their phone and only their phone unless they've given explicit, verifiable consent.
I mean, in this case all the hacker did was fill out a form and say pretty please. I hope phone companies that allow this get sued.
This would also be impossible if services stopped demanding your phone number to make an account.
This is a growing trend in consumer services, and it's a privacy nightmare.
Imagine if they demanded your SSN to sign up? A phone number is no different or less sensitive a unique identifier, perhaps even more so these days.
There are widespread reports of delivery businesses selling their phone number databases (with associated credit card suffixes, delivery addresses, order history, etc.) to large advertising companies for data mining.
Providing your direct cell number to an app is basically like providing your home address and a bunch of other sensitive data. Don't do it, or make a burner gmail account to get a disposable Google Voice number for each account that you must have that demands a phone number. Then, that number isn't reused and an attacker that obtains your mobile number can't attack your login method for other apps.
Reusing phone numbers is about as bad as reusing passwords.
> Imagine if they demanded your SSN to sign up? A phone number is no different or less sensitive a unique identifier, perhaps even moreso these days.
I have extremely bad news for you. US Social Security Numbers are not in fact unique, and the fact that they're "sensitive" is a terrible joke, because it's pretty easy to discover the SSN for an individual based on public information, especially for older people, since SSNs weren't even randomised at issuance until 2011.
Any system that depends on keeping public facts secret is horribly broken, yes that also includes "verifying" credit cards based on a bunch of digits that are written right on the card itself.
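To illustrate the card point: the last digit of a card number is the Luhn checksum, public math anyone can run, so the digits validate themselves rather than proving anything about who is holding them. A minimal sketch (the number below is a well-known test value, not a real card):

    def luhn_valid(number: str) -> bool:
        digits = [int(d) for d in number][::-1]   # work right-to-left
        total = 0
        for i, d in enumerate(digits):
            if i % 2 == 1:                        # double every second digit
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    print(luhn_valid("4111111111111111"))  # True -- standard Visa test number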
I work on such a system. I have the same sentiment as you, but the reality is that every entity along the way, including federal, state, county, city, and sub-city-level governments, treats the SSN as a unique identifier and accepts no substitutes. The one and only way to get away from this is to pass massive legislation and have the federal government provide better IDs to the public, something most people don't actually want. It will never happen unless a massive number of people get defrauded overnight, like 10-40% of the country, and in a short enough period of time to create a news shitstorm. This cannot be changed by your software system being different, and if it is, it will start at a disadvantage for not being compatible with everything around it.
I'm aware, I'm a hacker (in the evening news definition of the term as well as the TMRC one). I was referring to the fact that most USians would not sign up for a whatever b2c service that demanded their SSN, but wouldn't hesitate to provide their phone number.
> Imagine if they demanded your SSN to sign up? A phone number is no different or less sensitive a unique identifier, perhaps even moreso these days.
The goal is for the service to have a unique identifier, and phone numbers happen to be a really good one. They also help prevent spam, since they outsource verifying that a signup is a real human to the phone companies.
> since they outsource verifying that a signup is a real human to the phone companies.
That's not the reason phone numbers are used. They are used because they are something you have, in addition to something you know, like an SSN or password. This is two-factor authentication.
The US has plenty of centralized identity systems, including the Real ID one, a backdoor federal ID system that is required to board all commercial flights in that country.
But what is an appropriate level of liability here? Phone companies never signed up to be the guardians of our digital lives, and the tech industry at large has just built a castle on shaky foundations.
And there are obvious trade-offs here, if we make number portability harder, it means you're somewhat hostage to your phone provider.
No, this is exactly what they signed up for. When I sign a contract with my phone company to give me access to their network I expect that they will not just give it to someone else instead under my name.
Phone companies are the guardians of our accounts with them. They absolutely bear responsibility if poor security or loopholes allow someone to gain any sort of access to our accounts. Security and convenience are often a trade-off, and clearly service providers are not properly judging where that balance should be.
> Phone companies never signed up to be the guardians of our digital lives
The parent comment addressed this point. This is not just about 2FA. SMS users expect their communications are private, except (debatably) from the courts with a warrant.
When cell phones first became big, probably 10-15 years ago at least, there was a website for the area I lived in at the time (southern Illinois) that would list texts, and people could vote on the funniest ones. Some really private messages would hit the top (obviously phone numbers weren't displayed). So people used to assume texts were public, because for some carriers they basically were.
If I understand correctly, the initial telephone systems were run by manual operators at a physical switchboard, who could listen in to anything that was said on any line. Many people also had party lines, where someone (in another house or apartment) could pick up their phone and listen to your conversations.
So, no, not much of an expectation of privacy - at least, there shouldn't have been.
I'm under the assumption that wiretapping to create evidence is illegal, but wiretapping to get a warrant probably happens all the time. (AKA Judge and Police officer listen to illegally captured audio - Judge approves official warrant to make future recordings legal)
Isn't this easily solvable with the additional SMS token approval mentioned in the article?
> "orsman added that, effective immediately, Sakari has added a security feature where a number will receive an automated call that requires the user to send a security code back to the company, to confirm they do have consent to transfer that number. As part of another test, Lucky225 did try to reroute texts for the same number with consent using a different service called *Beetexting*;
the site already required a similar automated phone call to confirm the user's consent. This was in part "to avoid fraud," the automated verification call said when Motherboard received the call. Beetexting did not respond to a request for comment."
But it seems that the entire system is globally infested with security holes. Is this applicable worldwide, or limited to one country?
Sakari just was dumb, and deserves the bad press. I've built similar products and we launched with the "phone call to verify" feature to specifically prevent this type of abuse.
Based on the high-level description given in the article, it seems to be related to an ENUM lookup, or net number. ENUM is basically a kind of DNS lookup for phone numbers, used for SMS routing. It is also used to route SMS messages belonging to a user to an application (in case you want to reroute your SMS to an application): the company changes the ENUM entry for the number to a code that belongs to the company and reroutes the messages to its services.
So the hack is not really a hack, in the sense that everything works as intended; the safety net is missing, though. The company operating the ENUM is supposed to check the legitimacy of the change.
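For the curious, ENUM maps an E.164 number to a DNS name under e164.arpa by reversing its digits, and the NAPTR records at that name carry the routing rules. A sketch with dnspython; the number is a placeholder, and most public numbers won't actually have ENUM records published:

    # pip install dnspython
    import dns.resolver

    def enum_name(e164: str) -> str:
        digits = e164.lstrip("+")
        return ".".join(reversed(digits)) + ".e164.arpa."

    number = "+15105550100"        # placeholder number
    print(enum_name(number))       # 0.0.1.0.5.5.5.0.1.5.1.e164.arpa.

    try:
        for record in dns.resolver.resolve(enum_name(number), "NAPTR"):
            print(record)          # routing rules, e.g. sip: or sms: URIs
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print("no ENUM records published for this number")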
That’s crazy that there is no verification system in place allowing the user to approve the forwarding.
Years ago I asked my carrier to not port or forward without me being physically present at a store. Maybe I should test them out to see if that’s still the case.
Regardless, I don’t use SMS MFA for anything important and even when I do, I have a 32 character password to go along with it.
> While adding a number, Sakari provides the Letter of Authorization for the user to sign. Sakari's LOA says that the user should not conduct any unlawful, harassing, or inappropriate behaviour with the text messaging service and phone number.
But as Lucky225 showed, a user can just sign up with someone else's number and receive their text messages instead.
So this means that the only protection from attacks like this is the law, and not even a technical or operational hurdle like having to go through an AT&T hotline, as with SIM swapping.
This is bad news because following the law isn't a top priority when trying to hack someone.
What I would find really interesting is if someone used this exploit to hack into the accounts of Sakari staff and sabotaged their service, deleting all their infrastructure from their cloud hosting provider etc. I'm sure Sakari would take this security hole more seriously if their own C-suite fell victim to it.
Weird. The whole idea behind the whole company is to send SMSes on behalf of its customers, if I understood the article correctly. So why would they need to muck about with reassigning the phone numbers of SMS recipients in the first place?
My strategy is to have a second phone that has Authenticator and is also the phone for any SMS based 2FA.
The phone is locked in a file cabinet when not in use and never leaves my desk.
An extra phone only costs me $10/month. Well worth the peace of mind.
These hackers have so much time on their hands that they can understand this technology better than its creators and abuse it. Amazing how hacker culture works.
Damn lies. Damn lies. The attack vector only works for VOIP or Toll Free Numbers. The upstream agreements already block Mobile numbers. This is paid marketing for his company.
Not sure why this isn't higher up. This is crucial information showing this is FUD.
There are still grave vulnerabilities in mobile provider SMS (2FA or otherwise) due to how easy it is for a dedicated attacker to SIM swap, but this particular claim is completely misleading.
> Not sure why this isn't higher up. This is crucial information showing this is FUD.
It's already too high up given it's a blatantly baseless accusation. I'm confused why you think it's more credible than the article when it provides zero evidence.
I also place the blame on Sakari. I headed Engineering for a company that allowed you to bring your own landline number for business, and our automated flow for non-toll-free landline required receiving a code via telephone call (to avoid the situation of compromised SMS routing) and entering that as part of the signup process.
For toll-free numbers, it was a manual process where we received written LOAs and verified ownership via the SMS/800 database (ironically, SMS here has nothing to do with messaging and is purely coincidental).
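Roughly, that voice-verification step looks like the sketch below, written here with Twilio Voice and placeholder credentials and numbers rather than whatever stack a given provider actually runs:

    # pip install twilio
    import secrets
    from twilio.rest import Client

    client = Client("ACxxxxxxxx", "your_auth_token")  # placeholder credentials

    def start_ownership_check(number: str) -> str:
        """Call the number being claimed and speak a one-time code."""
        code = f"{secrets.randbelow(10**6):06d}"      # 6-digit one-time code
        spoken = " ".join(code)                       # "1 2 3..." reads clearly
        client.calls.create(
            to=number,
            from_="+15105550100",                     # placeholder caller ID
            twiml=f"<Response><Say>Your code is {spoken}</Say></Response>",
        )
        return code   # compare with what the user enters during signup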
Okay, so how did he manage to pull this off, and is this still possible? How would you protect yourself against this attack? (I don't understand how it works.)
The details are in the article, but essentially the attacker used a 3rd-party bulk SMS service that allows its users to bring their own number and routes SMS messages to said service provider.
The attacker instead used the cell number of the author of the article, and supplied a fraudulent letter authorizing the re-routing of text messages through the bulk SMS service.
The attacker works for a service that purports to verify the routing and carrier settings for a given mobile phone number; I expect that their solution periodically checks the results and issues an alert if they differ from a known valid value.
The article goes into detail; it's worth the read to answer your questions.
From my experience, there is very little process and oversight being followed. I had my number ported over (with my knowledge) to T-Mobile by a 3rd party; however, T-Mobile had not attempted to verify my consent. The store associate took this person at her word. My then-current phone suddenly stopping working caught me by surprise.
I can imagine if I signed up for a family plan, any store associate would be happy to move any number of phone numbers into my control.