I suggested exactly that (for the closely related Deprecation header) to Requests a few years ago, they feel it's the application's responsibility, discussion here: https://github.com/psf/requests/issues/5724
If you zoom out to view the full history, you'll see it's a constant series of huge spikes (e.g. in late 2024 the site more than tripled in size over about a week), each followed by a slow trend downwards, then another huge spike up further, and so on.
I suspect that's pretty common for something that's been in the news quite a bit: you get occasional big jumps in attention & usage, and then only some smaller percentage of users will stick around longer term. When you're getting such big spikes in signups this is unavoidable I think - even with new users coming in, the descent from the spike overwhelms any other trends.
The interesting question is whether that settles down into a slow steady sustainable state eventually. Looks plausible but still unclear imo.
I think Play Integrity is the fundamental issue here, and needs to go. That's the crux of the issue.
Allowing apps to say "we only run on Google's officially certified unmodified Android devices" and tightly restricting which devices are certified is the part that makes changes like this deeply problematic. Without that, non-Google Android versions are on a fair playing field; if you don't like their rules, you can install Graphene or other alternatives with no downside. With Play Integrity & attestation though you're always living with the risk of being cut off from some essential app (like your bank) that suddenly becomes "Google-Android-Only".
If Play Integrity went away, I'd be much more OK with Google adding restrictions like this - opt in if you like, use alternatives if you don't, and let's see what the market actually wants.
Banks seem to actually "want" Play Integrity. At least they act like it. I bet they would like for normal online banking on user-controlled devices to completely go away.
Of course they do, and of course they would. Banks are in a crazy legal position where they are financially liable for user stupidity. If my bank account gets breached, it doesn't matter that I didn't take any reasonable security measures, the bank will still have to refund me. If the bank could say "you didn't follow our recommended security practices to use a PW manager and MFA or passkeys, so it's a FAFO situation for you," then they wouldn't be pushing for this stuff. But they can't do that because the government doesn't allow them to.
There is even government regulator pressure now for financial services to be liable for cases where the user legitimately authorizes a transaction to a party that turns out to be a scammer. Of course the banks want to watch your every move and control your devices. They would be stupid not to given the incentives.
In what country do you live? In America, users are liable for the banks stupidity. If they don’t verify credentials and give away all of my money, I do NOT get it refunded, they are NOT responsible, and I am the victim of “identity theft.”
I live in America. I have got back every single cent I have lost due to fraudulent charges on my account. Furthermore, I was refunded instantly by the bank pending investigation.
The bank you have did the right thing, and I think most banks and credit unions will do this, as not doing so is bad for a lot of reasons.
That said, the legal obligations around how this works are very different. One reason the common advice is to use a credit card for online purchases instead of a debit card or checking account link is that they carry different liability expectations around fraud[0]
[0]: there are of course a multitude of good reasons for this advice generally speaking, but this one is cited a lot
You are incorrect. These aren't goodwill measures; they are required by law. The EFTA, for example, requires banks to make you whole against fraudulent ATM transactions. The CC recommendation is more about you having more time and flexibility to dispute the charge without risking access to cash; most Americans don't even have a few thousand dollars in cash, so a fraudulent ATM withdrawal is a major problem. But if you have a good chunk of cash, a fraudulent ATM transaction will not really be felt by you provided you follow the requirements about notification (you have 2 days after noticing to report it to the bank).
The losses due to fraudulent CC activity are governed by the FCBA.
It’s shocking how people think companies do this kind of stuff out of good will rather than being forced by law.
Are you mixing this up with fraudulent credit card charges? Because that's a whole other story. I can't even imagine you would be able to get any fraudulent debit card charges back from the bank.
I got a call from the bank asking if I'd spent over $8k today on my debit card at a mall and restaurant in a shady part of town... I said no, and they ended up refunding me and issuing me a new card.
They did ask me to make a statement to the police, which I did.
Funnily enough when I talked to the police, they said, "Oh, $7k, is that all? Just today we had someone lose over $140k".
How do you even spend $140k on a credit card? Must have been a platinum card or whatever.
I'm in Australia, not sure how different things are here.
Interesting. In the EU the bank's liability is typically limited. Now that I think of it, though, they are only liable for bigger sums, not petty theft. So if you get scammed out of up to, say, 200 euro, they don't care. Anything more than that, they do.
Most of that "app" security is requiring the use of Symantec's app, which doesn't actually require Symantec: there are plenty of guides online showing how to register any authenticator app instead.
My bank doesn't allow access through a browser. It has to be the app or else you have to travel to their HQ in person (I guess) and close your account.
Can I ask what bank and why on Earth you continue to give them your business?
I guess I'm unusual in that I've been using an "online" only bank for 20 years (back then it wasn't so online... I had a stack of UPS overnight envelopes for check deposits), but I cannot imagine patronizing a bank that won't let me log in and do basically anything from a browser.
Do they still allow you to download your transactions to your phone and get them to your PC that way? Just curious. I'd be screwed; I don't know how to install apps on my phone.
Only because it's there. I don't think they would demand it if it wasn't offered, but once it's there, imagine being in a bank and saying to management, "I recommend we don't enable this security feature that works on 99.99999% of phones".
As someone who used to work for a bank building applications I would say no. This is definitely a feature companies and organizations like banks would request if it wasn't available.
There are a lot of scams targeting vulnerable people and these days attacking the phone is a very "easy" way of doing this.
Now perhaps there is a more forgiving way of implementing it though. So your phone can switch between trusted and "open" mode. But realistically I don't think the demand is big enough for that to actually matter.
Play Integrity does almost nothing to prevent malicious actors. In fact, I'd say overall it's probably more harmful, because it gives actors like banks false confidence.
Even with play integrity, you should not trust the client. Devices can still be compromised, there are still phony bank apps, there are still keyloggers, etc.
With the Web, things like banks are sort of forced to design apps that do not rely on client trust. With something like play integrity, they might not be. That's a big problem.
I've worked on such systems. Love it or hate it, remote attestation slaughters abuse. It is just much harder to scale up fraud schemes to profitable levels if you can't easily automate anything. That's why it exists and why banks use it.
Wouldn't device-bound keys for a set of trusted issuing secure elements (e.g. Yubikeys) work just as well but without locking down the whole goddamn software stack?
RA schemes don't lock down the whole software stack, just the parts that are needed to allow the server to reason about the behavior of the client. You can still install whatever apps you want, and those apps can do a lot of customization e.g. replace the homescreen, as indeed Android allows today.
You need to attest at least the kernel, firmware, graphics/input drivers, window management system etc because otherwise actions you think are being taken by the user might be issued by malware. You have to know that the app's onPayClicked() event handler is running because the human owner genuinely clicked it (or an app they authorized to automate for them like an a11y app). To get that assurance requires the OS to enforce app communication and isolation via secure boundaries.
That's waay too much locking down, and while it gives me some control, it does not give me real control. I cannot remove or modify software in the software stack whose behavior I disagree with (e.g. all of Play Services). I can't replace the OS with a more privacy and security focused OS like GrapheneOS.
Imagine if this was done for desktop computers before we had smartphones. That's just crazy.
Relying on hardware-bound keys is fine, but then the scope of the hardware and software stack that needs to be locked down should be severely limited to dedicated, external hardware tokens. Having to lock down the whole OS and service stack is just bad design, plain and simple, since it prioritizes control over freedom.
1. I don't believe you. This is a measurement problem - you eliminated an avenue to measure abuse, because you are now just assuming abuse doesn't happen because you trust the client.
2. It does not eliminate any meaningful types of fraud. Phishing still works, social engineering still works, stealing TOTP codes still works.
Ultimately I don't need to install a fake app on your phone to steal your money. The vast, vast majority of digital bank fraud is not done this way. The vast majority of fraud happens within real bank apps and real bank websites, in which an unauthorized user has gained account access.
I just steal your password or social engineer your funds or account information.
This also doesn't stop check fraud, wire fraud, or credit card fraud. Again - I don't need a fake bank app to steal your CC. I just send an email to a bad website and you put in your CC - phishing.
1. Well, going into denial about it is your prerogative. But then you shouldn't express bafflement about why this stuff is being used.
Nobody is making mistakes as dumb as "we fixed something we can measure so the problem is solved". Fraud and abuse have ground-truth signals in the form of customers getting upset at you because their account got hacked and something bad happened to them.
2. This stuff is also used to block phishing and it works well for that too. I'd explain how, but you wouldn't believe me.
You mention check fraud so maybe you're banking with some US bank that has terrible security. Anywhere outside the USA, using a minimally competent bank means:
• A password isn't enough to get into someone's bank account. Banks don't even use passwords at all. Users must auth by answering a smartcard challenge, or using a keypair stored in a secure element in a smartphone that's been paired with the account via a mailed setup code (usually either PIN or biometric protected).
• There is no such thing as check fraud.
• There is no such thing as credit card phishing either. All CC transactions are authorized in real time using push messaging to the paired mobile apps. To steal money from a credit card you have to confuse the user into authorizing the transaction on their phone, which is possible if they don't pay attention to the name of the merchant displayed on screen, but it's not phishing or credential theft.
> Nobody is making mistakes as dumb as "we fixed something we can measure so the problem is solved".
There is an entire name for this: dark pattern.
People make this mistake all the time. It's a very common measurement problem, because measuring is actually very hard.
Are we measuring the right thing? Does it mean what we think it means? Companies spend hundreds of billions trying to answer those questions.
2. No, it cannot block phishing, because if I get your password, I can get in.
To your points:
- yes, banks in the US use one-time codes too. Very smart of you, unfortunately not very creative. Trivial to circumvent in most cases. Email is the worst, SMS better, TOTP best.
TOTP doesn't matter if the user just takes their code and inputs it into whatever field.
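That replayability follows directly from how TOTP works (RFC 6238): the code is a pure function of a shared secret and the clock, with nothing binding it to the site it's typed into. A minimal sketch using only the standard library (the secret here is an arbitrary example value):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, period=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time step."""
    key = base64.b32decode(secret_b32)
    step = (int(time.time()) if t is None else t) // period
    digest = hmac.new(key, struct.pack(">Q", step), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The code depends only on the secret and the clock, so a phishing page that
# relays it to the real site within the validity window gets in.
phished = totp("JBSWY3DPEHPK3PXP", t=59)    # code the victim typed
replayed = totp("JBSWY3DPEHPK3PXP", t=59)   # same code, usable by the attacker
assert phished == replayed
```

Nothing in the algorithm identifies who is presenting the code, which is exactly why relay phishing works against it.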
- yes there is such a thing as check fraud, you not knowing what it is doesn't matter.
- if I had to authorize each CC transaction on my phone, I'd put a bullet in my head. That's shit.
Yeah this thread boils down to US vs rest-of-world confusion. Or maybe a US vs Europe confusion.
TOTP, which you say is best, is considered weak sauce outside the US. I don't know any banks that have used it for a very long time. It's not secure enough. Cheques were phased out decades ago. There are entire generations in Europe who have never even seen a cheque, let alone written one. I think the last time I had a chequebook issued it was in 2004.
IIRC the differences arise because in the US consumer legislation makes merchants liable for refunding fraudulent transactions, so banks and consumers have no incentive to improve security and merchants can't do it except via convoluted and hardly working risk analysis. It's just so easy to do chargebacks there that nobody bothers fixing the infrastructure. This pushes everyone into the arms of Amazon and the like because they have the most data for ML.
Outside the US and especially in Europe, merchants aren't liable for fraudulent transactions if they verified the credentials correctly. It's much harder to do chargebacks as a consequence. Even if a merchant delivered subpar stuff or there was some other commercial dispute, chargebacks are very hard (I tried once and the bank just refused). So liability shifts to banks, unless they can show that the transaction was authorized by the account holder and they had correct information. That means banks and merchants are incentivized to improve security, and they do.
This is just blatantly false. Literally every bank in Denmark which is not an e-bank lets you do everything with a browser and the national digital identity, MitID. MitID offers an app, but they also offer alternatives both in the form of TOTP generators and NFC/USB hardware chips.
If by TOTP you mean an app like Google Authenticator, those are expected to be phones, aren't they? And the other things, as we already discussed, are hardware systems they can remotely attest - not browsers on their own.
People seem to be getting really hung up on this point. Accepting a browser means letting you do everything with nothing but whatever program you want that speaks HTTP. No special apps or authenticators or extra tokens. You should be able to write a plain Python script that sends money whenever it wants, on its own.
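To make the "nothing but HTTP" point concrete, here is a sketch of what such a script could look like. Everything named here is hypothetical (`bank.example`, the endpoint path, the JSON fields, the session cookie); the point is only that a plain-HTTP interface is scriptable with the standard library alone:

```python
import json
import urllib.request

# Hypothetical endpoint and payload -- invented for illustration, not taken
# from any real bank's API.
req = urllib.request.Request(
    "https://bank.example/api/transfer",
    data=json.dumps({"to_account": "12345678", "amount_cents": 5000}).encode(),
    headers={
        "Content-Type": "application/json",
        "Cookie": "session=abc123",  # a session token from a prior login
    },
    method="POST",
)
# urllib.request.urlopen(req) would actually submit it: no browser, no app,
# no attestation -- just a process that speaks HTTP.
assert req.get_method() == "POST"
```

If the bank's web interface really is just HTTP plus cookies, this is all "accepting a browser" amounts to from the server's point of view.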
European banks do not allow this in my experience, and nothing being posted to this thread indicates otherwise. Apparently there are some banks especially in the USA who just don't care about security at all because they can push fraud costs onto merchants, so they do accept browsers for everything, or they make some trivial effort and if users undermine it using Google Voice or whatever they don't care - that's fine, I overgeneralized by saying "banks" instead of geographically qualifying it. Mea culpa.
But in your case, you need the assistance of something that's not a browser.
I thought that was what you meant too? If you mean TOTP via a QR code exposing the secret, then of course I agree, no banks allow that. But your comment read as a claim that all TOTP solutions were inherently deemed insecure and wouldn't work, and that smartphone based solutions were the only viable alternative outside the US. The code display is of course vulnerable to man-in-the-middle attacks where you trick users into authorizing transactions via fake web pages, but it is not a threat that is deemed serious enough to prevent our whole country from basing our digital infrastructure on code displays.
I think people get hung up on your point about banks not accepting browsers because you don't formulate your point very clearly, and it reads like you claim that they don't accept browsers at all when what you mean is just a browser and nothing else. Most European banks do in fact allow you to do business using a browser - you just have to prove your identity via other means as well. And there are no good security arguments why those means must be in the form of a smartphone app whose security requirements have the side effect of locking you into a business relationship with one of two American tech giants. As you can see, a whole country of almost six million people authenticates everything from bank transactions to naming their kids and buying houses using a system which allows you to use just a code display.
I think the strategy of remote attestation of the whole OS stack up to and including the window manager is a clunky and inelegant approach from an engineering perspective, and from a freedom perspective I think it is immoral and should be illegal. What I could accept would be an on-phone security module with locked down firmware which can simply take control of the whole screen regardless of what the OS is doing, with a clear indicator of when it is active. This allows you to authorize transactions and inspect their contents, and only needs remote attestation of the security module, not the whole OS.
From digging in a bit, it sounds like originally MitID was meant to be app only and it was only after pressure from a lobbying group that they relented and allowed a TOTP token.
So my guess is that this is not because they think TOTP is secure enough but rather due to the political aspects of it being centrally run by the government.
The security argument is pretty straightforward and I guess you know it already, because as you say, TOTP is vulnerable to phishing (unless you use some of the anti-bot tech I mentioned elsewhere but it's heuristic and not really robust over the long term). Whereas if you do stuff via an app, not only can malware not authorize transactions, but it can't view your financial details either - privacy being a major plank of financial security that can't be reliably offered via desktop browsers at all, but can via phones.
The alternative you propose is basically a secure hypervisor. Such schemes have been implemented in the past, but it's not ideal technically. For fast payment authorization via NFC, this is actually how it works, which is why when you touch a phone to a terminal to pay for something you don't see any details of the transaction on the display itself, just an animation. The OS doesn't get involved in the transaction at all, it's all handled by the embedded credit card smartcard which is hard-wired to the NFC radio. The OS gets notified and can send configuration messages, but that's about it.
For anything more complex the parallel world still needs to be a full OS that boots up, have display drivers, have touchscreen drivers, text rendering, a network stack, a way to update that software, etc. You end up with a second copy of Android and dual booting, which makes memory pressure intolerable and the devices more expensive. But it's hard to justify that when the base phone OS has become secure enough! It's already multi-tasking and isolating worlds from each other. There are no users outside of HN/Slashdot who would find this arrangement preferable. And as your concern is not fully technical, it's not clear why moving the hardware enforcement around a bit from kernel supervisor to hypervisor would make any difference. This isn't something that can be analyzed technically as it all seems to boil down to fear over the loss of ad blocking.
Well, it is still a phone after all, what with UMA and baseband processing. You don't need to spend much time at Black Hat/DEF CON to realize any true attempts at sealing it up are akin to plugging leaks in a sieve with epoxy. It's far too porous.
Meanwhile if attestation does reduce fraud, the ownability (by the user) of the device is now forfeit due to chasing a dragon's tail.
That’s a “seatbelts are no good because people still die in car crashes” argument with a topping of “actually they’re bad because they give you a false sense of security”
Play integrity hugely reduces brute force and compromised device attacks. Yes, it does not eliminate either, but security is a game of statistics because there is rarely a verifiably perfect solution in complex systems.
For most large public apps, the vast majority of signin attempts are malicious. And the vast majority of successful attacks come from non-attested platforms like desktop web. Attestation is a valuable tool here.
How does device attestation reduce brute force? Does the backend not enforce attempt limits per account? If so, that would be considered a critical vulnerability. If not, then attestation doesn't serve that purpose.
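For reference, the per-account limit the comment is pointing at needs no client attestation at all; the server can count failures per account regardless of which device or IP they come from. A minimal sketch (the thresholds are illustrative):

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5        # illustrative policy numbers
WINDOW_SECONDS = 3600

_failures = defaultdict(list)  # account -> timestamps of failed attempts

def allow_login_attempt(account, now=None):
    """Server-side check: has this account exceeded its failure budget?"""
    now = time.time() if now is None else now
    recent = [t for t in _failures[account] if now - t < WINDOW_SECONDS]
    _failures[account] = recent
    return len(recent) < MAX_ATTEMPTS

def record_failure(account, now=None):
    _failures[account].append(time.time() if now is None else now)

for _ in range(5):
    assert allow_login_attempt("alice", now=1000.0)
    record_failure("alice", now=1000.0)
assert not allow_login_attempt("alice", now=1000.0)       # locked out
assert allow_login_attempt("alice", now=1000.0 + 3601)    # window expired
```

This stops repeated guessing against one account; as the replies below discuss, it does not help when the attacker already knows the password.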
As for compromised devices, assuming you mean an evil maid, Android already implements secure boot, forcing a complete data wipe when breaking the chain of trust. I think the number of scary warnings is already more than enough to deter a clueless "average user", and there are easier ways to phish the user.
And those apps use MEETS_DEVICE_INTEGRITY rather than MEETS_STRONG_INTEGRITY so a compromised device can absolutely be used to access critical services. (Usually because strong integrity is unsupported on old devices)
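The distinction being drawn here lives in the `deviceRecognitionVerdict` list inside `deviceIntegrity` in the decoded Play Integrity token. A sketch of the server-side policy difference (the field and label names follow Google's published verdict format; the policy logic itself is illustrative):

```python
def passes_integrity(verdict, require_strong=False):
    """Check the deviceRecognitionVerdict list from a decoded integrity token."""
    labels = set(
        verdict.get("deviceIntegrity", {}).get("deviceRecognitionVerdict", [])
    )
    if require_strong:
        # Hardware-backed attestation; unsupported on many older devices.
        return "MEETS_STRONG_INTEGRITY" in labels
    return "MEETS_DEVICE_INTEGRITY" in labels

# A device passing only the software-level check:
soft = {"deviceIntegrity": {"deviceRecognitionVerdict": [
    "MEETS_DEVICE_INTEGRITY", "MEETS_BASIC_INTEGRITY"]}}
assert passes_integrity(soft)                            # common policy accepts
assert not passes_integrity(soft, require_strong=True)   # strict policy rejects
```

Because requiring MEETS_STRONG_INTEGRITY would lock out old hardware, most apps settle for the weaker check, which is the gap the comment describes.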
This reminds me of providers like Xiaomi making it harder to unlock the bootloader due to phones being sold as new but flashed with a compromised image.
Maybe a good compromise is to change the boot screen to have a label that the phone is running an unofficial ROM, just like it shows one for unlocked bootloaders? If the system can update that dynamically based on unlock state, why can't it do it based on public keys? Might also discourage vendors/ROM devs from using test keys like Fairphone once did.
I developed this stuff at Google (JS puzzles that "attest" web browsers), back in 2010 when nobody was working on it at all and the whole idea was viewed as obviously non-workable. But it did work.
Brute force attacks on passwords generally cannot be stopped by any kind of server-side logic anymore, and that became the case more than 15 years ago. Sophisticated server-side rate limiting is necessary in a modern login system but it's not sufficient. The reason is that there are attackers who come pre-armed with lists of hacked or phished passwords and botnets of >1M nodes. So from the server side an attack looks like this: an IP that doesn't appear anywhere in your logs suddenly submits two or three login attempts, against unique accounts that log in from the same region as that IP is in, and the password is correct maybe 25%-75% of the time. Then the IP goes dormant and you never hear from it again. You can't block such behavior without unworkable numbers of false positives, yet in aggregate the botnet can work through maybe a million accounts per day, every day, without end.
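The pattern just described — each IP makes only a couple of attempts, with credentials that are often already correct — can be simulated to show why a per-IP limiter never fires. All numbers below are toy values, scaled down from the million-node botnets described:

```python
import random

random.seed(1)

PER_IP_LIMIT = 10                        # a generous per-IP rate limit
botnet = [f"ip-{i}" for i in range(1000)]
attempts_per_ip = 2                      # each IP tries two stolen credentials
hit_rate = 0.5                           # phished passwords are often correct

blocked = compromised = 0
per_ip_count = {}
for ip in botnet:
    for _ in range(attempts_per_ip):
        per_ip_count[ip] = per_ip_count.get(ip, 0) + 1
        if per_ip_count[ip] > PER_IP_LIMIT:
            blocked += 1        # per-IP limiter fires
        elif random.random() < hit_rate:
            compromised += 1    # successful login with a stolen password

assert blocked == 0        # no single IP ever looks abusive
assert compromised > 900   # yet roughly half of all attempts succeed
```

Per-account limits don't help either, since each account sees only one or two attempts, which is why the comment argues the useful signal has to come from the client side.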
What does work is investigating the app doing the logging in. Attackers are often CPU and RAM constrained because the botnet is just a set of tiny HTTP proxies running on hacked IoT devices. The actual compute is happening elsewhere. The ideal situation from an attacker's perspective is a site that is only using server side rate limiting. They write a nice async bot that can have tens of thousands of HTTP requests in flight simultaneously on the developer's desktop which just POSTs some strings to the server to get what they want (money, sending emails, whatever).
Step up the level of device attestation and now it gets much, much harder for them. In the limit they cannot beat the remote attestation scheme, and are forced to buy and rack large numbers of genuine devices and program robotic fingers to poke the screens. As you can see, the step-up from "hacking a script in your apartment in Belarus" to "build a warehouse full of robots" is very large. And because they are using devices controlled by their adversaries at that point, there's lots of new signals available to catch them that they might not be able to fix or know about.
The browser sandbox means you can't push it that far on the web, which is why high value targets like banks require the web app to be paired with a mobile app to log in. But you can still do a lot. Google's websites generate millions of random encrypted programs per second that run inside a little virtual machine implemented in Javascript, which force attackers to use a browser and then look for signs of browser automation. I don't know how well it works these days, but they still use it, and back when I introduced it (20% time project) it worked very well because spammers had never seen anything like it. They didn't know how to beat it and mostly just went off to harass competitors instead.
I may be mis-understanding, but it sounds like this kind of widely distributed attack would also be stoppable by checking how often the account is attempting to log in? And if they're only testing two or three passwords _per account_, per day, then Google could further block them by forcing people not to use the top 10,000 popular passwords in any of the popular lists (including, over time, the passwords provided to Google)?
The attackers only try one or two passwords, that they hacked/phished. They aren't guessing popular passwords, usually they know the correct password for an account and would log in successfully on the first try. There are no server side signals that can be used to rate limit them, especially as the whole attack infrastructure is automated and they have unlimited patience.
Forgive me for being reductive, but aren't these leaked accounts a lost cause? The vulnerability in question is attackers being able to log into user accounts with leaked credentials. The only mitigation for this is to lock out users identified in other password breeches and reconfirm identity out-of-band, like through a local bank branch, add a second factor like a hardware token, or use restrictive heuristics like IP geolocation consistency between visits.
If 3 attempts per hour is enough to gain access, then it doesn't seem attestation can save you. I imagine a physical phone farm will still be economically viable in such case.
Yes that's what companies do. I worked on the system there that addressed this. If you can detect a botted login you can lock the account until the real user is able to get new credentials, or block activity in other ways. Not a lost cause at all.
It was very effective when this problem was new. Don't know about the current state of things.
> an IP that doesn't appear anywhere in your logs suddenly submits two or three login attempts
How is the attacker supposed to bruteforce anything with 2-3 login attempts?
Even if 1M nodes submitted 10 login attempts per hour, they would only be able to try about 7 billion passwords per month per account; that's ridiculously low for brute-forcing even moderately secure passwords (let alone that there's definitely something to be done on the backend side if you see one particular account with 1 million login attempts in an hour from different IPs…).
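The throughput estimate above can be checked directly:

```python
# Sanity check of the botnet throughput figure: 1M nodes, 10 attempts/hour each.
nodes = 1_000_000
attempts_per_hour = 10
hours_per_month = 24 * 30

total_attempts = nodes * attempts_per_hour * hours_per_month
assert total_attempts == 7_200_000_000  # ~7 billion attempts per month
```

As the replies note, though, the realistic threat is not guessing through a keyspace at this rate but replaying already-stolen passwords, where one or two attempts per account suffice.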
Brute force here can mean they try millions of accounts and get into maybe a quarter of them on their first try, not that they make millions of tries against a single account.
If you have an attacker that can gain access on 25% of its attempts, it doesn't matter if there is a botnet with millions of IPs; they would still have around a 25% success rate on just 10 IPs. It has nothing to do with brute force; it just means you have plenty of compromised accounts in the wild and you want to prevent bad actors from using them at scale.
The threat model is entirely different from what your brute force phrase implies, and it is also a threat model that isn't relevant to banking, which was the topic of the discussion in the first place. And more importantly, it doesn't affect the security of the user.
That's a very uncommon understanding of brute force, to be honest. Generally I see the term applied to cases where there's next to no prior knowledge, just enumeration.
Well, I'd have picked a different word in this context. I'm just explaining why attestation fixes the problem described by the OP as seen in modern contexts and rate limiting doesn't.
It's not that type of argument, because seatbelts actually work; Play Integrity does not.
Play integrity is just DRM. DRM does not prevent the most common types of attack.
If I have your password, I can steal your money. If I have your CC, I can post unauthorized transactions.
Attestation does not prevent anything. How would attestation prevent malicious login attempts? Have you actually sat down and thought this through? It does not, because that is impossible.
The vast, vast, VAST majority of exploits and fraud DO NOT come from compromised devices. They come from unauthorized access, which DRM solutions prevent only at a naive, surface level.
For example, HBO Max will prevent unauthorized access for DRM purposes in the sense that I cannot watch a movie without logging in. It WILL NOT prevent access if I log in, or anyone else on Earth logs in. Are you seeing the problem?
Cool. So you run a banking website. You get several hundred thousand legit logins a day, and maybe ten million that you block. Maybe a hundred million these days.
Now, you have a bucket of mobile users coming to you with attestation signals saying they’ve come from secure boot, and they are using the right credentials.
And you’ve got another bucket saying they’re Android but with no attestation, and also using the right credentials.
You know from past experience (very expensive experience) that fraud can happen from attested devices, but it’s about 10,000 times more common from rooted devices.
Do you treat the logins the same? Real customers HATE intrusive security like captchas.
Are you understanding the tech better now? The entire problem and solution space are different from what you think they are.
> You know from past experience (very expensive experience) that fraud can happen from attested devices, but it’s about 10,000 times more common from rooted devices.
1. I don't believe this research - measurement is hard. If you just consider using an unattested device as malicious, as is done now with the Play Integrity API, then the numbers are fudged.
2. Even IF the research is true, relative probability is doing the heavy lifting here.
There's still going to be more malicious attempts from attested devices than those unattested. Why? Because almost everyone is running attested devices. Duh.
Grandma isn't going to load an unsigned binary on her phone. Let's just be fucking real for one second here.
No, she's gonna take a phone call and write a check, or get an email and go to a sketchy website and enter her login credentials, and then open the inevitable 2FA email and enter the code she got into the website. Guess what: you don't need a rooted device for that. You just don't.
There are extremely high effort malicious attempts, like trying to remotely rootkit someone's phone, and then low effort ones - like email spam and primitive social engineering.
You guess which ones you actually see in the wild.
Is there a real threat here? Sure. But threat modeling matters. For 99.99% of people, their threat model just does not involve unsigned binaries they manually loaded.
Why are we sacrificing everything to optimize for the 0.01%, when we haven't even gotten CLOSE to optimizing for the other 99.99%?
> That’s a “seatbelts are no good because people still die in car crashes” argument
Except it's not a seatbelt, it's a straitjacket with a seatbelt pattern drawn on it: it restrains the user's freedom in exchange for the illusion of security.
And like a straitjacket, it's imposed without user consent.
The difference from a straitjacket is that there's no doctor involved to determine who really needs it for security against their own weakness, and no due process to put boundaries on its use; it's applied to everyone by default.
Great. Let's just require every single computing device to be verified, signed, and attested by a government agency. Just in case it is ever misused to attack a Google online service that cannot possibly be bothered to actually spend one nanosecond thinking about security.
What could possibly go wrong. It's not only morally questionable no matter what "advantages" it provides Google, but it's also technically ridiculous because _even if every single computing device were attested_, by construction I can still trivially find ways to use them to "brute force" Google logins. The technical "advantage" of attestation immediately drops to 0 once it is actually enforced (this is where the seatbelts analogy falls apart).
Next thing I suggest after forcing remote attestation on all devices is tying these device IDs to government-issued personal ID. Let's see how that goes over. And then for the government to send the killing squad once one of these devices is used to attack Google services. That should also improve security.
Here's the dystopian future we're building, folks. Take it or leave it. After all, it statistically improves security!
Yes, for SOME subset of attackers (car crashes), for SOME subset of targets (passengers), the mitigations don’t solve the problem.
This is not the anti-attestation / anti-seatbelt argument many think it is.
All security is mitigation. There is no perfection.
But it makes no sense to say that because a highly motivated attacker with a lot of money to spend can rig real attested devices to be malicious, there must be no benefit to a billion or so legit client devices being attested.
I think your enthusiasm for melodrama and snark may be clouding your judgment of the actual topic.
> Yes, for SOME subset of attackers (car crashes), for SOME subset of targets (passengers), the mitigations don’t solve the problem.
It won't solve the problem for _anyone_ once it is required, because it is trivial to bypass once the incentive is there. This is what kills it technically, and that's before even getting into the other cons (which really should not be ignored). Seatbelts absolutely do not have this problem.
> All security is mitigation. There is no perfection.
This is an absolutely meaningless tautology. It's a perfectly true statement. It adds absolutely nothing to the discussion.
Say I argue in favor of "putting a human to verify each and every banking transaction with a phone call to the source and the destination". And then you disagree, saying that there will be costs, wasted time for everyone, and that the security improvement will be minimal at best. And then I counter with "All security is mitigation, there is no perfection!".
Can you see what you're doing here? This is another textbook example of the politician's fallacy (something must be done; this is something; therefore we must do this).
It is trying to bypass the discussion on the actual merits of the proposal as well as its cons by saying "well it does something!" . True, it does something. So what? If the con is bad enough, or if the benefit too small, maybe it's best NOT to do it anyway!
> But it makes no sense to say that because a highly motivated attacker with a lot of money to spend can rig real attested devices to be malicious, there must be no benefit to a billion or so legit client devices being attested.
Not long ago we had right here on HN a discussion about the merits of remote attestation for anti-cheating: it turns out the "lot of money" is a custom USB mouse (or an addon to one) that costs cents to make. Sure, it's not zero. You have to go more and more draconian in order to actually make it "a lot of money", but then you'll tell me I'm being melodramatic.
Probably not even that, but it limits liability and that’s the only purpose, just like the manual in your car, nobody will ever read it but it contains a warning for every single thing that could happen.
On the other hand, it's not really up to the bank. It's my money, not theirs.
I really wish I wouldn't need to have my money managed by some corporate drones in suits but it's really hard these days to do without a bank account.
This is why I was really into crypto at the beginning; it envisioned giving us back control over what's ours. But all the KYC crap and the speculators' wishes for more oversight basically made crypto the same nasty deal as the public banking sector.
It is desired enough that plenty of developers license third party libraries that roll their own device attestation, instead of or in addition to Play Integrity.
What's absurd though is that they have never demanded it for browsers. I think there is a much higher risk of someone being tricked into downloading a compromised browser with a backdoor than someone being tricked into downloading a modified version of their particular banking app. It gives the attacker the same level of control though.
Is this not more or less what Manifest V3 is attempting to do? The headline grabber is that it disables ad-blocking, but it's essentially trying to establish the browser as a "trusted" (owned) platform, no?
Banks have never accepted browsers. They don't need to because they can require the web app be paired with a mobile app or SMS code to log in. Before they used mobile apps they issued smartcard readers (at least they did everywhere I lived). The smartcard readers were also used to digitally sign transactions.
In other words, there aren't many banks that let you take sensitive actions with just a browser and that's been true since the start of online banking.
These days they also apply differential risk analysis based on the device used to submit a transaction and do things to push people towards mobile. For instance in Switzerland there's now a whole standard for encoding invoices in QR codes. To pay those you must use the mobile apps.
Edit: people are getting hung up on the "never accepted browsers" part. It means they only use the browser for unimportant interactions. For important stuff like login or tx auth, they expect the use of separate hardware that's more controlled like a SIM card/mobile radio, smartcard or smartphone app. Yes some banks are more lax than others but in large parts of the world this was always true since the start of online banking.
That's ... false. Every bank I have used in Denmark allows me to log in and do all operations without an app. They require authentication and authorization using the national digital identity (MitID), which comes as an app but also as a TOTP token and a FIDO (or similar) chip. No apps needed.
I guess the smartcard reader is equivalent. But my point is that locking down the OS of the phone is sufficient to establish client trust but not necessary. You should always be allowed to run the app without strong Play Integrity verification but then just be required to scan your hardware token with NFC in every authentication and authorization flow.
That's mostly prevalent in third-world countries like Brazil. I work for a fintech-turned-bank here and the biggest problem we have to deal with is fraudulent actions made by scammers who got access to users' accounts via social engineering. Outsiders don't know how prevalent scamming is in Brazil and how much is spent/lost trying to fight them and how that shapes the security vs convenience landscape. For example:
- I can't transfer a single cent if I haven't had my face and documents scanned after installing the bank's app.
- I can't have the same bank account logged in two of my devices at the same time, all banks require you to use an account on a "verified" device (previous point).
- If I want to use a desktop to access my bank account, I have to either install a desktop client provided by the bank or be limited to just checking my balance. Some banks don't even allow you to log in if you don't have a "verified" device for doing 2FA.
I am very sure my higher ups are cheering with these news, even though it solves none of the problems.
I have used zelle many times from the browser. It's been a while, so maybe that has changed, though. I never even tried to deposit a check from the browser or an app, so you may be right on that point.
I have 3 different banks (well 2 banks and a credit union.) I can use Zelle in my browser from all 3. I don't even have the app installed for 2 of them.
In my country almost all banks removed their web apps. They existed like 15 years ago, before smartphones became widespread, but nowadays very few banks offer web apps, only mobile apps.
That's exactly what I'm saying. They don't let you take actions using only a web browser. If you don't use a mobile app they issue you with trusted hardware that performs a similar function (although usually less secure and not as convenient).
My bank does still allow login and txns to be authorized with a smart card reader. You have to type in fragments of the account number to authorize a new recipient. After that you can send additional transactions to that account without hardware auth.
Pure NFC tokens don't work because you need trusted IO.
Yeah but Google Voice isn't something you're meant to use to receive SMS codes. That's very US specific, and if you go there you've undermined the security the bank was trying to provide.
The reason they used SMS codes for a while is because phones have always tried to block malware from reading your screen or SMS storage whereas PCs don't, and because phones can do remote attestation protocols to the network as part of their login sequence. The SIM card contains keys used to sign challenges, and the network only allows authorized radio firmwares to log on. So by sending a code to a phone you have some cryptographic assurance that it was received by the right user and viewed only by them.
2FA and RA are closely related for that reason. The second factor is dedicated hardware which enforces that only a human can interact with it, and which can prove its identity cryptographically to a remote server. The mobile switching center, in the case of SMS codes.
Obviously, this was a very crude system because malware on the PC could intercept the login after the user authorized, but at least it stopped usage of the account when the user wasn't around. Modern app based systems are much more secure.
... which is why none of the banks I've used support it for many years now. It's a legacy example. Modern banks all rely on apps that bind to the secure element in the phone or they issue a smartcard reader.
Alright, I think I misunderstood you. I know most banks allow alternatives other than the app.
But just the fact that there are options which have the side effect of making you choose between convenience and digital autonomy is wrong, and I don't think remote attestation should even exist in the toolbox. We should make dedicated hardware solutions work better instead.
Dedicated hardware solutions are remote attestation. The smartcard OTC readers are doing exactly that: you sign a challenge with a private key that never leaves the smartcard and is paired to the bank at the factory. This is what remote attestation is doing behind the scenes, the only difference is the smartcard user interaction is much more limited. It's of no use for protecting your financial privacy, for example, only for stopping a hacked display device authorizing transactions.
If you evolve the smartcard based systems with better I/O capabilities, then you end up with a modern smartphone. At which point you may as well let the user supply their own rather than charging them lots of money for a dedicated device that's not much different.
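The smartcard flow described above is a standard challenge-response scheme. Here's a minimal sketch, using HMAC with a shared secret as a stand-in for the card's asymmetric key (which in reality never leaves the card); all names are illustrative:

```python
import hmac, hashlib, os

# Simplified sketch of smartcard-style challenge-response attestation.
# Real cards sign with a private key paired to the bank at the factory;
# HMAC with a shared secret stands in here so this runs on the standard
# library alone.

FACTORY_SECRET = os.urandom(32)  # paired between card and bank at manufacture

def card_sign(challenge: bytes) -> bytes:
    """The card signs the bank's challenge; the secret never leaves the card."""
    return hmac.new(FACTORY_SECRET, challenge, hashlib.sha256).digest()

def bank_verify(challenge: bytes, response: bytes) -> bool:
    """The bank recomputes the expected response and compares in constant time."""
    expected = hmac.new(FACTORY_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)  # fresh nonce per login, prevents replay attacks
assert bank_verify(challenge, card_sign(challenge))
assert not bank_verify(os.urandom(16), card_sign(challenge))
```

The fresh nonce per login is what makes a recorded response useless: replaying an old signature fails against a new challenge.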
No, I reject the idea that general purpose computing devices should be locked down to satisfy a very narrow security use case. I really don't believe that you end up with a smartphone, and I don't think you give a very good argument for why.
I am fine with locking down devices that have very limited security purposes. I am fine with my passport containing locked down hardware if it makes it harder to forge. But I am also not browsing the web on my passport, and therefore its security requirements cannot prevent me from removing ads.
OK, use a browser that lets you remove ads then! Android isn't iOS, you can run browsers that aren't Chrome and nothing about this change would stop you installing a custom browser with whatever features you want. Your banking app doesn't care what browser you use.
You are fundamentally misunderstanding my point about freedom.
Yes, I can do it now, but this is only because Google allows me to do that on their approved Android distribution, not because they are unable to prevent me from doing it. I don't trust them to not take away that freedom from me as soon as they can be sure that they can afford the anti-trust lawsuit since their core business model is to show me ads.
I know that my bank doesn't care about my browser, but by relying on Play Integrity they are indirectly forcing me to operate in Google's control regime in every other aspect on my device.
I don't want them to control my software stack, period. I don't care if they act as the good guys right now, they have been steadily doing downhill in the moral department and I expect them to continue to do so.
I don't understand how you can act like there is no problem at all with technology like this.
I work in fintech, formerly as a contractor for some major banks, and absolutely nothing you say is true, generally.
This might be the case for a couple of banks - or maybe in one or two specific countries, but broadly, none of what you've said here applies to banks anywhere else in the world.
Which banks outside the US allow you to submit payments using only an arbitrary desktop browser, without any other device getting involved? No mobile phones to receive codes, no smartcard readers, no secure elements, nothing except a browser and a password? I have never encountered such a bank.
I’m not sure why “outside the US” is a factor here, but nearly every bank in the world. Some only require email verification, some don’t even require that.
There are banking systems in some countries that do not even require an ATM/Debit card for automated withdrawals, just an account number and grouping code.
It's fascinating that people have had such different experiences here.
In my entire life, I have never banked anywhere that would let you transact or log in with just a desktop browser. You seem to be convinced this is an edge case but every bank in Europe works this way, as far as I know. There are US financial institutions that would do this, but the US financial system is uniquely fraud prone to a level just not tolerated elsewhere. It lagged years behind on chip-and-PIN cards for instance, and largely never managed to roll it out. The US treats bank account numbers as credentials and other stuff that doesn't apply elsewhere.
Just look at this thread: plenty of people saying what I'm saying. If you bank somewhere that lets people use just a browser to do transactions, you're either in an environment where fraud doesn't matter at all, or you're with a bad bank and should leave them.
Have you considered that Europe is a fractional part of the world with close international relationships to its geographical neighbors and does not represent the rest of the world's experiences?
You mention the US as lagging behind Europe, which is true - but I assure you from my experience working in international fintech from the US, there are more people in the world than the entire population of my country with even worse banking security controls by default.
> In other words, there aren't many banks that let you take sensitive actions with just a browser and that's been true since the start of online banking.
when I started online banking I used a browser and a TAN list for years. No apps required
"Browser and TAN list" is equivalent to "Browser and app". A browser can't be used in isolation, there is and was always some second factor required for online banking, but a banking app can be used in isolation.
If play integrity went away, all mainstream Android users would suddenly experience a huge increase in captchas and other security measures.
It’s funny to see the volume of comments on HN from folks who are outraged at how AI companies ferociously scrape websites, and the comments disliking device attestation, and few comments recognizing those are two sides of the same coin.
Play Integrity (and Apple’s PAT) are what allow mobile users to have fewer headaches than desktop users. Not saying it’s a morally good thing (tech is rarely moral one way or the other), just that it’s a capability with both upsides and downsides for both typical and power users.
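As a hedged sketch of the server-side gating being described: attested clients skip the captcha, everyone else gets challenged. Field names and the risk threshold are invented for illustration, not any real Play Integrity or PAT API:

```python
# Hypothetical captcha-gating decision. "attestation" would be the result
# of verifying a Play Integrity / Private Access Token server-side;
# "risk_score" is a stand-in for whatever heuristics the service runs.

def needs_captcha(client: dict) -> bool:
    if client.get("attestation") == "verified":
        return False                          # attested device: frictionless path
    if client.get("risk_score", 0.0) < 0.2:   # cheap heuristics may still clear some traffic
        return False
    return True                               # challenge unattested, risky clients

assert needs_captcha({"attestation": "verified", "risk_score": 0.9}) is False
assert needs_captcha({"attestation": "none", "risk_score": 0.9}) is True
assert needs_captcha({"attestation": "none", "risk_score": 0.1}) is False
```

The point of contention in this thread is the first branch: it converts "device blessed by Google/Apple" directly into "less friction", which is exactly the lock-in critics object to.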
There is no logical inconsistency in disliking abusive scraping, remote attestation, malware, and CAPTCHAs at the same time. Of these, I merely dislike CAPTCHA while I make moral judgments about the other three.
I see creating a mechanism for remote attestation of consumer devices as morally bad because it's a massive transfer of power away from end users to corporations and governments. A scheme where only computers blessed by a handful of megacorporations can be used to interact with the wider world will be used for evil even if current applications are fairly benign.
Yeah, its like the world has been turned into one giant corporation, and the only computers you can use on it are corporate, botted, Active Directory joined, crap. All machines are belong to them.
Play Integrity's highest level of attestation requires devices to be running a security update within a sliding window of 1 year.
LOTS of Android devices haven't received a security update in many years. This forces users to unnecessarily upgrade to higher-end OEMs.
Google is effectively pushing out Xiaomi, Huawei, and many others that offer excellent budget options. Google is not just offering you the comfort of not having to fill out CAPTCHAs on your phone, most importantly they are playing monopoly.
They can, it would likely just increase the cost of cheap devices to end users, as the manufacturer now has to provide additional software support and does not want to lose money.
Because when you buy most smartphones, you're buying a vendor locked device and choosing to stay within their ecosystem. That's how Google has designed their monopoly. Apple is the same way, but non-fragmented.
You've never had to wait for Dell to type apt update and apt upgrade, but macOS users have to wait for Apple to update their computer.
These manufacturers gladly took in AOSP back in 2011 when it was still truly a great open source project - exactly as the name should require it to be - and also when security requirements were much much lower. Of course to keep up with device security it turns out you need complete control over the whole stack and regular updates anyway, so now these manufacturers are in a pickle of a situation.
It's possible the forced apps are a cost-recouping mechanism. But how does a phone's bootloader being locked down become Google's fault? Does it mandate that for some kind of Android certification?
Yes Google mandates a locked bootloader in order to meet Google Play Integrity's remote attestation. More generally it mandates a perfectly clean and valid secure boot chain. Among a variety of other requirements.
One could argue that those “cheap” devices are ewaste from the beginning, and customers needing lower cost mobile devices should be buying more expensive ones used or refurbished.
Because they fucking suck. I've never heard of desktops or laptops being tied to Dell or Asus or whatnot for run-of-the-mill kernel or OS upgrades. If phone makers want to be fucking ass by locking down bootloaders, jealously preventing reverse engineering, preventing kernel devs from doing their own thing, etc., then they should accept the just label of being fucking ass, or take on the responsibility of supporting it forever.
This is only allowed to exist because the justice system and politicians are mostly tech illiterate.
Play Integrity is not compliant with any antitrust legislation, that's painfully obvious. The sole and only purpose of this system is to remove non-Google Android forks.
As someone working on a product that relies on Play Integrity and PAT to give legit mobile users zero captchas while challenging non-attested clients, I promise you are quite wrong here.
The benefits may not be sufficient to offset the harms you see, but if you don’t understand how and why these capabilities are used by services, I’m also suspicious you understand the harms accurately.
Using Play Integrity for captchas is completely useless; criminals are using farms of unmodified devices on racks anyway. Why would they need to modify their devices?
Betting on Play Integrity to solve that is betting that devices will become more expensive in the future, when it's quite obvious the opposite is happening: they're getting cheaper and cheaper.
Using your dominance in one market to secure dominance in another market is illegal monopolization, no matter how convenient it might be for a third party.
> if you don’t understand how and why these capabilities are used by services, I’m also suspicious you understand the harms accurately
Yeah, I see this mentality a lot on HN (and kinda everywhere for that matter). "Anyone who disagrees with me is evil, and must therefore have evil motives for everything they're doing. The reasonable/innocent explanation they give for why they're doing this must actually be a front for this other shadowy, nefarious motivation that I just made up on the spot, because surely nobody ever does bad things for good reasons. Certainly not those evil people who disagree with me!"
I hate having to defend Google here, because I think this is genuinely a terrible, freedom-destroying move, but malware on Android is a real problem (especially in Brazil, Indonesia, Singapore, and Thailand, where they're rolling this out initially) and this probably will do a lot to solve it. I'm just categorically against the whole idea of taking away the freedom of mentally sound adults "for their own good" regardless of whether it works or not, and this particular case is especially maddening because I'm one of those adults whose freedom is being destroyed.
I think everyone views themselves as a harmless smol bean, even as they wage war on general purpose computing and liberty in the name of safety. How could their actions have negative externalities, they're one of the good guys!
You’ve discovered local optimization / global reduction.
But how else should Google and their users react? Insist on offering a platform with far more abuse while subjecting users to worse user experiences and websites to more attacks… in the name of abstract freedom?
It's not a coincidence that this big push for Safetynet/Play Integrity happened after the pressure against Cyanogenmod and then Huawei.
If they really care about scams, they could remove all these casino-like games on the playstore. But they aren't going to do that because a huge chunk of the playstore revenue comes from those scam games.
This is textbook whataboutism. The type of device-pwning malware Google is concerned with here has very little in common with "casino-like games on the playstore".
No it isn't. Both are sources of scams, and I'd argue the scams officially hosted on the store are orders of magnitude more widespread than anything using direct installs.
If it's really a problem they care about, there are some obvious priorities here. (And I'd personally be happy if they cared, as I have some family members who got scammed by those.)
For generic consumer products, sure, but for dev & technical power user tools the audience is big enough that these arguments don't hold water. Stack Overflow's latest survey shows nearly 30% of professional devs using Ubuntu specifically (https://survey.stackoverflow.co/2025/technology#1-computer-o...) and my own metrics (building a cross-platform dev desktop app with a global dev/technical user base) show pretty similar numbers: 65% Windows, 20% Mac, 15% Linux. I would expect there's a significant (comfortably above 10%) Linux user base within the Claude computer use audience.
The practical reality of distributing is mildly complicated, but there's now lots of good cross-distro options, and not having to deal with code signing everything makes some parts much easier than Mac & Windows. Ignoring that many users is fair enough for a startup or first MVP, but quite surprising for a company at Anthropic's level.
Being able to rewrite existing working code sufficient to copyright-launder it isn't the same as being able to write it from scratch, unfortunately, especially since LLMs seem to be allowed to ignore quite a bit of copyright law with complete impunity.
Imo it's totally plausible that something will be expensive & time consuming to create, even with LLMs, but still easy to fork outside current licensing restrictions with LLMs.
Rewriting it with a guarantee of not introducing any errors is still beyond current LLM capabilities, and there might be a certain correlation between that capability and the capability of writing it from scratch.
It's just _ridiculously_ useful having every single device you own work with the same charger. It's not the end of the world, but not even having to think about chargers has been a gamechanger.
There's a bunch of other "digital wallet" development going on in general, effectively providing digital certificate-backed identity documents and similar (driving licenses, passports). The plan for age verification is that these wallets will also be able to provide a cryptographically signed attestation of age (signed by an EU verification authority, i.e. your id-issuing government org) but with no other personal info included. Then you can present this to anybody, and they can independently verify the signature to confirm it's a recent proof-of-age attestation without knowing anything else about you.
It's still fairly early - lots of blueprints and proof-of-concepts, not yet rolled out anywhere AFAICT - but looks like a reasonable solution I think. In practice I suspect most people's experience will be a government-backed mobile app that you authenticate with once, and then it can handle verification requests on-device or show a QR code that other people can scan & verify.
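A toy sketch of that attestation flow, with HMAC under an issuer secret standing in for the real asymmetric signatures (real wallet attestations also include freshness and holder binding; all names here are invented):

```python
import hmac, hashlib, json

# Simplified privacy-preserving age attestation: the issuer signs a claim
# containing ONLY the boolean, so the verifier learns nothing else about
# the holder -- no name, birth date, or document number.

ISSUER_SECRET = b"government-issuer-demo-key"  # stand-in for the issuer's signing key

def issue_age_attestation(over_18: bool) -> dict:
    claim = json.dumps({"over_18": over_18}, sort_keys=True)
    sig = hmac.new(ISSUER_SECRET, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify(att: dict) -> bool:
    expected = hmac.new(ISSUER_SECRET, att["claim"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["sig"])

att = issue_age_attestation(True)
assert verify(att)  # signature checks out, claim contains only the boolean
```

With real asymmetric signatures the verifier only needs the issuer's public key, so any site can check the proof without talking to the government at verification time.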
> In a way we just need MQTT servers, a client with reliable push notifications and a manual key exchange mechanism. That would be really hard for govs to target.
Go even further: Meshtastic (https://meshtastic.org/). P2P E2EE texting, primarily via LoRa mesh (a mesh of long-range low-bandwidth direct radio connections) plus MQTT backup, with surprisingly nice UX even for non-techies. You can message people directly, or create encrypted groups too.
In effect, you broadcast your message (encrypted) via LoRa (travels a couple of kilometers through apartment blocks in a big city, or up to hundreds of kilometers in open countryside with line-of-sight), and then anybody else with meshtastic rebroadcasts it, up to 3 hops by default. Works OK for local chat through normal nodes, or really well if somebody within a few kilometers has a router on a roof/big hill nearby (map of opted-into-mapping public nodes: https://meshmap.net/ - IME that's about 10% of actual nodes). Optionally uses MQTT when there's any kind of internet connection available so you can chat long-range too (there's a public MQTT server available, or you can run your own) although that's not really the main use case.
No paid intermediaries or services involved, doesn't require a cell plan or internet or anything, even if the whole world collapses, you just keep on texting (for as long as you have battery).
Requires either a tiny radio gateway (e.g. https://lilygo.cc/products/t-echo-meshtastic) that you connect through with your phone via BT, or you can get a standalone device (https://lilygo.cc/products/t-deck-plus-1) but <$100 in either case. Low-bandwidth though: only text & GPS, no pictures or audio. And obviously, this is pretty deep in the weird nerd shit so it might be a hard sell for your grandparents, and by its nature it's mostly useful for the local area chat anyway. Perfect for trips to low connectivity zones though (hiking, skiing, etc).
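The flood-with-hop-limit behavior described above can be simulated in a few lines (topology and names are made up; real Meshtastic also dedupes by packet ID and handles radio timing and airtime limits):

```python
from collections import deque

# Toy simulation of mesh flooding: every node rebroadcasts a message it
# hasn't seen before, decrementing a hop limit (Meshtastic defaults to 3).

def flood(adjacency: dict, origin: str, hop_limit: int = 3) -> set:
    """Return the set of nodes that receive a message broadcast by `origin`."""
    reached = {origin}
    queue = deque([(origin, hop_limit)])
    while queue:
        node, hops = queue.popleft()
        if hops == 0:
            continue  # hop budget exhausted, no further rebroadcast
        for neighbor in adjacency.get(node, []):
            if neighbor not in reached:  # dedupe: each node rebroadcasts once
                reached.add(neighbor)
                queue.append((neighbor, hops - 1))
    return reached

# A simple chain of nodes: A - B - C - D - E
chain = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "E"], "E": ["D"]}
assert flood(chain, "A") == {"A", "B", "C", "D"}  # E is 4 hops away, out of range
```

This is also why one well-placed router node helps so much: a node on a hill is a direct neighbor of many others, so most destinations end up within the hop budget.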
Agreed, I love those LoRa devices & Meshtastic, but it requires your contacts to invest in and carry additional hardware.
Now the most exciting project in that space IMHO is Reticulum, because you can transparently mix transports: any radio (incl. LoRa), TCP, UDP, etc. [0]
Their Sideband app [1] is not as polished as Meshtastic but you can start over the standard Internet, or I2P, yggdrasil and slowly introduce LoRa among your group of friends over time and if necessary.
Cloudflare have some new bot verification proposals designed to fix this, with cryptographic proofs that the user-agent is who they say they are: https://blog.cloudflare.com/web-bot-auth/.
The company doesn't actually keep your card details at all (at least, all reputable companies). They take the details to the payment processor at first purchase, but they then get swapped for a token which can be used to process transactions (usable only for transactions to you by this one vendor, so tokens can't be stolen/leaked, unlike card details) and then future transactions all just use the token.
When your card details change, all issued tokens generally stay valid, they're effectively independent. A payment card is basically an initial authentication process for the account, it's not really the payment method.
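A minimal sketch of that tokenization flow (class and method names are invented; real processors expose this through their own vault APIs):

```python
import secrets

# Toy card tokenization: the processor keeps card data in its vault and
# hands the merchant an opaque token bound to that one vendor. A leaked
# token is useless to anyone else, unlike raw card details.

class Processor:
    def __init__(self):
        self._vault = {}  # token -> (card_number, vendor); only the processor sees this

    def tokenize(self, card_number: str, vendor: str) -> str:
        token = secrets.token_hex(16)
        self._vault[token] = (card_number, vendor)
        return token  # the merchant stores only this

    def charge(self, token: str, vendor: str, amount_cents: int) -> bool:
        card, bound_vendor = self._vault.get(token, (None, None))
        # The token only works for the vendor it was issued to.
        return card is not None and bound_vendor == vendor

proc = Processor()
token = proc.tokenize("4111111111111111", vendor="shop.example")
assert proc.charge(token, "shop.example", 999)      # issuing vendor can charge
assert not proc.charge(token, "evil.example", 999)  # stolen token is useless elsewhere
```

This vendor binding is also why card reissuance doesn't break subscriptions: the vault entry can be updated to the new card while the merchant's token stays the same.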