
As someone who used to work for a bank building applications I would say no. This is definitely a feature companies and organizations like banks would request if it wasn't available.

There are a lot of scams targeting vulnerable people and these days attacking the phone is a very "easy" way of doing this.

Now perhaps there is a more forgiving way of implementing it though. So your phone can switch between trusted and "open" mode. But realistically I don't think the demand is big enough for that to actually matter.





Play Integrity does almost nothing to prevent malicious actors. In fact, I'd say it's probably more harmful overall because it gives actors like banks false confidence.

Even with play integrity, you should not trust the client. Devices can still be compromised, there are still phony bank apps, there are still keyloggers, etc.

With the Web, things like banks are sort of forced to design apps that do not rely on client trust. With something like play integrity, they might not be. That's a big problem.


I've worked on such systems. Love it or hate it, remote attestation slaughters abuse. It is just much harder to scale up fraud schemes to profitable levels if you can't easily automate anything. That's why it exists and why banks use it.

Wouldn't device-bound keys for a set of trusted issuing secure elements (e.g. Yubikeys) work just as well but without locking down the whole goddamn software stack?

RA schemes don't lock down the whole software stack, just the parts that are needed to allow the server to reason about the behavior of the client. You can still install whatever apps you want, and those apps can do a lot of customization e.g. replace the homescreen, as indeed Android allows today.

You need to attest at least the kernel, firmware, graphics/input drivers, window management system etc because otherwise actions you think are being taken by the user might be issued by malware. You have to know that the app's onPayClicked() event handler is running because the human owner genuinely clicked it (or an app they authorized to automate for them like an a11y app). To get that assurance requires the OS to enforce app communication and isolation via secure boundaries.
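To make that concrete: verified boot is essentially a hash chain. Each stage measures the next before handing off, and the server compares the final value against a known-good one. A toy Python sketch (the stage names and hashing scheme here are illustrative, not Android's actual format):

```python
import hashlib

def extend(chain: bytes, component: bytes) -> bytes:
    # Fold one component's hash into the running measurement,
    # the way a TPM/verified-boot stage extends a PCR.
    return hashlib.sha256(chain + hashlib.sha256(component).digest()).digest()

# The verifier knows the expected hash chain over every trusted stage.
boot_stages = [b"firmware-v12", b"kernel-6.1-signed", b"drivers-blob", b"system-image"]

chain = b"\x00" * 32
for stage in boot_stages:
    chain = extend(chain, stage)
expected = chain  # what the server compares the attested value against

# Swap any one stage (e.g. a patched kernel) and the final value diverges,
# so the server can tell *something* in the stack changed.
tampered = b"\x00" * 32
for stage in [b"firmware-v12", b"patched-kernel", b"drivers-blob", b"system-image"]:
    tampered = extend(tampered, stage)

print(expected != tampered)  # True
```

This is why the attested set has to reach all the way down: anything below the measured boundary can lie about everything above it.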


That's way too much locking down, and while it gives me some control, it does not give me real control. I cannot remove or modify software in the software stack whose behavior I disagree with (e.g. all of Play Services). I can't replace the OS with a more privacy and security focused OS like GrapheneOS.

Imagine if this was done for desktop computers before we had smartphones. That's just crazy.

Relying on hardware-bound keys is fine, but then the scope of the hardware and software stack that needs to be locked down should be severely limited to dedicated, external hardware tokens. Having to lock down the whole OS and service stack is just bad design, plain and simple, since it prioritizes control over freedom.


1. I don't believe you. This is a measurement problem - you eliminated an avenue for measuring abuse: you now just assume abuse doesn't happen because you trust the client.

2. It does not eliminate any meaningful types of fraud. Phishing still works, social engineering still works, stealing TOTP codes still works.

Ultimately I don't need to install a fake app on your phone to steal your money. The vast, vast majority of digital bank fraud is not done this way. The vast majority of fraud happens within real bank apps and real bank websites, in which an unauthorized user has gained account access.

I just steal your password or social engineer your funds or account information.

This also doesn't stop check fraud, wire fraud, or credit card fraud. Again - I don't need a fake bank app to steal your CC. I just send you an email linking to a bad website and you put in your CC - phishing.


1. Well, going into denial about it is your prerogative. But then you shouldn't express bafflement about why this stuff is being used.

Nobody is making mistakes as dumb as "we fixed something we can measure so the problem is solved". Fraud and abuse have ground-truth signals in the form of customers getting upset at you because their account got hacked and something bad happened to them.

2. This stuff is also used to block phishing and it works well for that too. I'd explain how, but you wouldn't believe me.

You mention check fraud so maybe you're banking with some US bank that has terrible security. Anywhere outside the USA, using a minimally competent bank means:

• A password isn't enough to get into someone's bank account. Banks don't even use passwords at all. Users must auth by answering a smartcard challenge, or using a keypair stored in a secure element in a smartphone that's been paired with the account via a mailed setup code (usually either PIN or biometric protected).

• There is no such thing as check fraud.

• There is no such thing as credit card phishing either. All CC transactions are authorized in real time using push messaging to the paired mobile apps. To steal money from a credit card you have to confuse the user into authorizing the transaction on their phone, which is possible if they don't pay attention to the name of the merchant displayed on screen, but it's not phishing or credential theft.
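The challenge-response flow in the first bullet looks roughly like this. A hedged sketch using the third-party `cryptography` package; key storage in the secure element, pairing via the mailed code, and PIN/biometric gating are all elided:

```python
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: the keypair lives in the phone's secure element; the bank
# stores only the public key after pairing via the mailed setup code.
device_key = ec.generate_private_key(ec.SECP256R1())
bank_stored_pubkey = device_key.public_key()

# Login: the bank sends a fresh random challenge...
challenge = os.urandom(32)

# ...the secure element signs it after local PIN/biometric unlock...
signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the bank verifies. A phished password is useless without the key,
# and a replayed signature is useless because the challenge is fresh.
try:
    bank_stored_pubkey.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    ok = True
except InvalidSignature:
    ok = False
print(ok)  # True
```

The security of this scheme comes from the private key never leaving the secure element, which is why the argument above is about how much of the surrounding stack also needs attesting.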


> Nobody is making mistakes as dumb as "we fixed something we can measure so the problem is solved".

There is an entire name for this: dark pattern.

People make this mistake all the time. It's a very common measurement problem, because measuring is actually very hard.

Are we measuring the right thing? Does it mean what we think it means? Companies spend hundreds of billions trying to answer those questions.

2. No, it cannot block phishing, because if I get your password, I can get in.

To your points:

- yes, banks in the US use one time codes too. Very smart of you, unfortunately not very creative. Trivial to circumvent in most cases. Email is the worst, SMS better, TOTP best.

TOTP doesn't matter if the user just takes their code and inputs it into whatever field.

- yes there is such a thing as check fraud, you not knowing what it is doesn't matter.

- if I had to authorize each CC transaction on my phone, I'd put a bullet in my head. That's shit.


Yeah this thread boils down to US vs rest-of-world confusion. Or maybe a US vs Europe confusion.

TOTP, which you say is best, is considered weak sauce outside the US. No bank I know of has used it in a very long time. It's not secure enough. Cheques were phased out decades ago. There are entire generations in Europe who have never even seen a cheque, let alone written one. I think the last time I had a chequebook issued it was in 2004.

IIRC the differences arise because in the US consumer legislation makes merchants liable for refunding fraudulent transactions, so banks and consumers have no incentive to improve security and merchants can't do it except via convoluted and hardly working risk analysis. It's just so easy to do chargebacks there that nobody bothers fixing the infrastructure. This pushes everyone into the arms of Amazon and the like because they have the most data for ML.

Outside the US and especially in Europe, merchants aren't liable for fraudulent transactions if they verified the credentials correctly. It's much harder to do chargebacks as a consequence. Even if a merchant delivered subpar stuff or there was some other commercial dispute, chargebacks are very hard (I tried once and the bank just refused). So liability shifts to banks, unless they can show that the transaction was authorized by the account holder and they had correct information. That means banks and merchants are incentivized to improve security, and they do.


This is just blatantly false. Literally every bank in Denmark which is not an e-bank lets you do everything with a browser and the national digital identity, MitID. MitID offers an app, but they also offer alternatives both in the form of TOTP generators and NFC/USB hardware chips.

If by TOTP you mean an app like Google Authenticator, those are expected to be phones, aren't they? And the other things, as we already discussed, are hardware systems they can remotely attest - not browsers on their own.

People seem to be getting really hung up on this point. Accepting a browser means letting you do everything with nothing but whatever program you want that speaks HTTP. No special apps or authenticators or extra tokens. You should be able to write a plain Python script that sends money whenever it wants, on its own.
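To be concrete about what "nothing but a program that speaks HTTP" means, here's a sketch that builds (but doesn't send) such a request with only the Python standard library. The endpoint, cookie, and payload are entirely hypothetical:

```python
import json
import urllib.request

# Hypothetical endpoint and session cookie -- not any real bank's API.
req = urllib.request.Request(
    "https://bank.example/api/transfer",
    data=json.dumps({"to": "DK5000400440116243",  # made-up IBAN
                     "amount": "100.00"}).encode(),
    headers={"Content-Type": "application/json",
             "Cookie": "session=abc123"},
    method="POST",
)

# urllib.request.urlopen(req) would fire it: no app, no authenticator,
# no attestation -- just a program that speaks HTTP. That is what
# "accepting a browser" really means in this argument.
print(req.get_method(), req.full_url)
```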

European banks do not allow this in my experience, and nothing being posted to this thread indicates otherwise. Apparently there are some banks especially in the USA who just don't care about security at all because they can push fraud costs onto merchants, so they do accept browsers for everything, or they make some trivial effort and if users undermine it using Google Voice or whatever they don't care - that's fine, I overgeneralized by saying "banks" instead of geographically qualifying it. Mea culpa.

But in your case, you need the assistance of something that's not a browser.


By TOTP I mean a hardware token using the TOTP algorithm to generate a nonce, like the second option on this page: https://www.mitid.dk/en-gb/get-started-with-mitid/how-to-use...

I thought that was what you meant too? If you mean TOTP via a QR code exposing the secret, then of course I agree, no banks allow that. But your comment read as a claim that all TOTP solutions were inherently deemed insecure and wouldn't work, and that smartphone based solutions were the only viable alternative outside the US. The code display is of course vulnerable to man-in-the-middle attacks where you trick users into authorizing transactions via fake web pages, but it is not a threat that is deemed serious enough to prevent our whole country from basing our digital infrastructure on code displays.
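For reference, the algorithm such code displays implement is tiny (RFC 6238 TOTP; this sketch reproduces the RFC's published test vector). Note that nothing in it binds the code to a transaction or a site, which is exactly the man-in-the-middle weakness mentioned above:

```python
import hashlib
import hmac
import struct

def totp(key: bytes, unix_time: int, step: int = 30, digits: int = 8) -> str:
    """RFC 6238 TOTP: HMAC the time-step counter, then dynamically truncate."""
    counter = struct.pack(">Q", unix_time // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890", T = 59 s.
print(totp(b"12345678901234567890", 59))  # 94287082
```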

I think people get hung up on your point about banks not accepting browsers because you don't formulate your point very clearly, and it reads like you claim that they don't accept browsers at all when what you mean is just a browser and nothing else. Most European banks do in fact allow you to do business using a browser - you just have to prove your identity via other means as well. And there are no good security arguments why those means must be in the form of a smartphone app whose security requirements have the side effect of locking you into a business relationship with one of two American tech giants. As you can see, a whole country of almost six million people authenticates everything from bank transactions to naming their kids and buying houses using a system which allows you to use just a code display.

I think the strategy of remote attestation of the whole OS stack up to and including the window manager is a clunky and inelegant approach from an engineering perspective, and from a freedom perspective I think it is immoral and should be illegal. What I could accept would be an on-phone security module with locked down firmware which can simply take control of the whole screen regardless of what the OS is doing, with a clear indicator of when it is active. This allows you to authorize transactions and inspect their contents, and only needs remote attestation of the security module, not the whole OS.


From digging in a bit, it sounds like originally MitID was meant to be app only and it was only after pressure from a lobbying group that they relented and allowed a TOTP token.

https://www.dr.dk/nyheder/seneste/mitid-kan-digitalt-udelukk...

So my guess is that this is not because they think TOTP is secure enough but rather due to the political aspects of it being centrally run by the government.

The security argument is pretty straightforward and I guess you know it already, because as you say, TOTP is vulnerable to phishing (unless you use some of the anti-bot tech I mentioned elsewhere but it's heuristic and not really robust over the long term). Whereas if you do stuff via an app, not only can malware not authorize transactions, but it can't view your financial details either - privacy being a major plank of financial security that can't be reliably offered via desktop browsers at all, but can via phones.

The alternative you propose is basically a secure hypervisor. Such schemes have been implemented in the past, but it's not ideal technically. For fast payment authorization via NFC, this is actually how it works, which is why when you touch a phone to a terminal to pay for something you don't see any details of the transaction on the display itself, just an animation. The OS doesn't get involved in the transaction at all, it's all handled by the embedded credit card smartcard which is hard-wired to the NFC radio. The OS gets notified and can send configuration messages, but that's about it.

For anything more complex, the parallel world still needs to be a full OS that boots up, with display drivers, touchscreen drivers, text rendering, a network stack, a way to update that software, etc. You end up with a second copy of Android and dual booting, which makes memory pressure intolerable and the devices more expensive. But it's hard to justify that when the base phone OS has become secure enough! It's already multi-tasking and isolating worlds from each other. There are no users outside of HN/Slashdot who would find this arrangement preferable. And as your concern is not fully technical, it's not clear why moving the hardware enforcement around a bit from kernel supervisor to hypervisor would make any difference. This isn't something that can be analyzed technically as it all seems to boil down to fear over the loss of ad blocking.


That's not actually what the article says, the article said that the rollout of MitID first included the app, and that the alternatives were made available later. The alternatives were always part of the plan. The lobby group mentioned were complaining because MitID was replacing an existing solution, NemID, which offered the alternatives. For a while during the rollout you could use both methods of identification, and the lobby group wanted to wait with retiring NemID until the alternatives for MitID were available. The old solution was not replaced due to security issues but because the vendor lost the project when the contract ran out.

There are two discussions here, the technical and the one concerned with freedom. I am concerned with both, and I think we need a compromise which doesn't throw out the latter in order to obtain a perfectly secure model.

My concern is not only with ad removal, that was just an example. My concern is digital autonomy in general, and the issue of giving an American company the power to decide what software users around the world are allowed to execute. They can censor software they don't like, and rogue governments can pressure them to censor software that THEY don't like. E.g. the EU who might want to prevent people from installing E2EE apps soon when Chat Control is rolled out.

There are good technical security arguments for phone based solutions over the alternatives, but it doesn't mean that the alternatives are worthless, just that the users have to be a bit more vigilant. I think that is a better compromise in the interest of protecting freedom and democracy.

We are some of the few people who can understand the long-term implications of the different technical solutions and the potential tools it will give private companies and governments to suppress people. If we are not advocating for freedom over convenience, then who will?


Well, it is still a phone after all, what with UMA and baseband processing. You don't need to spend much time at Blackhat/Defcon to realize any true attempt at sealing it up is akin to plugging leaks in a sieve with epoxy. It's far too porous.

Meanwhile if attestation does reduce fraud, the ownability (by the user) of the device is now forfeit due to chasing a dragon's tail.


That’s a “seatbelts are no good because people still die in car crashes” argument with a topping of “actually they’re bad because they give you a false sense of security”.

Play integrity hugely reduces brute force and compromised device attacks. Yes, it does not eliminate either, but security is a game of statistics because there is rarely a verifiably perfect solution in complex systems.

For most large public apps, the vast majority of signin attempts are malicious. And the vast majority of successful attacks come from non-attested platforms like desktop web. Attestation is a valuable tool here.


How does device attestation reduce brute force? Does the backend not enforce attempt limits per account? If so, that would be considered a critical vulnerability. If not, then attestation doesn't serve that purpose.

As for compromised devices, assuming you mean an evil maid, Android already implements secure boot, forcing a complete data wipe when the chain of trust is broken. I think the number of scary warnings is already more than enough to deter a clueless "average user", and there are easier ways to phish the user.


And those apps use MEETS_DEVICE_INTEGRITY rather than MEETS_STRONG_INTEGRITY so a compromised device can absolutely be used to access critical services. (Usually because strong integrity is unsupported on old devices)
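Server-side, the distinction is just which labels appear in the decoded verdict. A hedged sketch (this assumes the token has already been signature-checked by the backend; the label strings follow Google's documented values, but the ranking policy here is invented):

```python
def integrity_tier(verdict: dict) -> str:
    """Rank an already-decoded, signature-checked Play Integrity verdict."""
    labels = set(verdict.get("deviceIntegrity", {})
                        .get("deviceRecognitionVerdict", []))
    if "MEETS_STRONG_INTEGRITY" in labels:
        return "strong"   # hardware-backed, recent security patch level
    if "MEETS_DEVICE_INTEGRITY" in labels:
        return "device"   # passes, but with weaker guarantees
    return "none"

# An app that only requires the "device" tier lets a compromised-but-passing
# device access critical services -- the point made above.
sample = {"deviceIntegrity":
          {"deviceRecognitionVerdict": ["MEETS_DEVICE_INTEGRITY"]}}
print(integrity_tier(sample))  # device
```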

This reminds me of providers like Xiaomi making it harder to unlock the bootloader due to phones being sold as new but flashed with a compromised image.


Maybe a good compromise is to change the boot screen to have a label that the phone is running an unofficial ROM, just like it shows one for unlocked bootloaders? If the system can update that dynamically based on unlock state, why can't it do it based on public keys? Might also discourage vendors/ROM devs from using test keys like Fairphone once did.

Pixels with, for example, GrapheneOS already do exactly that:

"Your device is loading a different operating system."


I developed this stuff at Google (JS puzzles that "attest" web browsers), back in 2010 when nobody was working on it at all and the whole idea was viewed as obviously non-workable. But it did work.

Brute force attacks on passwords generally cannot be stopped by any kind of server-side logic anymore, and that became the case more than 15 years ago. Sophisticated server-side rate limiting is necessary in a modern login system but it's not sufficient. The reason is that there are attackers who come pre-armed with lists of hacked or phished passwords and botnets of >1M nodes. So from the server side an attack looks like this: an IP that doesn't appear anywhere in your logs suddenly submits two or three login attempts, against unique accounts that log in from the same region as that IP is in, and the password is correct maybe 25%-75% of the time. Then the IP goes dormant and you never hear from it again. You can't block such behavior without unworkable numbers of false positives, yet in aggregate the botnet can work through maybe a million accounts per day, every day, without end.
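A toy simulation (all numbers made up, and deterministic for clarity) of why per-IP and per-account counters never fire against this attack shape:

```python
from collections import Counter

# Invented scale -- real botnets described above are far larger.
BOTNET_IPS = 200_000
STOLEN_CREDS = 50_000   # (account, password) pairs from phishing dumps
PER_IP_LIMIT = 5        # a typical server-side threshold

ip_hits = Counter()
breached = 0
for account in range(STOLEN_CREDS):
    ip = account % BOTNET_IPS   # every attempt arrives from a fresh IP...
    ip_hits[ip] += 1            # ...so each IP is seen at most once
    if account % 10 < 4:        # deterministic stand-in for a 40% hit rate
        breached += 1           # password was correct on the first try

tripped = sum(1 for c in ip_hits.values() if c > PER_IP_LIMIT)
print(breached, tripped)  # 20000 0
```

Tens of thousands of accounts fall, yet no per-IP counter ever exceeds its threshold, and each account only ever sees one or two attempts.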

What does work is investigating the app doing the logging in. Attackers are often CPU and RAM constrained because the botnet is just a set of tiny HTTP proxies running on hacked IoT devices. The actual compute is happening elsewhere. The ideal situation from an attacker's perspective is a site that is only using server side rate limiting. They write a nice async bot that can have tens of thousands of HTTP requests in flight simultaneously on the developer's desktop which just POSTs some strings to the server to get what they want (money, sending emails, whatever).

Step up the level of device attestation and now it gets much, much harder for them. In the limit they cannot beat the remote attestation scheme, and are forced to buy and rack large numbers of genuine devices and program robotic fingers to poke the screens. As you can see, the step-up from "hacking a script in your apartment in Belarus" to "build a warehouse full of robots" is very large. And because they are using devices controlled by their adversaries at that point, there's lots of new signals available to catch them that they might not be able to fix or know about.

The browser sandbox means you can't push it that far on the web, which is why high value targets like banks require the web app to be paired with a mobile app to log in. But you can still do a lot. Google's websites generate millions of random encrypted programs per second that run inside a little virtual machine implemented in Javascript, which force attackers to use a browser and then look for signs of browser automation. I don't know how well it works these days, but they still use it, and back when I introduced it (20% time project) it worked very well because spammers had never seen anything like it. They didn't know how to beat it and mostly just went off to harass competitors instead.


I may be misunderstanding, but it sounds like this kind of widely distributed attack would also be stoppable by checking how often the account is attempting to log in? And if they're only testing two or three passwords _per account_, per day, then Google could further block them by forcing people not to use the top 10,000 popular passwords in any of the popular lists (including, over time, the passwords provided to Google)?

The attackers only try one or two passwords, that they hacked/phished. They aren't guessing popular passwords, usually they know the correct password for an account and would log in successfully on the first try. There are no server side signals that can be used to rate limit them, especially as the whole attack infrastructure is automated and they have unlimited patience.

Forgive me for being reductive, but aren't these leaked accounts a lost cause? The vulnerability in question is attackers being able to log into user accounts with leaked credentials. The only mitigation for this is to lock out users identified in other password breaches and reconfirm identity out-of-band, like through a local bank branch, add a second factor like a hardware token, or use restrictive heuristics like IP geolocation consistency between visits.

If 3 attempts per hour is enough to gain access, then it doesn't seem attestation can save you. I imagine a physical phone farm will still be economically viable in such case.


Yes that's what companies do. I worked on the system there that addressed this. If you can detect a botted login you can lock the account until the real user is able to get new credentials, or block activity in other ways. Not a lost cause at all.

It was very effective when this problem was new. Don't know about the current state of things.


> an IP that doesn't appear anywhere in your logs suddenly submits two or three login attempts

How is the attacker supposed to bruteforce anything with 2-3 login attempts?

Even if 1M nodes submitted 10 login attempts per hour, they would only be able to try about 7 billion passwords per month per account; that's ridiculously low for brute-forcing even moderately secure passwords (let alone that there's definitely something to do on the back-end side of things if you see one particular account with 1 million login attempts in an hour from different IPs…).

So I must have misunderstood the threat model…


Brute force here can mean they try millions of accounts and get into maybe a quarter of them on their first try, not that they make millions of tries against a single account.

If you have an attacker that can gain access on 25% of its attempts, it doesn't matter if there is a botnet with millions of IPs; they would still have around a 25% success rate on just 10 IPs. It has nothing to do with brute force; it just means you have plenty of compromised accounts in the wild and you want to prevent bad actors from using them at scale.

The threat model is entirely different from what your brute force phrase implies, and it is also a threat model that isn't relevant to banking, which was the topic of the discussion in the first place. And more importantly, it doesn't affect the security of the user.


That's a very uncommon understanding of brute force, to be honest. Generally I see the term applied to cases where there's next to no prior knowledge, just enumeration.

Well, I'd have picked a different word in this context. I'm just explaining why attestation fixes the problem described by the OP as seen in modern contexts and rate limiting doesn't.

It's not that type of argument, because seatbelts actually work - Play Integrity does not.

Play integrity is just DRM. DRM does not prevent the most common types of attack.

If I have your password, I can steal your money. If I have your CC, I can post unauthorized transactions.

Attestation does not prevent anything. How would attestation prevent malicious login attempts? Have you actually sat down and thought this through? It does not, because that is impossible.

The vast, vast VAST majority of exploits and fraud DO NOT come from compromised devices. They come from unauthorized access, which is only surface level naively prevented by DRM solutions.

For example, HBO Max will prevent unauthorized access for DRM purposes in the sense that I cannot watch a movie without logging in. It WILL NOT prevent access if I log in, or anyone else on Earth logs in. Are you seeing the problem?


Cool. So you run a banking website. You get several hundred thousand legit logins a day, maybe ten million that you block. Maybe a hundred million these days.

Now, you have a bucket of mobile users coming to you with attestation signals saying they’ve come from secure boot, and they are using the right credentials.

And you’ve got another bucket saying they’ve are Android but with no attestation, and also using the right credentials.

You know from past experience (very expensive experience) that fraud can happen from attested devices, but it’s about 10,000 times more common from rooted devices.

Do you treat the logins the same? Real customers HATE intrusive security like captchas.

Are you understanding the tech better now? The entire problem and solution space are different from what you think they are.
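The point is that attestation is a risk signal, not a gate. A toy sketch of the tiering described above (thresholds and actions invented for illustration):

```python
def login_policy(has_valid_credentials: bool, attested: bool) -> str:
    """Toy risk tiering: attestation informs friction, it never replaces
    credential checks. Policy labels are made up for this sketch."""
    if not has_valid_credentials:
        return "deny"
    if attested:
        return "allow"    # bucket with the far lower observed fraud rate
    return "step_up"      # right password but no attestation signal:
                          # extra challenge instead of a hard block

print(login_policy(True, True),    # allow
      login_policy(True, False),   # step_up
      login_policy(False, True))   # deny
```

Nobody in this model is "trusting the client"; the attested bucket just earns a lower-friction path.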


Who is responsible for fraud? If the user loses their password, that's their problem.

Banks are responsible, that's the point.

> You know from past experience (very expensive experience) that fraud can happen from attested devices, but it’s about 10,000 times more common from rooted devices.

1. I don't believe this research - measurement is hard. If we just consider using an unattested device as malicious, as we do now with the Play Integrity API, then the numbers are fudged.

2. Even IF the research is true, relative probability is doing the heavy lifting here.

There's still going to be more malicious attempts from attested devices than those unattested. Why? Because almost everyone is running attested devices. Duh.

Grandma isn't going to load an unsigned binary on her phone. Let's just be fucking for real for one second here.

No, she's gonna take a phone call and write a check, or get an email and go to a sketchy website and enter her login credentials, and then open the inevitable 2FA email and enter the code she got into the website. Guess what - you don't need a rooted device for that. You just don't.

There are extremely high effort malicious attempts, like trying to remotely rootkit someone's phone, and then low effort ones - like email spam and primitive social engineering.

You guess which ones you actually see in the wild.

Is there a real threat here? Sure. But threat modeling matters. For 99.99% of people, their threat model just does not involve unsigned binaries they manually loaded.

Why are we sacrificing everything to optimize for the 0.01%? When we haven't even gotten CLOSE to optimizing for the other 99.99%?

Isn't that fucking stupid? Why yes, yes it is.


> That’s a “seatbelts so no good because people still die in car crashes”

Except it's not a seatbelt, it's a straitjacket with a seatbelt pattern drawn on it: it restrains the user's freedom in exchange for the illusion of security.

And like a straitjacket, it's imposed without user consent.

The difference from a straitjacket is that there's no doctor involved to determine who really needs it for security against their own weakness, and no due process to put boundaries on its use; it's applied to everyone by default.


Great. Let's just require every single computing device to be verified, signed, and attested by a government agency. Just in case it is ever misused to attack a Google online service that cannot be possibly bothered to actually spend one nanosecond thinking on security.

What could possibly go wrong. It's not only morally questionable no matter what "advantages" it provides Google, but it's also technically ridiculous, because _even if every single computing device was attested_, by construction I can still trivially find ways to use them to "brute force" Google logins. The technical "advantage" of attestation immediately drops to 0 once it is actually enforced (this is where the seatbelt analogy falls apart).

Next thing I suggest after forcing remote attestation on all devices is tying these device IDs to government-issued personal ID. Let's see how that goes over. And then for the government to send the killing squad once one of these devices is used to attack Google services. That should also improve security.

Here's the dystopian future we're building, folks. Take it or leave it. After all, it statistically improves security!


You just proved the seatbelt analogy.

Yes, for SOME subset of attackers (car crashes), for SOME subset of targets (passengers), the mitigations don’t solve the problem.

This is not the anti-attestation / anti-seatbelt argument many think it is.

All security is mitigation. There is no perfection.

But it makes no sense to say that because a highly motivated attacker with a lot of money to spend can rig real attested devices to be malicious, there must be no benefit to a billion or so legit client devices being attested.

I think your enthusiasm for melodrama and snark may be clouding your judgment of the actual topic.


> Yes, for SOME subset of attackers (car crashes), for SOME subset of targets (passengers), the mitigations don’t solve the problem.

It won't solve the problem for _anyone_ once it is required, because it is trivial to bypass once the incentive is there. This is what kills it technically, and that's before even getting into the other cons (which really should not be ignored). Seatbelts absolutely do not have this problem.

> All security is mitigation. There is non perfection.

This is an absolutely meaningless tautology. It is a perfectly true statement. It adds absolutely nothing to the discussion.

Say I argue in favor of "putting a human to verify each and every banking transaction with a phone call to the source and the destination". And then you disagree, saying that there will be costs, a waste of time for everyone, and that the security improvement will be minimal at best. And then I counter with "All security is mitigation, there is no perfection!".

Can you see what you're doing here? This is another textbook example of the politician's fallacy (something must be done; this is something; therefore we must do this).

It is trying to bypass the discussion of the actual merits of the proposal, as well as its cons, by saying "well, it does something!". True, it does something. So what? If the con is bad enough, or the benefit too small, maybe it's best NOT to do it anyway!

> But it makes no sense to say that because a highly motivated attacker with a lot of money to spend can rig real attested devices to be malicious, there must be no benefit to a billion or so legit client devices being attested.

Not long ago we had, right here on HN, a discussion about the merits of remote attestation for anti-cheating: it turns out the "lot of money" is a custom USB mouse (or an addon to one) that costs cents to make. Sure, it's not zero. You have to go more and more draconian to actually make it "a lot of money", but then you'll tell me I'm being melodramatic.


>After all, it statistically improves security!

Probably not even that, but it limits liability and that’s the only purpose, just like the manual in your car, nobody will ever read it but it contains a warning for every single thing that could happen.


I've actually read the manual for my car, and it absolutely does not "contain a warning for every single thing that could happen".

> This is definitely a feature companies and organizations like banks would request if it wasn't available.

Really? Because they've been fine without this feature on desktop for literally decades.



