Smartphone Security: You'll Never Guess Who Just Messaged You (jordansmith.io)
161 points by jordansmithnz on April 8, 2018 | hide | past | favorite | 79 comments


I really don't get the security model for smartphones. It seems horribly brittle. I mean, the fundamental protection is using apps from trusted sources, basically Google or Apple. Anything not trusted can't install, unless you've rooted the phone. And so old-school Windows-style malware is blocked.

However, when trusted apps are installed, they often demand all sorts of privileged access. And if they're malicious, there's no way to protect against them. Except that they get reported to Google/Apple and become unavailable. But that doesn't help people who already got pwned.

What am I missing?


There are many layers of protection. For example, with iOS every application runs in a restricted sandbox. It should be impossible for a malicious application to access data from another application. There's the old UNIX permissions system in place, so you can't replace the iOS kernel with something else. The firmware checks the kernel signature, so even if you've found a bug, you'll still have a hard time replacing the kernel. Very sensitive data is protected by a secure hardware chip. So even if you think that you got pwned, it doesn't mean the end of the world.


Malware persistence methods are also being cut off on iOS, so even with a 0-day, it is still a challenge to survive reboot.


A variant of the 'single non-root user' problem on Unix systems.

A non-root user (hopefully) can't root the system or rm -rf /root.

But everything interesting is stored in that user's home folder with implicit RW permissions anyway.

On Android apps just request everything. I imagine (without explicit knowledge) that an app given permissions could rewrite, erase, or pull contacts / photos / etc down over the network in the background.


I remember learning Unix on my school's big Unix system. Lots of talk about how important root was. I understood it from a system standpoint sure.

Yet I was confused, because as a user all I cared about was my stuff, which was ... right there in a non-root account.

As you say all the stuff I was concerned about was right there, but nobody talked about how important that was.


"If someone steals my laptop while I'm logged in, they can read my e-mail, take my money, and impersonate me to my friends... but at least they can't install drivers without my permission" https://xkcd.com/1200/


On a large multi-user system there’s a big difference between “your stuff” and “everyone’s stuff”; on a single-user system like a phone, not so much.


The point is that your trust envelope extends to the authors of software, which are frequently not acting in the best interests of the user, and have their own goals and incentives (including competing goals with other software authors).

On my Linux systems, particularly under Debian, there's some assurance provided through the Debian Project, its guiding documents (social contract, constitution, policy), and Debian developers. The project explicitly serves its users. This doesn't prevent bugs and occasional malice, but it tends to tremendously reduce the incentives for them.

Smartphones ... are a mess, and Android rather particularly so. I've suggested entirely rethinking how app development is performed, particularly for basic utilities, closer to the Debian model. I have little hope of this occurring.


Although you can leverage multiuser capable systems for more security by, say, using different user accounts for playing games and banking, or whatever.


> On Android apps just request everything

That was true up to Marshmallow (Android 6), when finer grained permissions were added.


Let me just update to it, oh wait....


Ah, finer-grained permissions, something I've seen on a BlackBerry in 2009.


I learned quite recently that Android uses one user ID per installed app, effectively isolating them from one another. However, I am not sure I understood what you meant by

> On Android apps just request everything

Well, this is as true as with any other operating system, isn't it?

Nowadays, I try to use F-Droid as much as I can; applications there ask for reasonable permissions and are open source, which makes it easier to trust them.


OK, but doesn't AppArmor protect against that?

And it looks like it's been ported to Android and iOS. But it's not default, I guess.

But neither is AppArmor in Linux. So why haven't we heard about apps stealing stuff from /home/user/? Is it just that distros do a better job of policing their repos? Or that the userbase is too small to attract malicious app developers?


> rm -rf /root

I don’t think this will do much besides blow away the root user's home directory, which probably doesn't have much in it anyway. But I get what you mean.


I think that he meant "rm -rf /" :)



How many years after Android permissions changed for the better will users who have never owned an Android phone keep complaining about a problem that no longer exists?


Well, because there is this thing called lack of updates.

In German consumer shops, most "deal of the day" phones are still a mix of Android 4.4 and 5.1.


Until Google finally matches the permission model that iOS has had for years, and until the changes roll out to the majority of users instead of the few that own Pixel phones?


It has been there since Android 6.0.


Only available to around 60% of Android users.


It doesn't change the fact that it is there. :D


It is NOT there for at least 40% of android users.


«Anything not trusted can't install, unless you've rooted the phone»

Not true on Android. Apps can be installed from third party sources, without having to root the phone.


And on iOS, apps can restrict what permissions you can give them. Uber for instance allows only “no location” or “location all the time”, and does not allow “location while using the app”.


The Uber app now allows the "While using the app" option, fortunately. No longer need to remember to toggle it off.


Or, more accurately, iOS forced apps to support "Location while using", precisely because apps like Uber abused their permissions: https://www.engadget.com/2017/09/21/ios-11-uber-always-on-lo...


The main line of defense is the permissions system. If an app wants to access the user's camera, contacts, etc, it needs to come up with a good excuse for why or risk the user just tapping "deny". This seriously limits the damage a malicious app can do.

Aside from making the permissions system even _more_ fine-grained, I don't know how you'd make the situation much better than it already is.


> If an app wants to access the user's camera, contacts, etc, it needs to come up with a good excuse for why or risk the user just tapping "deny".

This isn't how many users use apps. For them, any of these prompts get in the way of using the app, so they tap whichever button lets them use the app quicker, without reading what the message said.


"Deny" will (usually) let you use the app just as quickly as "Accept". Permissions on iOS and Android are just-in-time and optional.


How exactly does tapping "Deny" on Snapchat's request to access the camera let you use the app?


That'd be silly, since it's obvious why Snapchat would want camera access. If you don't want to grant it access to your camera though, you _should_ be able to click "Deny" and still use the app to read messages.


Presumably you can still view things sent to you and look at the sponsored Snapchat stories.


Yes. I'm still hoping that I can one day run Linux on my phone, and iOS/Android apps in a sandbox.


iOS apps run in a sandbox


>trusted sources, basically Google or Apple.

I'd say open source apps are more trustworthy than Google or Apple.


Only if you inspect the source, understand the source, and build it yourself. Most people just check if an app is OSS and then install the binary version from the app store.

I've installed plenty of OSS apps but I've never built them myself and while I've looked at the source code, I have no way of knowing if what was in the repository was what got delivered.


For me, using FLO apps is more about freedom from lock-in.

You're right; I don't usually vet apps past the simple "is it FLO?" check (although I will do this before accepting a suspicious permission). However, I do keep a copy of the source of most of the apps I use. As a result, if it were to come out that the app is doing something shady, I could remove that part and keep enjoying the functionality of the app, whereas with a proprietary app I'd be SoL.

I also know that the developers know this, so they have an incentive not to do that stuff if they don't want a competing fork to take off. That is the main reason why I trust FLO apps more.


The viability of OSS rests on the presumption that someone within the community has checked, or will check, that connection. That risk strongly encourages OSS maintainers to tell the truth about what they're doing.


Realistically though no one is checking, especially on smaller projects. You may be putting yourself at even higher risk since open source contributors are often not vetted as deeply as random employees.


Quite a lot of people do look into sources, actually. The quality of those checks, and how much they cover in the end, is the question. But it is better than nothing, and the absolute majority of closed-source apps offer you exactly nothing in this regard. Surely, open-sourced code is not inherently safe, but as time passes, if the userbase is big enough, it gets some attention, which is an advantage on the scale of probabilities.


There are actually two gains from having the source in the open. One, that you can inspect the source - and in case of popular applications, even if you don't, you can be sure that someone already had and would speak up if they noticed anything off.

But the second one is, if there really is something off, there's an unambiguous proof of it - you can point directly at the implementation! And with luck, even at the particular person or organization responsible.


> and in case of popular applications, even if you don't, you can be sure that someone already had and would speak up if they noticed anything off.

You can’t be sure of that at all. This isn’t just an epistemological point either - people are still routinely finding vulnerabilities in the most widely used open source software that were introduced two decades ago. This is despite extremely lucrative incentives for finding them. In many cases it’s because no one bothered to look; in other cases it’s because the numerous (ostensibly qualified) people who looked weren’t capable of finding them.

Open source software is a red herring for security. Its benefits are vastly overstated. You gain an incremental improvement in the theoretical ease of review, but this almost always comes at the cost of the overseeing organization having fewer resources to devote to security, because it's more difficult and less common to monetize open source software.

Overall I’d say open source software is at best weakly correlated with improved security posture. As it turns out most people don’t inspect their code before running it, and this includes those who fully buy into the many eyes aphorism. Of those who do inspect it, approximately all of them are woefully incapable of identifying real vulnerabilities beyond all but the lowest hanging fruit. Those who remain are typically extremely well paid for their expertise and will only look at software which features a sizable bounty.

Unfortunately most open source software, including widely used open source software, does not have a bug bounty and isn’t on e.g. Google Project Zero’s radar.


> Overall I’d say open source software is at best weakly correlated with improved security posture. As it turns out most people don’t inspect their code before running it, and this includes those who fully buy into the many eyes aphorism.

I, and surely many others, do look at the code, even for languages that I don't know. I often end up removing functionality that I don't need (less unnecessary code often means fewer security vulnerabilities) or patching it slightly to suit my needs better. For proprietary code I don't have that luxury. The other issue is that a lot of freeware software tracks you and won't give you any opt-out options.

It's not weakly correlated if, say, I can remove the finger protocol and get direct security benefits. Things like that make outdated proprietary binaries unusable from a security perspective.

> approximately all of them are woefully incapable of identifying real vulnerabilities beyond all but the lowest hanging fruit.

Sure, but more often than not it is those that will get exploited, and open source makes it damn easy to fix. Bug bounties shouldn't be the only incentive to make software more secure.


Vulnerabilities are everywhere, let's not be mistaken here. Closed source is full of them too; you just don't have a chance to know it in most cases. The fact that bugs are found and publicly announced does, in fact, prove that the model works. That said, I agree that the "open source" label is over-advertised, and many people have started to see it as a synonym for "safe", which is definitely a grave mistake. But dismissing its small advantages just because someone believes they are big doesn't make sense either.


I didn’t dismiss its small advantages, I put them in context so it would be clear that they’re small.


I guess that the argument is that: 1) sources are available, so anyone can look at them; and 2) that if sources and binaries are in a distro's repo, someone knowledgeable must have looked at them. Also, some repos (Debian and ?) are signed, so you know that you're getting authentic binaries.

But even with that, it's taken years to find some serious bugs.


I feel like this is a really common problem for both mobile AND OAuth authorization frameworks. The scopes and permissions are not quite fine-grained enough: it's generally read AND write access for everything they ask for, instead of just read (the most common use case).


I can imagine how this evolved -- somewhere at Apple HQ, a whiteboarding session about classes of information and capabilities that the OS will gatekeep, which got us to Contacts, Photos, Camera, Microphone, Location. Presumably apps would be extensively curated, keeping out the scummiest of the lot, and apps were envisioned as asking for permissions directly relevant to their utility so having separate read/write permissions would be overkill.

When Android cloned the same model, they got much more granular with permissions [1][2], but then completely undermined it by making it occur only once at app install. As someone put it [3], they're not permissions because you can't turn them off -- they're warnings about what the app does. Then they further mucked this up, by eventually grouping them together into broad categories within which apps could automagically gain all other permissions without your approval [4].

Then, the following year, in 2015, they finally introduced iOS-style runtime-granted permissions, if your device was lucky enough to be up-to-date and your apps were gracious enough to target the new API level; otherwise you missed out on this change.

In fairness, by this point, hoover-style request-everything permission requests were extremely common among mainstream apps like Facebook, Messenger, Snapchat... so reining in contact-harvesting flashlight apps was a bit of a lost cause.

[1] https://developer.android.com/reference/android/Manifest.per... [2] https://developer.android.com/reference/android/Manifest.per... [3] https://news.ycombinator.com/item?id=7959925 [4] https://news.ycombinator.com/item?id=7959660


This is scary. I always assumed that the "access contacts" permission was read-only, but I guess it's not.

Just another thing to be paranoid about in modern life.


Maemo maintains separate histories and conversation windows for separate phone numbers/IM accounts and highlights the currently used one when you check the contact details. I should be immune :P


Bring back Maemo/Meego/Tizen/Mer/Sailfish!

I wish I had more time and patience for smart phone development and that the heroic efforts of those unlocking these devices, writing OS drivers for proprietary (and adversarial) hardware and making alternative operating systems possible were more widely acknowledged. It just feels hopeless out of the box.


I just checked on iPhone (Settings-Privacy-Contacts): Threema, Google Voice, Hangouts, WhatsApp, Telegram. Google Voice and Hangouts have my contacts anyway since I'm on Gmail. There are a few other apps where I denied contacts access, like Uber, Skype, Twitter. There isn't really much more here.

So for me that's a non-story.

While I agree that access to contacts should be read-only, and write access should be a special permission, it appears not to be a problem for me. While I have a few other apps installed, most didn't ask for the contacts permission (i.e. all games), and of those that did ask, I denied it in cases where it didn't make much sense (why should Twitter access my contacts? To find my friends on Twitter? I don't need that.)

I actually like the way iOS handles permissions. The privacy overview in the settings is very easy to understand and maintain. Permissions are only asked for once, when access to them is actually required in that moment (like when you tap on "take photo" in some app). Permission is not asked for while the app is launching (aside from push notifications and location).

So on iOS usually what happens is this:

    - I install some app, let's say WhatsApp
    - I launch it, it asks for push notifications and contacts permission
    - I use the app
    - if I don't share my location, I'll never get asked for the GPS permission
    - if I use "send photo" - it will ask for access to photos, but not to the camera
    - sometimes, months or years into usage, it will ask me for a permission, e.g. microphone, because I have never used that feature before and only now want to use it
    - etc.


Android is extremely broken and is designed to leak information like a sieve. Not surprising given the incentives of Google.

Don't expect goodwill or good behavior when the fundamental incentives are surveillance and hoovering user data. The multi billion dollar ad economy is based on this.

A system designed with user privacy would be designed to lock down hard on contacts, sms, location, and other personally identifying information access. But the android permission system for instance is so involved it's not surprising lay people are not able to understand the implications, read between the lines of actual motivations and take proper actions.

Facebook does not need your location, contact or sms information. Neither does Google. Yet Google insists on creepily telling you your location on every Google search. This itself is sinister and attempts to normalize stalking behavior.

Uber and others don't strictly need location access, you can type it in, and if required it should be used for the convenience it offers - you are paying for the service - without the possibility of Uber and others collecting historical location and ride information to build invasive files on their users.


Location is useful if you're searching for things like restaurants though.

But Google is stupid, yesterday I read this commentary about a movie about a plane hijacking in the 70's that landed the plane in Entebbe, Uganda:

https://www.theguardian.com/commentisfree/2018/apr/07/entebb...

Being interested in the historical context, I googled the name of the city. Later on Google showed me as one of its results "Flights to Entebbe". Gee, how clever.


I've previously seen this done with SMS gateways, which allow you to specify/spoof the sender number (it's how you get a text addressed from 'COMCAST' for instance). Your phone will trust that the spoofed number is genuine, so if I send you a message with Bob's details, your phone will tell you it's from Bob.

Unlike the OP, the attacker can't receive replies from the recipient.
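Gateways typically expose this as an "alphanumeric sender ID" feature. A minimal sketch of what such a request payload might look like — the endpoint and field names here are hypothetical, not any specific provider's API:

```python
def build_sms_payload(sender_id: str, to: str, body: str) -> dict:
    """Build the request body for a hypothetical HTTP SMS gateway.

    `sender_id` can be an alphanumeric string like "COMCAST"; the
    recipient's phone trusts it and displays it as the sender,
    threading the message with any matching contact.
    """
    return {
        "from": sender_id,   # displayed on the recipient's phone as-is
        "to": to,
        "text": body,
    }

payload = build_sms_payload("COMCAST", "+12065550100", "Your bill is ready.")
# The payload would then be POSTed to the gateway, e.g.:
#   requests.post("https://sms-gateway.example/v1/messages", json=payload)
```

Real providers each have their own schema, but the "from" field being a free-form string is the crux of the trust problem described above.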


That also wouldn’t work over iMessage.

I’d like to know if this is actually being done in the wild. Certainly, once caught, the app would be banned from the App Store, and possibly a lawsuit or two filed.

The author didn’t note that a new thread would have started on iOS, which would provide some visual feedback that something was different. You could tap for further info and see the different number. I know it would slip past most people, but it’s something.


> once caught would be banned from the App Store

But the issue is that it's impossible to detect. An app could've added the extra number months ago, and you've deleted the app since then. There is no way to find out which app did it.


Apple’s security team can always collaboratively filter on reports and find the app at the intersection of them.


Assuming this is used widely enough for there to be enough reports. I expect this to be used only sparingly, so I'd be surprised if there is even a single report of it, as the targeted people will mostly have no idea this is even possible.


So you’re telling me that someone went to all the trouble of building an app, getting people to download it, and then only used its main purpose (phishing/malware) for a couple people?


For even greater effect, the app could apply heuristics when selecting target contacts, preferring names such as ‘Dad’ and ‘Mom’, or contacts that have nicknames.

The best heuristic would probably be the contact that has messaged you {first|second|third} most frequently in the last week.
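That ranking could be as simple as a frequency count over recent message metadata. A purely illustrative sketch in Python (a sandboxed app wouldn't normally have this data, as the replies below note):

```python
from collections import Counter
from datetime import datetime, timedelta

def top_contacts(messages, now, days=7, k=3):
    """Rank senders by message count over the last `days` days.

    `messages` is a list of (sender, timestamp) pairs. Returns the
    k most frequent senders, most frequent first.
    """
    cutoff = now - timedelta(days=days)
    counts = Counter(sender for sender, ts in messages if ts >= cutoff)
    return [sender for sender, _ in counts.most_common(k)]

now = datetime(2018, 4, 8)
msgs = [
    ("Mom", now - timedelta(days=1)),
    ("Mom", now - timedelta(days=2)),
    ("Dad", now - timedelta(days=3)),
    ("Boss", now - timedelta(days=30)),  # outside the window, ignored
]
print(top_contacts(msgs, now))  # ['Mom', 'Dad']
```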


On iOS at least, an app would have no way to determine that unless it was a CallKit app that explained how to enable access to your SMS history as well (apps can’t request it - you have to go to Settings and turn it on, from what I’ve seen). That would be super suspicious and get extra scrutiny during app review.


The malicious app may not have a way to get these statistics.


An app could look in the "favorites" list to get the same info, which probably is available.


How about the OS logging all the actions taken by an app with respect to contacts and other system data, and allowing you to reverse them?

And add an indicator in the status bar, similar to the one that shows an app recently checked your location.

They might even highlight (in red) new changes in the Contacts app. Or when you get such a message from a new contact, the Messages app could highlight it the first time.

Seems that would mitigate this particular thing.


The current Android approach of granting all the permissions an app needs at installation time leads to a lot of abuse like this. Apps just ask for everything, because the user just wants to install the app and doesn't read the permission dialog anyway.

The current Apple system is slightly better: when the app asks to use one capability (camera, microphone, contacts), the user is prompted to grant permission the first time that capability is requested by the app.

This approach should be developed further, with prompt dialogs to continue allowing the app to use the requested capabilities.

Examples:

App XYZ is requesting read and write access to your contacts. It was granted access on {date} and has read 224 phone numbers, 121 names and 56 addresses, and modified two contacts.

Continue to allow access? Disallow? Report?

App XYZ is requesting access to the microphone. In the last month it accessed the microphone and recorded 342 hours of audio.

Continue to allow access? Disallow? Report?

App XYZ is accessing your geolocation in the background. Permission was granted on XXX. During the last month it accessed your geolocation approximately 461 times each day.

Continue to allow access? Disallow? Report?

Apple, Google, and others don't have to get in the way of every user all the time with these prompts. They can identify the right time to show a prompt based on analytics (how many users have already reported this app, etc). They can also identify users who are more privacy-aware, who will be glad to read these dialogs carefully and report wrongdoing, and send more prompts to those users.
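Prompts like those could be rendered from a simple per-app access counter kept by the OS. A toy sketch in Python (all names and numbers hypothetical):

```python
def contacts_audit_prompt(app, granted_on, reads, names, addresses, modified):
    """Render a hypothetical contacts-access audit dialog from logged counts."""
    return (
        f"App {app} is requesting read and write access to your contacts. "
        f"It was granted access on {granted_on} and has read {reads} phone "
        f"numbers, {names} names and {addresses} addresses, and modified "
        f"{modified} contacts.\n"
        "Continue to allow access? Disallow? Report?"
    )

msg = contacts_audit_prompt("XYZ", "2018-01-15", 224, 121, 56, 2)
print(msg)
```

The hard part isn't rendering the dialog, of course; it's the OS instrumenting every data-access API to maintain those counters in the first place.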


Android no longer always grants all permissions upon install, and in fact offers something more like the Apple model. This is as of Android 6, and I believe the runtime model is more or less required nowadays for new apps that use certain permissions.

https://developer.android.com/training/permissions/requestin...


Permissions can also be revoked after the fact in the app settings menu.

https://www.howtogeek.com/230683/how-to-manage-app-permissio...


I remember recently reading somewhere, that apparently if you build an app targeting an old enough version of Android, you get the old behavior as a "backwards compatibility" feature — reportedly kept even in the newest versions of Android. I don't have time to search for the link to the article now however, sorry.
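The behavior keys off the app's declared target SDK. A sketch of the relevant module-level build.gradle setting, assuming an app deliberately targets API 22 (Android 5.1, the last pre-runtime-permissions level):

```groovy
android {
    defaultConfig {
        // Targeting API 22 (below Android 6.0 / API 23) opts the app out of
        // runtime permission requests: all declared permissions are granted
        // at install time, even when running on newer devices.
        targetSdkVersion 22
    }
}
```

Users on Android 6+ can still revoke individual permissions for such apps in Settings, but the app isn't built to handle denial gracefully.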


Targeting very old versions is soon going to be banned from the play store.


My Android phone provides prompts to have me grant access to certain features. It also lets me know when apps use certain permissions in the background.


> The current Android approach of granting all permissions the app need at installation time leads to a lot of abuse like this. Just ask for everything because the user just wants to install it and does not read the permission dialog anyway.

This hasn't been true for some years now

https://inthecheesefactory.com/blog/things-you-need-to-know-...


I just checked my contacts privacy settings and none of the apps have access to my contacts. Really, why should they have access to my contacts anyway? I'll pass on that convenience.


I just checked my contacts privacy settings and 10 apps have permission while 29 do not.

Contacts, Mail, Agenda, Messages legitimately want contacts.

All Google apps ask for contacts (Photos, Keyboard, Maps, Docs, Keep, Play Store...), as do all the Samsung doppelgangers that can't be deactivated (Browser, Gallery, AppStore, Music), as well as all the (deactivated) Microsoft apps (PowerPoint, OneNote, Excel)...


Wow! This is a simple and very powerful technique. Thanks for sharing.


> This is simple and a very powerful technique.

I do hope you don't mean that you're going to be using it in your own software…


Hopefully it has been fixed, but last time I used iOS, the phone/messages app seemed to mix up contact info from identical numbers with different area codes. So if I had 206-555-5555 in my address book, and got a call or message from 503-555-5555, it would show the contact info from the 206 entry.



