Apple pulls some tricks like this with otherwise hidden-from-user SMSs.
In France, some cheap SIM cards charge per MB and per SMS until you register a plan. So I carefully disabled mobile data, avoided SMS, and loaded 10 EUR of credit over wifi, which 'activated' the phone on the network. But when I went to sign up for the 10 EUR plan, I found I only had 9,95 EUR left.
As soon as credit was loaded, my iPhone sent an SMS ping to an Apple shortcode to tell iMessage my new number. The sending and the record of this SMS were completely hidden from the user on the phone. Cue some he-said she-said with the carrier about whether I did or didn't send an SMS. Most mobile providers zero-rate shortcodes to Apple and hide it on their billing system too, but not Lebara.
So I had to add 5 EUR more of credit just to buy the 10 EUR package for the month.
In recent versions of iOS, it literally says "Your carrier may charge for SMS messages used to activate iMessage" (or FaceTime or iCloud) ... so at least you know what's going on.
It also gives you the option to cancel before it sends the SMS.
> In recent versions of iOS, it literally says "Your carrier may charge for SMS messages used to activate iMessage" ... so at least you know what's going on.
Apple Inc. should still ask for my consent before they spend my money. Just because they run some 'exclusive' black-box messaging service doesn't mean they can pull shit like this.
From googling the popup earlier, it looked to me like one of those 'I will dismiss this annoying popup straight away' type things.
When I spend bank money, I have to enter my PIN and authenticate. For my phone carrier money/credit, I get no formal bank-style transaction screen like I do with Apple Pay or similar. It's just a popup similar to those never-ending iOS update popups.
Why? Both are money. Why is spending bank money much more respected than spending phone carrier money?
Exploitation by telecoms has been normalized, so we still accept things like this happening (like OP having to buy €5 extra credit for being 5 cents 'short'). In Europe there are many more protections now, e.g. they also abolished extortionate roaming charges throughout Europe, but that took a very long time and we still have a long way to go.
I wish we treated spectrum licenses like driver’s licenses: a privilege that can be taken away. And no, no refunds for the time and money you put into getting it.
I concur, but they do ask. It's just not one of those annoying pop-ups.
It asked me once during my setup of my phone.
I mean, you are expected to see what you're doing when you are setting up your new phone?
Maybe they've ignored the popup, but the first and third examples definitely don't sound like that.
My guess is that Apple keeps a list of carriers that agreed to zero-rate these and keeps an on-phone index to determine whether to pop up that message, but what Apple thinks carriers do and what carriers actually do are 2 different things.
> My guess is that Apple keeps a list of carriers that agreed to zero-rate these and keeps an on-phone index to determine whether to pop up that message, but what Apple thinks carriers do and what carriers actually do are 2 different things
Lebara is an MVNO so it could also be an issue where the phone is applying the rules for the underlying network rather than the MVNO.
MVNOs in the country I live in have a lot of issues with iPhones applying the mainline network settings to the point where you had to manually install a specific provisioning profile just to get the APN settings right.
"Just signalling," seriously? Apple and the carrier are also "other people." Whose phone is it anyway? Any action taken by a device one controls should be made transparent and auditable by the device's owner. They are meant to be tools, not undomesticated semi-wild animals with a will and mind of their own.
Why not just remove the SIM card? In the past, Apple phones used to work fine for iMessage without a SIM; only an email address was required. Maybe this still works. You can disable mobile data, but with the latest phones coming out, can you always be certain the phone will never automatically switch between wifi and cellular without user input? Maybe you can. Maybe you can't. Removing the SIM resolves the question.
Now, could someone somehow change this and do something nefarious because it'll mismatch with what the network thinks? Maybe. But what's stopping someone from an iMessage takeover by messing with the SIM and sending a spoofed message to the shortcode?
But someone could send any info to the shortcode indirectly. Not sure how Apple encodes the info it's sending, or if it's just sending a signature that authenticates the message.
But the phone could also wait until wifi is lost to send its 'ping' to Apple with the new number to associate with the phone, since I would think it could only be for SMS-fallback message-reception purposes. This would also permit it to hold off sending SMS to the old number while out of wifi range. (Is there a DoS case against this? I don't know.)
I'm reading that article as saying the SIM only has an IMSI, and the MSISDN routing is all on the provider end; the SIM doesn't know or care.
This has also been my experience when traveling and using local SIM cards. My phone has a field to show your own phone number, and a rare few SIM cards have this pre-filled, but usually you have to enter your own phone number yourself; the phone doesn't know it.
What usually happens is that an app on the phone (whether a legit one or a sneaky one) sends a message to a shortcode. The message carries the sender ID when collected by the server; that's how the app can learn the phone number. The info sent is usually signed or encrypted to prevent spurious messages going to the shortcode.
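A minimal sketch of how such a scheme could work, in Python (everything here is hypothetical: the payload format, key handling, and names are illustrative, not Apple's actual protocol). The carrier stamps the true sender number onto the SMS in transit, so the body only needs to identify the device and prove it wasn't spoofed:

```python
import base64
import hashlib
import hmac

# Hypothetical per-device secret provisioned at activation time.
SHARED_SECRET = b"device-provisioning-key"

def build_activation_sms(device_id: str) -> str:
    """Phone side: build the body of the SMS sent to the shortcode.

    The carrier adds the real sender number, so the body only carries a
    device identifier plus an HMAC proving the sender knows the secret.
    """
    payload = f"REG:{device_id}"
    sig = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).digest()
    return payload + ":" + base64.b64encode(sig).decode()

def verify_activation_sms(body: str):
    """Server side: return the device id if the signature checks out, else None."""
    try:
        tag, device_id, sig_b64 = body.split(":")
    except ValueError:
        return None  # malformed message to the shortcode
    expected = hmac.new(SHARED_SECRET, f"{tag}:{device_id}".encode(),
                        hashlib.sha256).digest()
    if tag == "REG" and hmac.compare_digest(base64.b64decode(sig_b64), expected):
        return device_id
    return None
```

On the server, a verified device id would then be paired with the carrier-reported sender number; random texts to the shortcode fail the HMAC check and get dropped.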
> it could only be for SMS backup message reception purposes
Of course the iMessage service needs to know your phone number before marking your number as iMessage-capable and routing messages directed at your phone number to you.
I once bought a no-name tablet made in China that had malware installed in an unremovable 'browser' app. It displayed ads on top of other apps, installed new apps onto the tablet, restored deleted apps, and installed fake copies of well-known apps. It was rather OK because I bought it with the intention of tearing it apart, but still, the lesson for me was: NEVER enter sensitive personal data on devices of unknown origin.
Keep in mind that the devices from manufacturers which people usually "trust" more also have a similar, although less-blatantly-malicious, degree of disobeying you by default (silent automatic updates, unremovable system apps, etc.)
You could root it, and that would actually give you full control to delete/install/modify whatever you want.
I think the real lesson here is that if you do not have full control over a device, it is not truly yours and may disobey you.
They're not the same though; these problems need to be looked at through a combination of ability (to exploit) and motivation. Apple/Google isn't going to try to steal your identity, bank account info, or numerous other "small" things that some unknown and unaccountable company selling malware-infected hardware could do.
It's like when people state that online voting should be safe because online banking is safe. I'm pretty sure that if a nation state really wanted to steal a few grand from your (individual) bank account, they'd be able to do it. But it'd probably cost them more money/power in doing so than it's worth.
> Apple/Google isn't going to try and steal your identity, bank account info, or numerous other "small" things
You can believe that, and trust them, but you cannot prove it unless you have access to the source of everything installed by Google/Apple, plus the source of the third-party apps favored by Google/Apple. But even if you're working for a government agency and have access to the sources, it's still a monumental task because of the sheer volume of source. It's why we Linux owners were crying when trivial initd and trivial shell scripts, which are easy to read and understand in about an hour or two, were replaced by Systemd, which may take days just to read.
Apple, Amazon, and Google are not going to steal your credit card, identity, or bank account info, because you could sue them, and because consumer protection agencies would pursue them. The same is not true if you buy a no-name tablet with malware pre-loaded.
They will not try to steal it because they already have it. Check what analytics run on the page where you make payments. Check your browser settings (passwords, credit card data).
> Linux owners, were crying when trivial initd and trivial shell scripts, which are easy to read and understand in about an hour or two, were replaced by Systemd
Nah, most Linux owners didn't care, and the fact that most distributions voted with their feet to switch should say enough. Also, full init scripts were not trivial.
Most people don't care about plumbing until it breaks and spews sewage water all across the apartment, after which they'll start having some pretty strong opinions about types of pipes. I didn't care about systemd until its default configuration deliberately broke emacs daemon and tmux, after which I had some pretty strong opinions about systemd.
At this point, every distribution disables that setting, but it shows that systemd's developers cannot be trusted to make reasonable defaults, and must be double-checked to avoid blatant foolishness.
Been using systemd for a very long time now and have had no issues. Had a very pleasant time setting up services to start on boot and such.
And you can't just pull out "Yeah, but it might break sometime in the future", because it never has broken for me, and that same argument can be used against literally everything in existence.
The vast majority of Linux users have very positive experiences with systemd, and its CLI tools are very good.
In this case, the only reason that it didn't break more widely is because the adults in the room overrode the default KillUserProcesses=yes at the distribution level.
This isn't an argument about whether something "might break in the future", as you so kindly put words into my mouth. This is an argument that I do not trust the systemd developers to make reasonable choices, as they have shown a willingness to break long-standing standards for benefits that are marginal at best.
KillUserProcesses=yes is quite a reasonable default for security though, as lingering processes after logout can be an issue on multi-user systems. For example, it can be used as protection against fork bombs. I actually have no idea how Linux persisted for so long without something like that; it seems the long-standing standard here was not great. Remember that a lot of what a service manager has to do is decide what policies to set based on what security is provided by the kernel, so in some cases all they can do is provide an option and try to set it to the most secure default.
I tried to read those threads but I couldn't really gather much information, a lot of the rhetoric is pretty extreme and outrageous. I would suggest to avoid ad hominem comments about "adults in the room" and such, as it distracts from your argument. (more reading on this subject here http://www.paulgraham.com/disagree.html)
Multi-user systems are exactly the use case that makes KillUserProcesses=yes unsuitable as a default. Control groups are good for stability because, as you say, you can manage resource quotas or kill an entire forkbomb. However, KillUserProcesses goes a step further and also kills processes whenever there is a network failure. KillUserProcesses only makes sense in a single-user environment, where logging out is an active decision by the user.
Typically, logging out (intentionally or through a dropped connection) sends SIGHUP to the active process, then recursively to all children. If SIGHUP is explicitly caught, then the process continues to live. With systemd defaults, logging out sends SIGKILL to processes, unless they were started with "systemd-run --scope --user $PROCESS". In both cases, programs can persist, so there isn't a security benefit. However, one removes portability because it requires calling a systemd binary or linking against a systemd library, rather than being part of the POSIX standard.
The short-term benefit of KillUserProcesses=yes is that programs that erroneously request to be long-lived remain alive. The short-term drawback is that programs that correctly request to be long-lived are killed. The long-term benefit of KillUserProcesses=yes is nil, because programs can still erroneously request to be long-lived. The long-term drawback is increasing dependence on systemd internals, code complexity, and lack of portability. In both short and long term, the drawbacks massively outweigh the benefits.
The vitriol received in those threads was in large part due to systemd developers not recognizing those drawbacks. From their comments, requesting to be long-lived through the standard POSIX method is a hack, allowing the short-term drawbacks to be dismissed, and that a hard dependency on systemd libraries is not a long-term drawback.
With regard to my tone, perhaps I should have combined my two comments in this thread into a single one. The distributors being the adults in the room and overriding the systemd default was intended as a conclusion, to more fully describe an appropriate view toward the systemd developers. However, it appeared at the start of my second comment, and so I can see how it would appear as an attack rather than as a conclusion.
>KillUserProcesses only makes sense in a single-user environment, where logging out is an active decision from the user.
I can't agree, IMO processes left over after a network connection terminating could be considered a security hole. That's up to the sysadmin how to set it up. I certainly don't suggest getting into the habit of launching long running tasks in an ssh session, a remote sysadmin could already very easily end your processes if they find you AFK launching things on a sensitive machine. Systemd just gives some standard tools to do that.
>Typically, logging out (intentionally or through dropped connection) sends SIGHUP to the active process, then recursively to all children. [...] The short-term drawback is that programs that correctly request to be long-lived are killed.
This is incorrect, traditionally SIGHUP only means that the controlling terminal was disconnected. What you are describing is only correct for a single login shell with no sub-reapers, on a modern setup there are other things besides logging out that will send SIGHUP, such as for example if you run your program in an xterm and then close the window, or if you are operating within a sub-shell, etc. SIGHUP provides no way to differentiate those conditions, plus some programs will overload SIGHUP to be a "reload config" command, etc. SIGHUP is just not a reliable way to do it, I would not describe that as correctly requesting to be long-lived. SIGKILL is unfortunately the only method the kernel provides to do a reliable cleanup.
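That asymmetry is easy to demonstrate: any process may catch or ignore SIGHUP (installing a handler is essentially all nohup(1) does), while the kernel refuses to let anyone handle SIGKILL, which is why it is the only reliable cleanup signal. A small POSIX-only sketch in Python:

```python
import os
import signal

hangups = 0

def on_hangup(signum, frame):
    # A program that wants to outlive its terminal just installs a handler
    # (or sets SIG_IGN); after this, SIGHUP is no longer fatal to it.
    global hangups
    hangups += 1

signal.signal(signal.SIGHUP, on_hangup)
os.kill(os.getpid(), signal.SIGHUP)  # simulate the terminal going away
# Still running here: the hangup was caught, not fatal.

# SIGKILL, by contrast, can be neither caught nor ignored:
try:
    signal.signal(signal.SIGKILL, on_hangup)
except OSError:
    pass  # the kernel rejects any attempt to install a SIGKILL handler
```

Which is exactly why a supervisor that truly wants a session emptied has to fall back on SIGKILL, and why programs like tmux need a side channel (a scope, in systemd's case) to declare themselves exempt.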
>The long-term drawback is increasing dependence on systemd internals, code complexity, and lack of portability. In both short and long term, the drawbacks massively outweigh the benefits.
AFAIK there is no portable or simple way to do this, please enlighten me if I am missing something. Using daemon(3) and SIGHUP is not adequate here for reasons described above. Of course other init systems are free to create their own simplified implementation of systemd scopes, while skipping the Linux specific directives if they want. So it's not clear what your actual complaint is and what you mean the drawbacks are. Do they need help doing this? If so I would be happy to advise.
>in both cases, programs can persist, so there isn't a security benefit.
This is also incorrect, in the case of systemd scopes, you can further restrict the ability to create them with polkit or SELinux or similar. So you could make it so only tmux is allowed to start one for example. That would be one of the big benefits of having a real API for this versus just sending a signal and hoping the program is well behaved.
>The vitriol received in those threads was in large part due to systemd developers not recognizing those drawbacks.
I don't think that is warranted, systemd developers can decide for themselves the trade offs of their program. The vitriol is not needed and is in fact likely to weaken the argument. In any case there are several other inits to choose from with their own set of strengths and drawbacks, I suggest focusing on what choices you have rather than agonizing over the small decisions of just one of them.
>Linux owners, were crying when trivial initd and trivial shell scripts, which are easy to read and understand in about an hour or two, were replaced by Systemd, which may take days just to read source.
Not sure I understand why you would cry about that; systemd is actually not a very complex project if you're used to low-level C daemons. The most complicated part is the d-bus portions, but those are mostly a lot of boilerplate, and you should be understanding it anyway if you use a modern system with d-bus. Trivial initd and trivial shell scripts are also lacking quite a bit of functionality compared to systemd; at least in my experience those would tend to grow until they became brittle and unmaintainable.
Online banking is safe because it is auditable and traceable. Voting has anonymity and chain-of-custody requirements that make doing it online extremely difficult.
> Apple/Google isn't going to try and steal your identity, bank account info, or numerous other "small" things that some unknown and unaccountable company selling malware infected hardware could do.
They all could do it, but are you aware of any manufacturer-installed malware or rootkits on a device that actually have? They don't steal your bank account info or impersonate you, ever, as far as I know; they make money off you in the same way every other company does.
If we can't show any instances, then it becomes difficult to find this materially worse than what other tech companies do. It becomes more like embarrassment at being owned by an obscure foreign company rather than a famous American one.
That account name is apt. Everybody -could- do it. Even your spouse, your best friend, your parents. They could all steal your shit. But for sanity’s sake a balance has to be struck between trust and paranoia.
The main difference, of course, being that your best friend is someone you could actually make having to face the legal consequences (and/or punch in the face).
These oversized American corporations are practically untouchable for the majority of people. They'll pay their laughable fines, pinky-promise to better self-regulate for real this time, and move on, leaving you in the dirt.
You're moving the goalpost from what Andrew and the person responding said though. The Chinese phone is acting just like a Samsung phone, only a tad bit worse. The Russian phone and the ones you mention are of course another story like you say but that's not what you were responding to, which is a valid point.
Of the well-known manufacturers, I noticed Xiaomi doing some real shit: it had a preinstalled app that displayed ads above other apps. But at least when I identified the culprit, it was possible to remove it completely, and the problem went away.
I first ran into Xiaomi when I used a build of their MIUI rom on my Sony Ericsson X10 back in the day. It was amazingly better than default android (2.3/gingerbread era), so I wanted to keep using it.
Got a new phone at some point and tried it again, and they were on some real shit by then. All the default built-in programs for like, /music playing/ would force you through a Xiaomi Cloud sign up process just to play music /already on your phone/.
Granted, Google is now on the same shit; I found with my latest android they've removed all the basic apps for file browsing, music playing, and the like. Now only "Google Play" versions remain, that of course require a Google account.
I feel like the camera and calculator are the only non-play-required default apps left, but I won't be surprised if I find one day that the camera is somehow integrated into Google Photos, which will of course not open until you sign in.
I agree, this is basically how all tech companies work now. The only thing that holds it back is the outrage cycle, which
a) only turns its spotlight on companies above a certain size/visibility,
b) can be defeated with a large enough marketing/lobbying spend, and
c) can be waited out by companies with other lines of business that bring in profits, and
d) can be combated with cycles of withdrawing, quietly reintroducing, then withdrawing again, and reintroducing again until
d1.) media outlets get bored with it, or
d2.) all other companies in the same line start to do the same thing, a traditional way of price-fixing. Once this happens, the only way you'll be stopped is with legislation, because e.g. every TV has banner ad pop-ups from the manufacturer now.
> every TV has banner ad pop-ups from the manufacturer now.
AFAIK, Samsung and cheap Chinese brands (Vizio, TCL, and similar) do this.
Months ago I was deciding between buying an LG or a Sony because they don't have a bad reputation around this, and the LG I finally bought doesn't have any ads. Also, cookies, extra internet stuff, and the Alexa service can be deactivated separately without affecting apps like Netflix.
Vizio, a US company, is the worst offender. These days it's hard to tell what is jingoism, or propaganda, and what is an honest mistake because a product is cheap.
Before a certain Android version it was pretty common for Aliexpress sellers to plant unremovable ads, third-party stores, and god knows what else into the otherwise clean firmware, to be able to sell the phone with a discount. They usually didn't deny it, or even genuinely wondered - what's wrong with it? You bought it for a cheaper price after all, you should be happy, the seller is happy, everyone is, have a nice day sir. (a real conversation I had years ago)
It is the potential for hardware compromise that concerns me. Software can be wiped, but if the hardware itself contains backdoors, software can then be installed at any time. Furthermore, given the global supply chain, it's so hard to confirm that any hardware is not compromised.
And always keep in mind that "you get what you paid for", and "if it's too good to be true, it probably is".
That makes it doubly shit that even premium products - TV's, laptops, etc - that aren't cheap or budget can come with ads, tracking and shitware pre-installed.
I'm thankful that my TV showed me separate licenses for things like advertising, tracking and voice control and wasn't difficult or annoying when I did not accept them.
Presume any phone you own has malware in it, and adjust your behavior accordingly. This means putting the phone in a Faraday sleeve when not using it, so it can't communicate with a C2, putting black nail varnish on the camera, keeping the phone in another room when having a sensitive conversation, etc
For doing the crimes, use a desktop PC with a TailsOS flash drive and communicate via XMPP with OTR, preferably with the Intel Management Engine neutered and removed. Do NOT use a smartphone or dumbphone for criminal dealings.
My preferred method is to skip to the end and just carry around a brick. Attaching a message to it and throwing it is a pretty effective way of communicating, provided the recipient is using windows.
Intelligence agencies are a lot more likely to be using illegal malware than law enforcement. Criminals are actually pretty far down the list of people who should be concerned.
Think political dissidents and groups that support them like citizenlab.org, people working on classified projects or with geopolitically important trade secrets (which is actually a pretty broad range), members of the military in times of war or when working with classified information/going to classified locations, people susceptible to being blackmailed with things like nudes of themselves, and so on and so forth.
This is HN so definitely serious. There are a lot of people with rather distorted views of reality here. Also armchair criminal masterminds apparently.
I think the idea is that it can't be tracked; unless the gyroscope and accelerometers are disabled somehow it can still record movement, but at least it won't be tracking in realtime.
> Presume any phone you own has malware in it, and adjust your behavior accordingly. This means putting the phone in a Faraday sleeve when not using it, so it can't communicate with a C2, putting black nail varnish on the camera, keeping the phone in another room when having a sensitive conversation, etc
This solution is of course a nonsolution. It vaguely acknowledges the problem of NSA spying yet does not offer a meaningful collective response to counter it (e.g. pushing for open source movement and technology embedded in an anti-capitalist framework/system).
To anyone downvoting this person, are you sure you've read and completely understood every single law that could technically have you regarded as a criminal?
From what appears to be a Russian Reddit-like forum, about the phone that opens a GPRS connection:
> Same thing here, one to one, with a simple button phone with a flashlight from Fly. Dad bought it because he liked the big screen and big buttons; that's all he needed, he can't even write SMS, only calls, so the tariff is without Internet. And then it began: once every two or three days, Internet access (usually at night) for 15-20 KB, which the operator rounds up to a megabyte. Just like you, I turned off data transfer in the phone and deleted the access points; it was all useless.
Espionage is highly unlikely since nobody important will buy cheapest of the cheap dumb phones. Most likely it's used for CC theft, spam, proxying, forcing unwanted paid subscriptions, and other scamming schemes. That DEXP is involved is especially interesting because it's a face brand for DNS, a large Russian retailer. While all these models are Chinese OEM phones with a label slapped on them and little to no modification otherwise, it's possible that DNS is involved.
I also want to mention that "Russian hacker groups don't do cybercrime at home, and the state lets them do it abroad" meme, which half of HN seems to sincerely believe, is extremely misguided, and just sounds bizarre to anyone who follows the topic. There is a continuum of loosely related Russian-speaking criminals in Russia, Ukraine, Belarus, Kazakhstan, and Baltic States (mostly Lithuanian criminals who traditionally work as the EU bridge for others), and it's always hard to tell who located where. Some of them have some ties with the Russian state (regardless of the country of origin), most don't. Domestic cybercrime is rampant in Russia, often involves big names (such as top 3 mobile operators in Russia) and the mere notion it's controlled in some way is ridiculous. The only issue is there's not much money to steal, so they turn to EU and US targets.
Actually, considering the low pay and the rules prohibiting smartphones at many Russian defence companies, cheap dumb phones may be a great target for espionage.
>the rules prohibiting smartphones at many Russian defence companies
In such companies, you typically leave any electronic devices on you (including watches) at the gate, from clocking in to clocking out. Nobody would care if your phone is dumb, it's still breaking the rule.
Depends; some do indeed ban all electronic devices, others only ban smartphones and any devices with cameras. Probably after this incident most of them will move towards the former policy.
According to the original article, DEXP has stopped selling these phones and is conducting an internal investigation.
The real criminals here are Russia's big three (I would say, excluding MTS but including the hot new contender Tele2), who repeatedly rob vulnerable and elderly people via "paid content" schemes which have zero usefulness outside of scams.
>Espionage is highly unlikely since nobody important will buy cheapest of the cheap dumb phones
I agree that this event is unlikely to be espionage, but someone important might buy a cheap dumb burner phone. I wouldn't put it past an intelligence agency to wholesale compromise cheap dumb phones for that reason.
Sure. It is a different endeavor/specialization though. Stealing even several thousand CCs from Sberbank leads to Sberbank security paying you a visit at home late at night, with you completely voluntarily signing a confession and a letter of apology well before dawn.
Guess I need to learn how to dump the firmware of a dumb phone eventually. Does anyone have any advice on this? I'm reading some articles, but it looks like "dumb phone" covers a wide range of devices, from authentic Nokia old-timers to who-knows-what companies.
How come we never hear from the individuals who actually wrote the code to do these things? It'd be great to get an exposé on the motivations and rationale behind all this creeper spyware installed by rogue companies.
When I need to replace my phone, how do I make sure my next one has never entered Chinese-controlled territory at any stage of its manufacture—including all its components?
In this case I'm thinking in the perspective the mass of average consumers would say someone is thinking. To them a realistic threat model assessment would be 'paranoia' (people are out to get you).
The intent was to use a single word to convey most of a complex intent, without diluting the message trying to express nuance that is better left to the reader to evaluate.
Can you instead verify that each component is uncorrupted, a hardware hash function if you will? Looking at density, centre of gravity, weight, appearance, and/or radiographic imaging?
You'd have to verify the electrons match a specific known-good structure since a targeted attack is likely to simply modify the bits on the components (as in, a custom manufacturer-conspired/factory worker-conspired firmware) instead of swapping the components out.