Maybe this is a good time to think about what policy could help discourage these horrific practices (it sounds like their storage was unprotected).
* App Store review requires a lightweight security audit / checklist covering backend protections.
* App Store CTF Kill Switch. The publisher has to share a private CTF token with Apple under a well-known name (e.g. /etc/apple-ctf-token). The App Store can automatically kill the app if the token is ever breached (see the sketch after this list).
* Publisher is required to include their own sensitive records (access to a high-value bank account) within their backend. Apple audits that these secrets are in the same storage as the consumer records.
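For concreteness, here is a minimal sketch of what the store side of that kill switch could look like, assuming Apple keeps a registry mapping each private token to an app and scans incoming breach dumps for matches. The registry and the `register_app` / `scan_dump` names are hypothetical, purely for illustration:

```python
# Hypothetical sketch of the App Store side of a "CTF kill switch".
# Each publisher is issued a unique private token at submission time and
# stores it in their backend (e.g. at /etc/apple-ctf-token).
import secrets
from pathlib import Path

registry: dict[str, str] = {}  # token -> app_id (hypothetical registry)

def register_app(app_id: str) -> str:
    """Issue the private canary token the publisher must deploy."""
    token = f"apple-ctf-{secrets.token_hex(16)}"
    registry[token] = app_id
    return token

def scan_dump(dump_path: Path) -> set[str]:
    """Return app_ids whose tokens appear in a dump: candidates for revocation."""
    data = dump_path.read_bytes()
    return {app_id for token, app_id in registry.items() if token.encode() in data}
```

Anything `scan_dump` flags would feed the automatic kill switch; the hard part, as replies below point out, is getting your hands on the dumps in the first place.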
There has to be a better way than just adding another deterrent to starting a company. Could there be an industry standard for storage security? Certification (a known hurdle) is better than "don't fuck up or we'll fine you to death".
Regulate software development. Other industries already do this.
You could:
- make Software Engineer a protected title that requires formal engineering education and mentorship, as well as membership in your country's professional engineering body (Canada already does this)
- make collecting and storing PII illegal unless done by a certified Software Engineer
- add legal responsibility to certified Software Engineers. If a breach like this happens they lose their license or go to jail. And you know exactly who is responsible, because it's the P.Eng's name on the project
- magically, nobody wants to collect PII insecurely anymore or hire vibe coders or give idiots access to push insecure stuff
- bonus: being a certified Software Engineer now boosts your salary by 5x, and the only people who will do it are the ones who actually know WTF they are doing instead of cowboys, and no company will ever hire a cowboy because of liability. The entire Internet is now more secure and more profitable for professionals, and dumb AI junk goes in the trash
I think fines are very reasonable. If you can’t safely do the thing, you should be punished for doing it. If you can safely do the thing, then there is no issue.
Certification is essentially "don't fuck up or we'll fine you to death" with extra steps. Especially because it mostly comes down to the company self-verifying (auditors mostly just verify that you are following whatever you say you are following, not that it's a good idea).
It's not like anyone intentionally posts their entire DB to the internet.
This is the only way to deter this. Negligence and incompetence need to cost companies big money, business-ruining amounts of money, or this is just going to keep happening.
The problem is: what are the damages, and how do you quantify them?
My SSN / private information has been leaked 10+ times now. I had identity fraud once, resulting in ~8 hours of phone calls to various banks before everything was removed.
Having the threat of lawsuits is not really about the actual lawsuit; it's about scaring people into being more careful. If you actually get to the lawsuit stage, the strategy has failed.
> We can reduce the latency of discovery and resolution by adding software protocols.
Can we? What does this even mean?
[Edit: I guess you mean the things in your parent comment about requiring some sort of canary token in the DB. I'm skeptical of that, as it assumes a certain DB structure and compliance is difficult to verify.
More importantly, I don't really see how it would have stopped this specific situation; it seems like the leak was published to 4chan pretty much immediately. And more generally, how do you discover that the token has leaked? It's not like the hackers are going to self-report.]
The signatures would appear in the drop. A primitive version would use file metadata or JFIF segments. Even the images themselves could carry them via steganography.
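As a sketch of the primitive version, assuming baseline JPEGs: a canary string can be dropped into a JPEG's COM (comment) segment, and any dump containing the original files can then be searched for it. The function names and token are hypothetical:

```python
# Minimal sketch: embed a canary token in a JPEG's COM (comment) segment.
import struct

def embed_canary(jpeg_bytes: bytes, canary_token: str) -> bytes:
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    payload = canary_token.encode("ascii")
    # COM segment: marker 0xFFFE, then a 2-byte big-endian length that
    # counts itself plus the payload.
    com = b"\xff\xfe" + struct.pack(">H", len(payload) + 2) + payload
    return jpeg_bytes[:2] + com + jpeg_bytes[2:]

def dump_contains_canary(dump_bytes: bytes, canary_token: str) -> bool:
    return canary_token.encode("ascii") in dump_bytes
```

A COM segment is trivial to strip, which is why steganography comes up: it raises the cost of laundering the files.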
I guess, but it seems a bit like a solution that only works for this specific dump - most DB breaches don't have photos in them.
My bigger concern, though, is how you translate that into discovering such breaches. Are you just googling for your token once a day? This breach was fairly public, but lots of breaches are either sold or shared privately. By the time it's public enough to show up in a Google search, usually everyone already knows the who and what of the breach. I think it would be unusual for the contents of the breach to be publicly shared without identifying where the contents came from.
There is no indication that this particular breach was ever on the "dark web" before widely being discovered.
Yes dark web scanners are a thing, but just because something exists does not mean it would work for a specific situation. I'm doubtful they would work most of the time.
That's a reactive measure. It's certainly worth pursuing, but just as you can't protect people from being murdered by focusing only on arresting murderers, there is a need for a preventative solution as well.
Just use your brain and don't upload your face and driver's license to a gossip website. When I was growing up, it was common knowledge that you shouldn't post your identity online outside of a professional setting.
The onus is on users to protect themselves, not the OS. As long as the OS enables the users to do what they want, no security policy will totally protect the user from themselves.
While true, using that logic I can say porn is also discussed at work if you work in the porn industry :)
On a more serious note, implementing such a law without also providing a 0-knowledge authentication system ready to use by the government is just so unbelievably stupid (for multiple unrelated reasons).
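For what a zero-knowledge authentication system even looks like: protocols such as Schnorr identification let you prove you hold a credential without revealing it. A toy sketch follows; the group parameters are tiny demo values for illustration, nothing like a real deployment (that would use standardized groups and a vetted library):

```python
# Toy Schnorr identification: prove knowledge of a secret x with y = g^x mod p
# without revealing x. Demo parameters only; NOT secure.
import secrets

p, q, g = 2267, 103, 354   # tiny demo group: q divides p-1, g has order q mod p

x = secrets.randbelow(q)   # prover's secret credential
y = pow(g, x, p)           # public key registered with the verifier

# One authentication round:
r = secrets.randbelow(q)   # prover's ephemeral nonce
t = pow(g, r, p)           # commitment sent to the verifier
c = secrets.randbelow(q)   # verifier's random challenge
s = (r + c * x) % q        # response; reveals nothing about x on its own

assert pow(g, s, p) == (t * pow(y, c, p)) % p  # verifier accepts
```

A government-issued scheme along these lines could attest "over 18" or "holds a license" without handing a copy of the license itself to every website that asks.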
This is becoming less feasible as ID verification becomes required to access online services: Reddit, Nexus Mods, verification on dating apps. Sending facial and document data is being mandated by governments across the world.
Good thing our children will learn all about this at their mandatory Internet Literacy Fundamentals course they have to take in high school.
Oh wait—no such thing exists!
It's up to us to teach this to our children. There's no hope of getting the current generations of Internet users to grasp the simple idea that app/website backends are black boxes to you, the user, such that there is absolutely nothing preventing them from selling the personal information you gave them to anyone they see fit, or even just failing to secure it properly.
Without being a developer yourself or having this information drilled into you at a young age, you're just going to grow up naively thinking that there's nothing wrong with giving personal information such as photos of your driver's license to random third parties that you have no reason to trust whatsoever, just because they have a form in their app or on their website that requests it from you.
In this case it appears to be a public Firebase bucket; shutting down the app wouldn't help. Quite possibly access to Firebase was mediated through a backend service and Apple couldn't validate the security of the unknown bucket anyway.
Also, about validating the backends: Apple has the resources to provide a level of auditing over the common backends -- S3, Firebase, perhaps the top 5. It's easy to provide Apple with limited access to query backend metadata and confirm common misconfigurations.
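The crudest version of such a check doesn't even need publisher cooperation: probe whether the bucket answers anonymous reads. A minimal sketch, assuming the auditor knows the bucket name; the URLs follow the public GCS and S3 formats, and the bucket names are hypothetical:

```python
# Minimal misconfiguration probe: does the bucket answer unauthenticated reads?
import urllib.request
import urllib.error

def is_world_readable(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200  # anonymous read/list succeeded
    except urllib.error.URLError:
        return False  # 401/403/network error: anonymous access denied

# Hypothetical bucket names, for illustration:
print(is_world_readable("https://storage.googleapis.com/example-app.appspot.com/"))
print(is_world_readable("https://example-app.s3.amazonaws.com/?list-type=2"))
```

A 200 on either probe is exactly the failure mode at issue here: the whole bucket readable by anyone who guesses the name.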
Wouldn't it be funny if the App Store had to review it and make sure the personal info was sensitive and possibly humiliating enough. "Sir, your app has been denied because the MY_PERSONAL_INFO table requires at least 3 d-pics."
* Mandate 3rd party auditing once an app reaches > 10k users
* App publishing process includes signatures that the publisher must embed in their database (see the sketch below). When those signatures end up on the dark web, the App Store is notified and the app is revoked.
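A minimal sketch of the publisher-side embedding, assuming a relational store; the schema and signature value are hypothetical. The point is that the signature row is shaped like a real record, so it can't be stripped from a dump without knowing which row it is:

```python
# Hypothetical sketch: plant an App Store-issued signature among real records,
# so any full-table dump necessarily contains it.
import sqlite3

STORE_SIGNATURE = "appstore-sig-7f3a91"  # issued at publishing time (hypothetical)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, id_document BLOB)")
conn.execute("INSERT INTO users VALUES (?, ?)",
             ("alice@example.com", b"<license scan>"))
# The signature row is indistinguishable from an ordinary user record.
conn.execute("INSERT INTO users VALUES (?, ?)",
             (f"{STORE_SIGNATURE}@example.com", b"<decoy>"))
conn.commit()
```

The objections upthread still apply: this assumes a particular storage shape, and someone still has to see the dump before the signature does any good.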
> * Mandate 3rd party auditing once an app exceeds 10k users
You have a lot of interesting suggestions.
I would love to see some kind of forced transparency. Too bad back-end code doesn’t run under any App/Play Store control, so it’s harder to force an (accurate) audit.
Also, I remember Facebook trying to do this when they acquired Parse. For a while they were encouraging developers to host their backends on Parse / FB.
The idea has merit. You have to relinquish some control to establish security. Look at the App Store, the Microsoft Store, and the macOS App Store -- they all sandbox and reduce API scope in order to improve security for consumers.
I'm more on the side of autonomy and trust, but then we have reckless developers doing stuff like this, putting the whole industry on watch.
Thanks. Yeah, I think there are a lot of ways to decouple the App Store from the publisher and the auditor. That way the publisher can retain autonomy / control, while still developing trust with the consumer.
We could do better in our trade at encouraging best practices in this space. Every time there's a breach, the community shames the publisher. But the real shame is on us for not establishing better auditing protocols. Security best practices are just the start; you have to have transparent, ongoing auditing and pen-testing to sustain them.
Better, in that the App Store has more weight and more leverage to establish more comprehensive auditing.
The EV certs failed because general SSL identity is pretty weak. Consumers don't know how to use it to establish trust, and there's no enforcement on how the names are used. For example, my county treasurer has me transfer thousands of dollars on a random domain name.
The EU took sideloading mainstream. It's only a matter of time before that becomes the norm.
You also need to consider the fact that Epic Games is big enough that they could have just used an exploit to sideload Fortnite back on the iPhone, and the lawsuit was basically PR to draw attention to the App Store's ambiguities. That in itself shows App Stores are slowly on their way out.
Linux is up to 5% of the desktop. GOG and Itch.io are DRM-free and slowly gaining ground against Steam. Fediverse networks are slowly gaining ground against traditional social media. Signal is more popular than ever.
There will always be lowest-common-denominator users, but there is clearly some demand for an alternative to the biggest 5 websites...
>There will always be lowest-common-denominator users,
Interesting play, calling 95% of users "lowest-common-denominator". Those silly, blabbering morons that don't understand that they should be running Bazzite on their Framework laptops instead of using evil evil software.
>there is clearly some demand for an alternative to the biggest 5 websites...
This demand doesn't pay, and also happens to be some of the most demanding, entitled users you'll have ever seen.
I wasn't intending to insult anyone, just to refer to most users as "people who don't want to think about the tools they use, and don't care about privacy", which I think is true of most people. I never called anyone a moron; they just have different priorities.
Meanwhile, an Android app for some random banking or government thing will require an attested boot chain measured all the way down to the stage-0 ROM burned into the SoC. That's not to say the open ecosystem isn't better, but to say it's winning enough to guarantee sustained general-purpose viability is simply untrue.
>* App Store CTF Kill Switch. Publisher has to share a private CTF token with Apple with a public name (e.g. /etc/apple-ctf-token). The app store can automatically kill the app if the token is ever breached.
How do you enforce that the token actually exists? Do app developers have to hire some auditing firm to attest that all their infra actually has the token available? Seems expensive.
I disagree; if you suggest doing something, and someone points out a (legitimate) potential flaw/problem/shortcoming/difficulty, then that person has helped you and improved the conversation. Full stop. It might be nice if they can also suggest something better, but it's not necessary. It might even be in the final outcome that the original idea is still the best option, and even then it is preferable that its problems are known and hopefully considered for mitigation.
That still doesn't make sense. How does the ACL work? What prevents the usual shenanigans like cloaking to prevent legitimate detection from working? Moreover what secrets are you even trying to detect? The app API token?
I can't be constructive when your proposal is too vague to know how it works; I'm forced to take pot shots at what I think it is, and then you get upset because I'm not "constructive". Thoroughly explain how your plan works beyond the two sentences in your original post, and I can be "constructive".