
Maybe this is a good time to think about what policy could help discourage these horrific practices (it sounds like their storage was unprotected)

* App Store review requires a lightweight security audit / checklist of the backend protections.

* App Store CTF Kill Switch. The publisher has to share a private CTF token with Apple, stored under a well-known name (e.g. /etc/apple-ctf-token). The App Store can automatically kill the app if the token is ever breached. (Rough sketch after this list.)

* The publisher is required to include their own sensitive records (e.g. access to a high-value bank account) within their backend. Apple audits that these secrets live in the same storage as the consumer records.
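To make the token idea concrete, here's a minimal sketch in Python. SQLite and the table/column names are stand-ins for whatever store actually holds the consumer records; the point is just that the canary lives in the same storage, so any dump of that data necessarily includes it.

    import secrets
    import sqlite3

    # Stand-in for the real consumer-records store (names hypothetical).
    conn = sqlite3.connect("users.db")
    conn.execute("CREATE TABLE IF NOT EXISTS users (email TEXT, id_photo BLOB)")

    # The canary is a regular row among the real records, so any dump of
    # the table carries it out along with the consumer data.
    canary = "apple-ctf-" + secrets.token_hex(16)
    conn.execute(
        "INSERT INTO users (email, id_photo) VALUES (?, ?)",
        (canary + "@canary.invalid", canary.encode()),
    )
    conn.commit()

    # The publisher registers this value with the store operator; if it
    # ever surfaces in a leak, the operator knows this backend was breached.
    print("register with the App Store:", canary)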



Make companies liable for damages when breached.

If you want companies to care about security then you need to make it affect their bottom line.

This wasn't the work of some super hacker. They literally just posted the info in public.


There has to be a better way than just adding another deterrent to starting a company. Could there be an industry standard for storage security? Certification (a known hurdle) is better than "don't fuck up or we'll fine you to death".


Regulate software development. Other industries already do this.

You could:

- make Software Engineer a protected title that requires formal engineering education and mentorship, as well as membership in your country's professional engineering body (Canada already does this)

- make collecting and storing PII illegal unless done by a certified Software Engineer

- add legal responsibility for certified Software Engineers: if a breach like this happens, they lose their license or go to jail, and you know exactly who is responsible because it's the PEng's name on the project

- magically, nobody wants to collect PII insecurely anymore, or hire vibe coders, or give idiots access to push insecure stuff

- bonus: being a certified Software Engineer now boosts your salary by 5x, and the only people who will do it actually know WTF they are doing instead of cowboys, and no company will ever hire a cowboy because of the liability. The entire Internet is now more secure, more profitable for professionals, and dumb AI junk goes in the trash


For writing lists with one item per line, use two line breaks on HN to start a new line.

Like this


Canada does this but it is barely enforced.

Many non-certified people call themselves "software engineers" with no consequence.


I think fines are very reasonable. If you can’t safely do the thing, you should be punished for doing it. If you can safely do the thing, then there is no issue.


Certification is essentially "don't fuck up or we'll fine you to death" with extra steps. Especially because it mostly comes down to the company self-verifying (auditors mostly just verify that you are following whatever you say you are following, not that it's a good idea).

It's not like anyone intentionally posts their entire DB to the internet.


Those extra steps help insulate companies from penalties and lawsuits in a lot of cases.


Professional Engineer (PE) certification for cyber security professionals would help.

Without personal and professional consequences, the default 1 year of credit monitoring for weak security is just the cost of doing business.


How would that help?

By all accounts this app has no security professionals involved with it.

It's not like there was some incompetent cyber security expert saying it's OK to skip ACLs in Firebase.


This is the only way to deter this. Negligence and incompetence need to cost companies big money, business-ruining amounts of money, or this is just going to keep happening.


The problem is: what are the damages? How much are those damages worth?

My SSN / private information has been leaked 10+ times now. I had identity fraud once, resulting in ~8 hours of phone calls to various banks before everything was removed.

What are my damages?


I would suggest that damages should be punitive, not just to make the victims whole. So I don't think it matters.


Punitive damages are a no-go in Europe, given they would mostly result in money transfers from the ruling families to common people.


Have you seen the GDPR? It's basically the definition of punitive damages.


> Make companies liable for damages when breached.

This won't be enough; you have to make PEOPLE liable for breaches.

Making a corporation liable is useless: it's a legal person, and it can simply declare bankruptcy and move on.


I agree, but relying on lawsuits is far too slow and costly. We can reduce the latency of discovery and resolution by adding software protocols.


Having the threat of lawsuits is not really about the actual lawsuit; it's about scaring people into being more careful. If you actually get to the lawsuit stage, the strategy has failed.

> We can reduce the latency of discovery and resolution by adding software protocols.

Can we? What does this even mean?

[Edit: I guess you mean the things in your parent comment about requiring some sort of canary token in the DB. I'm skeptical about that, as it assumes a certain DB structure, and compliance is difficult to verify.

More importantly, I don't really see how it would have stopped this specific situation. It seems like the leak was published to 4chan pretty much immediately. More generally, how do you discover that the token has leaked? It's not like the hackers are going to self-report.]


The signatures would appear in the drop. A primitive version would be file metadata or JFIF. Even the images themselves, or steganography, could be used.
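A minimal sketch of the file-metadata variant, assuming a hypothetical per-app signature: bytes appended after a JPEG's end-of-image marker are ignored by decoders but survive a byte-for-byte copy (re-encoding the image would strip them; real steganography would hide the mark in pixel data instead).

    # Hypothetical per-app breach signature.
    SIGNATURE = b"CANARY:3f9a1c"

    def tag_jpeg(path: str) -> None:
        """Append the signature after the JPEG end-of-image (FF D9) marker."""
        with open(path, "ab") as f:
            f.write(SIGNATURE)

    def is_tagged(path: str) -> bool:
        """Check whether a leaked copy of the file carries the signature."""
        with open(path, "rb") as f:
            return SIGNATURE in f.read()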


I guess, but it seems a bit like a solution that only works for this specific dump - most DB breaches don't have photos in them.

My bigger concern, though, is how you translate that into discovering such breaches. Are you just googling for your token once a day? This breach was fairly public, but lots of breaches are either sold or shared privately. By the time it's public enough to show up in a Google search, usually everyone already knows the who and what of the breach. I think it would be unusual for the contents of the breach to be publicly shared without identifying where the contents came from.


Dark web scanning is common. The developers would be notified when those signatures appear in dark web indexes.

JFIF is just an example. Any file format or metadata could be used as a signature, depending on the storage type.
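The crudest version of the discovery side is just a scan over whatever dump you can get your hands on; a sketch, assuming a hypothetical registry of previously issued signatures (real services index paste sites and leak forums rather than local files):

    import pathlib

    # Hypothetical registry mapping issued signatures to their apps.
    REGISTRY = {b"CANARY:3f9a1c": "com.example.someapp"}

    def scan_dump(dump_dir: str) -> set[str]:
        """Return the apps whose signatures appear anywhere in a leaked dump."""
        hits = set()
        for path in pathlib.Path(dump_dir).rglob("*"):
            if path.is_file():
                data = path.read_bytes()
                hits.update(app for sig, app in REGISTRY.items() if sig in data)
        return hits  # each hit triggers a notification to the store operator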


There is no indication that this particular breach was ever on the "dark web" before being widely discovered.

Yes dark web scanners are a thing, but just because something exists does not mean it would work for a specific situation. I'm doubtful they would work most of the time.


That's a reactive measure. Certainly, it's worth pursuing. Though, just as you can't protect people from being murdered by focusing only on arresting murderers, there is a need for a preventative solution as well.


Maybe the idiot that published this didn't even form an LLC. "Waste of $200."


GDPR makes companies liable for damages when breached.

That is why Tea did not operate in Europe.


Just use your brain and don't upload your face and driver's license to a gossip website. When I was growing up, it was common knowledge that you shouldn't post your identity online outside of a professional setting.

The onus is on users to protect themselves, not the OS. As long as the OS enables the users to do what they want, no security policy will totally protect the user from themselves.


> Just use your brain and don't upload your face and driver's license to a gossip website

Meanwhile, in the UK, new legislation requires me to upload my face and driver's license just to browse Reddit.


The fact that UK politicians cannot use their brains is a separate issue. May I interest you in a VPN?


You only need ID verification for NSFW subreddits, right?


NSFW includes subreddits that discuss beer.


You know, what's funny about NSFW is that a lot of things tagged NSFW are actually regularly discussed at work!


While true, using that logic I can say porn is also discussed at work if you work in the porn industry :)

On a more serious note, implementing such a law without the government also providing a ready-to-use zero-knowledge authentication system is just so unbelievably stupid (for multiple unrelated reasons).


All of Reddit is NSFW. Why are you on Reddit, you should be working!


And requiring KYC to access a subreddit marked NSFW is somehow legitimate why, exactly?


Subreddits now 18-only in the UK include:

r/ukguns, r/cider, r/sexualassault, r/stopsmoking

Think of the children!


> Just use your brain and don't upload your face and driver's license to a gossip website.

It isn't just gossip websites requiring this, and it isn't just gossip websites suffering breaches.


This is becoming less feasible as ID checks become required to access online services like Reddit and Nexus Mods, and for verification on dating apps. Sending facial and document data is becoming mandated by governments across the world.


[flagged]


Do you think it'll stop with those sites? You might need it for your banking app soon, or to browse LinkedIn, etc.


For banking it's fine; I expect to need to prove my identity to my bank, and it's tied to my bank accounts. And I expect a bank to have high security.

The vast majority of online services have no good reason to want my ID, nor will they ever get it.


Banks already ID you in person (at least the ones with branches). And LinkedIn has been useless for years for most of my family and friends.


Then I'll do my banking in person, and stop browsing LinkedIn. I'm looking forward to my reduced dependence on the internet.


Your bank will close branches thanks to the incredible convenience of online banking.


They haven't done that yet, and if they do there are tons of other banks for me to use.


Is there a single bank that doesn't require ID to start an account?


Life is better with Skyrim mods.


The App Store audits and restricts functionality within the iPhone, but the backend protections are the Wild West.

"Use your brain" is no substitute for security. This is a hacker forum. We think about how to protect apps. Even smart people have slipped up.


Good thing our children will learn all about this at their mandatory Internet Literacy Fundamentals course they have to take in high school.

Oh wait—no such thing exists!

It's up to us to teach this to our children. There's no hope of getting the current generations of Internet users to grasp the simple idea that app/website backends are black boxes to you, the user, such that there is absolutely nothing preventing them from selling the personal information you gave them to anyone they see fit, or even just failing to secure it properly.

Without being a developer yourself or having this information drilled into you at a young age, you're just going to grow up naively thinking that there's nothing wrong with giving personal information such as photos of your driver's license to random third parties that you have no reason to trust whatsoever, just because they have a form in their app or on their website that requests it from you.


Education is helpful, but it's also inadequate. We need good drivers and good driver safety systems; they go hand in hand.

Even the most savvy consumers slip up, or are in a hurry. It's impossible to make a perfect security decision every time.


Yeah, just upload the pictures of unsuspecting guys.

Sorry, well deserved, ladies. It just made my day. ROTFL.

And please provide an app with all the names and pictures of the ladies who used it. So that I can easily check who not to date.


Nice, some unsolicited victim blaming!


In this case it appears to be a public Firebase bucket; shutting down the app wouldn't help. Quite possibly access to Firebase was mediated through a backend service and Apple couldn't validate the security of the unknown bucket anyway.


Also, about validating the backends: Apple has the resources to provide a level of auditing over the common backends. S3, Firebase -- perhaps the top 5. It's easy to give Apple limited access to query backend metadata and confirm common misconfigurations.
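As a sketch of how cheap some of these checks are: probing whether a Firebase Storage bucket is world-listable takes one unauthenticated request against the standard object-listing endpoint (bucket name hypothetical). S3 has analogous checks via its public-access-block settings, given a read-only audit role.

    import urllib.error
    import urllib.request

    # Hypothetical bucket; /v0/b/<bucket>/o is Firebase Storage's
    # standard object-listing REST route.
    BUCKET = "example-app.appspot.com"
    URL = f"https://firebasestorage.googleapis.com/v0/b/{BUCKET}/o"

    try:
        with urllib.request.urlopen(URL) as resp:
            # A 200 on an unauthenticated listing means the bucket is public.
            print("world-listable bucket, flag for review:", resp.status == 200)
    except urllib.error.HTTPError as e:
        print("listing denied (good):", e.code)  # expect 401/403 when locked down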


I partially agree. At least the threat of app shutdown would be enough consequence for the publisher to take things seriously.


I think iOS and Android already hold the threat of app store removal over developers' heads.

Presumably the risk/reward still favors risky practices.


But it's not contingent on backend violations, only frontend ones. I'm proposing decoupled ways for App Store validation to audit backend security.


> The publisher is required to include their own sensitive records within their backend.

Now that's a creative solution! Every admin must have a table called `MY_PERSONAL_INFO` in their DB.


Wouldn't it be funny if the App Store had to review it and make sure the personal info was sensitive and possibly humiliating enough. "Sir, your app has been denied because the MY_PERSONAL_INFO table requires at least 3 d-pics."


More power to app store reviewers? Please no. They already deny apps for random reasons and figuring out why is often a hair pulling experience.


I agree about the power concerns, but where would you assign the authority, if not the App Store?


This is the kind of thing government regulation is useful for, when it works.


In practice, they delegate certification to a legacy, expensive certification authority.


* Mandate 3rd-party auditing once an app exceeds 10k users

* The app publishing process includes signatures that the publisher must embed in their database. When those signatures end up on the dark web, the App Store is notified and the app is revoked


> * Mandate 3rd-party auditing once an app exceeds 10k users

You have a lot of interesting suggestions.

I would love to see some kind of forced transparency. Too bad back-end code doesn’t run under any App/Play Store control, so it’s harder to force an (accurate) audit.


Also, I remember Facebook trying something like this when they acquired Parse. For a while they were promoting that developers host their backends on Parse/FB.

The idea has merit. You have to relinquish some control to establish security. Look at the App Store, Microsoft Store, macOS App Store -- they all sandbox and reduce API scope in order to improve security for consumers.

I'm more on the side of autonomy and trust, but then we have reckless developers doing stuff like this, putting the whole industry on watch.


Thanks. Yeah, I think there are a lot of ways to decouple the App Store from the publisher and the auditor. That way the publisher can retain autonomy/control while still developing trust with the consumer.

We could do better in our trade at encouraging best practices in this space. Every time there's a breach, the community shames the publisher. But the real shame is on us for not establishing better auditing protocols. Security best practices are just the start. You need transparent, ongoing auditing and pen-testing to sustain them.


Yes, pushing companies away from mobile apps and towards PWAs or even ordinary websites does sound like an excellent idea.


It could be an enhanced certification, like "Enhanced Security" or "End-to-End Security", to allow gradual adoption.


So like those EV certs that turn the address bar green.


Better, in that the App Store has more weight and more leverage to establish more comprehensive auditing.

The EV certs failed because general SSL identity is pretty weak. Consumers don't know how to use it to establish trust, and there's no enforcement of how the names are used. For example, my county treasurer has me transfer thousands of dollars on a random domain name.


The world is moving away from App Stores and walled gardens. Figure out other options.


The world was moving away from App Stores and walled gardens. And then I woke up, and returned to grim reality.


That sounds preposterous. Can you qualify that?


The EU took sideloading mainstream. It's only a matter of time before that becomes the norm.

You also need to consider the fact that Epic Games is big enough that they could have just used an exploit to sideload Fortnite back on the iPhone, and the lawsuit was basically PR to draw attention to the App Store's ambiguities. That in itself shows App Stores are slowly on their way out.


What level of adoption would you consider the "norm"? I don't expect more than 2-3%.


Linux is up to 5% of the desktop. GOG and Itch.io are DRM-free and slowly gaining ground against Steam. Fediverse networks are slowly gaining ground against traditional social media. Signal is more popular than ever.

There will always be lowest-common-denominator users, but there is clearly some demand for an alternative to the biggest 5 websites...


>There will always be lowest-common-denominator users,

Interesting play, calling 95% of users "lowest-common-denominator". Those silly, blabbering morons that don't understand that they should be running Bazzite on their Framework laptops instead of using evil evil software.

>there is clearly some demand for an alternative to the biggest 5 websites...

This demand doesn't pay, and it also happens to come from some of the most demanding, entitled users you'll ever see.


I wasn't intending to insult anyone, just to refer to most users as "people who don't want to think about the tools they use, and don't care about privacy". Which I think is true of most people. I never called anyone a moron; they just have different priorities.


Meanwhile, an Android app for some random banking or government thing will require an attested boot chain measured all the way down to the stage-0 ROM burned into the SoC. That's not to say the open ecosystem isn't better, but to say it's winning enough to guarantee sustained general-purpose viability is simply untrue.


>Apt install app

Mmmhmm


I see, thanks for clarifying.


>* App Store CTF Kill Switch. The publisher has to share a private CTF token with Apple, stored under a well-known name (e.g. /etc/apple-ctf-token). The App Store can automatically kill the app if the token is ever breached.

How do you enforce that the token actually exists? Do app developers have to hire some auditing firm to attest that all their infra actually has the token available? Seems expensive.


[flagged]


It's perfectly possible to point out a flaw without suggesting a replacement.


[flagged]


I disagree; if you suggest doing something, and someone points out a (legitimate) potential flaw/problem/shortcoming/difficulty, then that person has helped you and improved the conversation. Full stop. It might be nice if they can also suggest something better, but it's not necessary. It might even be in the final outcome that the original idea is still the best option, and even then it is preferable that its problems are known and hopefully considered for mitigation.


It could be made available just to Apple's servers via an ACL or a protected token, but to no one else.


That still doesn't make sense. How does the ACL work? What prevents the usual shenanigans, like cloaking, from defeating legitimate detection? Moreover, what secrets are you even trying to detect? The app API token?


[flagged]


I can't be constructive when your proposal is too vague to know how it works; I'm forced to take pot shots at what I think it is, and you get upset because I'm not "constructive". Thoroughly explain how your plan works beyond the two sentences in your original post, and I can be "constructive".


I like the CTF one, but it would probably be hidden way deeper than the rest of the info.



