New study: The advertising industry is systematically breaking EU law (forbrukerradet.no)
454 points by robin_reala on Jan 15, 2020 | 183 comments



"– Every time you open an app like Grindr advertisement networks get your GPS location, device identifiers and even the fact that you use a gay dating app. This is an insane violation of users’ EU privacy rights, says Max Schrems, founder of the European privacy non-profit NGO noyb."

Yes, for once "insane" doesn't seem like an overstatement.


In what way is this a violation? It is not illegal for companies to process data that you have signed away in a contract. It is allowed if [1]:

>they have a contract with you – for example, a contract to supply goods or services (i.e. when you buy something online), or an employee contract

You just have the right to request that your data be deleted, but as long as you don't do that, why shouldn't your data be processed?

Additionally, it is allowed to process private data without a contract if doing so is necessary; only excessive use of private data is illegal.

But most services will have a contract and thus it is legal.

Minor detail: services cannot be restricted if you don't agree to share your private data. But that doesn't touch sharing the data that was signed away with a contract.

[1] https://europa.eu/youreurope/citizens/consumers/internet-tel...


> Minor detail: services cannot be restricted if you don't agree to share your private data. But that doesn't touch sharing the data that was signed away with a contract.

That's not minor, it's major. It invalidates any sort of "let us use your data in order to use the app" profiteering clickwrap nonsense, and any kind of "contract" derived from that would be void.


You don't have to exclude people, you just have to make it inconvenient to not click ok. The majority will accept the default value.

It is the same with ad-blockers. YouTube would be bankrupt if people weren't lazy.


Also wrong: the "do not share" option has to be the default, and displayed larger and clearly visible.


Can you give me a reference for that, please? I have only found [1]:

>It states that you should integrate data protection from the designing stage of processing activities. Article 25 of GDPR lists the requirements for data protection by design and default.

But that's the processing, not the agreement.

And about consent [2]:

>Freely given - the person must not be pressured into giving consent or suffer any detriment if they refuse.

>Specific - the person must be asked to consent to individual types of data processing.

>Informed - the person must be told what they're consenting to.

>Unambiguous - language must be clear and simple.

>Clear affirmative action - the person must expressly consent by doing or saying something.

But "freely given" doesn't forbid using a default, or does it?

[1] https://www.cookielawinfo.com/gdpr-privacy-by-design-and-def... [2] https://www.privacypolicies.com/blog/gdpr-consent-examples/


You said yourself:

>>Clear affirmative action - the person must expressly consent by doing or saying something

It's not "clearly affirmative" if it's a default that's difficult to find the alternative to, or easy to select by mistake.


That's a subjective interpretation of "clearly affirmative" that matches more closely with "obvious and clear or straightforward". An option can be obvious to find and yet worded unclearly, as we have all encountered: "do you want to opt out or not?" Y/N. Wait, what did the text blob above this say?

Pretending this isn't a delicate issue creates more loopholes and bench time than it helps consumers.


I think "clearly affirmative" is only possible when it's "clear" - the consumer cannot clearly affirm to an unclearly-worded question.

You can also cover it under "easy to select by mistake [the unintended answer]".

I think the root of the problem in most cases is that companies don't want to help consumers, they actively want to mislead them and then claim plausible deniability.

I don't understand which sense you mean by "pretending this isn't a delicate issue..." here. I can't even tell if you are pro-GDPR or anti-GDPR from that, and which of those positions you consider to be helping consumers more. Ironic, Y/N? :-)


Can you tell if I am pro- or anti-GDPR?

They, like me, are not arguing for a side (I assume). We are pointing out that the legal situation is not as clear as the article suggests.

Judging by the lack of cases won, a default ok button seems to be 'clearly affirmative' enough.

The law states [1]:

>It shall be as easy to withdraw as to give consent.

Nothing states that consent has to be more difficult than non-consenting.

>the request for consent shall be presented in a manner which is clearly distinguishable from the other matters, in an intelligible and easily accessible form, using clear and plain language.

The request has to be distinguishable, not the consent.

[1] https://gdpr-info.eu/art-7-gdpr/


> Nothing states that consent has to be more difficult than non-consenting.

Nobody is arguing that consent should be more difficult.

The complaint is that non-consent is often much more difficult than consent, sometimes ridiculously so.

In my personal experience I have been unable to find the no-consent option at all on some sites. Just links that go around in circles, sometimes to hundreds of ambiguous and mixed-polarity yes/no-or-was-it-no/yes-style options (one for each of hundreds of "partner sites" I've never heard of), with the only clear option being consent-to-all.

If I eventually click on "ok" that is not freely given consent, it's coerced due to me being unable to find or understand how to decline it.

It is technically easy to provide a "decline-to-all" option whenever they have provided a "consent-to-all" option.

Therefore, clearly companies which provide an easy consent-to-all but make decline-to-all virtually impossible to select, or actually impossible, are doing so deliberately, intending to frustrate the consumer from exercising their rights.

The law says that a person should be able to decline if they choose, that it should be easy enough to do, and easy to understand which option they are choosing. Such sites are not compliant with that principle, and it looks like deliberate non-compliance to me.

> The request has to be distinguishable, not the consent.

Well, "the request" is what we've been talking about. It means the UI. Things like "Ok" and "decline" buttons, how the options are presented, how they are explained clearly and unambiguously, the ease and accessibility of selecting the freely chosen option, that sort of thing.


>Therefore, clearly companies which provide an easy consent-to-all but make decline-to-all virtually impossible to select, or actually impossible, are doing so deliberately, intending to frustrate the consumer from exercising their rights.

Yes, that's their business concept and it is legal.

>The law says that a person should be able to decline if they choose, that it should be easy enough to do, and easy to understand which option they are choosing.

The law states:

>>It shall be as easy to withdraw as to give consent.

>Such sites are not compliant with that principle, and it looks like deliberate non-compliance to me.

I rather think that they follow the law to a T. People would love it if this behavior were illegal, but they forget that companies are involved in the law-making process, too. The EU wants its companies to be competitive on the internet. Making it impossible for companies to finance themselves with advertising in their home market would kill its already weak internet economy. Who would accept the sharing of private data if rejecting were as easy as accepting?

GDPR is a compromise between the protection of netizens and the business interests of the economy. As such, it protects against the worst abuse, but the world is not free. In one way or another, somebody has to pay.


>>>It shall be as easy to withdraw as to give consent.

>>Such sites are not compliant with that principle, and it looks like deliberate non-compliance to me.

> I rather think that they follow the law to a T.

I think "as easy" is plainly incompatible with "much harder" or "impossible".

You cannot make something plainly much harder than something else, and still pass the "as easy" test in the law to a T.

You also cannot pass the "accessible" test that way.

>> intending to frustrate the consumer from exercising their rights.

> Yes, that's their business concept and it is legal.

I don't believe it is legal, because these are statutory rights.

To use an analogy that involves another statutory right, it would be like a company preventing you from exercising your right to return a broken product "because it's their business model to ship defective products and we cannot kill the economy by preventing that business". Companies do get away with that, because people can't find the energy to pursue it, especially for small violations, but when sued those companies do lose.

You cannot determine that it's legal just from the fact that companies get away with it.


It's one thing to reject consent and another to withdraw consent.

The companies can make the rejection difficult as long as the withdrawal is as easy as the giving.


Not if the rejection is made so difficult, or impossible, or inaccessible, or incomprehensible, or ambiguous, that the consent fails to meet the standard of freely given consent.

Clicking the "ok I consent" button does not count as consent under the law if the user believes they have to click it to use the service, assuming what is attached to that button isn't technically necessary for delivery of the service.

And holding PII for marketing and tracking purposes does not count as necessary, despite any economic argument that it pays for the service. That argument is disallowed.


>the request for consent shall be presented in a manner which is clearly distinguishable from the other matters, in an intelligible and easily accessible form, using clear and plain language. [1]

On which part of the law do you base your first paragraph? The text that fits, for me, is all about the consent, not the rejection. It must be easy to understand what a person consents to, but the rejection can be difficult.

There is also:

>When assessing whether consent is freely given, utmost account shall be taken of whether, inter alia, the performance of a contract, including the provision of a service, is conditional on consent to the processing of personal data that is not necessary for the performance of that contract.

Services have to point out that consent is not necessary. If that's usually not done, then this abuse can be ended by notifying the EU. I thus assume that most services offer that notice. Then it is very difficult to argue in court that a user still believed that they didn't mean to give consent. People have to argue for their own legal incapacity if they want to get out. Who would do that?

The compromise of the law is that people in general mindlessly click ok so that targeted advertising is possible. People who mind tracking can easily opt out. This leaves the ignorant to be tracked. How else should free services be financed? The only other option is making people pay for everything, which is ok but would be a radical shift for the internet.

[1] https://gdpr-info.eu/art-7-gdpr/


"Consent requires a positive opt-in. Don’t use pre-ticked boxes or any other method of default consent."

https://ico.org.uk/for-organisations/guide-to-data-protectio...


Except one of the aims of GDPR is to explicitly rule that out. Consent requires informed, positive opt-in. Not a dark-patterned default.

See ICO's guidance on consent: https://ico.org.uk/for-organisations/guide-to-data-protectio...


As you write: 'aim' and 'guidance'. But is it actually part of the law?


They link to the actual legislation at the foot of the page, noting the relevant sections. GDPR is very readable, and pretty concise. They also link to two more in depth explorations of consent. Bear in mind the ICO are the UK's enforcement body, so are presenting an accurate picture of the law. To quote three relevant points from the linked page:

The GDPR is clearer that an indication of consent must be unambiguous and involve a clear affirmative action (an opt-in). It specifically bans pre-ticked opt-in boxes. It also requires distinct (‘granular’) consent options for distinct processing operations. Consent should be separate from other terms and conditions and should not generally be a precondition of signing up to a service.

The GDPR gives a specific right to withdraw consent. You need to tell people about their right to withdraw, and offer them easy ways to withdraw consent at any time.

If you make consent a precondition of a service, it is unlikely to be the most appropriate lawful basis.

So yes it's law, unless and until someone manages to appeal some interpretation of a point all the way up the chain.


Imagine an intrusive box on a start page which asks you for consent to some data processing, e.g. to show you the best-matching advertisements. That box has a big ok button at the center and a small cross to exit at the top. That's a situation where most people will press ok and give clear affirmative consent.

Since most people press ok, it costs little to also offer the service to those who cancel. Actually, those people still leave a signal, and you can show special ads to anybody who isn't part of the ok-clicker database.

I don't see how this would violate the GDPR:

- unambiguous and clear affirmative action. People press ok and not the closing cross.

- no pre-ticked opt-in box

- distinct consent to advertisement processing

- separation from other terms

- not a precondition of signing up

- remaining right to withdraw consent

- ability to also tell people about their right to withdraw in that box

- possible to offer an easy way to withdraw

Actually withdrawal has to be as easy as consent. The law states:

>It shall be as easy to withdraw as to give consent.

That's the point where everybody is violating the law because the opt-out button is not constantly shown like the ok-button for opt-in.


What I find very unsettling is that opt-out options are often processed much more slowly than the “ok with everything” option, to the point that it’s almost unusable. I remember Oath being one of these networks.


How about making it illegal to withold your data from commercial sites? Would that be reasonable?


There are rights that you cannot sign away. For example you can't sell yourself into slavery in most countries. The signed contract is irrelevant because it is unenforceable in law, in much the same way as if it contained a clause putting you in indentured servitude for violations of the TOS. Its only purpose is to obscure people's rights.


>It is not illegal for companies to process the data which you have signed away with a contract.

This isn't true. The actual GDPR text on consent states [1]:

>the data subject has given consent to the processing of his or her personal data for one or more specific purposes;

-> So processing of data based on consent is limited to the purposes agreed to.

or

>processing is necessary for the performance of a contract to which the data subject is party or in order to take steps at the request of the data subject

-> So processing of data is necessary to fulfill a request made by the data subject.

This is a far cry from "processing the data which you have signed away". The only blanket allowance for data processing is this item:

>processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data

Which again limits the processing allowed under GDPR: processing of data under this rule requires the legitimate business interest to be weighed against the subject's personal rights.

[1] https://gdpr-info.eu/art-6-gdpr/


>So processing of data based on consent is limited to the purposes agreed to.

Why does this exclude a broad clause, e.g. one that states that the data is used to show the best-fitting advertisements? You can see that purpose on many webpages. Almost any current data processing will fall within that area.

In conclusion, I still think that you can sign away the right to process your data.


From the article:

> The extent of tracking makes it impossible for us to make informed choices about how our personal data is collected, shared and used, says Finn Myrstad, director of digital policy in the Norwegian Consumer Council.

The Council's point is that the average consumer is not capable of meaningful consent due to the extent and obfuscation of the tracking involved. Much of contract law, afaik, requires a person to be capable of consent at the time of signing for a contract to be valid. The Council's argument, therefore, is that even if technically a consumer has "signed away" the right to process their data, the contract is not valid because the consumer is not able to, and/or cannot get the information required to, make an informed choice when signing.


The concept of processing necessary for the performance of a contract is interpreted extremely narrowly by data protection law. Rightly so, because otherwise it would give entities far too much latitude to stuff as many different processing activities as possible within that ground, even though certain processing activities aren't at all necessary to provide the service.

With Grindr, they only need to process data to provide the service by making it available to you and to other users. What they definitely don't need to do in order to provide the core service is to share your data with third parties who can then use it for their own purposes.

Any argument that the processing is necessary because it's an ad-funded service would not be acceptable under data protection law.

On that basis, performance of a contract would not be a relevant ground. You're also looking at e-Privacy Directive considerations in the EU where either a cookie or similar is essential to provide the service, or you need consent. Similar for location data, you will generally need consent.

So you not only have GDPR issues but also e-Privacy Directive issues where your processing grounds are actually incredibly limited anyway.


>or you need consent

That was my point. People sign contracts where they consent to sharing. The advertising industry is not breaking the law because they don't use the data that is necessary for the performance but they use the data that is voluntarily shared.


And as mentioned above, the sharing aspect of those contracts is more or less void if the personal information sharing was implied to be required to use the service, and/or was not opt-in with the no-sharing option as the prominent default.


I have family in the industry (digital ads, bidding for ads) and some of the stuff they have explained to me is pretty gnarly. One cousin was explaining an idea he had for a new business and it was essentially just about collecting as much personal data as possible. Imagine a simple DB table with 'user_id' as primary key. Then you just start adding data on top of data with new columns or joins. After a while you have hundreds (thousands) of datapoints per person, then you start selling that data.
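The "table keyed on user_id, keep adding columns" idea described above is easy to picture as a few lines of SQLite. This is purely a hypothetical sketch: every column name below is invented for illustration and not taken from any real product.

```python
import sqlite3

# One row per person, keyed on user_id, with datapoints bolted on
# as new columns over time. All names here are made up.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE profiles (user_id TEXT PRIMARY KEY)")

# Each new data source becomes another column joined onto the profile.
for column in ("gps_lat", "gps_lon", "device_id", "apps_installed",
               "age_bracket", "income_estimate"):
    con.execute(f"ALTER TABLE profiles ADD COLUMN {column}")

con.execute(
    "INSERT INTO profiles (user_id, device_id, age_bracket) VALUES (?, ?, ?)",
    ("u-123", "a1b2c3", "25-34"),
)

# After a while you have hundreds of datapoints per person.
cols = [row[1] for row in con.execute("PRAGMA table_info(profiles)")]
print(cols)
# -> ['user_id', 'gps_lat', 'gps_lon', 'device_id', 'apps_installed', 'age_bracket', 'income_estimate']
```

The unsettling part is how little schema design this takes: each new data source is just another ALTER TABLE.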

So I want to help my family out because I have the tech skills and building some prototypes would be easy. But I'm also not sure I want to feed the advertising beast. Shit is whack right now, man. They listen to you talking to your wife through your phone about buying a mattress, going to a concert, taking a trip somewhere. Next time you open the browser you are bombarded with relevant ads. Too creepy, no thanks!


> they listen to you talking to your wife through your phone about buying a mattress, going to a concert, taking a trip somewhere. Next time you open the browser you are bombarded with relevant ads.

I've heard this from non-tech family/friends.

Is this actually happening though? If so, who is doing it and how?

Edit: I'm referring specifically to ad tech that targets ads based on overheard conversations.


Without doxing myself by giving away too much info, I can say with 99% certainty that this is not happening with Facebook or Apple.

The remaining 1% is if policy changed in the past few years (read: fewer than 3), or if there was some top-secret team that few knew about working on this.

In general, these cases are coincidences based upon one or some combination of the following:

1) Search history that became "subconscious" and the person forgot. Anecdotally: check your history and see if you remember _everything_ you've ever searched for... you may be surprised. Android especially saves more than you might think.

2) Related searches or characteristics relevant to the market. Example: during Black Friday in the 'States, many people are searching for TVs, so Amazon or Walmart will probably serve you ads for that in anticipation. Let's say with 10% odds, 1/10 people will see this ad and think "huh - I was just telling my wife we need a new TV. How did they know?" Law of large numbers and such...

3) The person installed malware that IS collecting sound samples, feeding that data to an ad-server, and actually performing this malicious behavior - but it isn't necessarily FB/Apple/etc. You tend to see this more on Android, as the Play Store has too many apps with malware like this, but it can happen on iOS too if the user isn't careful about privacy permissions.

Hope that long response helps answer your question =)


Your item (3) definitely seems to match the current situation in the wild - FAANG companies are genuinely invested in maintaining their brand and reputation and so I do completely agree that they wouldn't tend to collect seemingly shady or 'underground' data themselves.

I think the definition of malware is blurring though. On some websites when I view the list of advertisers & third parties listed in consent dialogs, it's literally hundreds of them. There's no way to tell how many of those are equally motivated to protect user data and be on best behaviour.

It's also easy to forget that many users simply don't have the same understanding or reasoning about phone permissions that the audience on HN does.

I worry a great deal that we're walking into a world where older generations in particular are exploited by technology via a bombardment of settings and dialog boxes -- and that we're veering away from the real promise of technology, which is to provide clear, simple, fast and effective life improvements.


> I think the definition of malware is blurring though. On some websites when I view the list of advertisers & third parties listed in consent dialogs, it's literally hundreds of them. There's no way to tell how many of those are equally motivated to protect user data and be on best behaviour.

I worked on a side project a few years ago for price-checking stuff. When I was working on the Toys R Us integration I noticed their site loaded insanely slowly. So I dug deep, and the site was making dozens of hits to random URLs. After a bit of spelunking (whois lookups, etc.) I found the webpage was having my browser contact pretty much every major tech company (or a subsidiary of one) you could think of: Oracle, IBM, Cisco, Facebook, etc.

It made me realize how utterly insane the web has gotten.
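You can do the same spelunking for any site by exporting its network activity as a HAR file (most browsers' DevTools Network tab offers this) and listing the distinct third-party hosts it contacted. A minimal sketch, assuming a standard HAR 1.2 structure; the site and tracker names below are invented for the example:

```python
from urllib.parse import urlparse

def third_party_hosts(har: dict, first_party: str) -> list:
    """List the distinct hosts a page contacted, excluding the first party.

    `har` is a parsed HAR export, e.g. json.load(open("page.har")) after
    saving one from the browser's Network tab.
    """
    hosts = set()
    for entry in har.get("log", {}).get("entries", []):
        host = urlparse(entry["request"]["url"]).hostname or ""
        # Treat the site itself and its subdomains as first-party.
        if host != first_party and not host.endswith("." + first_party):
            hosts.add(host)
    return sorted(hosts)

# Tiny hand-made HAR fragment for illustration; a real export has far more fields.
sample_har = {"log": {"entries": [
    {"request": {"url": "https://www.example-shop.com/product/123"}},
    {"request": {"url": "https://cdn.example-shop.com/main.js"}},
    {"request": {"url": "https://pixel.tracker-one.net/collect?id=abc"}},
    {"request": {"url": "https://ads.tracker-two.io/bid"}},
]}}

print(third_party_hosts(sample_har, "example-shop.com"))
# -> ['ads.tracker-two.io', 'pixel.tracker-one.net']
```

On a real retail page like the one described above, that list can run to dozens or hundreds of hosts.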


> I can say with 99% certainty that this is not happening with Facebook or Apple.

> The person installed malware that IS collecting sound samples

The Facebook app uses certificates that circumvent permissions. Any analytics package installed in the Facebook app could actually be listening and putting that on their ad server, making the Facebook app the malware that is collecting sound samples.

Facebook Inc. can accurately say "we aren't collecting" and may actually have no knowledge, and everyone continues to be misdirected.

Ironically, whether it was a package in the Facebook app, or from any other service on the phone, the ad-server is sharing the same fingerprinting across to all the other apps and companies including Facebook.


If I remember right, there was a 'scandal' last year where journalists worked on transcribing voice data from Google Assistant recordings. They found out that it activated plenty of times without the key phrase (it is easy to see why that happens), and the recordings occasionally contained PII.

Probably by sheer 'coincidence', when Google was being investigated, Apple, Samsung and a few others shut down similar initiatives.

And with how hostile and profitable the advertising world is, I'll stay with 'guilty until proven innocent' mindset.


Did they also say that it was used for advertising? I don’t think it’s much of a stretch to think they would, but it doesn’t sound like a smoking gun.


You are saying, "Trust me I work with apple and facebook"

No, no trust, none at all, zero. This is not at all in any way personal and you're anon so it couldn't be.

Do you understand how that works and why? Pathological lying has taken place in the world's most successful bait and switch. Nobody agreed to this. Not a single person agreed to a surveillance state and the creation of a turnkey fascist enforcement solution. The Stasi couldn't have dreamed of having so much power.

Zero trust. Less than zero. Facebook and Apple (and others) have now been caught and are desperately trying to pretend it's all ok. It isn't. Not even close.

We now have to assume the content is lies; we don't have a choice. The fact you need to be anonymous in claiming everything is really ok is telling.


Do you not trust anything on Wikipedia? Of course you can't ascertain 100% whether a comment is true or not but going around saying everything is a lie because Ad companies lie to you doesn't seem very helpful. OP wasn't saying trust me I work at Apple/Facebook they were saying: I had pretty good visibility into internal projects and the codebase and from what I saw that type of tracking wasn't going on. Of course you can only take that type of comment at face value but to assume it's a lie seems silly.

Not believing a PR/damage control statement from an Ad company on the other hand is probably the right thing. Now at some point Ad companies may start doing huge disinformation campaigns on social media with paid commenters but that doesn't seem to be the case yet.


You trust Google anytime you plan your way to the airport with maps.

“Zero trust” sounds cool because cynicism is often confused with smartitude. But it is impossible to actually verify even a tiny slice of the information you consume and rely on every day.


Conflating reading a map with a claim of "nothing to see here, there is no crime, trust me." is utterly ludicrous. But I think you know that.


Then I guess when you say “zero trust” you actually mean “some trust, but not enough for that”.


If you wish to deliberately play silly semantic games to obfuscate the obvious and intended meaning, there is no prospect of a meaningful exchange of ideas. This conversation is well beyond sense: if you have a point, I have no idea at all what it could be, regarding either whether facebrick et al are criminal enterprises, or whether you should trust at all those who claim, anonymously, without any evidence one way or another, that facebrick aren't misbehaving terribly. Especially given hard-won experience and the mess we are now in. Especially because we know they are desperate not to be seen that way and spend money to that effect.

But yeah, maybe it doesn't involve trust at the level of whether the letter s is actually q. Sure.


>> You are saying, "Trust me I work with apple and facebook"

The exact turn of phrase was "Facebook or Apple". Formally speaking, you cannot conclude that they are working for either, given they phrased it as a disjunction, much less that they are working for both.

So maybe let's not jump to conclusions about other commenters? Who knows what the GP meant by "doxing myself"?

(Note: formally, "A or B" implies neither A alone nor B alone.)


FFS

The claim is "I have inside info at Apple and facebrick, therefore trust me". The pedantic difference is utterly meaningless here, but you know that.

The answer is you absolutely refuse to trust an anonymous person based on this claim. The end. Period. Yeah? Yes.

Interesting, all the pedantic responses that have zero bearing on that, including yours.

Trust was asked for. The only sane response is to refuse, publicly and loudly.


Even if "trust no one" is good advice, it's certainly not evidence that Facebook or Apple listen to audio from your phone and use that to target advertisements.


Verify using evidence. Especially when dealing with the claims of an industry of pathological liars.

Especially verify if there are zero consequences for someone deliberately and falsely making a claim that this time there is no deception.

Deliberately false and misleading claims by facebrick, goog, apple and the entire internet advertising con-job is how we got to this point, remember. So maybe don't just decide to believe blindly when that has been tried before with the outcome we have.

The _denial_ is the thing that is not evidence. Whether it is happening or not, the denial here is not worth the electrons used to deliver it.

I make no claims at all about what /is/ happening beyond all these companies having zero credibility given their incredible bait and switch to produce an outcome that literally nobody agreed to.


I don’t trust them further than I can throw them; however, I’m inclined to believe them in this case given the practicality of it. Capturing and parsing all that audio is expensive and would have a low signal-to-noise ratio. Additionally, many of these companies are sitting on data that is significantly more valuable and relevant.

So purely from a selfish business perspective, which is how I assume they make decisions, why would they implement this? And that is to say nothing of the PR risks.

What do you have leading you to believe it’s happening?

Or are you simply saying we should not implicitly trust the anon commenter? If so, I agree fully.


> Verify uisng evidence. Esepcially when dealing with the claims of an industry of pathological liars.

Verify what, exactly? Verify literally every claim someone else makes that Facebook denies? There's simply no smoking gun to indicate that they're listening to phone audio. It would be extremely surprising if security researchers had not discovered them secretly doing this, or that a concerned employee wouldn't have leaked it to the press.

I don't trust these companies either, but that doesn't mean I believe any and all negative claims made about them, particularly claims that don't have any evidence supporting them other than the occasional well-targeted advertisement.


So now flip it and decide whether you trust positive claims that facebrick are not doing whatever.

That is what we are discussing, here, now.

I don't trust those positive claims that facebook are not misbehaving. If you do, I can't help you.

Is that evidence of their nuclear weapons program? Obviously and clearly not. I have never here put forward evidence of their misbehaving. That continues to come regularly. Whether it is this or something else, that more will come is a reasonable bet.

Should you trust them, or anyone who says without evidence "we are doing nothing wrong"? That is what is being addressed here.

Do you? Really?

So much pushback for taking exception to an anonymous defence with no evidence. Surely taking exception to anonymous claims with no evidence because "trust me" is what every single one of should do. Especially when we've been burned so very, very hard.

Until there are real consequences for telling lies, trusting Facebook denials is naive in the extreme.

Fool me a 56th time, what am I? An owner of a facebrick account.


Re: #2, it's also possible that an ad you saw and dismissed prompted you to start thinking about replacing your TV, and then a later ad from the same campaign seems like an odd coincidence.


I worked at Facebook for four years (three of which on the iOS app) and it's implausible to me that they're using the microphone to listen into people’s conversations, at least on the flagship apps (Messenger, Facebook, Instagram).

They'd have to be using some very sophisticated techniques to hide internally that they're doing it, given that engineers can all see the entire codebase. I was pretty intimately familiar with every bit of the build process; it'd have been hard to miss some injection of secret code. Also, the benefit would be minuscule, and the threat to their reputation if caught (given that Zuckerberg has very explicitly claimed not to be doing this) would be huge.

I am almost certain that it's not happening on Android either, but I'm not as familiar with that codebase, so I won't claim to know first hand.


I recently read an article by Nassim Nicholas Taleb: https://medium.com/incerto/the-intellectual-yet-idiot-13211e...

Here is an excellent description of the Big Tech employee today: "but their main skill is capacity to pass exams written by people like them".

Yes, Facebook is like a cult. You can verify this by asking any Facebook employee their view on privacy. You will usually get a ramble which will look exactly like the other passage from that essay:

"that class of paternalistic semi-intellectual experts.... who are telling the rest of us 1) what to do, 2) what to eat, 3) how to speak, 4) how to think… and 5) who to vote for."

And I will add 6) why privacy does not matter.

My point is: I think this person is lying. Now, if only Facebook open sourced its entire codebase, I would be willing to take back my view. See the problem here? It will be back to square one.

"Trust us. We know we are right. And we must be right because we are successful.."


Sure, facebook probably doesn't listen to your conversations directly, but I can believe some random free to play game does, and through various data brokers that data can end up with people who advertise on facebook.


the threat to their reputation if caught (given that Zuckerberg has very explicitly claimed not to be doing this) would be huge.

"We're sorry, we can do better." ad nauseam.


What does that prove, though? Those very same engineers are accessories to all the crap that we do know Facebook has pulled in the past, and is likely still pulling in the present. That code doesn't have to be secret at all.


There are quite a few ex-Facebook people who have come out with scathing criticisms of the company, and plenty of Google's internal political fights have spilled out into the media.

I'm fairly certain that if this has been happening for years, somebody would have leaked confirmation or publicly admitted to it by now.


>Is this actually happening though? If so, who is doing it and how?

It's possible but a lot of the reports are clearly coincidences. Take this example of someone watching 'Catch Me If You Can' 2 days ago, and then yesterday seeing a HN posting about the main character and finding it hard to accept as coincidental.[0]

0. https://news.ycombinator.com/item?id=22048337


I think, in this case, you underestimate the probability of that happening randomly. The birthday problem [0] can explain that.

Imagine you have conversations with your spouse about hundreds of things. And then you see ads about hundreds of things. Sometimes they happen to be about the same things. That's something out of ordinary for you, so you remember that. You are spooked by that, and you might even tweet about it, or tell your friends about it. You don't tweet every time you don't see an ad about the thing you talked about. But the probability of that happening several times over the course of a year is quite high.

[0] https://en.wikipedia.org/wiki/Birthday_problem
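To make the intuition concrete, here's a rough Monte Carlo sketch (all the counts are made-up assumptions, just to show the shape of the effect):

```python
import random

def spooky_match_prob(topics_discussed=100, ads_seen=500,
                      topic_pool=10_000, trials=10_000):
    """Estimate the chance that at least one ad over a year happens
    to match a conversation topic purely by coincidence."""
    hits = 0
    for _ in range(trials):
        # Topics you happened to talk about, drawn from a big pool.
        talked = set(random.sample(range(topic_pool), topics_discussed))
        # Topics of the ads you happened to see, drawn independently.
        ads = (random.randrange(topic_pool) for _ in range(ads_seen))
        if any(a in talked for a in ads):
            hits += 1
    return hits / trials
```

Even with a pool of 10,000 possible topics, 100 topics discussed, and 500 ads seen, the chance of at least one "spooky" collision comes out well above 90% - you only remember (and tweet about) the hits, never the misses.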


I think it’s possible that some minor actor is doing it (which I suppose would mean it’s almost certain that someone does it), but it’s likely not done at scale or at a big actor such as Facebook, Samsung or Google.

Most people who are creeped out are reacting to things like "I talked to my friend about traveling to Aruba yesterday, and today on Facebook I have ads for trips to Aruba!"

First of all, you could have had those ads all year but only noticed them after the discussion. That's the simplest explanation.

But also, if this was indeed a friend (or someone who was ever e.g. the recipient of the same email as you, or has friends in common with you etc) then that person perhaps shared that they were in Aruba recently. Since the two of you were in the same location when you talked, a clever advertiser could show you ads for products or services that people they think you interacted with have already bought - such as a hotel stay in Aruba.

I’d really like to hear if anyone has first hand experience with spying like this - and the fact that no one has yet come forward and confirmed it suggests it must be rare.


Surely it couldn't stay a secret in the industry. Especially as plenty of people spoke up against other questionable practices.

Maybe it's like the dieselgate thing, where individual engineers don't see the big picture, but I honestly don't believe that's the case here. I have a strong feeling that for the vast majority of people, ads have a significant effect on purchases. Perhaps you were not thinking about traveling to Aruba, but after a couple of your friends went there, talked to you about it, and you got a bunch of ads showing beautiful Aruba, you decided to go. They don't need to listen to you to "get you".


Uber spies on passengers using the microphone, I’m willing to bet. I was in an Uber, having a conversation with the driver about wine tasting in Woodenville. When I got home, I had an email from Uber advertising wine tasting trips to Woodenville. The email was sent four minutes after my Uber ride. Coincidence?

I found a forum for Uber drivers where someone asked, “Why does the driver app ask to access the microphone?”


Do you know how expensive (and shit) voice-to-text is? Go check out AWS Transcribe prices. Try it out too, see how much nonsense it will spew out.

It's funny as I've literally just done a project on this to automatically classify phone calls. In reality voice recognition is only good at set phrases, most of Google Assistant and Alexa is smoke and mirrors.

It wouldn't even be able to recognise "wine tasting in Woodenville", let alone be economically viable to process every voice-like sound in a taxi and make money off adverts.

It's just the Baader-Meinhof phenomenon, not Uber recording you.
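For a back-of-envelope sense of the cost, here's a quick sketch; the per-minute rate and ride numbers are my own assumptions, not actual figures from AWS or Uber:

```python
# Rough daily cost of transcribing every ride, under assumed numbers.
PRICE_PER_MIN = 0.024        # assumed cloud transcription rate, USD/min
RIDES_PER_DAY = 15_000_000   # assumed worldwide daily rides
AVG_RIDE_MIN = 15            # assumed average ride length, minutes

daily_cost = RIDES_PER_DAY * AVG_RIDE_MIN * PRICE_PER_MIN
print(f"${daily_cost:,.0f} per day")  # ≈ $5.4M per day
```

Billions of dollars a year just on transcription, before storing or mining a word of it - the economics alone make blanket audio surveillance implausible.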


Did it occur to you that maybe the driver brought up Woodenville because he got the same email before the ride?


I've never heard of Woodenville, but I'd be willing to bet that it's a popular place to go wine-tasting, and that would explain both your conversation and the email advertisement. The exact timing could of course be a coincidence, and not one that seems at all unlikely.


No, they don't, but due to false allegations from passengers (and, to a lesser extent, drivers) they plan to start recording conversations.


It's not proven yet I guess? But then again, the big companies keep saying they don't record/listen to voice commands for assistants either.

But the reports of third party companies suddenly having access to this data (for quality improvement purposes) tells another story.

Also, my YouTube keeps suggesting videos for whatever show is playing near my phone.


They don't need to listen to your voice for that. The show can have a signal hidden in its sound track that you can't hear, but that the apps on your phone can. Then they can report home that you are watching such-and-such show. No idea what happens after that but I guess google apps share data somehow.

There was an article about this that I read a while back, I'll try to find it if I can.

Edit: here, found something:

https://arstechnica.com/tech-policy/2015/11/beware-of-ads-th...

The ultrasonic pitches are embedded into TV commercials or are played when a user encounters an ad displayed in a computer browser. While the sound can't be heard by the human ear, nearby tablets and smartphones can detect it. When they do, browser cookies can now pair a single user to multiple devices and keep track of what TV commercials the person sees, how long the person watches the ads, and whether the person acts on the ads by doing a Web search or buying a product.

Edit again: to be fair, not even Facebook and Google have the tech to match the shows you watch with ads just by what your phone hears you say. Speech recognition still doesn't work _that_ well.
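For what it's worth, the beacon technique itself is simple: embed a tone above ~18 kHz and check for energy at that frequency in the microphone stream. A minimal sketch with NumPy (the beacon frequency and threshold are illustrative assumptions, not details from the article):

```python
import numpy as np

SAMPLE_RATE = 44_100   # Hz; a typical phone microphone rate
BEACON_FREQ = 18_500   # Hz; assumed near-ultrasonic beacon frequency

def contains_beacon(samples, threshold=10.0):
    """Return True if the audio buffer has a strong spectral peak
    near BEACON_FREQ relative to the average spectrum magnitude."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1 / SAMPLE_RATE)
    band = (freqs > BEACON_FREQ - 200) & (freqs < BEACON_FREQ + 200)
    return bool(spectrum[band].max() > threshold * spectrum.mean())

# One second of a quiet 18.5 kHz tone, inaudible to most adults.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
tone = 0.01 * np.sin(2 * np.pi * BEACON_FREQ * t)
```

No speech recognition needed - which is exactly why this kind of cross-device tracking is so much cheaper than actually listening to what you say.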


Of course they don't record and listen; that would be slow and take huge stores of data. They save transcripts and read/parse those instead. Much faster and less data to store.


> Is this actually happening though? If so, who is doing it and how?

I think there's a lot of anecdotal evidence that would seem to confirm this. It might seem like they're listening, but it might just be the real-time data collection taking place. If you have an Alexa in your house, it's always on and listening.

I have a neighbor who swears that when he and his wife are talking about their grocery list, they'll get ads on their phones for the stuff they need. They even tried to test it on purpose: they talked about getting blinds (they didn't need new blinds), and twenty minutes later they opened Google on their phones and on their desktop PC and were being served ads for blinds.

I had another friend who did the same thing and started talking about how he hated he had to constantly shovel all the snow in his driveway and within minutes was getting ads on his laptop for snow removal services.

People will swear Alexa doesn't do this, but the number of instances I've heard of makes me think they can't all be coincidental.


I have experienced situations in the past with Android which make me believe that either Android itself or an application on Android phones is intercepting the plaintext contents of notifications.

I've sent messages to friends through signal about specific things which then started showing up in their google ads the next day. After digging into it we found the recommendation to set signal to no longer preview the messages via notification. Since changing that there's been no further occurrences.


I don't know if mainstream apps are doing that, but I don't see why someone couldn't create a "trojan horse" app that requires the microphone and starts monitoring and sending back recordings when it detects a conversation. It would probably be spotted eventually, though, if it gained any popularity.

Or, perhaps microphones in public that try to tie overheard conversations with beacon or location data to find out who it heard.


https://www.popularmechanics.com/technology/security/a145332...

Apparently these apps don't record human speech, but the capability's there.

https://www.latimes.com/business/la-fi-cameras-grocery-store...

Not quite targeting based on conversations, but still creepy.

I think the targeting based on voice conversations is maybe a bit overblown, but text conversations through Messenger or WhatsApp, I have no doubt about. There's been more than one time I've started getting targeted ads based on text conversations I've had through those apps.

But I also get targeted ads based on my emails, things I search for on Reddit, YouTube videos I watch or search for, websites I browse, local ones based on my location, and a myriad of other things that are nearly as bad as my conversations being recorded.


It's all anecdotal and so far I don't think anyone has actually found phones etc exporting conversation keywords or anything like that. That being said it's happened a few times that I've been in the car with my girlfriend talking about something, won't search anything about it and we'll get ads about it in the next couple days... The one time that stood out was talking about babies and getting ads about childcare, baby products, etc way more over the next couple days.


No, although Google and/or Alexa will build up a profile of your interests based on things you ask/search for through them - but this is the same as with any search engine.


I partially suspect Facebook since a lot of people I know mindlessly let Facebook Messenger send / read / receive texts. If anybody has evidence to the contrary feel free to let me know. I also suspect anybody using Google Voice is giving up their secrets as well.


No, it's not.


I used to work in the space, assembling the DB table you described in a meaningful way is surprisingly hard to do, because you would need to attach confidence intervals to everything you tag onto that user_id. Reality tends to be complex and often wrong, and there are very few actually good signals. That's what makes Facebook such a powerhouse.

I eventually left that space for personal reasons, but with distance and time I've resolved that I no longer want to touch that space, no matter the startup idea. I would sooner start developing software to block or push noise into those systems. The digital ad/marketing industry is driven by a lot of people who care about anything but the health of our society or the actual consumers.


1: Your family has missed the boat - this has been done for the last 5 years. It isn't new.

2: It isn't as easy as you (or they) think. A simple DB isn't going to cope with the real-time needs of the ad industry.


I was at a company doing this in 2004.


Five years? More like the last 20+.


See also companies/products such as Anonos BigPrivacy[0] which claim to provide a software solution which somehow still allows you to perform analytics on user data while magically ticking all privacy compliance laws for you.

Pretty hard to interrogate without actually seeing the technical details / implementation, but I strongly suspect this is not really following the intent of regulation (but instead playing a shell game to extract revenue and allow players to navigate through the field for a while until it all unravels).

[0] - https://www.anonos.com/company


I have fam in adtech and the running joke is "haha, we're the ones doing all the evil stuff you wouldn't believe even if we told you, HHOS."


Help your family out; make good money from it. The worse it gets, the better. When the shit hits the fan, nobody needs it to come to light with a mere bang - we want it to become visible as a hypernova.


The Trump approach, for when you actually need shit to hit the fan.


The technical report is pretty interesting. They break down what's going where:

https://fil.forbrukerradet.no/wp-content/uploads/2020/01/mne...


The non-technical(?) report is also a great read, and a good one to share around with managers/non-engineers:

https://fil.forbrukerradet.no/wp-content/uploads/2020/01/202...


Agreed! Both reports are pretty good and reflect the status quo in the industry. There is overcollection and oversharing of data without proper consent, and that has to stop. For Europe, forcing the ad-tech industry to adhere to the GDPR is the correct next step, as self-regulation did not work.

Having said that, the report paints a pretty dark and one-sided picture. Let's see how far the authorities follow its argumentation and conclusions.

(full disclosure: I work in that industry.)


All this invasion of privacy and still little evidence that user targeted advertising is substantially more effective than simple content based advertising.


I'm still "waiting" for ads that are actually relevant for me, and not just based on something I randomly watched a few days ago.


It has been proven very effective.

Quote: "So successful, by the way, that between 2000 and 2004 with their IPO [Initial Public Offering] documents going public, the first time we got to learn exactly what the impact of this new logic was. And the impact was a revenue increase of 3,590%, just during those years 2000-2004."

source: https://www.econtalk.org/shoshana-zuboff-on-surveillance-cap...


That's comparing tracking advertising with bad data to tracking advertising with good data. Where's the comparison of tracking to non-tracking advertising?

Tracking advertising costs the advertiser more (and makes the ad provider a lot more), but there's little evidence to suggest it's significantly more effective than content advertising. And this isn't even accounting for the enormous social cost of private surveillance.

https://techcrunch.com/2019/01/20/dont-be-creepy/


The way we use the internet has also been changing at a significant rate since 2000, so we don't really have the counterfactual to show that content-based advertising wouldn't have experienced similar growth rates - unless we find a report covering revenue growth over the same period from a company (or better yet, a set of companies) that focuses on content-based ads.


Get ready for ads to get more annoying. The hidden benefit of using personalized data to target and track prospects was an advertiser could use "soft sell" to build a brand over time.

As that market gets shut off, we're back to aggressively using interruption marketing "shock jock" ads and auto-play video. Click now or forever hold your peace...

The problem with contextually targeted ads is there is no real guarantee of repetition and brand building...


> we're back to aggressively using interruption marketing "shock jock" ads and auto-play video

Advertisers thought they could get away with pop-ups until browsers shut them down by shipping pop-up blockers by default. I have no doubt ad blockers will shut down that market as well. Hopefully it will shut all of it down, driving advertisers out of the internet forever.


I am only OK with one type of advertising.

1) I have a problem or a need that I am looking to fulfill.

2) I ask (E.G. a search engine) about fulfilling that need.

3) The results are no BS, no hidden fees, directly up-front responses that tell me how much something is going to cost, where it is, and maybe why that will solve my needs/desires.

That is the only place at all that 'ads' belong, informative messages that are intended to actually help BOTH the consumer and any service provider.


What you described is reasonable and I'm fine with it too. Though is it really advertising if you ask for it? To me, ads are the stuff they shove down people's throats whether they want it or not hoping that a fraction of them will appreciate it.


Be careful what you wish for.

Commercial ads aren't perfect but they represent a decent middle ground for content creators. And have a powerful moderating effect on webmaster behavior.

If you want a nasty experience, look at any non-sponsored digital eco-system which doesn't qualify for display ads. Things tend to get pretty raw in a hurry. (and most nice content creators stop wasting time and get regular jobs)


I have my doubts, considering Safari doesn't let you use extensions and Chrome is continually working towards diluting ad blockers out of existence. Firefox is the last bastion of ad-free browsing, and its market share is low enough that advertisers aren't going to care about Firefox users (good for us, I guess, but bad for the 95% of web users who don't use Firefox).


Excellent, as that will prompt a reaction. First to push even more non technical folks to join most techies using no exceptions ad blocking. Second to provoke even more onerous regulations.


And even more importantly this will eventually foster more palatable/unavoidable kinds of advertising to avoid the arms race entirely, e.g. product placement and endorsements.


That's an interesting point. I do rather have hiking shoe/soylent ads snuck into my feed than random gaudy shit trying to get my attention.


I wish GDPR had some exemption for small companies or companies that are starting out. Maybe limitations on revenue, amount of users and scope of data, eg strict rules still apply (to a small company) when dealing with sensitive data such as health information, but email addresses are not as strictly governed. Perhaps require a plan of action to abide by the full rules by X future date even.

Take for example some one person indie developer. A common way to monetize that is through ads and in app purchases that turn ads off. With the current rules it's difficult for a single person to know that they abide by all of that while still monetizing in such a way.


To me that sounds a bit like adding health and safety code exceptions for roadside food stalls.


It's more like not subjecting employees who bake cookies and share them with coworkers to the same inspection requirements as commercial kitchens whose entire business is selling food to the public.

Even moreso because one person distributing bad food can give a dozen others food poisoning, but the concerns with data collection come from mass surveillance and aggregation, which implies a scale that large entities have and smaller ones don't.


> not subjecting employees who bake cookies and share them with coworkers to the same inspection requirements as commercial kitchens whose entire business is selling food to the public.

A coworker baking cookies and sharing them with coworkers isn't trying to make money off of those cookies. A startup is.

The coworkers consuming those cookies are generally able to identify that the cookies were homemade, bakeshop, or industrial quality. Today we can barely even identify what is tracking us, let alone how, and definitely not why.

You're not comparing apples to oranges. You're comparing apples to bushes. Homemade cookies and startup personal information aggregators aren't even in the same league of category.


Also, it feels like a slippery slope. Let's say the cutoff is $25m in revenue. So you have some business idea based on violating user privacy that is profitable, until the company reaches the size where it has to comply, and all of a sudden it's unprofitable.

Clearly the next step is to lobby for raised limits, rather than companies planning in advance for what they'll do when they cross the threshold.


Typical lobbying is completely the opposite of that. The companies who have the resources to do it aren't going to be inside the threshold between e.g. $25m and $100m and they wouldn't want to stay that size anyway, so what they'll do is lobby for a generic exception that isn't related to entity size at all. Or if anything is inversely related to it, so that the same large entities that have the resources to successfully lobby to change the law are also the only ones who can comply with the rules while simultaneously skirting their intended purpose.

And if you have a problem with corporations controlling your legislature then in general it doesn't do a whole lot of good to debate what some other law should be since the better law wouldn't be enacted by a legislature controlled by lobbyists anyway.


When I said small I was thinking more about <$250k a year in revenue. At that size I think the companies would be small enough that their lobbying wouldn't matter.

If you're in the millions a year in revenue then I don't see how understanding and complying with GDPR would be a large barrier. Even $250k might be too high.


> A coworker baking cookies and sharing them with coworkers isn't trying to make money off of those cookies. A startup is.

You're implying that the rules only apply to companies whose line of business is selling data. There wouldn't be an issue if that were true. Making money by selling mugs on the internet where you also collect some customer data is analogous to making money by working for a company where you also distribute cookies.


If you can't handle data reasonably then you can't handle data. I don't see why the company size is relevant.


It seems like many small companies are also not run by people who have read in some business magazine or have been told by some consultants that they need to be doing big data. Unless they do advertising, they likely don't need the data in the first place and so the path of least resistance is to do the right thing. No data, no problems.


And if you don't do advertising how do you monetize? Do you follow all the other apps that implement gambling as their monetization strategy?


> if you don't do advertising how do you monetize?

Try making sales instead of providing a "free" product?


Have you looked at how apps on the app stores are monetized? The only ones with a lot of success that charge upfront are big existing names or rare exceptions. Everything else that has success tends to be free. You can't compete with that, because somebody will just clone your app and release a free version of it with ads.


You might need data to increase your sales, though. But that parts needs not be unethical, nor leave your company.


Targeted affiliate marketing is far better than ads imo. Odds are that you’re catering to a subgroup with very specific needs (unless you’re trying to clone facebook). You can inform users about products that they might need, ideally in form of valuable content and just include links with your vendor identifier. No third party scripts whatsoever and ads the user might actually like and click.


Surely there are ways to serve ads without collecting any user-specific data at all? Tada, compliance, and zero effort, too.


Of course there are. They also pay next to nothing, and you're likely limited in your choice of networks. Even then, you need to be able to verify that the ad network is actually keeping its promises. Remember that we're not talking about operations where you can just hire someone to do that; we're talking about operations where you would essentially have to figure all of that out yourself, and often take personal responsibility for having figured it out properly.


You do it while complying with GDPR, of course.


Because this is a massive barrier to entry for Europeans. You either need to essentially be a lawyer or you can't run ads (or you break the law). Non-europeans can easily implement ads and combine this with in app purchases and what not. Europeans can't do that nearly as easily. They will get out competed. I don't see how you don't see how this is bad.


> Because this is a massive barrier to entry for Europeans.

No. This is a massive barrier to people who aren't used to thinking about their fellow people.


European here, both a consumer and micro-ISV owner.

As a consumer, I very much like the protections the GDPR provides.

As a business owner, I don't find GDPR to be a barrier to entry - TBH, anyone who actually cares about user privacy at all would have been near-compliant even before GDPR came in. For me, I think it was 1-2 days of work, most of it spent reading.

It's not difficult to comply with, and I'd be very against exemptions for small businesses - also because larger businesses would use it as a loophole to game the system.


You very much don't need to "essentially be a lawyer" to be able to include ads - services like AdMob have easily configurable SDKs for getting GDPR-compliant consent.

It's really no more than a few lines of code to respect user's privacy, and if you're collecting significantly more user data then you should be handling it with care.


I'm sorry, but this is utter rubbish. The GDPR is easy to follow. If you can't think of how to run a business without being reckless or scummy, then you shouldn't be running one.

I work in the EU, in tech, with customer data, with these rules and have zero problems following them.


That would be way too easy to take advantage of by a big company making lots of small side companies


I worked for a company that was structured like that.

Basically, I worked for a very small company that was in the same offices as several other similarly sized companies. All those companies were on the same network, and there was no distinction of who was working for which company. They all just mixed together.

No one really cared who you work for anyway since it was all owned by the same 3 people and it behaved like a single large company. Technically though it was just lots of small companies as far as the Government was concerned.


You could just add a good faith clause that forbids it.


How would that work out in practice? Sounds highly unlikely.


Have you heard of contracting practices in the US? Rather than "highly unlikely", this is the norm for tech companies that want to skirt corporate responsibility.


That's an unrelated issue. In this context that could be avoided simply by counting more than half time individual contractors as employees when measuring entity size.


Splitting large companies into smaller ones to get tax breaks and other incentives is extremely common.

For example in the UK there are tax breaks for companies that spend more than a certain percentage of the revenue on research. Just below the threshold? Simple! Split out a research arm!


This is common because splitting a large company into five or six smaller ones is feasible, since the overhead of doing so is much smaller than even the smaller entity size. Setting the threshold such that a company the size of Google would have to split their operations into thousands of individual corporations would be a lot less feasible.

You could also avoid the issue by limiting the exemption to corporations not owned in any part by a corporation that wouldn't itself qualify for the exemption, since the point of the exemption is to help real small entities and not subsidiaries of huge publicly traded corporations.


It does have an exemption that works for everybody: don't track your users, and don't keep data you don't need. Problem solved.


Unless you implement data collection in your indie code (as an indie developer) you have no 'extra' work or burden.

The GDPR scales with the size of the data collection, meaning: it doesn't matter how large the staff/company is, it matters if you collect data, and as an indie developer you'd be in an excellent position to quickly scope out anything you don't need. If anything, it's easier to not build collection than build it if you don't need it.


You might need minimal data yourself, but the tools you use (eg Unity) or the way you monetize (AdSense) do data collection.


That's a failure of the tools, not the regulations.


And again, time to swap out third parties that don't/can't/won't comply... and maybe have to roll your own a little bit if you can't find any compliant tools.

(Throwing Unity in there is a pretty pointless strawman, unless you know that there is something in there that DOES collect user data.)


> A common way to monetize that is through ads and in app purchases that turn ads off.

The point of GDPR is to stop this crap where advertisers demand way too much information. As long as you don't collect uniquely identifying information and personal data about the user, you don't fall under GDPR. Serve whatever ads you want; they just won't be tailored, and that's fine (it might reduce your ad rates so it's no longer a viable business proposition, but that's the point).

Most people that cry foul about GDPR are the ones collecting uniquely identifiable information about their users. Just stop doing that, that's what the law is designed to prevent.


Quite frankly, I'm sick of all the FUD or what-ifs, but it reliably happens in every GDPR thread, so thank you for jumping in and refuting this. There doesn't seem to be actual evidence that people with non-scummy business practices have been fined, or would be - like the hypothetical indie dev.

I suspect the hardest part is actually finding an ad network that is GDPR compliant, and coming to terms with that you can't just embed every shitty tracking and "analytics" framework into your app. As a consumer, this still all seems like a win to me.


Of course the indie dev wouldn't be fined. These regulations are meant to be used selectively. Essentially, you're encouraged to break the law because everyone else on your level is doing the same. Then you just hope you're not the one that's made an example of.

>As a consumer, this still all seems like a win to me.

Of course, because you won't notice that Europeans can't compete. You'll just use the Chinese/American apps instead, because those can afford to compete.


Note that almost anything can form uniquely identifying information, so technically, it's not easy to fully exempt yourself from GDPR.

Not that anyone would care about minor unwitting infractions. So far, enforcement is way on the other side of the spectrum: it's hard to get yourself in trouble even when blatantly and purposefully breaking GDPR.


As point of comparison: CCPA only applies to companies if they have either $25 million in revenue or data on 50,000 users (or derive half their revenue from sale of personal information, regardless of size). And to your point about sensitive data, neither CCPA nor GDPR prevent more specialized laws like HIPAA or GLBA.


If you operate a small company, why not be GDPR compliant to begin with? It seems easier to start out compliant, rather than trying to "fix" the issue once you reach a certain size. The upfront cost seems much lower than retrofitting GDPR into your systems.


Unless I'm a lawyer, I wouldn't trust myself to ensure I'm compliant with all laws. You'd probably have to hire lawyers or do an audit, which you might not have much money for if you're just starting out - money that could be better spent improving your product so you can provide a service and also make money.

Doesn't mean you can't try your best to be compliant; I still think that no matter what, you won't know for sure until you hire a third party.


How is the upfront cost lower? If you know software development, you could make a simple game quickly, add ads to it, and release it - except that now you essentially need to be a lawyer to know whether your ad network usage violates GDPR in some way.


The point of GDPR is to protect users' privacy. Clearly it is not in line with the goals of GDPR to allow small companies invade privacy anyway, just because they're small. Privacy is privacy.


GDPR compliance for indie developers is a little painful, but completely doable. The core principles are that you collect as little personal data as possible, have an expiration policy for all data, and don't share personal data with 3rd parties that don't themselves abide by GDPR. If you do these three things you should be OK, assuming you don't collect the most sensitive data.

GDPR isn't meant to bully people who make a good-faith effort to deal with customer data responsibly, and GDPR isn't enforced to destroy those who don't comply perfectly. In the US prosecutors will mercilessly enforce the letter of the law for a few unlucky nobodies to set an example for the rest. In the EU it doesn't work like this.
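The expiration-policy principle can be as mundane as a scheduled purge job. A minimal sketch in Python (the `user_events` table, column names, and 90-day window are made up for illustration; real retention periods depend on what the data is for):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # keep personal data no longer than needed

def purge_expired(conn: sqlite3.Connection) -> int:
    """Delete rows older than the retention window; return how many were removed."""
    cutoff = (datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)).isoformat()
    # ISO-8601 timestamps in a uniform format compare correctly as strings
    cur = conn.execute("DELETE FROM user_events WHERE created_at < ?", (cutoff,))
    conn.commit()
    return cur.rowcount

# Example setup and run:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_events (id INTEGER PRIMARY KEY, created_at TEXT)")
old = (datetime.now(timezone.utc) - timedelta(days=400)).isoformat()
new = datetime.now(timezone.utc).isoformat()
conn.executemany("INSERT INTO user_events (created_at) VALUES (?)", [(old,), (new,)])
print(purge_expired(conn))  # prints 1: only the 400-day-old row is removed
```

Run from cron (or equivalent) daily; the point is that "data we no longer need" disappears automatically instead of depending on someone remembering to delete it.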


How is "this policy probably won't be enforced so don't worry about it" in any way a reasonable approach?


For indie developers not in the EU but with users in the EU, what's a good way to deal with Article 27's requirements [1]?

[1] https://gdpr-info.eu/art-27-gdpr/


1. Many indie developers will not actually be subject to the GDPR. The GDPR effectively only applies to non-EU controllers if they are actively targeting and expecting EU users, or if they are monitoring/tracking users in the EU.

The canonical reference for when that's the case is the EDPB Guideline 3/2018 on the territorial scope of the GDPR [1]. These guidelines are narrower than widely perceived; e.g. that regional US news websites started blocking EU visitors is totally silly and unnecessary. Mere accessibility of a service, without offering the service to persons in the EU, does not trigger the GDPR.

2. If the developer does expect to be subject to the GDPR, they can hire a representative. There are many lawyers offering this service. It's not a terribly involved job, but it affects which member state's data protection agency is responsible. E.g. if you think Austrian GDPR interpretations are nuts, go look for someone in Ireland instead.

[1]: https://edpb.europa.eu/our-work-tools/our-documents/guidelin...


>In the EU it doesn't work like this.

Yep, it's much more selective in the EU. As long as you are in the good graces of those in power everything will be fine. If you aren't...


The level of corruption is fairly low around here, thankyouverymuch.


The problem is really that the indie developer doesn't really have any monetisation options other than those which break the law, but in theory it's not overly onerous to pick one that at least claims to be GDPR compliant. They should be able to trust that their suppliers aren't providing them illegal products tbh.


You're forgetting the most legitimate monetization option:

If you make something great, people will pay for it.


That's not how it works at all though. The games on the app store that make a lot of money are those that mercilessly try to rip people off. Gacha games are essentially gambling, whereas proper games with a story and gameplay earn very little.


So if you are talking about games I assume you are talking about something like the Steam store? When I look at their top selling list, there are lots of pretty deep games listed.


That seems like a lot of work; just go for VC funding and sell to Google, Facebook, Microsoft or Apple in two years.


The proposed ePrivacy Regulation[0] looked like it was set to introduce some very positive recommendations to curb the worst of advertising cookies and pop-up dialogs.

Article 10 of the draft regulation suggested moving consent settings into the browser so that you could specify whether you will accept various forms of cookies centrally and then have those settings apply to all sites.

A ruling[1] related to shady consent practices by a website called Planet49 seems to have shifted the regulatory window towards the idea that users have to definitively prove informed consent.

Meanwhile the latest draft[2] of the ePrivacy regulation has removed Article 10 and the mention of browser-based controls for cookies entirely and thus consent stays per-website.

I really wish the choice of privacy related to advertising was baked into the browser and enforced there. Given the above developments, it's the only route I can see that avoids pop-up fatigue for users. The number of pop-ups everyone has to deal with causes user experience friction and wastes everyone's time.

It'd seem reasonable to me for sites to be allowed to pair with advertisers to request additional consent via pop-ups if they want, but with the defaults in the browser.

That way a site would have to make a conscious decision that it's worth getting consent from a user in order to monetize them -- and users would only need to be informed and provide their consent when something outside their expectations is being requested.
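One concrete shape a browser-level default could take is a privacy signal sent as a request header, in the spirit of the old DNT header or the Global Privacy Control proposal (`Sec-GPC: 1`). A hypothetical sketch of the server-side decision (the function name and the policy of treating the signal as a refusal are my assumptions, not anything the draft regulation specifies):

```python
def needs_consent_dialog(headers: dict) -> bool:
    """Return True if the site still has to show its own consent pop-up.

    If the browser already sent a privacy signal (Sec-GPC, or the older
    DNT header), treat that as a standing refusal: load no trackers and
    skip the pop-up entirely.
    """
    if headers.get("Sec-GPC") == "1":
        return False
    if headers.get("DNT") == "1":
        return False
    # No browser-level preference expressed: the site must ask the user
    # before loading anything that requires consent.
    return True

print(needs_consent_dialog({"Sec-GPC": "1"}))  # prints False: signal honored, no pop-up
print(needs_consent_dialog({}))                # prints True: no signal, must ask
```

The site could still choose to request additional consent on top of this default, but the common case - user has already said no once, in the browser - would never produce a pop-up.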

I'd love to hear from anyone who's tracking this - I'm not a lawyer and all this is the bits and pieces I've picked up while reading on the web and trying to determine an analytics consent strategy for a project I'm developing.

Edit: NB: I realize there's a context of apps rather than websites in the article, but I'd hope and suggest that the fundamentals are the same, especially if & when PWA's blur the distinction between browser/mobile-OS as host.

[0] - https://en.wikipedia.org/wiki/EPrivacy_Regulation_(European_...

[1] - https://www.cookiebot.com/en/active-consent-and-the-case-of-...

[2] - https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CONS...


Another source:

https://noyb.eu/three-gdpr-complaints-filed-against-grindr-t...

Research by the Norwegian Consumer Council (Forbrukerrådet) shows that many smartphone apps send highly personal data to thousands of advertising partners. The report uncovers how a large number of shadowy entities are receiving personal data about our interests, habits, and behavior, every time we use certain apps on our phones. This information is used to create comprehensive profiles about us, which can be used for targeted advertising and other purposes.

“These practices are out of control and are rife with privacy violations and breaches of European law. The extent of tracking makes it impossible for us to make informed choices about how our personal data is collected, shared and used. Consequently, this massive commercial surveillance is systematically at odds with our fundamental rights”, says Finn Myrstad, director of digital policy in the Norwegian Consumer Council.

“Every time you open an app like Grindr advertisement networks get your GPS location, device identifiers and even the fact that you use a gay dating app. This is an insane violation of users’ EU privacy rights”, says Max Schrems, founder of the European privacy non-profit noyb.


CCPA cannot kick in fast enough. We need consumer protection in the USA too.


I haven't delved into a lot of detail of how CCPA works, but it seems to me like a step in the right direction that won't achieve much practical change.

It's worth looking at this from the perspective of EU privacy legislation before GDPR - it already included all the same principles: that data should be safeguarded, that customers should be informed, that consent matters, etc. However, it didn't have a meaningful effect on actual privacy before GDPR (and IMHO we'll need a GDPRv2 before long), because companies could comply simply by adding legalese, privacy policies and notifications without actually changing any privacy-relevant behavior.

One aspect that I feel CCPA is getting wrong and GDPR got right is the "default condition". Under GDPR, companies are prohibited from processing your data unless they can provide a specific legal basis that allows them to do so, one of which is opt-in consent. Under CCPA (correct me if I'm wrong), the default position is that companies are allowed to do whatever they want with your data, but they are required to inform you and honor opt-out requests. This means it's plausible for a company currently doing shady stuff to legally continue doing so, as long as it can write up some fancy words to 'inform' you and ensure that most people don't go through the opt-out process.

The second difference is the distinction between having the data and using it for a specific purpose. It is a very common scenario that certain private data are clearly needed for one purpose (e.g. your address, to deliver some goods), but we'd want to restrict the company's ability to use the same data for other purposes, e.g. targeted advertising. This separation is built into the way GDPR is structured, but CCPA doesn't make such a distinction. So if a company shares data with a third party because it's needed to fulfill some business purpose (which is allowed even if the customer opted out of selling, because it's not a sale), then (as far as I understand - correct me if I'm wrong) under CCPA the third party is legally free to use that data for whatever else it wants.

And we should expect business arrangements to be intentionally designed so that transferring such data (without a sale) 'accidentally' becomes part of many deals - the big data aggregators would still get all the same information they had before, and companies would be able to comply while continuing to do the same thing.


> This means that it's plausible for a company which currently is doing shady stuff to legally continue doing so, as long as they can write up some fancy words to 'inform' and ensure that most people don't do the opt-out process.

CCPA requires you to have a link with the exact words "Do Not Sell My Personal Information" visible on the homepage. That is an explicit provision, which somewhat limits a company's ability to legalese their way around it.


Still pretty crappy requiring users to take action to stop the selling of that information. Worse, it should be stopping the collection of it. Ugh. :-/


While that is true - a consumer must take action to protect their individual information - it may be that if enough people do this, the cost of compliance (responding to CCPA requests) will make it less profitable to hold on to consumers' information.

There are also lists with links and contact info for exercising your CCPA opt-out or deletion rights with important businesses, starting with data brokers such as Experian, Epsilon and TransUnion. There are even CCPA email templates that may be easier to use than going through a form (not sure if they will work).


It actually is opt-in, not opt-out, by default. In order to do opt-out, a business has to demonstrate that their users are all above 13 and able to consent, which is a hard requirement that will force nearly all data brokers to do opt-in instead.

To completely stop collection is difficult. But the legislative will is clearly manifest, so it might happen.


The way I read CCPA, me going to the link "DO NOT SELL MY INFORMATION" in company A can (and IMHO will be) be legally followed by company A giving the information to company B for whatever reason (other than 'monetary or other valuable consideration'), and then company B selling that data, unless I go to company B's website and opt out there as well. And, of course, there are many "company B"'s, a random news website can easily have 50+ "partners" with whom they share data.



Yep. It's almost like the privacy law in the EU was written with no mind paid to how the world of online interactions actually works.

I anticipate enforcement will go about as well as enforcement of drug policy goes in the United States.


The products are not illegal, just the business model. If the businesses can't follow the law, they will get blocked or banned, and their market share will become available for the same or similar product backed by a legal business model. Unlike drugs, where the product itself is illegal.


I don't think it's nearly so clear that the product isn't illegal.


thanks for reminding me to reset my advertising identifier on my iphone.


The world is beginning to rely more and more on Scandinavia to defend human rights and democracy.


That's not really the feeling you get living here. I'm not saying others don't have it much, much worse, but the Danish government really likes invading people's privacy.


In what ways do they invade people's privacy?

Swede considering moving across the sound.


Currently telcos are required to collect data on EVERYONE, not just criminals. That's clearly illegal, according to EU law, but the Danish government and minister of justice have basically said that they don't care.

There's an increased interest in CCTV and privacy concerns are ignored, even though crime rates are lower than they ever have been, but you know: Terror!

You can't drive around the country without the police knowing where you are, because license plate scanners are everywhere.

Oh, and the Danish government has exempted itself from the GDPR.


I realize we need formalized reports to produce legal action, but this sure reads like "News! Water is wet!"


The overall conclusion is hardly surprising, but the details matter in formulating a response.


Every time there's a corporate scandal of sorts (Boeing and Wells Fargo come to mind as recents), the people here scream for accountability.

So what say ye, Googlers, Facebook employees and others? Should fines or prison fall on your shoulders? Why do you continue to do it, knowing you are breaking the law? Knowing you are harming people? Glass houses and all that...


Until senior management and boards of directors face the very real prospect of prison, all criminal behavior will be treated as a statistical analysis of expected values: expected gain versus expected loss, with being caught simply a business expense that probably won't even need to be paid. Even the most egregious outcomes, including preventable deaths, will be a line in a spreadsheet.

How is whistle blowing going for engineers who hate this? Who could you report it to? Who would write the proper story without burying it for political friends (NYT, WSJ) or burning you as the source (Guardian)? Is there anywhere who will publish credible material they've received no matter what while doing everything to protect sources?


No but the government should start some crippling fines like "google thou shalt be broken into 3 companies and pay 50% of last years revenues". If it's just a couple million bucks that's nothing to google but permanent damage to the company would put the fear of Dog into them.


Probability of succeeding in that aim is small. Probability of making that an immaterial expense to Google by burying it with lawyers and taking it as far as the supreme court is close to 100%. Personal accountability or bust. It's a very different management analysis if "I could go to jail" is on the table.



