There are sites (like wwe.com) where, after you have successfully located the preference to opt out of everything, it shows a "processing" screen that is stuck at 98% for about a minute. But "accept" is "processed" in an instant. Another dark pattern: a site shows that you have opted out, but notes in small print that some vendors cannot receive opt-out requests through HTTPS. By doing this they have successfully targeted security-conscious people. I know this is not something major, but still, how do these people sleep at night?
edit: I looked it up, and wwe.com uses TrustArc, which seems to be a shady org certifying privacy. Mired in controversy, they even settled a case with the FTC in 2014 for $200,000. I'm guessing when push comes to shove and the EU actually decides to prosecute, they will pay a similar amount. I bet that amount is already in their books, set aside as "future risk management" or something like that. Just the cost of doing business.
> There are sites (like wwe.com) where after you have successfully located the preference to opt out from everything it shows a "processing" screen which is stuck at 98% for about a minute. But accept is "processed" in an instant.
How desperate does one have to be to work as a developer on projects like this?
you assume the people doing this see a problem with it. They could easily...
* just be nihilists and not give a damn
* think it's fine because they are encouraging people to do something they see as good
* think it's fine because "it's just ads"
* think it's someone else's responsibility to make decisions about ethics (as an engineering prof, I see this all the time... students who say engineers' jobs are technology, not ethics/morals)
> Think it's someone else's responsibility to make decisions about ethics (as an engineering prof, I see this all the time...students who say engineers' jobs are technology not ethics/morals)
The problem is that they're correct in a practical sense.
The company they work for might get fined, but all the engineers see is the performance review. This system encourages engineers to think of compliance and morality as someone else's problem because: it is.
Many will justify this as "if I don't do it, someone else will" and again: absolutely correct.
Pass a law where engineers themselves may face fines or jail time for implementing immoral code, and they will suddenly discover a keen interest in the ethics of what they do.
> The company they work for might get fined, but all the engineers see is the performance review. This system encourages engineers to think of compliance and morality as someone else's problem because: it is.
or it lowers the cost, because at least some ethics are a luxury of the well-off. (Sure, worrying about feeding your family might not lead you to kill someone, but it might let you make it difficult for users to opt out of privacy invasion.)
Morality (definition left as an exercise) is something you have to be able to afford. Get hungry enough, get tired of sleeping on cardboard enough, get tired enough of restrictions imposed on you by society for stupid mistakes you made while out of your mind ... and morality is a luxury you can't afford. (In many countries, that luxury is permanently out-of-reach for the great majority.)
Now add to that the constant reminders that morality doesn't stop a lot of people at the top from immoral behavior. Banks, politicians, Wall Street, celebrities, billionaires, etc. are in the news all the time for pulling shady shit. When they get caught, they say OOPS! They hear: 'don't do that anymore, where people can see you' and they're free for another round.
Many people tend to adopt behavior that's rewarded. It takes a strong moral compass; some people never got magnetized. Show people a society that rewards moral behavior (they exist) and they might go there. (Some countries have low recidivism rates because of how well they treat people in prison.) If people can afford to move, they might. Else they may just say screw it, XYZ throws toxins in the river, I'll do it too. Choice: track people or go shoot 'terrorists'. Hmmmm.
I see this argument a lot; I grew up in such an environment. But being moral is really not a luxury, because there are several very rich people who are immoral. Being moral is easy or hard depending upon your means, but so is everything else. At the end of the day you have to think about what you are doing, and you and solely you are responsible for it.
Also, this whole argument distracts from the fact that this is TrustArc we are talking about. It's an American company with 340 employees and over $20 million in annual revenue. When a company out of India does this same thing, we can argue about the impact of means on morality then.
You're assuming it's developed in the West. They could've just outsourced that work, too. Here in India, I don't see too many developers who give a damn about user privacy. From Privacy International's report on menstruation apps stealing data [1], two of the worst offenders (Maya and My Period Tracker) were from Indian companies.
This argument doesn't hold up if you refer back to all engineering ethics dilemmas of the past. If all these engineers could have easily gotten another job, why did they stay? Why did Boeing engineers not speak up, why did Firestone engineers not speak up, the list continues. Not everyone has the same moral compass and that is the bottom line.
The best thing is that, once you waited for the time, it fails:
> This page transmits information using https protocol. Some vendors cannot receive opt-out requests via https protocols so the processing of your opt-out request is incomplete. To complete the opt-out process, please click here to resubmit your preferences.
Given that very nearly the entire software industry has either acquiesced to or actively abets practices like these, the real answer is generally "desperate enough to be seeking a job in your field".
I suppose someone could also structure the requirements for the task in such a way that it would have this result without explicitly saying it would have to have this result.
Given that accept is our default, just send off the accept token and don't wait for the response.
For opting out, send an array of opt-out info and then wait until all responses are OK, so you can tell the user that opting out worked. The user must see the "opting out worked!" response before going on to the next step, because for GDPR reasons we need to tell them if there is a problem so they can try again!
- But that means we will be stuck at 98% for about a minute!
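The dialogue above boils down to an asymmetry you could sketch like this (a hypothetical illustration, not anyone's real code; `send` stands in for whatever network call the consent tool actually makes):

```javascript
// "Accept" is fire-and-forget; "opt out" blocks until every vendor confirms.
// Vendor names and the send() transport are assumptions for illustration.

function acceptAll(send, token) {
  // No waiting: fire the request and return immediately.
  send("accept", token);
  return "done";
}

async function optOutAll(send, vendors, token) {
  // Stay on "processing" until every vendor responds, and fail if any don't.
  const results = await Promise.allSettled(vendors.map((v) => send(v, token)));
  const failed = results.filter((r) => r.status === "rejected");
  if (failed.length > 0) {
    throw new Error(`${failed.length} vendor(s) did not confirm the opt-out`);
  }
  return "done";
}
```

One slow or broken vendor endpoint is enough to stall the whole opt-out path, while the accept path never waits on anyone.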
>I know this is not something major, but still, how do these people sleep at night?
Easily and without issues. Humans are very good at making sure they do not feel themselves to be evil. A mass murderer will blame everyone except themselves or rationalize their actions as just.
Things which come to mind in 30 seconds:
"The regulation is draconian and it is just to fight it in any way possible."
"Our business helps people and working around this helps our business and thus helps people."
"If people really wanted to opt out and weren't simply mindlessly clicking buttons, this won't stop them, so we're actually helping users enact their will."
"We put all this effort into the business; it's evil for the government to interfere for wishy-washy reasons."
> There are sites (like wwe.com) where after you have successfully located the preference to opt out from everything it shows a "processing" screen which is stuck at 98% for about a minute. But accept is "processed" in an instant.
This one drives me nuts! It's just such a brazen and blatant piss-take - "you won't let us hoover up your data and sell it to everyone we can? Then we'll punish you".
> it shows a "processing" screen which is stuck at 98% for about a minute
Proximus [0], the partially state-owned and largest telecom provider in Belgium, uses this pattern too.
Additionally, on mobile, scrolling through the cookie-usage options automatically selects the maximum invasive option. The 'scroll-touch' is registered as a regular touch selecting the option.
This is also Truste/TrustArc [1] that GP mentioned, and they do it on every website they're installed on.
Looking at their JavaScript code and the Timeline in the inspector, it is 100% fake. It is all implemented using setTimeout, and there's no communication with the server in the meantime.
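For illustration, a purely client-side "processing" screen of the kind described takes only a few lines. This is a guess at the shape of it, not TrustArc's actual code: the bar races to 98%, then a second timer "completes" it after a fixed delay, with no server involved at all.

```javascript
// Fake progress bar: climbs quickly, sticks at 98%, then "finishes" when an
// arbitrary setTimeout fires. stallMs/tickMs defaults are assumptions.
function fakeProgress(onUpdate, onDone, stallMs = 60000, tickMs = 100) {
  let pct = 0;
  const tick = setInterval(() => {
    pct = Math.min(pct + 7, 98); // climb fast, then stick at 98%
    onUpdate(pct);
    if (pct === 98) clearInterval(tick);
  }, tickMs);
  setTimeout(() => {
    onUpdate(100); // "complete" after a fixed, arbitrary delay
    onDone();
  }, stallMs);
}
```

Nothing in the function depends on any request succeeding, which matches what the Timeline shows: pure theater.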
It showed a popup saying "we collect your data yadda yadda yadda". Then there were two buttons: one to agree to that, one to manage it. But clicking the manage button just took you to screens and screens of garbage information, mainly listing the companies that used the information, without any option to opt out (you could contact them to opt out, I assume individually). There was a button (if you drilled through the screens) which seemed to imply that it would link to a page that allowed opting out, but all it did was take you back to the first screen of the popup. Unreal. Somebody has thought about that; absolute cretins.
They've changed that, so now there are opt-out toggles (which are obviously all split into groups and all on by default and so on). I assume because of someone in legal tapping them on the shoulder?
Thank you. There are other sites [1] (mostly non-English) where you still can't get out of it. I'm trying to manually block it, but it uses random IDs. Is there a method for that?
And an "accept" holds seemingly till the end of time, while an "I don't accept" is valid for hardly longer than the mouse click echoes through my room before you stand before the dialog box yet again, contemplating why you don't do something more useful with your life.
I tried to deny some Google apps access to fine location, and they would ask at every single opportunity. It was so easy to accidentally enable it as well, which I guess is the point: wear you down until you press the wrong thing or just give in.
Hell, I saw one yesterday that offered only "accept" or "subscribe to newsletter", with "not accept" being the "cancel" button of the subscribe-to-newsletter popup. Which you had to click into first.
Thank you for mentioning this. I feel as if I have to repeatedly jump through those hoops to decline on the very same sites every other day, if not visit.
Surely the GDPR has to have the foresight to dictate that my choice to accept or decline must be valid for an equal amount of time, doesn't it? If I'm confronted with that popup again and again as long as I keep declining, the regulations are worthless.
The GDPR itself really doesn't delve into that kind of detail. Generally, it's up to individuals to lodge complaints with their national data inspectorates regarding this type of thing, who then determine whether some specific procedure is compliant with article 7 (in this case) of the GDPR.
On sherdog.com, you get a giant cookie dialogue that covers half the screen on mobile.
If you don't accept, but click on "cookie settings" instead, a page will tell you that you can't choose to block 3rd party cookies unless you accept their 3rd party cookie, because they need to save your setting of not accepting 3rd party cookies in a 3rd party cookie. It's not a Monty Python episode, it's real: https://ibb.co/6YFpGWK
Alternatively, if you click on the next link, you will be taken to a page that explains how to disable cookies in all latest-gen browsers such as Netscape 3 or IE 4.0:
One of the only websites I've seen that offers a big and clear "Decline" button is NextRoll's advertising service [0], e.g. seen on Texas Instruments' website [1]. But I haven't checked whether clicking the "Decline" button actually opts out everything.
My company actually did a very extensive study on this and found that the majority of websites utilize dark patterns. Only 25% of sites are even legally compliant even after deploying consent management software. Companies are openly flouting privacy laws and we actually found some of the worst offenders in the app space where consent mechanisms don't even exist within the apps themselves.
I have seen worse. Der Spiegel has "Accept" and a redirection to the sign-up page for a paid subscription. I was under the impression that forcing people to agree to tracking by withholding services until they do is not allowed under GDPR. Well, apparently it is.
> I was under the impression that forcing people to agree to tracking by withholding services until they do is not allowed under GDPR.
You're 100% correct in your assumption. Der Spiegel is treading on thin ice here, or at least I hope they are.
Here's the relevant GDPR text:
Consent should not be regarded as freely given if the data subject has no genuine or free choice or is unable to refuse or withdraw consent without detriment [1]
The Washington Post have the same thing. To view articles for free, they require you to sign away your GDPR rights, and they attempt to charge you 2x the money for a subscription without tracking.
Sucks to be them I guess, as I pay for news (and run Ublock) but won't reward this mendacious behaviour.
I'm wondering if enforcing this at the browser level wouldn't make more sense. I mean, if a site uses cookies or wants to track you in any other way, it would have to ask for your permission the same way you are asked before a site accesses your camera or microphone. You could opt in, or opt out, which should be the default if you don't make any choice. Then the sites would have to clearly stop you from accessing their content if you didn't opt in, the same way you are stopped by paywalls. I think this would be much more transparent if we really want to teach people that they can pay for content either with their money or with their data. Of course, another choice would be to just avoid these sites altogether.
Cookies are just a mechanism - tracking information about you without cookies (e.g. by associating it with your IP address and/or other identifying bits) needs just as much consent. A purely technical solution cannot work.
Browsers could still provide a consent API but without strict enforcement it would be pointless - and with proper enforcement you don't need it.
It's worth noting that Microsoft enabled Do Not Track in Internet Explorer by default just after it started to gain traction, which caused a small backlash against its use [0], notably Apache started to ignore all DNT headers from Internet Explorer.
I have to wonder whether it was an intentional effort by Microsoft to discredit DNT and stop people from respecting it... On one hand, the same outcome (almost ignored everywhere) was likely to occur regardless of whether Microsoft made its move; on the other hand, it may have given it a greater push.
[0] I have my cynical Fundamental Law of Privacy - it must not be enabled by default, so that the industry will continue profiting from it, while giving the user the illusion of choice. But at least those who enabled it enjoyed privacy, although only to a very limited extent.
This is how it should be. Unfortunately even if this becomes a standard I can foresee Google dragging their feet on this and never implementing, and providers refusing to comply on grounds of "the user didn't reeeeeally mean to block tracking".
The tracking-banner frenzy the GDPR started is unbearable. It has decreased the usability of the web significantly.
Everyone is obsessed with improving page load time, but what is it worth that the page loads instantly if I have to navigate a maze of consent banners before I can see the content behind them?
Why can't everyone at least agree on the same banner format/UI, or have it delegated to the browser behind some native browser functionality like autocomplete?
The reason is that if it's done in the browser then a person's preferences will apply to every website they use with that browser. Publishers will not want that as they hope that users will give them more consent than other web sites (I certainly do give some websites full permissions if I like them, others are a 'reject all' and 'object all')
Also, your consent preferences are stored under that website's cookie. There is the option of a global cookie, but nobody uses it. This cookie data is then sent to everyone involved in the adtech chain (which is causing issues, since it can be multiple KBs in size). Its format is described in [0].
Perhaps this is where the regulation should've been aimed at in the first place? Cookie options have been present for a long time, but they don't have the necessary granularity. This would've been a great push to update the system.
GDPR does not require any banners or consent dialogs at all for cookies that are necessary for authentication, navigation, or keeping track of shopping cart contents in the current session, etc.
It's only the unnecessary tracking that needs explicit consent. So it's a good thing if such sites are slow to load and have to present irritating banners for legal reasons. This will hopefully put them at a competitive disadvantage compared to sites that don't insist on tracking their customers.
Not even the EU as a whole, it's up to the individual regulators in each country.
On a related note: Since most US companies base their EU subsidiaries in Ireland for tax purposes, it's actually just up to the Irish regulator. There are... concerns.
It'd be useful if the author revealed how she managed to obtain her data. I am pretty sure that a request with just your real name wouldn't reveal much. I assume that most data is collected under some identifier which isn't matched to your real name in order to thwart this kind of request.
Send an email to the address specified (privacy@quantcast.com) with your information and what you're trying to accomplish (typically either a disclosure of what personal information they have on you, or erasure of any said information).
They will almost certainly satisfy your request (even if you don't truly live in California or the EU) because there are significant regulatory repercussions for not responding to legitimate requests. Or at least that's how it works at the big company I work for.
Depends on the company. Atlassian for example thinks they've found a loophole by not allowing access to your account -- California allows a business to use logins to verify identity if you already have an account, but if they turn a blind eye and don't let you in with valid credentials then they supposedly don't have to respond to data access or deletion requests. I haven't cared enough to get a lawyer involved (which is what they're banking on I'm sure), but it seriously pisses me off.
Try emailing legal@atlassian.com if privacy@atlassian.com isn't getting you anywhere.
I can't speak to Atlassian specifically, but at sufficiently large companies, privacy@ emails tend to get routed directly to internal compliance teams, which may be operating under/within the legal org or just using a playbook legal has previously signed off on.
Legal@ has a good chance of being monitored by someone else. Worst case they route it back to the appropriate team and you continue getting stonewalled. Best case, the new set of eyeballs on the conversation has a very different view of the legal risk of your stonewalling experience, and you get what you want.
I haven't tried the above for compliance requests (as I'm not in a jurisdiction covered by GDPR or CCPA), but general BigCo experience has taught me just how variable responses from legal can be depending on which particular lawyer covers it[1]. Every lawyer evaluates risk in their own way, based on their experience, understanding, and conservative (or not) predilections. Simply having your correspondence seen by a different set of (legal) eyes could be enough to get a more satisfactory outcome for you.
[1] Or in this case, if the legal team sees it at all. Which may not be the case for privacy/compliance requests, if they've been delegated to a purpose-specific team that's operating off of a playbook.
I actually e-mailed to the author a year ago to ask that very question.
Her answer was that she provided her cookie ID to Quantcast and then asked for any data associated with that ID. She also promised to include that information in the article to prevent confusion, but she never did.
Ironically, Quantcast only knew her real identity after the request.
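If you want to make a similar request, the first step is digging your own ID out of the tracker's cookie. A hypothetical helper (the cookie name and value below are made up; check your browser's storage inspector for the vendor's actual cookie name):

```javascript
// Extract a named cookie's value from a cookie string so you can quote the
// ID in an access request. Defaults to document.cookie in a browser; the
// second parameter exists so it also works outside one.
function getCookie(name, cookieString = document.cookie) {
  const match = cookieString
    .split("; ")
    .find((c) => c.startsWith(name + "="));
  return match ? decodeURIComponent(match.slice(name.length + 1)) : null;
}
```

Run on the tracker's own domain (e.g. from the devtools console), this returns the ID the vendor keys your profile under, which is what the author apparently sent along with her request.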
Take a cookie to LiveRamp and they tell you all the other "anonymous" IDs you have, including those that can identify you. That's how ads follow you across devices, for example. But it can be used for any purpose.
Is this comment and that website [0] sarcasm? What exactly are you automating? The theft of my PII, or the opposite? On this matter your privacy policy [1] confuses me.
Certainly seems like theft to me. Just because computers spew ridiculous amounts of PII does not mean company xyz llc has a right to collect that information or to use it for anything without educated and explicit opt in disclosure that verbosely enumerates every single instance in which said PII will be used between the time of collection and the heat death of the universe.
'Server logs' fails to account for how that data is used, which should be explicitly defined. Failure to do so is misappropriation. A good litigation firm could retire by challenging reckless companies on these grounds.
>does not mean company xyz llc has a right to collect that information or to use it for anything...
I guess this is where our opinions differ. In order for them to be absent the right to collect it, you must force them to forget. That's where it doesn't seem like your information; after all, they need to erase it. I'm all for legislation to regulate its use.
Yeah, that’s a grey area actually. It’s why Google Analytics has the option of chopping off the last byte of IP addresses, for example.
Better to assume all PII and PI even if not identifying, belongs to the user. GDPR is explicit on some of this and not on others. Shared information, or that deemed necessary, won’t be deleted on request for say Uber/Lyft. There is a financial transaction and a driver etc, they won’t delete. They could sever the link to your profile though. Facebook offers something like this, but don’t do it. You will never be able to authenticate yourself again, and they will keep building the “anonymous” profile. It’s complicated for users out there...
>Better to assume all PII and PI even if not identifying, belongs to the user.
I agree from a liability standpoint, from a company's perspective. From a user perspective, better to assume all information that can be captured will be, it will eventually be available to all humanity and it doesn't belong to you.
Not sarcasm, we issue GDPR requests from an app on device, and you can request data (back to your device and not through us unless stated). Deletion requests are done as well. Data brokers, as a group, are obviously very anti-consumer, and getting them to comply in CA has been a huge headache (most simply do not). Prop 24 should help, so it’s going to be a long burn for consumers to take control. CCPA made hiring an agent (like us) explicit, but almost no one accepts that at the moment.
Alright, that's good, because I would really love for there to be a service that would streamline the way I request data from service providers or request the deletion of data connected to my account, as well as the account itself.
However, your site says:
>> We import and analyze all of your data across your online accounts and give you an audit and a plan of action.
Doesn't that mean that apart from all the, possibly bad, actors out there that have gotten their hands on my activities _you_ are now also in possession of PII connected to me? How does that improve things for me?
No, you are. Which has been a pain on our side to not have possession of the data, and also why it is an app for desktop if you want to have a copy of your data. It can be really large for a mobile device, and processing in the background is generally not available on mobile. Trying to get some things on mobile though -- deletion is easier than copies of data.
And yeah, we don't want to become a honeypot for what is the largest profile on you -- the combination of all the others.
One more reason to make sure your email account is not compromised in any way. I have many emails associated with my 'profile'. If one of those is compromised somebody could potentially request all of my data.
Requests for information should only be fulfilled with a notarized identification verification. The potential for security breaches here is massive.
In your request, let them know that you are specifically wanting to see what data they have that needs to be updated/corrected. Let them know that the ads you are getting are currently not working, and you are only wanting to help them fix the problem.
The data quantcast collects and stores is associated with cookies in the browser. Generally, you would have to visit their site to allow their code to query the data associated with their domain from your browser.
What to me _seems_ to be much more likely though is that multiple cookies are connected to a classification ID that multiple other users may also be connected to and that to identify your PII within their system you'll need to provide your user name.
I'd also like to know this. It seems like asking this organisation to delete my data would be largely beneficial, but what data do I need to provide for them to do it?
According to GDPR, the contact info for sending an access or deletion request must be provided in the Privacy Policy.
Under GDPR (Europe), if you send a request, the company must honor it unless they have reason to doubt your identity, in which case they must ask for follow-up. Under CCPA (California), they are only obligated to honor "verified" requests. There's a range of what counts as verifying, from just being able to log in to your account on the low end, up to providing 3 pieces of matching data on the high end.
The company is obligated to tell you what data they have. They are not obligated to go out of their way to make connections, though, so you're better served by providing as many identifiers as possible (like account numbers).
This is explicitly false. Mastercard or Experian might know her name, but this would not be shared for an audience. It's simply that cookie123 is in audience456.
It's a sizable revenue stream for the payment networks, credit bureaus, Tivo/Roku, and others you wouldn't even think of. When a cashier asks you for your zip code or phone number in the store, that's two ways used to tie the purchase back to your identity.
I did enjoy my time at Quantcast. The dataset is used for more than targeting advertising. For example, Quantcast offers a free analytics product that uses the same dataset.
I am conflicted, and my view on data collection more broadly is more nuanced than what's in this comment. For this kind of data collection specifically: on one hand, it's how the entire publishing industry has built its revenue model, and I like news, sports, content, etc. On the other hand, it's creepy that a 3rd-party service I've never heard of or interacted with can infer traits about me based on my browsing patterns, and then sell targeted advertising to yet another company I've never interacted with. I use an adblocker specifically for this reason, despite running an analytics startup.
I bought a NET10 international SIM card 7 years ago. Only used it for a couple weeks. Last month I asked them to delete my account. Spoke to 3 people and they weren't able to do it. One agent outright lied and said he did, but I was still able to log in after the fact. The best they managed was to change some of the profile details on the account (name, etc).
I submitted a formal request under California's "Right to Delete" legislation (CCPA section 1798.105).
The response was a formal letter from the parent company denying my request. It's a template letter with legalese bullshit that's totally inapplicable (e.g. they argue there's still a "business relationship", even though we haven't done any business in 7 years).
NET10 is owned by TracFone Wireless, which in turn is 100% owned by América Móvil (NYSE:AMX, $41B market cap). I believe they had my address, email address, phone number, date of birth, etc.
It's disgusting what these giant telco bastards get away with. Why don't US laws have the same "teeth" as GDPR, and any advice to force them to delete my data? (e.g. If anyone here advocates for this sort of thing on social media and wants a slightly-redacted copy of the letter to publicly shame them I'd be happy to deliver that).
FWIW, the GDPR doesn't have much teeth either; most big players haven't received big fines. The biggest fine to date has been the French fine of EUR 50m on Google, and Facebook has gotten off almost entirely scot-free so far. That should tell you almost everything you need to know about how effective the GDPR has been.
Of course, I don't mean to say the GDPR is useless. There's a lot of good work being done, and an Italian telecom was fined ~EUR 28m for violations similar to what you had to face. I just think GDPR enforcement needs to step it up and hit the usual suspects with fines that go beyond a slap on the wrist for it to really change the world. You can track major fines using an enforcement tracker; I check [1], but you can also just google it every now and then to stay up to date.
To be fair, it's pretty difficult to sue a megacorp and make it stick, so I suspect that we won't see either Google or FB be massively penalised till late 2021 or early 2022.
By which point FB will no longer exist in Europe (as they recently claimed that the Privacy Shield ruling would require them to do).
This reminds me of a story of a little boy who just said: "I like seeing personalized ads."... while dragging everyone with him.
It is incredible that this industry is allowed to operate like it does. If it vanished overnight, nothing would happen. The EU just changed its strategy and pronounced that it is everyone's civic duty to share even more.
Doubtful it would be able to handle advertisers. Although I don't think many countries would be.
> The EU just had its strategy changed and pronounced that it is everyone's civic duty to share even more.
What do you mean? I don’t really understand what that would mean, or what you’re referencing. Was that part of the State of the Union, or is it another announcement?
Edit: I found it. It's information based on an EU strategy document from February.
It wants to market the private information of citizens. The justification is that big tech companies already do so. The difference is that not everyone is on Facebook/Instagram.
Ultimate Hosts Blacklist: 1 million blocked domains (once in a while you might need to unblock something) and also a bonus known hacking IP blocklist (prevents common hacking sources). https://github.com/mitchellkrogza/Ultimate.Hosts.Blacklist
If you have an iOS device, install an ad-blocker app like AdBlock Fast; this plugs into practically all web sessions on the phone.
The personal data industry is truly disgusting, but the really funny thing is that most of the data is actually worthless. It's collected only because it can be, not because it's valuable or worthwhile. These companies are basically hoarders. Hoarders that rummage through your trash and spy on you from afar. They are awful, the business is awful, and it's a textbook case of "just because you can, should you?"
Indeed, it's totally worthless data. Like, what are you going to do, dissect people into groups that you could heavily target and try to sell them gizmos, or swing an election? Pfft, not worth the effort. No companies or state actors are into that kind of thing.
Individual data is useless, but big data is worth gold. It can show you exactly where your target audience is and what their common interests are. That's super valuable information if you want to run ad campaigns.
It has worth, only not the way you mean. It may (or may not) lead to better sales through ads, but it leads to more and more expensive ad sales and some very wealthy companies.
It occurred to me that if you want to see fewer ads and be generally left alone by marketers, get your IP and online data footprint associated with pariah topics like fringe news sites, weird subcultures, edgy politics, drug use, and privacy.
Basically like being a hacker in the 1980s and '90s, or part of early rave culture: the sort of people who work in marketing would be afraid of, or uncomfortable with, being associated with you. Before the culture gets gentrified by people preoccupied with their reputations, you're free to create and innovate without being co-opted.
No doubt they still have a category for you, but it's marked as a minefield, which is as good a moat as any.
Do you guys see any ads online? I have uBlock Origin on my laptop and mobile browsers, and modded YouTube and Instagram with no ads on Android. I pretty much never see any ads.
I also add the Annoyances lists to uBlock and I have the "I don't care about cookies" extension to ignore cookie popups.
This is a dangerous game. Be aware that just because it might not seem like anyone is using these data against you right now, doesn't mean they won't in the future.
Another option is to make the choice to not support companies affiliated with the ad-tech industry and just use an ad blocker. They will even block most dark-pattern-ridden GDPR popups.
Every time I read "We value your privacy" I want to throw something at my computer screen.
The amount of dark patterns and dishonesty in targeted advertising is astonishing.
Nice! I did the same recently, where I requested my data from 14 different location data companies. [0] One company returned my data. Part of the difficulty was making the CCPA requests since I live in Texas, but the majority of responses were along the lines of having no way to identify the person behind the device identifier.
These kinds of articles always make me think about how big a privacy risk the laws that make these possible are. We're worried about user privacy, so we mandate that companies build a way to connect PII to otherwise-pseudonymous browsing data in case the user asks for it. What if someone else asks for it? Huge security risk.
That's not to say these laws are bad, just that the giant additional privacy risk they pose is kind of funny to think about.
It's a somewhat common pattern that regulation intended to stop the encroachment of some bad thing also effectively causes a standard to form around the (now) worst thing you can do and still be legal.
Some people would be paid less:
* those whose output is below the minimum-wage level but whom the employer has no way to avoid hiring, so it just eats the difference;
* those whose output is above the minimum-wage level but where many people are willing to do the same work for minimum wage, so they have no choice but to accept that wage too.
Some would be paid more:
* those whose output is below the minimum-wage level and whom a potential employer doesn't hire because it can't eat the difference, so today they get no wage at all.
In the end, the free market would decide how much 1 hour of work is worth. If employees are sufficiently desperate, this could be below what it takes to keep the human doing the work alive.
IDK about that. I did say "somewhat common." It emerges in some circumstances.
GDPR is a decent example. The minimum level of privacy required by GDPR is now the standard required whenever standards are required. E.g., banks, regulators and such now expect you to have as little privacy as GDPR allows. Everything allowable under GDPR is now semi-mandatory for KYC.
Privacy/data-related regs really have this kind of tendency. If you must destroy records after X years, this often develops into a mandate to keep records for X years.
As a customer, I would give them all the info they need if they paid me a small sum compared to what they spend on these absurd technologies. Maybe it would even make their product cheaper, increasing demand and further developing the economy! I mean, in theory, if they target us better, marketing gets cheaper, which allows lower prices, right?

Most people don't outright say it, but we're not only being spied on: we pay for it. And most people are subject to a true hostage situation referred to as employment/salary. This is all abusive, and the worst part is that it's self-inflicted. The whole privacy thing, including the privacy paranoia, points to a serious collective health issue. "Because the quest for profit makes men into predatory, insensitive and stupid brutes."
They can already, but as you can see, it's more often than not wrong information. There's so much scamming that the scammers scam themselves. Also, I couldn't care less, honestly. If I were a "Heavy home alcohol purchaser", perhaps I'd like to receive the latest news and promotions related to that?

I don't think privacy is the matter here at all. I think it's a problem with how we perceive our economy, which molds our entire life and death. And honestly, I'm astonished by the stupidity reigning all over. The attention so many groups devote to such concerns is only a leftover from the fact that real problems are not being addressed. Our focus as a collective is devoted to making money. People do everything for it. And who could blame them? In most cases, if they don't have money they won't have a house, food, medical treatment, transportation, peace of mind, shelter from storms and cold, books, etc. At the same time, to make money they have to spend money. It's indeed maddening.
A prospective employer could buy the information about your alcohol use and decide not to hire you. An opponent in a lawsuit could use it to show you have a pattern of heavy alcohol use that caused you to be negligent. When you run for office in 10 years the other party will run attack ads against you using the information. Need I go on?
I could be reselling alcohol. Maybe I run a dorm. Or a private club. Or I buy a lot of beer for my trendy startup's employees. Or I'm a religious fundamentalist buying alcohol to toss in bulk into the ocean because alcohol is Satan. Etc. This data ain't reliable at all.
Also, people are already giving this info for free most of the time on social media. And it's not even a matter of fighting Google, or facebook, or amazon. This is going on all over the internet, and if they take down facebook something new will come up.
If anything, I think the employment system, the judicial system, the political system and others will have to develop and fundamentally change. It's not like "privacy" is the only matter the whole social complex is facing, at all.
Heavy alcohol abuse is not a crime; it's connected to serious collective mental health issues. That is one of the problems that deserves real attention, for instance.
For those looking for how to request your information:
This is what I've found so far on the privacy policy page.
It sounds like it may be available only for california residents.
> In addition to the information, controls and rights detailed in this privacy policy, California law provides for specific rights for California residents. California residents may request access to Personal Information, request a copy of their Personal Information, or request that their Personal Information be deleted. You may submit a verifiable request through Quantcast’s Data Subject Rights page, which can be found here. California residents may also submit verifiable requests by contacting Quantcast via email
As with many data-collection companies, unless you live in the European Economic Area or the state of California, you are not allowed to see the data these companies are collecting about you.
I wonder what a company like this would do with this data set:
gender: male, female, trans, etc...
location: various around the globe
age: various ages
orientation: hetero, homo
How would they pick the right age, gender and location for you? Do they just pick the latest they have?
They're out to make money not be politically correct. So edge cases probably don't matter to them much, even if they occasionally manage to offend some people.
Extension idea: a Chrome or Firefox extension that automatically chooses the minimum cookie settings for most websites that use a standardised "consent solution". I.e., if it is a consent solution by Quantcast, it automatically opts out of all cookies.
Or alternatively, hide the popup prompt from showing up in the first place. I found myself clicking "Consent" because that tends to be a quicker path to getting things done than figuring out whatever wording the designer came up with for declining.
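The "just hide it" variant is roughly what the uBlock Annoyances lists do: maintain selectors for known consent-management containers and inject CSS that hides them. A minimal sketch of that half, shown here as a pure function; the container ids are assumptions based on common CMPs, and a real extension would inject the result as a content-script stylesheet:

```python
# Sketch: map known consent-popup container selectors (ids here are
# assumptions, not verified against live CMPs) to CSS rules that a
# browser extension's content script could inject.

KNOWN_CMP_SELECTORS = [
    "#qc-cmp2-container",      # Quantcast Choice (assumed id)
    ".qc-cmp-ui-container",    # older Quantcast UI (assumed class)
    "#onetrust-consent-sdk",   # OneTrust (assumed id)
]

def hide_rules(selectors=KNOWN_CMP_SELECTORS) -> str:
    """Build a stylesheet that hides every known popup and restores scrolling."""
    hide = ", ".join(selectors) + " { display: none !important; }"
    # CMPs often lock page scrolling until you answer; undo that too.
    unlock = "html, body { overflow: auto !important; }"
    return hide + "\n" + unlock
```

Note the trade-off the parent comment's idea avoids: hiding the popup is not the same as opting out, since many sites treat no answer as "legitimate interest" anyway, whereas driving the CMP's own reject API records an actual refusal.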
Maybe a bit off topic, but since we're all supposed to pay cashless because of corona: what info does a shop get about me if I pay with a debit card or credit card? Does anyone have a data sample? And what companies get (buy) information about what I bought with my debit card?
The shop can correlate all of your purchases (down to individual items) into a single purchaser profile tied to your credit card number -- basically the same as if you used a loyalty card with all of your purchases. If they can ever gain the ability to link a credit card with an identity, (say, via a credit card purchase that uses your loyalty card) then they can link multiple credit card profiles together.
In terms of data flow, it mostly goes the other way. It allows the shops to sell this data about you to others. Your credit card company does this, of course, but the only "direct" visibility they get is to the establishment. "In an average month, _trampeltier spends $x at grocery stores, $y at liquor stores, $z at tobacco shops", etc. They don't learn the actual itemized purchases without cooperation from shops (who of course have this data). Though I wouldn't be surprised if that was built into credit card processing agreements at this point, who knows.
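The linking step described above (merging several card numbers into one purchaser profile once a loyalty card co-occurs with each of them) is essentially a union-find over identifiers. A toy sketch with made-up identifiers, just to make the mechanism concrete:

```python
# Toy union-find sketch of the profile linking described above: any two
# identifiers (card hash, loyalty ID) seen in the same transaction get
# merged into one purchaser profile. All identifiers are made up.

class Profiles:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        """Return the canonical representative of x's profile."""
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def link(self, a, b):
        """Record that identifiers a and b appeared in the same transaction."""
        self.parent[self.find(a)] = self.find(b)

p = Profiles()
p.link("card:1111", "loyalty:42")   # credit card used alongside loyalty card
p.link("card:2222", "loyalty:42")   # a different card, same loyalty account
assert p.find("card:1111") == p.find("card:2222")  # both cards -> one profile
```

Which is why a single loyalty-card swipe is enough to retroactively unify profiles that had been tracked separately per card number.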
I can't repeat this enough: disable third-party cookies in your browser and install uBlock origin. You aren't going to lose much in the way of useful website functionality, but you will become effectively untrackable.
I do not understand the complacency of people. Maybe they think that their freedom is unassailable?
The commercial surveillance industry is a real threat. It exploits citizens' data and in doing so serves the dictatorial interests of government and the amoral activities of capitalism. The surveillance industry is also inept, and cybercriminals and state actors will get the data eventually.
Because while you're worried that you'll get an ad for a clothes iron after searching for one, they're worried about how they're going to make rent this month.
I feel the biggest problem is that most of this happened before there was awareness. This isn't something an individual can stop anymore; it needs a massive collective effort. Blocking ads or trackers only fights the symptoms, not the cause. So we need these practices to not happen, somehow.

Legislation is the obvious answer, and there lies the problem: this is one area where technological advancement far outpaced legislation. It doesn't help that big tech generally practices questionable or even illegal methods, simply waiting to be called upon, then gets away with empty promises. Operating internationally, they've become too powerful for a single government to deal with. So we need international governmental cooperation in a field where those same governments benefit from this data. Good luck with that. (I'd love a purely technical solution like blocking ads to work, but the problem isn't a technical one.)
People have busy lives and other things to worry about. Even if they knew this were happening, no one has the time or energy to do anything about it. What have you done, other than post the obvious?
Because there are no noticeable effects. You get really well-targeted ads once in a while, but that's about it. Fighting back, on the other hand, brings a few inconveniences.
I hope all the people who work at Google and Facebook see this and know that this is the goal of their jobs. Not "solving hard problems" or "bringing the world closer together". It's to gather incredibly fine-grained data on every person possible and use it/sell it to people who will use it to influence and change the behavior of those people in a way never imagined, as well as hand it over to governments at the drop of a hat to enable surveillance Orwell couldn't have dreamed of.
Please reconsider where you work and what you enable.