I've slowly been developing the opposite habit. Over the years I've become more interested in how articles are digested by people without an agenda (an agenda the modern, salaried digital author may be increasingly pressured to succumb to). Reception shows up in comment sections, and that sometimes feels like a purer read, stripped of whatever fluff or flair a journalist may feel compelled to add (an attempt at accelerated portfolio enrichment? An agenda implies a bonus payment).
We can assume comment sections may also be populated by automatically generated comments meant to suggest support for or rejection of the content. Sometimes this leads me into rabbit-hole investigations through a user's post history if I'm significantly moved by their angle(s).
I needn't read anything more than the title, as the meat of the content would likely just be an assimilation of empirical evidence and/or gossip. I'm not even going to bother turning off my ad blocker or using archive.is for this one - in aggregate I just see it as yet another "distrust [insert internet company] with your data" offensive (defensive?) campaign.
I think it's dangerous to consider commenters "people without an agenda". Automatically generated comments aside, I'm quite sure that plenty of real human beings have "an agenda" when they're commenting on an article.
I'm also not really sure what agenda the "modern, salaried digital author" is supposed to be succumbing to. Clickbait? If they're salaried, then they aren't paid per click. Again, the journalist is a known employee of a known organisation. To think of anonymous commenters as more reliable feels a little baffling to me.
Yep, I rarely read the articles, and mostly come for the comments. I'm especially unlikely to click the link when it comes from known clickbait factories.
"For these reasons, I – together with our board of directors – have decided to terminate the Jumpshot data collection and wind down Jumpshot’s operations, with immediate effect." Ondrej Vlcek, CEO.
A lot of staff will be laid off, but I think this was the only good choice.
I work in this world. This is just one company of many doing the exact same thing. We're going to need a lot more investigative journalists.
They sell which apps are installed on your phone and how often you use each [1], they sell your credit card transactions [2], they sell your emails [3], they sell your web browsing activity (Jumpshot is on this list) [4], and they sell your precise timestamped locations [5].
Senators are trying to get Yodlee investigated by the FTC [6], and Yodlee sells data to numerous companies. Second Measure's (YC S15) entire business model is cleaning up and reselling Yodlee's data.
Makes me wonder at what point they'll sell your input (keyboard) history... There's technically nothing stopping them from doing it, if they have access to all this other information.
I think my biggest problem with all of this is the word "caught". Why, still in 2020, is there no requirement for software/websites to disclose the information they keep and the information they sell to the end user? Is it secretly there deep in some terms of service? Nobody would ever plaster "we sell your data" on their front page, but once we start shining a light on it, maybe companies will see how much people care.
Which would be? I live in Germany and I'm pretty disappointed so far.
The GDPR has a huge loophole (intentional?): it is up to national agencies to enforce it, with no individual right to sue. These agencies are heavily underfunded, so enforcement is weak to non-existent.
Well, I've submitted a few reports to data protection agencies and got some feedback, so I'm rather happy with how things are progressing.
That said, I assume nearly all companies out there are not in compliance. To the point of the article, privacy policies are mostly not detailed enough, and it will take some time before companies come into compliance.
This is the trade-off between a strict, PCI-style compliance policy with a precise checklist of things to do, and the "vague" GDPR compliance model, which was deliberately written that way to stay independent of technology changing over time. The downside is that it's not clear how to be really compliant, so companies do the very minimum they think they can get away with.
Also, there are so many huge violations that, yes, the data protection agencies can't cover everything, so they start from the top with the companies that get the most complaints (1&1 getting a 10M EUR fine) or have the biggest missteps. I assume the Buchbinder fine will be much larger than the 1&1 fine, and it will for the first time prove to companies that they are still responsible when they hire an IT company to manage their data - which was the point of the parent.
Until the GDPR arrived, data leaks were just "oopsy" moments for companies. This culture has festered for decades, and it will take some time to change.
And my comment was addressed to the parent's "and the information they sell to the end user? Is it secretly there deep in some terms of service": the GDPR requires you to tell people what you do with their data in terms they understand, without obfuscating the message or hand-waving. I would have wished that companies had to open their processing directory to the public, though.
They have both paid and free products. In a zero-marginal-cost world this is a legitimate business model: you offer a limited free product and charge for the full version, say with support and more functionality, similar to what many other vendors do. In the security space, for example, Burp Suite comes in "community", "professional", and "enterprise" versions.
So I disagree that "free" means "collects all your mouse clicks and sells them off to the highest bidder".
Seems to me like basic financial physics. No one is saying it's moral, or that we should put up with it - but if a for-profit company is paying a bunch of employees to provide a free service to you, you'd have to be dumb not to be curious about their business model.
I get it. But to be fair, there are many for-profit companies that offer lower tiers of their products for free (or even open source), with no (or few) strings attached. So it is certainly not the norm.
Whelp, at least in one business I know, it goes like: "Hey, you get our basic tier for free, but please let us send you a newsletter occasionally. You don't like that? That's fine, you'll find an unsubscribe link in every newsletter."
So that business doesn't sell the data (that's asking for trouble), but rather uses it for its own ads.
There are ways to provide a slimmed-down version for people to use and try, and then advertise the other, paid products inside the software itself, where you can pay to unlock them.
They simply chose to do this. Regardless if it's paid or not, this was a business decision.
If I advertise "free food" that doesn't mean it's your fault if I poison you. Nor does it mean I should be allowed to continue without legislative hindrance.
This was anonymized data, but of course you can de-anonymize a lot of it if you get the raw data and have a reason to put effort into it (e.g. you can find the account ID based on the exact purchase time and then follow that user's journey via their unique "anonymous" ID). I don't think the intent was to spy on individuals, but it's not easy (or even possible?) to truly anonymize data without losing 99% of the information value.
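The timestamp trick is worth spelling out, because it's so cheap. Here's a minimal sketch of that linkage attack (all IDs, timestamps, and column names are invented for illustration; this shows the general technique, not anything specific to Jumpshot's actual feeds):

    import pandas as pd

    # "Anonymized" clickstream as a data buyer might receive it:
    # a persistent pseudonymous device ID plus per-event timestamps.
    clickstream = pd.DataFrame({
        "anon_id": ["d7f3", "d7f3", "9ab1", "d7f3"],
        "timestamp": pd.to_datetime([
            "2020-01-15 14:02:11",  # product page view
            "2020-01-15 14:03:47",  # checkout completed
            "2020-01-15 14:03:47",  # unrelated user, same second
            "2020-01-16 09:15:30",  # later browsing, same device
        ]),
        "url": [
            "shop.example.com/item/123",
            "shop.example.com/checkout/complete",
            "news.example.com/article",
            "bank.example.com/login",
        ],
    })

    # Side channel the attacker already has: the shop's own order log,
    # which ties an exact checkout time to a real customer account.
    orders = pd.DataFrame({
        "account": ["alice@example.com"],
        "timestamp": pd.to_datetime(["2020-01-15 14:03:47"]),
    })

    # Join on the exact event time, restricted to checkout URLs to
    # disambiguate two users who acted in the same second.
    checkout_events = clickstream[clickstream["url"].str.contains("checkout")]
    linked = orders.merge(checkout_events, on="timestamp")
    print(linked)  # account <-> anon_id: identity recovered

    # The pseudonymous ID is now tied to a real person, so the device's
    # entire history (past and future) falls out in one step.
    for anon_id in linked["anon_id"]:
        print(clickstream[clickstream["anon_id"] == anon_id])

Once a single event is linked, every other record carrying the same persistent ID is de-anonymized too, which is why "anonymous ID plus precise timestamps" barely counts as anonymization at all.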
The whole premise is ridiculously bad - this is done by antivirus software. It doesn't matter how the data is handled; the trust is lost in the act itself.
It's like an adware blocker that installs its own adware after removing all the others. But it would be a "good" one, i.e. for "optimizing your internet connection" or similar bullshit.
Trust is a finicky thing - it takes a long time to build and can be lost with one mistake. This event could kill Avast as a company in the long term. A stupid, stupid move by the owners, explainable only by greed, and not really justifiable.
Yes, in this case it's a very bad decision. I guess it made business sense if you want to offer a free tier of AV to tens or hundreds of millions of users - you need to generate the money somehow - but it's a ticking bomb.