For once, can we have articles (or even opinion pieces) that try to explain the context and position of both sides, rather than pieces like this one? This kind of writing does nothing to change anyone's mind. For someone who already hates FB and its practices, it reinforces the belief they hold. And those who are skeptical of such media articles will be put off by the language used. And make no mistake, it's not about putting forward arguments from all sides: they picked a side and decided they are right while the other side is wrong.
> Ad Observer is a browser plug-in that Facebook users voluntarily install. The plug-in scrapes (makes a copy of) every ad that a user sees and sends it to Ad Observatory, a public database of Facebook ads that scholars and accountability journalists mine to analyze what's really happening on the platform.
I agree with the sentiment, but this clearly mentions that the plug-in scrapes data and sends it to NYU's servers. I may be wrong, but aren't we taking NYU at their word that they are scraping only the ad data and nothing else? People agree to share their data voluntarily, but they are taking this plug-in at face value. It may not be scraping more, but it can. If Facebook allows third parties to scrape data (third parties who can incentivize users any which way), where does it stop?
The question I had was: both the independent ad analysis and user privacy seem important, so how can both exist? Maybe FB sharing the data themselves? But isn't that close to how Cambridge Analytica happened? These are grey areas we should be debating and discussing in detail, not writing snide pieces with one side as the aggressor and the other as the victim.
> isn't that close to how Cambridge Analytica happened?
Close to. But different.
Cambridge Analytica drew from Facebook’s servers. This plug-in pulls from users’ computers. Cambridge Analytica was not academically affiliated. This is legitimate research. Cambridge was closed source. This is open.
Most importantly, users are explicitly sharing their data with NYU. There is no carrot of a personality quiz to obscure the quid pro quo.
I'd also add that Cambridge Analytica used obfuscation to mask the true reason they wanted the data: NYU seems to be totally upfront about the collection.
Cambridge Analytica was absolutely academically affiliated when they requested research data from FB; they just lied about their research, which is why the doors are closed now.
I think the point is that no academic is owed any of this data, and while FB was collaborating with academic institutions, it's clearly not worth the risk.
I would argue yes, particularly social media data. For one thing most interactions on these platforms involve at least two parties interacting. If one of these parties decides to send the transactions to a chrome extension, it seems they are making a decision unilaterally for all participants. Certainly if anyone is going to do that it will be the platform itself.
Cambridge Analytica was academically affiliated. The data they got was laundered through a University of Cambridge researcher Aleksandr Kogan, and their app claimed to be using the data for legitimate academic research. I’m not saying it’s exactly the same, but the fundamental challenge is still present: how can Facebook be sure that these researchers won’t share the data with some controversial organization?
No they were not. Please see this post from Dr David Stillwell from the Psychometric Centers at Cambridge University who responded to a post here a few weeks ago about this very point:
Thanks for the link, but I'm sure you can understand my skepticism of a naked assertion that the reporting on this issue was wrong - especially since this was Mr. Stillwell's only comment ever.
Is it really that implausible that someone might have forwarded him a link to a site he has no interest in, and he saw a comment he felt compelled to respond to? Since when is the number of past comments a barometer for legitimacy? I mean, Hacker News even gives you the option to post from a throwaway account.
Further, the comment simply refutes one Wired article with an earlier article Wired published on the same subject, one he participated in. It's not like some crazy idea was being posited in the comment.
Lastly, Dr Stillwell has asserted many times that the app and data he and Dr Michal Kosinski developed for their "MyPersonality Quiz" app at Cambridge University are neither the same app nor the same data as Aleksandr Kogan's "My Digital Life" app, which Kogan developed for Global Science Research and their client SCL Elections. The "MyPersonality Quiz" app was simply the basis of Stillwell's earlier work.
I really don't follow your point here. It's plausible, but lots of things are plausible - it's plausible that the original reporting is correct, plausible that Stillwell is embarrassed about his involvement, plausible that Cambridge was involved in some way that didn't involve Stillwell. You seem to have reached a stronger conclusion, that Stillwell's statement is definitely correct and the reporting was definitely wrong, and I don't see how you got there by just looking at plausibility.
I actually provided a link to a well-researched article by Wired magazine. I have also pointed out that the differentiation between the two apps is well documented. In that, I have given you the two key points that you could easily use in your own Google search.
What have you provided? Exactly nothing to substantiate your "strong" conclusion. It's hard to see how you yourself "got there" based on nothing other than your own opinion.
The story is unfortunately complicated, so many news outlets get it wrong. myPersonality was my app that ran from 2007-12. Michal and I published papers based on its resulting data, which was collected with user consent and only from users who used the app (not their friends). We worked in the Psychometrics Centre - a research group in Cambridge University. Prof. Kogan joined Cambridge University later on in 2013 as an independent faculty member not in the Psychometrics Centre, created his own app 'mydigitallife,' and went on to work with Cambridge Analytica using data from users of his app and their friends.
So key points from my perspective are (1) My app isn't the same as Kogan's app, (2) I didn't work with Cambridge Analytica, (3) Kogan was independent faculty, not part of the Psychometrics Centre.
As you've noticed from my post history, I don't usually use this website. I do, however, have an alert set up that pings me links when someone uses my name online. I have it set up so that I can attempt to correct journalists who write about me and my research group and get the facts wrong.
Hope this clarifies things
Edit: Since I'm here. I think it's critical that NYU's Ad Observatory continues and that academics have freedom to collect data and report results that might embarrass rich and powerful companies. Tweet #9 in this thread is the key one ( https://twitter.com/doctorow/status/1329873620353515520 ) - how do we know that FB's own ad library isn't good enough? Because Ad Observatory found that it wasn't reporting all instances of political ads.
You can download a release straight from there and import it into your browser. Browser vendors also do some review of the plug-ins in their stores, so every update should be checked by both Google and Mozilla (even if only superficially).
I'd say the odds of them managing to pull off scraping anything else without anyone noticing are rather slim.
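One concrete reason the odds are slim: an extension's manifest declares up front which hosts it may touch, and anyone can unzip the package and read it. A minimal sketch of that kind of check (the manifest content, function name, and allowed-domain list below are invented for illustration, not taken from the actual Ad Observer package):

```python
import json

# A hypothetical manifest.json as found inside an unzipped .xpi/.crx.
# Real extensions declare every host they may touch here, so overly
# broad permissions are visible to anyone who looks.
MANIFEST = """
{
  "name": "Ad Observer (example)",
  "permissions": ["storage", "activeTab"],
  "host_permissions": ["*://*.facebook.com/*"]
}
"""

def suspicious_hosts(manifest_text, allowed=("facebook.com",)):
    """Return host permissions that reach beyond the expected domains."""
    manifest = json.loads(manifest_text)
    flagged = []
    for pattern in manifest.get("host_permissions", []):
        # crude check: does the pattern mention any allowed domain?
        if not any(domain in pattern for domain in allowed):
            flagged.append(pattern)
    return flagged

print(suspicious_hosts(MANIFEST))  # prints [] for this manifest
```

An extension that quietly requested `<all_urls>` would be flagged by this kind of review, which is part of why scraping extra data without anyone noticing is hard.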
Thanks for the link. I agree the odds would be slim, but this can potentially set a precedent for others to do similar things. Imagine, in the future, a new company comes up with a similar plug-in claiming to scan ads and takes more data than needed. Facebook needs to be careful of that too (though I think that is not the reason they are shutting this down, just that privacy is favorable to them in this case :)). There has to be a better way to do this, and that needs serious discussion.
Also, as an aside, the plug-in records the following things: country, age, gender, language, etc., and the text of the ad. It's not so clear what happens when ads have personal info (or friends' names) attached, like "XYZ likes this page". The payload seems to suggest they take every bit of info, so it could include that as well.
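For illustration, one way a collector could address exactly this concern is an allowlist: keep only the fields it has publicly committed to collecting and drop everything else before upload. All field names below are hypothetical, not the actual Ad Observer payload:

```python
# Hypothetical ad payload as the extension might see it in the page.
# Field names are invented for illustration.
payload = {
    "ad_text": "Buy our widgets!",
    "country": "US",
    "age": "25-34",
    "gender": "female",
    "language": "en",
    "social_context": "Your friend XYZ likes this page",  # personal info!
}

# The fields the project publicly says it collects.
ALLOWED_FIELDS = {"ad_text", "country", "age", "gender", "language"}

def redact(raw):
    """Keep only the allowlisted fields; drop anything else."""
    return {k: v for k, v in raw.items() if k in ALLOWED_FIELDS}

clean = redact(payload)
# 'social_context' (which names a friend) is dropped before upload
```

Whether the real extension does anything like this is exactly the kind of question its open-source code lets you answer.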
What is the alternative? No visibility for the general public into hyper targeted ad campaigns?
If the users are opting into this, isn't that the same argument for letting them opt into the Facebook platform itself? Why must we protect them from NYU and not from Facebook?
With respect, that's not the issue. With Facebook, they only collect "limited" information. It's what they do with it afterwards that matters.
How is it stored, is it secure, how do they anonymise it, who has access to the raw data, what happens when there is a breach, is there monitoring, etc., etc.
They are building a massive honeypot, and we just have to trust that they are doing a good job (much like with Facebook...).
Facebook doesn't have the greatest security track record. While it wouldn't be productive to argue that one group is inherently more trustworthy than the other, it's worth pointing out (as others already have) that the code is available to the public.
The last part is the crux of this dispute to me:
> If the users are opting into this, isn't that the same argument for letting them opt into the Facebook platform itself? Why must we protect them from NYU and not from Facebook?
> If Facebook allows third parties to scrape data (who can incentivize users any which way) where does it stop?
What say should Facebook have here though? I'd argue that when I am viewing data through my browser it's on my computer at that point. I've downloaded it so why should Facebook be allowed to decide what I can and cannot do with it any more than they can decide what I can do with my emails that I also view on my computer?
> I've downloaded it so why should Facebook be allowed to decide what I can and cannot do with it any more than they can decide what I can do with my emails that I also view on my computer?
While I agree with your opinion, that data would still be Facebook's/the advertisers' intellectual property, and your actions could be considered an infringement of the terms you agreed to.
Though NYU may not have agreed to those terms, encouraging others to violate them and hand over the data could be considered the kind of intellectual property damage needed to at least start a civil court case.
Fair point, but I would argue that NYU is only looking at the page, saying "this user viewed ad XYZ", and recording that fact.
They're not (I assume) copying the advert and storing it on their servers. I could see Facebook arguing that taking a copy of it is somehow an IP violation.
But simply looking at the page and recording some of the metadata is the same as someone asking me on the phone, "What advert did you just view?"
Samsung TVs take screenshots of displayed content and send them to their servers. They use it to track what you watch, whether it's the latest blockbuster DVD release or your private homemade porn video :). It's all sent to them.
Of course they can, but my point still stands: what I do with data that is, by every definition, sitting on MY computer is my business, not Facebook's. Them banning me for doing that (possibly for violating their terms) is a separate argument.
In addition, NYU didn't sign up to Facebook, so they have no need to abide by its terms. At least, they shouldn't have to abide by them... not sure what US law says about it.
I'm not for government intervention in things like this, but the courts (I believe) have already ruled that scraping data is legitimate.
Edit: added that I think they shouldn't be able to have a say
It's not running on their servers at this point: it's on my computer.
Now, I don't know what US law says exactly, but in terms of common sense, FB should not be allowed to make any claim to that data once it's on my computer.
Sure, they can say "well, no more data for you, you're banned", but that's a different argument.
In addition, having control over users of FB is one thing, as they signed up and agreed to the terms, but NYU didn't, so I have no idea how FB can even claim to enforce those terms, especially if NYU is scraping data from users' computers (I assume they aren't making a connection to the FB servers!).
Also, not that it matters for this argument, I do not have a FB (or any social media) account: I tried it, maybe 10 years ago, and found it silly :)
Edit: Guys, this isn't Reddit! I'm all for downvotes but not drive-by downvoting... a simple "I downvoted you because..." would suffice.
Edit2: Mea Culpa! As has been pointed out, I replied to someone that wasn't actually commenting on my post... not sure how I missed that!
Isn't the add-on (technically speaking) taking data only from the user's PC and not from Facebook's servers?
At the end of the day, it shouldn't be Facebook's call what goes on the user's device.
One thing about these add-ons is that you can actually take them apart and see what is going on, if you don't trust that the add-on is built from their GitHub code.
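A sketch of one way to do that comparison, assuming you have unpacked both the store-distributed build and a build from the GitHub repo into local directories (the function names here are my own, not from any real tool):

```python
import hashlib
from pathlib import Path

def tree_hashes(root):
    """Map each file's relative path to its SHA-256 digest."""
    root = Path(root)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def diff_trees(store_build, repo_build):
    """Return relative paths that differ between two unpacked extension trees.

    A path is reported if its content differs or if it exists in only
    one of the trees; those are the files worth reading by hand.
    """
    a, b = tree_hashes(store_build), tree_hashes(repo_build)
    return sorted(k for k in a.keys() | b.keys() if a.get(k) != b.get(k))
```

In practice build steps (minification, bundling) can make the trees differ legitimately, so this only narrows down what to inspect rather than proving the build honest.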
I do think that users opting in is an insufficient response. A lot of users opt in to things they do not understand (the terms and conditions of any site or app, for example). Tech-savvy users with the plug-in will be smart enough to know what they are accepting, but that is not true for everyone.
I like the idea of open-source code, and that is helpful, but there has got to be a better way. I also believe that research like this should be supported. All things aside, we can arrive at solutions without the need for an article like this, which neither poses such questions nor provides context.
> I do think that users opting in is an insufficient response. A lot of users opt in to things they do not understand (the terms and conditions of any site or app, for example). Tech-savvy users with the plug-in will be smart enough to know what they are accepting, but that is not true for everyone.
Everything you just said here applies to Facebook itself as much as to the researchers. If we allow people to opt in to sending data to Facebook (by creating an account), why would we be against giving those same users the option to send the data somewhere else?
> isn't that close to how Cambridge Analytica happened?
Yes, it's exactly that.
"Facebook changes based on past scandal for the better, and this is bad. here is my 9th grade essay with no research just feeling"
Facebook _should_ be held to account.
I personally don't think the advertising is actually that much of a problem. It all leaves paper trails, because advertisers have to pay. The worst part of Facebook is not the advertising; it's the users.
For that to change, Facebook needs to understand that "freedom of expression" disappears when you sign the terms and agreements, which specifically limit your freedom of expression. If they actually enforced their community standards properly, the place would be much better.
Instead, they carve out exceptions for celebrities, based on precedents that not even employees can find.
> "I personally don't think that the advertising is actually that much of a problem"
Facebook was used to illegally influence elections in the UK and break spending rules, and because of its closed nature nobody realised until a whistleblower stepped forward. It's a massive problem, and maybe we would not have Brexit without it.
www.bbc.co.uk/news/amp/uk-politics-44856992
A TV ad, by comparison, is aired publicly, and you can see it and ask who paid for it.
That is not true, according to an investigation run by a third party: https://www.bbc.com/news/amp/uk-politics-54457407
Do you trust the third-party investigation, or a couple of whistleblowers, one of whom designed the entire system and another of whom sold it?
No doubt CA is a wake-up call for data security and privacy... but a salesperson telling you how freakin' powerful the system is and how they can control the world with their ML models? Well, we've all heard that.
By the way, it's not just TV ads to consider. Mail is a huge part, especially for political campaigns, and no one has visibility into that. You can't cherry-pick one distribution model for advertisements.
>Facebook was used to illegally influence elections
Yes, the Electoral Commission found that. Sadly, it's only able to issue a tiny fine. Not only that, it's being eroded by populists from both sides of the political spectrum. It's not really a Facebook problem but an EC one.
This transgression was found by going over the accounts, not through leaks.
> A TV ad, by comparison, is aired publically and you can see it and ask who paid for it.
Political TV ads are illegal in the UK. Long may that continue.
We are also supposing that Facebook adverts are actually effective, that they have the power to somehow corrupt a morally "good" person into someone who votes for the "other side". This is plainly nonsense.
The real issue is that Facebook is a warped mirror of society. Without rules and order, chaos and bad actors reign. This means "censorship" or, if it were treated like the press, editorial standards.
Said whistleblower (assuming you're talking about Christopher Wylie) got banned from FB. That's my issue with all this. Did the Russians or Cambridge Analytica do bad stuff? Probably, yes. Is Facebook also incredibly hypocritical and self-serving? Yes.
The question is the degree to which advertising actually influences behavior. It must have some effect, I’d agree, but was Cambridge Analytica’s ad targeting so powerful that a 1.3 million vote margin was entirely their doing? I’m pretty skeptical.
It was highly targeted advertising aimed at people who were highly susceptible to being turned. Whatever they were turned by didn't have to be factual (because that is the low standard for advertising).
We have Edward Bernays' school of advertising coupled with the power of Facebook, available to everyone, including people who lie through their teeth to get what they want.
Furthermore, there is evidence Russian trolls influenced the Brexit campaign, and if that isn't enough: the referendum caught a moment in the polls where Leave was indeed ahead, but both before and after the referendum the polls showed more support for Remain than for Leave.
I'm actually glad the Netherlands quit holding referendums. Elections are very tough nowadays because of how easily people are influenced.
> Elections are very tough nowadays because of how easily people (WHO VOTE AGAINST MY POLITICAL ORIENTATION) are influenced.
Let's be honest, none of this would have been a scandal if Cambridge Analytica had worked for Hillary or the Remain campaign. I don't deny all this stuff influences people, it clearly does, but that is the price you pay for having a society with freedom of speech and universal suffrage. If you want to get rid of it, you either have to pass more draconian censorship laws or eliminate universal voting for adults, striking at two cornerstones of a modern democracy.
Brexit is just an example. Manipulation of citizens has always existed. The danger lies in how precise the targeting now is, together with the internet being world-wide and advanced.
It's the price we all paid for some of us having a dishonest ML data hoarder.
I'm involved in the integrity of elections in The Netherlands. I have a political preference, but I am able to leave that out when I manually count votes.
I am also aware of things which undermine our democracy: electronic voting, strategic voting, and abused ML. I believe that high integrity in elections benefits society in the long term.
However, given how internationally entangled our societies are (i.e. what used to be called globalization 20 years ago), and the differences in population size and cheap labour, a country like Russia or China can heavily influence our society. In this case it was with the help of an American behemoth, but there is no reason other channels cannot be abused.
I don't have the solution to this problem. It warrants further investigation though. Hopefully before it is too late.
Brexit could only take place because the political class spent 25 years using the EU as an excuse for everything that went wrong.
The current PM spent his formative years making up stories about the EU to send back to the Telegraph.
Facebook's advertising only works (assuming it does actually work) if people are receptive to the message. Those messages have to be thought up by someone.
That's not Facebook's fault; that's the fault of political discourse.
The pro-EU side _lost_ because they failed to follow the standard rules of winning elections: make the other side seem like they are going to make you poorer, or less patriotic (or both).
They spent their time shouting at the sceptics, calling them racists, and waffling on about economics: never emotion, never empathy, never ambition.
Much as it's comforting to think that the likes of Russia, Google, Facebook, and China are at fault, it's really not the fault of advertising.
> It was highly targeted advertising on targets which were highly susceptible to be turned
This doesn't gel with me... how many people saw the ads, how many times, and of those, how many were actually influenced by it?
Going by my sample size of two (my sister and my wife) they both claim that they never even look at ads that appear on screen.
I honestly am calling bullshit on the whole ad industry here... show me evidence that a significant number of people are influenced by ads that makes it worth spending money on (if unbiased data actually exists).
Are there any advertisers out there that are seeing a quantifiable return on their ad spend?
It is bullshit I agree. We all know what the intent is with an advertisement. You don’t have to be a “tech-elite” to understand ads.
There should be more transparency around how ads are displayed on the internet.. but giving researchers full access to a user’s data and their friend’s data is a pretty poor one. Can’t a better system be designed for this?
Content, however, is very dangerous and it’s not clear what a good solution is. It’s not clear what the purpose of content is on the internet.
> It’s not clear what the purpose of content is on the internet
Something I have pointed out to others in the past, and on here too, is that the Internet existed just fine before all these ad-related sites appeared. Of course, nothing on the scale of FB, Google etc, but I'd argue that we're worse off now than before.
When you rely on advertising you then have to make sure that every piece of content works towards making the advertising worthwhile. You then maximise SEO, and clicks and focus on metrics and bullshit like engagement (whatever that means).
You have to calculate whether a blog post or article is worth doing because your sponsors now care about that: you stop making content for the love/joy/sake of the content and everything becomes bait for your ads.
20 years ago, the only ads were about punching a monkey and finding out that I was the 1 millionth visitor to a website and could click the ad to claim my prize :)
I agree with your skepticism of referendums, because they do seem to be influenced more by cultural battles than policy outcomes. But I don't follow the connection you're drawing. If survey polls both before and after the election showed a different result, that just indicates the survey polls were flawed - any opinion changes Cambridge Analytica was able to create should be present both in survey polls and the election result.
Survey polls can and do manipulate people. True, that's unlikely in a referendum, but not impossible. Say the polls have it 55/45 for your side, and it's raining heavily (weather is known to influence turnout). Would you be motivated to go? Maybe you would, but some would not.
Another one (not referendum specific; my comment regarding strategic voting wasn't specific to referendum either). Say your preferential candidate is Charlie. You have no idea what other people vote. You'd vote for Charlie. However, say you know its going to be between Alice and Bob. You really don't want Bob to win. So you vote for Alice.
If you consider mass media and hype surrounding survey polls, these cause strategic voting between whoever ends up popular. Consider, for example, the way a candidate gets chosen for Republican or Democratic party. Prime example of strategic voting.
Some see these examples as opportunities to get their preference [for a candidate or political goal] into higher regard. I don't; I see them as factors which potentially harm the democratic outcome in the long term. Now, like the Earth, democracy is strong and adaptive. It can take a bump. But it isn't going to endlessly accept stomps and beatings. Hence, I believe we need to try to maintain its integrity and authenticity.
Breaking election laws in an attempt to influence an election's outcome is still breaking election laws, whatever the outcome. It doesn't have to be entirely their doing.
Sure. I'm not defending Cambridge Analytica here - just saying that, on a practical level, it seems unlikely that they had a decisive impact on any elections.
The thing is, I don't see how Facebook has any business allowing or forbidding users to install software on their own devices. I mean, if this were about FB banning users for TOS violations, that's one thing. But claiming a third party is not legally allowed to provide software that does something FB doesn't like? That's a whole different can of worms.
IANAL, but if NYU makes software designed to violate a contract between user and FB, isn't that tortious interference? IOW, I think what you just described is not legal. I can't make something that is designed to violate a contract and then step back and pretend I'm not encouraging users to violate their contract with FB.
There are several arguments to be made for a non-evil Facebook. But people are unlikely to engage with a starting comparison like that. Perhaps that's why you haven't seen any opposing arguments.
I have recently started supporting a ban on targeted advertising. Dividing people into cultural bubbles and being able to spend unlimited dollars on it is dangerous if we are to maintain cultural unity and be able to agree on facts.
Also, when your advertising is not public and only visible to subsections of people, it is more difficult to investigate what is going on. Facebook can do it themselves, but do we want to rely on a single company to do it? But when the advertising is public, it is easier to have a public discussion and have a possible backlash to it.
Since that'll never happen without toppling the money printing machines of several of the world's most powerful companies, I'd like to repeat an idea that I once heard -
A publicly available clearinghouse of all targeted ads published via a service so that researchers can find out
- Every ad published
- What targeting parameters were used, other basic data about the ad
- How many impressions the ad received
- How many clickthroughs, etc
It would not include any PII of those who saw the ads, only on the ads themselves. Bonus points if we can also have some kind of GUID "advertiser_id" that would allow researchers to tie the ads back to some kind of probably-anonymized-but-maybe-not entity that published the ad. The API would be defined in the regulation so that vendors can't pull any obfuscatory fuckery.
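To make the idea concrete, here is a sketch of what one record in such a clearinghouse might look like. All field names are my own invention for illustration; this is not any existing API:

```python
from dataclasses import dataclass, field

@dataclass
class AdRecord:
    """One entry in a hypothetical public ad clearinghouse.

    Deliberately contains no PII about viewers: only the ad itself,
    how it was targeted, and aggregate delivery counts.
    """
    ad_id: str
    advertiser_id: str            # opaque GUID, stable per advertiser
    creative_text: str            # the ad content as published
    targeting_params: dict = field(default_factory=dict)
    impressions: int = 0
    clickthroughs: int = 0

# Example record an ad platform might be required to publish.
record = AdRecord(
    ad_id="ad-0001",
    advertiser_id="guid-42",
    creative_text="Vote for Widgets!",
    targeting_params={"age_min": 35, "region": "Midlands", "interests": ["fishing"]},
    impressions=120_000,
    clickthroughs=950,
)
```

The stable `advertiser_id` is what would let researchers connect a whole campaign's worth of ads back to one (possibly pseudonymous) entity without exposing any viewer data.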
None of that will work. Sure, have some basic regulatory model for your political campaigns to avoid undue intervention by foreign actors and similar shady stuff, but that is a small deterrent. You cannot avoid targeted advertising; hell, all advertising is in one way or another targeted. Even if you get rid of advertising entirely, you will have to deal with things like famous academics' op-eds, celebrities recommending who to vote for, media bias, etc. So you can control the blatant intervention, but in the end you have to trust that all voting citizens are discerning adults and savvy consumers of information in a complex world, even though you know that is not true.
> You cannot avoid targeted advertising, hell, all advertising is in one way or another targeted
But there's a drastic difference in effectiveness. No one would support private ownership of nuclear weapons just because everyone has fists which are weapons also.
Hyper-targeted advertising has gone off the rails recently, and when you can simply buy your way into any specific segment of the population without having to put it under public scrutiny, you are asking for trouble.
What is the difference between targeted advertisement and targeted content? I’d argue targeted content is more dangerous because the intent of the content is not as clear as for an ad.
Btw, the ad library on FB does let you put those ads under scrutiny. In fact, (some) journalists love to browse it to find a shocking ad and write a news story about it.
The difference is that you can throw unlimited money at it to guarantee to get eye pairs. The one with the most money will get their message across best. Advertising is also much easier to regulate; we already have loads of regulations and limitations on advertising, so there would be no problem adding one more.
Unless FB is in on the scam, you cannot buy eye pairs off the platform.
Please reread what I actually said. I didn't say to stop targeted advertising, but that who is targeting, how they're targeting, and what the ultimate reach of a given ad is should be publicly available. Targeted advertising is too powerful a force to remain completely in the dark as it has so far.
Agreed, to an extent. I simply feel that any ideological / political advertiser needs to be in some sort of public ledger. The real blocker is transparency.
Since I turned off targeted advertising, youtube delights in showing me very graphic videos of earwax extraction. It feels like deliberate punishment and a not-so-gentle nudge to opt for more personalised ads.
Advertising, targeted or not, does not change the general public's ability to agree on facts or not, if indeed that ability exists at all.
If you're going to advocate for restricting free speech (by the state, no less) there had better be a great reason for it, not false claims like this one. It's a critical human right, not to be regulated lightly.
A vague desire for "cultural unity" is insufficient alone to start eroding human rights.
This is a great idea. Transparency is the real blocker here. On top of that, maybe ideological / political advertisers need to register their content and spending in a public ledger.
On the other hand, I am a programmer and receive a lot of ads for basic programming courses or for things that are only slightly tech-related (build your own startup, improve your UI/UX skills, etc.), which target me for what they think I do, not for what I am really interested in.
I think good advertising shows people something they don't already know a lot about, to get them interested; knitters probably already have their own channels for researching the tools they use.
They do not even take up the main problem between academia and FB.
Science goes where the data is, and the data is at G, F, and A. That means any science regarding people, e.g. the social sciences, psychology, behavioural studies, is effectively privatized.
Meaning: the paper of a psychology grad student who ran an experiment on 200 people on campus is less valuable than the internal report of some FB data scientist who wrote a query and a filter and sent the very same question into the production database.
So if a whole branch of the sciences is effectively privatized, why do we even pretend the outside branches are anything more than a publicly subsidized hobby?
Why is there no real discussion about the moral problem of privatized science selling the knowledge and advantage gained as a manipulation lever to politicians?
The third part of Orwell’s original formulation is “ignorance is strength” which is not a pair of opposites either. “Weakness is strength” or “ignorance is wisdom” would be more accurate pairings.
NYU gives a clear statement on why they want this data (to track ads) and what data they collect; this is informed consent.
CA on the other hand obfuscated the purpose and what they collect as much as possible, pretending it was a personality/IQ quiz (something fun and benign) when in fact they harvested the data in order to create models for targeted advertising which they then used in political campaigns. This isn't informed consent.
That's a very different starting point already.
Your argument seems to be "well, if that doctor with the curious name of 'NYU', working for a reputable facility, asks you for informed consent to perform surgery to remove a cancerous mole, this doctor can in theory just be lying and cut off your leg!!!1!". CA in this context would be the back-alley quack who pretends to provide you with a free non-invasive exam for skin cancer, but then steals your kidneys and sells them on the black market.
NYU also tries to be as transparent as possible beyond the initial value proposition, including open-sourcing the collection software (the browser extension), again the opposite of what CA did.
Is there still a possibility of abuse by NYU? Sure, and that's why it is entirely fine to keep an eye on them, and keep holding them accountable, but for things they actually do or don't do.
However, I think trying to hold NYU to an unachievable standard ("we need to audit everything they do independently") while at the same time not even remotely applying that same standard to FB, their FAANG buddies, and everybody else really, that's just... wat
Tell me again how there may be a vague security and privacy concern with NYU's voluntary ad data collection from willing participants, when Target starts marketing pregnancy products to your daughter (because they knew she was pregnant before she did), Facebook detects your face in other people's photo shares and automatically adds public tags with your name, Google tells you "you have visited this place 2 times in the last 10 years", or famous Twitter users (or rather the new owners of their accounts) push bitcoin scams because Twitter had its systems compromised.
The users also voluntarily share all this data with Facebook. Why do we have to protect the users from NYU academics, but not from Facebook's business model?
No, Facebook shared users' profile data (which is now banned by GDPR) in exchange for letting users take the quiz. The quiz wasn't the data CA abused; it was just bait.
This NYU project only collects data that the users want to share with NYU.
I'm not expecting this to happen for many financial and sociopolitical reasons, but I wish NYU would sue Facebook before November 30 seeking a declaratory judgment that their extension complies with all applicable laws and also with FB's terms of service (leaving aside the obvious catch-all of "we can cut you off for no reason in our sole discretion").
Too many threats like this from FB and other powerful entities yield unwarranted capitulation. If those powerful entities had to fear a real risk of losing a lawsuit on the topic of their baseless threat, they'd at least think before threatening and would sometimes refrain. Alas.
> This may be par for the course with Facebook, but it's not something we as a society can afford to tolerate any longer.
Those are strong words, but I'm afraid the "we as a society" are not as much of a power as one would hope. Facebook as a company might turn out to be more powerful. Facebook can bring lawsuits, PR doublespeak and over time wear down adversaries threatening their business.
I think the underlying problems need to be fixed before there can be hope of restricting what businesses like Facebook can do.
“We as a society” write the laws. They can be changed.
The process, IMO, has already started, with the USA and the EU launching investigations against just about all the big tech companies. (I think Microsoft isn't being investigated yet, but in the context of privacy they aren't a truly big player. Any new laws will apply to them regardless.)
Frankly, I really don't care. I keep my account and treat it like what it is, utter garbage. Though, it keeps my contacts in cases where it's convenient to connect. If not FB, it would be something else.
The problem is those people who derive their world-view from social media platforms and buy in to all the lies and manipulation from bullies. There's pressure for accountability, and that should be a good thing, although we need to keep watch. If they oust the bullies and fake lies, good riddance. I don't need someone telling me their "Final Solution", especially without anything backing their words.
I wonder if NYU business school prof Scott Galloway @profgalloway is going to weigh in on this issue. NYU is by no means powerless.
We can guess that FB employs a full-time proactive crisis management public relations team. Why? To help guess what they can get away with. But they may have underestimated NYU as an opponent.
Is there any difference between targeted private ads, a secret society, and a religion? They're quite similar: a subgroup of the crowd, beliefs that may not be scientific, xenophobia, delivering some "special idea" to their members, etc.
If nothing illegal is going on, why should we criticize this? Just because they are not as mature or long-lived as religions?
It’s important to point out that Facebook has an Ads library that allows you to research currently running ads for a given Facebook page and for politics, social issue, or election ads there is an API.
While these capabilities do not map 1:1 to how this plugin works it seems like a bit of a middle ground for the researchers to be able to research ads without risking privacy of people or violating terms of service.
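For concreteness, the Ad Library's programmatic interface is the `ads_archive` edge of Facebook's Graph API. The sketch below only builds the query URL rather than issuing the request, since a valid access token from an identity-verified account is required; the endpoint and field names match Facebook's public documentation at the time of writing, but versions and fields can change, and `ACCESS_TOKEN` is a placeholder.

```python
# Hedged sketch: constructing a political/issue ad search against the
# Facebook Ad Library API (the "ads_archive" Graph API edge).
from urllib.parse import urlencode

GRAPH_BASE = "https://graph.facebook.com/v18.0/ads_archive"

def build_ad_library_query(search_terms: str, countries: list, access_token: str) -> str:
    """Return the request URL for a political/issue ad search."""
    params = {
        "search_terms": search_terms,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": ",".join(countries),
        # Creative text, sponsoring page, and spend ranges are exposed;
        # note that per-ad *targeting criteria* are not among the fields.
        "fields": "page_name,ad_creative_bodies,ad_delivery_start_time,spend",
        "access_token": access_token,
    }
    return f"{GRAPH_BASE}?{urlencode(params)}"

url = build_ad_library_query("election", ["US"], "ACCESS_TOKEN")
# To actually fetch (needs a real token):
# import urllib.request; data = urllib.request.urlopen(url).read()
```

The comment in the field list is the crux of the thread below: the API returns creatives and spend, but not how an ad was targeted, which is precisely what the NYU plug-in observes from the user's side.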
> Facebook has an Ads library that allows you to research currently running ads for a given Facebook page and for politics
The library omits targeting information, which is the specific subject of this research. Also, Facebook has a culture and history of lying. There is public interest in auditing its disclosures.
Moreover there's value in seeing how that targeting applies to real people in practice.
It's great to be able to see what ad campaigns are running, but if you're a real user, whose feed is full of ads from overtly partisan news sites, clothing companies selling aggressive political gear, AND all the campaigns and super PACs, what does your experience look like as a whole?
Our information environment is festering. I can't speak to any given study or technique, but we need to understand this stuff.
So you don't want independent researchers to verify how you are targeting ads, when you have proved time and time again that Facebook is not trustworthy with our data?
And the link you shared doesn't have any information about how ads are targeted. Stop using privacy as a PR stunt. I trust those researchers with my data more than Facebook.
Maybe, but I guess the key question is whether Facebook has standing and a duty to make that decision on behalf of their users' privacy, or if it should be left up to the user. Whether or not we trust Facebook is relevant to that argument. If we don't buy that it is necessary for Facebook to intervene and defend user privacy, the only conclusion I can come to is that they are making this decision because of a threat to their business. And I can't see much of a difference between "we should let users share their information with Facebook for a better experience on the platform" and "we should let users share their information with NYU to power publicly available research on ad targeting".
The key question to me is why users privacy needs to be protected from opting in to the NYU study, but not from opting in to using the Facebook platform.
Facebook (and other too-big-to-fail companies) should be split up. The Facebook ad company should be independent from Facebook and should have access to the platform on the same terms as any other ad company, with full respect for privacy. Currently Facebook has a conflict of interest between the social media platform and the ad business. We have never seen this before, and it needs to be urgently regulated. The same goes for Google: their ad business has to be decoupled from search, and the ad company shouldn't have any access to personal data. Google Search and Facebook should be forbidden by law from storing personal data in a manner that serves the interests of advertising or other types of surveillance.