Facebook under fire over secret teen research (bbc.co.uk)
274 points by ColinWright on Sept 15, 2021 | 217 comments


A parallel between Facebook and Big Tobacco had never occurred to me until I read this:

> In a move straight out of big tobacco's playbook, Facebook downplayed the negative effects of its product and hid this research from the public and even from members of Congress who specifically asked for it

What would our world be like if it were illegal to have a social media account until you were 21 years old? Interesting to think about.


I wonder, if this happened, whether teenagers would be upset or relieved. There's a This American Life episode[1] in which several teenage girls explain the complex etiquette of Instagram. It sounds stressful, it sounds exhausting, it sounds like a full-time job. Right now, anyone who opts out also excludes themselves from a major aspect of school social life.

1. https://www.thisamericanlife.org/573/transcript


My nieces will instantly delete something they post if it doesn't get enough likes right away which seems so anxiety-inducing. :-/


I wonder what would happen if deletion wasn't an option. Like how it didn't use to be possible on Twitter. Would that make it worse? Or better, because then everybody would be in the same boat?


Social media on the blockchain, hmmm...


ha! Just like old hackernews comments :)


They would be upset. 100%. Many of them just don't care about the inner workings of the app.


agreed, it just seems easier to make being on social media mandatory; then the kids would hate it


Maybe that is the downside of the general policy of abstraction that devs follow: software should be made to look as simple as possible, and all the apparently "unnecessary intricacies" (which can have disastrous effects) should be dumped under the hood.


The point of abstraction isn’t to hide information from you. The point of (properly done) abstraction is to allow you to work without having to worry about the details.

Abstraction has nothing to do with this. The problem is the side effects of social media. Even if Facebook open-sourced its algorithms, I doubt anyone could have conclusively predicted this. We only know all this because of empirical sampling of the user population.


> What would our world be like if it were illegal to have a social media account until you were 21 years old?

Is that the question you take away? Mine is...

What would our world be like if it were illegal to lie to or otherwise obstruct Congress?


> if it were illegal to lie to or otherwise obstruct Congress?

Not only is it illegal to lie to Congress but it should be illegal for corporations to lie to anyone in public statements.


>Not only is it illegal to lie to Congress

Only for the plebs. As James Clapper demonstrated, there is a separate set of laws for the elites.


> As James Clapper demonstrated, there is a separate set of laws for the elites.

Surely, the banner example here is Brett Kavanaugh, whose lies in his Circuit Court confirmation hearing were punished with a seat on the Supreme Court.


Perhaps; I'm Canadian and have never heard of Kavanaugh.


Kavanaugh's selection for the Supreme Court was covered heavily even in Europe.


Okay. I still haven't heard of him.


about what, specifically?


About his role in the Bush Administration’s torture policy.


That would be hard to enforce given that politicians lie pretty often....


Haha, funny. But not true. At least in the times before you-know-who, politicians went through the wildest contortions to avoid lying.

Yes, there's Clinton and a few other fun examples. But they are sparse.

What happens is that people use the term "lie" for all sorts of disappointing behaviour that, however, isn't a lie. Example: Obama said he'd close Guantanamo within two years or something like that. Is that a lie? No, probably not: "I will.." always expresses an intention to attempt something. It's perfectly believable that he had both the intention to do it, as well as confidence that it can be done.

We intuitively understand all that in everyday life. When someone shows up 15 minutes late, they didn't "lie" when you agreed on the time.


I've been saying to my friends for some time that social media is the new smoking.


True. Except that smoking causes cancer in the individual and in those unlucky enough to have to breathe the second-hand smoke. Social media causes cancerous decay in whole societies, and is a much bigger threat to all of us as a result.

We could choose not to stand close to a smoker, not to share a ride in their car, etc., but it is very difficult to avoid having to interact with someone whose world-view has been bubbled up by their social media activity.


Actually, most regulation to curb smoking is based on exactly the kind of societal argument you bring up for social media. The freedom of the individual to harm themselves had stood pretty strong, but the societal cost (mostly health care) eventually outweighed it.


I wonder what happens to Facebook, Reddit, and other social networks if they are eventually forced to pay reparations for the societal damage in much the way the tobacco companies had to pay billions to re-educate Americans about the real dangers of smoking. We see that Big Tobacco has pivoted to owning other parts of the consumer economy and they are still quite strong and healthy companies though their tobacco operations have shifted overseas where there is little to no education or regulation.

Should Facebook have to sponsor educational initiatives designed to pivot users away from social networks and/or pay restitution for teen suicides, family break-ups, politically motivated crimes committed by their users? That would be great but we tend not to hold anyone accountable for anything nowadays. The worst penalty our society applies tends to be banishment from social networks or public shaming of participants.

It reminds me a bit of the oil and gas business. In college we learned about the creation of the Texas Railroad Commission and how it was given control over oil and gas operations in the state at a time when over-production was driving oil prices down and boomtowns were lawless. The Railroad Commission created the guidelines that established minimum well spacings in producing fields, production rates, etc and they were given control over how to handle spills and most anything related to oil and gas operations.

The big brag in the course was that this was such a comprehensive set of rules that it became the model for the world in how frontier operations were managed. Basically the oil and gas industry wrote the regulations that they would need to follow and used those as a blueprint for operations globally in countries where there were no established laws. As time went on, industry tailored the rules to the sophistication level of the judicial system of the operating country, allowing them to produce other nations' assets at the most favorable rates, in the most industry-favorable manner, using rules they modified so as to maximize their own profits and minimize their own liability.

I'm not sure how I got off on that but it is late here and I go on tangents sometimes but it seemed relevant in the sense that Facebook and others have O&G industry levels of influence, they have no incentive to hamstring their own operations or to do anything that would diminish or negatively impact the future value of the massive amounts of data they have collected on everyone so it seems like they will get to write the rules about how their own operations are regulated since, just as the O&G industry did back in the day, they can portray themselves as the experts who are the only ones who really know how to fix the problems they themselves created.

Anyway. Sorry for the long-winded reply. Not sure if it adds any value to the conversation.


I'm increasingly worried about what kind of nonsense my older relatives are picking up on the platform.


You'd see a picture of a blackened Facebook lung every time you launched the website.


Along with a Surgeon General’s health warning banner on all UX


Government Warning: (1) According to the Surgeon General, people should not pursue "likes" as it may lead to mental health defects. (2) Consumption of social media impairs your ability to drive a car or operate machinery and may cause social problems.


(3) Just put your dang phone down!!!


> A parallel between Facebook and Big Tobacco

https://news.ycombinator.com/item?id=24579498


I think the alternative would not be that teenagers would not have social media accounts, but that they would all be on TikTok (which is probably already the case).


TikTok is a much more positive platform than anything Facebook has come up with TBH. Almost all of the creators I follow are just... normal people, making normal jokes and doing normal things. They aren't heavily airbrushed or photo/videoshopped or whatever. The content itself is generally much more positive and much more focused around acceptance and (body-)positivity. As far as social media platforms go, TikTok is much "healthier" than Instagram.


If you are too ugly, the CCP will hide you from other users on TikTok! [0]

[0] https://www.pcmag.com/news/tiktok-censored-ugly-poor-or-disa...


Outdated, they stopped this long ago.


Maybe that's because the TikTok algorithm is feeding you what you value. People with different values, particularly people with certain insecurities who are still in the process of changing (mainly adolescents), might instead be drawn to people who have what they lack, just like on any other social media platform.

TikTok is incredible, in my opinion, it really is the best content-delivery platform... but that is actually quite dangerous because what people want to see is not necessarily what they should see. It's important for people to have broad perspectives and I don't think TikTok addresses that so appropriately.


TikTok feeding us what we apparently value is its biggest danger.

When people, out of mere curiosity, watch something biased/misrepresented/discriminatory/extreme and interact with that content, irrespective of whether their opinions at that moment are the same as those of the content they watched, the algorithm will aggressively push similar content onto them, and there will definitely be a few who fall into "the doom".
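
A minimal sketch of that feedback loop, assuming a toy recommender whose only rule is "boost whatever topic the user just engaged with" (the topics, scores, and update rule here are all invented for illustration, not anything TikTok actually does):

    import random

    # Toy content pool: each item has a topic and an "extremeness" score.
    ITEMS = [{"topic": t, "extremeness": e}
             for t in ("cooking", "fitness", "politics")
             for e in (0.1, 0.5, 0.9)]

    def recommend(weights):
        """Pick an item with probability proportional to the user's topic weight."""
        scored = [(weights[item["topic"]] * (0.5 + item["extremeness"]), item)
                  for item in ITEMS]
        r = random.uniform(0, sum(s for s, _ in scored))
        for s, item in scored:
            r -= s
            if r <= 0:
                return item
        return ITEMS[-1]

    # A neutral user who occasionally clicks on things out of curiosity.
    weights = {"cooking": 1.0, "fitness": 1.0, "politics": 1.0}
    for _ in range(50):
        item = recommend(weights)
        engaged = item["extremeness"] > 0.7 or random.random() < 0.2  # curiosity click
        if engaged:
            weights[item["topic"]] += 1.0  # the loop: one interaction boosts similar content

    print(weights)  # a handful of early clicks ends up dominating the whole feed

Nothing about the user's actual opinions enters the loop; the weights only remember what got clicked, which is the point above.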


Have you actually used the app? This is absolutely not how it works. You aren't shoved down rabbit holes like with YouTube.


My SO has TikTok.

It seems like mostly staged videos and folks with barely informed political rants to me.


TikTok shows you precisely what you want to see, whether it's simple and funny staged videos or something more intellectually engaging.


Does it show you precisely what you want to see? Or just what you're most likely to engage with? I feel like there's an important distinction there.


The former, which is why people use it. It shows you what you want to see and will genuinely enjoy seeing.

That is why TikTok is more popular than platforms that only focus on engagement, because those platforms will show content that you will engage with even if it's in frustration.
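
One way to make that distinction concrete (a toy sketch; the items and both "predicted" scores are invented, this is not any platform's actual model): score the same items separately for predicted engagement and predicted enjoyment, and see which item each objective would surface first.

    # Two separate hypothetical signals per item: how likely the user is to
    # interact with it (engagement) and how glad they'd be to have seen it
    # (satisfaction). All numbers are made up for illustration.
    items = [
        {"name": "friend's vacation photos", "engagement": 0.30, "satisfaction": 0.80},
        {"name": "outrage-bait thread",      "engagement": 0.90, "satisfaction": 0.20},
        {"name": "hobby tutorial",           "engagement": 0.50, "satisfaction": 0.90},
    ]

    by_engagement = max(items, key=lambda i: i["engagement"])
    by_satisfaction = max(items, key=lambda i: i["satisfaction"])

    print("Engagement-optimised feed leads with:", by_engagement["name"])
    print("Satisfaction-optimised feed leads with:", by_satisfaction["name"])
    # The two objectives pick different items: "what you'll interact with"
    # is not the same thing as "what you want to see".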


So if I want to see you naked, TikTok will show me it?

It's disappointing to see folks on Hacker News credulously regurgitate TikTok marketing material as fact. No service could show you precisely what you want to see. TikTok does its best, but if, e.g., I want to see no staged videos, TikTok can't enforce this, because TikTok doesn't know.


TikTok is not an adversarial platform, and there is probably an element of East vs West styles of thinking here. TikTok's engagement is driven up by people taking others' videos and building on top of them. Facebook's engagement is driven up by people taking another's position and arguing against it. This leads to very different styles of engagement. You can find toxic content on TikTok as well and build on top of it, but it is harder and you have to seek it out. On Facebook, you could make a slightly partisan post and someone will take an opposite position, and then you harden your position to defend against that, and others join in until it turns into a pissing match.


And better yet, illegal in public places.


Spot on - we shouldn’t allow social media accounts for anyone under 21 years old.


I would expect to see it struck down by a Supreme Court ruling. You cannot just declare speech a new drug to get around the First Amendment. An 18+ law may pass constitutional muster, however.


It's not about speech per se but more similar to gambling.

Gambling is not banned for minors because they can't watch fruit icons and blinking lights.


Does anyone remember Cosmopolitan and other magazines like it? Teenage body issues and associated anxiety are not new. What has happened is we gave everyone internet access so now they can look at pictures of unrealistic bodies all day everywhere they go. The result is body dysmorphia on steroids.


I don't think this is quite it, I think you're underrating the social aspect of it, and the algorithm aspect of it.

When I was in high school (livejournal and myspace era), I had a friend group where most of the hanging out occurred IRL. I hung out with the dorks, so there was not a lot of focus on being hot and stuff. And IRL, there aren't algorithms that constantly shove pretty people in your face. Of course you see them around, but you also see tons of normal looking or even ugly people. None of my friends really read Cosmo and stuff, because we were focused on our interests.

But in today's era, so much of the social scene is online. And the systems are set up in a way to basically constantly show you pretty people. It is hard to avoid the aspects of it you don't like without also missing out on things you do like.


> It is hard to avoid the aspects of it you don't like without also missing out on things you do like.

Your comment resonates with me. Facebook completely sucks the air out of the room by monopolizing online social networking. The newsfeed makes the world seem so small and vapid. No room for weird.


Yea, I think the other aspect is this isn't just the internet. When I was younger, going on the internet meant going to the upstairs bedroom, turning on the computer, waiting for it to boot up, then waiting for AOL to connect, then chatting with people. It would take like 5 minutes of effort. Now, most of the time when I go out for fun, I just look at how many people are hunched over, staring and scrolling at the stupid rectangle in their hand all day, every day. It's usually at least 80% of people. It's the social, the algorithm, and the fact that the addiction is sitting in people's hands at all moments of the day and night.


At the risk of sounding like a grumpy old man, I remember 15-20 years ago talking to people next to me in public transports and meeting a lot of people who eventually became friends.

Now when I'm in a subway, everyone is hunched over their mobile phone; no one is available to engage in conversation, to meet people. They're instead focusing on that tiny bright rectangle.

And I can see that I'm also guilty of the same. I'm always either on my Kindle or my phone nowadays.


Same in a bar. I go in to shoot the shit. But often times now, I will see people spend their entire time scrolling while sitting at the bar. It's sad. I switched back to a flip phone almost 1.5 years ago and while I do miss certain things, the overall experience of just not having the option to scroll is so much better for me in terms of being present.


I got the iPod on release day and got so many confused glances/stares wearing headphones around the college campus and buses. That was a short phase for me but now I’m surrounded by people who are present but without presence. The world will never be the same.


Anecdata but my brother lives in a major Scottish city and cycles a lot.

He says that the biggest threat to his safety isn't traffic, it's people walking across the road looking at their phones. Often groups of them at the same time.

He reckons that when he's out cycling (and by this, I mean he cycles about 4 or 5 miles at most) he encounters at least 5 or 6 incidents on average.

I see it in the mornings when I walk my son to school. If I had to guess, I'd say more than 40% of the kids are looking at a phone.

I've also heard of some European cities putting traffic signals on the pavements so that phone users don't have to look up: I 100% disagree with this - They get killed? It's their fault!


I truly don't understand this perspective. It used to be that mass media was a tightly-controlled, walled-off garden. The set number of popular and accessible magazines, TV stations and films suffered from far more centralized control, had far less inclusion in its leadership and staff, and resulted in a hyper-narrow range of acceptable beauty and image standards. It was borderline dystopian.

Young people, by and large, could only view media that was part of this monoculture. They were already seeing it all everywhere they went. Now, with just a little looking, people can find an entire universe of body-positive media and communities that simply did not exist in the 80's and 90's. So while there may be more ways to transmit body negativity thanks to the internet, I'd argue there is not more body negativity because we had already maxed it out. It's just as bad as it had been for decades. The difference now is an alternative exists!


I'd agree that 'body positivity' is easier to find in this climate, but is it what's often being served on these platforms algorithmically? Is it the default prior to your personal 'profile' being recorded? Does it pump the 'engagement cycle' at a better rate than these 'body negative' cultures? If not, then we're not better off in this era because, as we've seen, the engagement cycle is a feedback loop. It amplifies itself. And the qualities of the people it attracts.

I personally do not take issue with these cultures existing; however i do take issue with how they are propagated and promoted. These systems don't afford users the full breadth of 'response options' as we would have in real life. You can't even express simple dislike or disagreement, unlike 'love' and 'like', etc. Not to mention the black-box engines of recommendation and discovery of new content in the first place.

And though I don't agree that there's less body negativity in the aggregate, even if there were, that wouldn't mean it has less impact. A small culture highly addicted to body negativity could have larger social impacts than a widely-held-but-less-addicted one. We see it across many other domains already in our 'American' culture.


Maybe this body negativity came from inaccessible people, who were famous/connected enough to be on these pages.

Now, everyone can do it, and many are around you, accessible. They are your friend's friends or your friends.

Moreover, mobile applications and web applications allow you to touch up your appearance. You look and say "I'd be so much more beautiful/handsome this way, but I'm not, and never will be".

Self-criticism inflicts the greatest damage since there's no restraint, no barrier to dampen its effects. It's very destructive.


I think the problem is people think there isn't an attractive mainstream that people look to regardless of microculture (see Caitlyn Jenner, for example, as someone who finds themselves an outsider within a small community). But even if that were not the case, being anti-X is also a problem in itself, as you are defining yourself against X.

Wanting to be "popular" has its problems, but wanting to be "anti-popular" is also a problem because in the end it's not yourself. Being the not-popular also has its own mainstream, so it will have the same popular and anti-popular dynamic. So the only ones benefiting are the company and aware "influencers".


This is probably 90% rose-colored glasses, but to play the devil's advocate, the monoculture did give people more commonality, and I wonder if people were more social, and maybe even less divided, as a result. There is something to be said for people having something in common to bond over and not being at each other's throats all the time, often over mundane things.


It's true, but how many girls bought/could afford Cosmo, Elle, etc. subs? I suspect it wasn't as many as have Insta/FB accounts, which are world-wide and reach into every class and society.

And given we now know it's severely detrimental to their well-being, I think it deserves serious consideration and requires change. I don't mean to censor people, but we must stop fomenting the image problem on purpose via programming and advertising and social engagement.

Facebook knows it's a problem and they should tackle it (as well as any such legacy magazines and the fashion industry in general).


> but how many girls bought/could afford Cosmo, Elle, etc. subs?

This is not a particularly relevant question. The number of girls with their own subscription to Cosmopolitan vastly undercounts the number of girls reading Cosmopolitan, which was every girl who was interested. These magazines are stocked in libraries, shared friend to friend, and left in public places. The only thing a subscription gets you is that you can read the new issue a few days earlier than everybody else.


Where I grew up, girls read Cosmo and talked about it even as teenagers. That's my point: it wasn't as prevalent or accessible as it is today. We made it more accessible, and hence the problem was exacerbated.


>"Facebook knows it's a problem and they should tackle it"

Preface: Not a lover of Facebook and "social media" in general and do not use it as such.

However, having FB "tackle / promote" a way of life is, I think, totally wrong. It is up to parents and schools to teach kids that there are more interesting things in the world than this mental masturbation on a computer / phone screen, watching somebody else's life.


> so now they can look at pictures of unrealistic bodies all day everywhere they go

In 2018 "obesity prevalence was [...] 21.2% among 12- to 19-year-olds." [0] according to the CDC. That's one out of 5 being obese, not overweight. And it has more than tripled since the 70's [1].

Maybe our definition of "realistic" changed too.

[0] https://www.cdc.gov/obesity/data/childhood.html

[1] https://www.cdc.gov/nchs/data/hestat/obesity_child_15_16/obe...


Doesn't this show that body shaming might actually work, because the growth means the cause is likely a lifestyle choice rather than something innate?


Young men are vulnerable to this, too. In fact, it's not unreasonable to think that it doesn't do any good for older men and women, either.


Setting unrealistic expectations and then failing to meet them is bad-feels for all brains.


If I recall the FB research from that other thread correctly, it's about half as bad for young men. Instagram is bad for everyone.


As another example "fake news" isn't new either, e.g. https://en.wikipedia.org/wiki/Yellow_journalism

However, I'd suggest that the new challenge isn't simply scale and omnipresence but algorithmization - modern platforms can tune and target to the level of the individual. In the past, (dis)information had to be broadcast in a far more one-size-fits-most style, perhaps segmented by broad geographic or demographic groups at best.


> now they can look at pictures of unrealistic bodies all day everywhere they go.

It's worse than that. Now they will be subjected to those pictures whenever they socialize online, like if 30 years ago every telephone was next to a stack of them.

Thankfully, we don't make people sit and stare at Cosmo every time they socialize. We only make people stand and stare at it whenever they buy groceries.


I think the parasocial element of engagement on Twitter and Instagram has a big effect as well. A cover model in a magazine has an air of inaccessibility. She’s clearly an aspirational figure and held above “normal” people.

The way the influencer model works, though, they’re all focused on seeming accessible and relatable. It’s not just that this is a supermodel who is above us all, she’s trying to look like your hot friend. All the products she shills are stuff that’s supposed to make you look like her.


> What has happened is we gave everyone internet access so now they can look at pictures of unrealistic bodies all day everywhere they go.

Not just that. The entire model is to chase you down, pester you with it, and shove products into your face. Which products? Whoever bids the most!


I'm concerned that this whole narrative implies that research is the problem. It feels like a continuation of when people were shocked, shocked when it was revealed that Facebook was conducting experiments on people. Well, the issue wasn't the A/B testing exactly, but rather the publishing of the experiments in scientific journals. As you know, public outrage put a stop to that. (The scientific publishing, not the A/B tests)

Now, there is outrage over "secret" internal research on the effect of Facebook on mental health. Well, I guess that means Zuck will just gut the entire research domain investigating user wellbeing. Too bad.


The outrage, I hope, is that Facebook conducted research that made it aware of something negative it was doing that made it money but kept the information tight and continued acting in a harmful way because... it makes them money.

Ignorance is one thing and can be forgiven to some degree or treated as negligence. Wanton money chasing at all costs with no regard to damages you knowingly cause because, again, money, isn't forgivable.

A side effect of this is that it can create disincentives for publishing such research, but let's be honest, FB as an organization would never knowingly allow information like this to be published, even if the general public ignores most of everything. That's just stupid from a business perspective. It would be like cigarette makers creating their own surgeon general warning without a mandate to do so. They have every reason not to.

Research will continue because businesses understand the value of science and research even if they don't respect it. That research will only ever be used to their advantage though.


> Research will continue because businesses understand the value of science and research even if they don't respect it. That research will only ever be used to their advantage though.

Except it can't if it requires both internal data access and approval from FB management.


What would stop the good Facebook employees from just memorializing evidence of the harm Facebook does to girls/society/the elderly/the conspiracy minded/the Rohingya/etc by emailing supervisors about it and then leaking the response or lack of response? Is Facebook just going to fire the good people that still work there?


Errr... Yes?


No, publishing research was not the issue, the issue was experimenting on people without their consent.


That was part of the journalistic fallacy. A/B testing is experimenting with people without their consent.

(And everyone agreed to the terms of service)


That's true. However, most A/B testing doesn't result in an academic paper, and there are strict ethical rules about academic work based on human subjects. Second, obviously not all A/B tests are the same. If you're testing two different fonts and comparing bounce rates, there are far fewer ethical concerns than if you're intentionally spreading an "emotional contagion" (that's the term Facebook used). And note that if you wanted to publish an academic study based on the font experiment, you would still need informed consent, not just "these people agreed to let us do whatever we want when they signed up 10 years ago, so it's fine".
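
For anyone who hasn't seen one up close, here's a minimal sketch of the mechanics of the font experiment described above: deterministic bucketing by user id plus a bounce-rate comparison. All names and numbers are hypothetical.

    import hashlib

    def assign_variant(user_id: str) -> str:
        """Deterministically bucket a user into font variant A or B by hashing their id."""
        digest = hashlib.sha256(user_id.encode()).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    # Hypothetical observed outcomes per variant.
    results = {"A": {"visits": 10_000, "bounces": 4_100},
               "B": {"visits": 10_000, "bounces": 3_800}}

    for variant, r in results.items():
        print(f"Variant {variant}: bounce rate {r['bounces'] / r['visits']:.1%}")

    # A real analysis would also run a significance test (e.g. a two-proportion
    # z-test) before concluding that variant B's font genuinely reduces bounces.

As noted above, none of the users in either bucket were asked; the ethical weight depends far less on this bucketing mechanism than on what the two variants actually do to the people in them.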


There is nuance to this. The point of IRB is ethics.

1. If the experiment is minimal risk and expected to benefit participants (as the notorious fb experiment was—it was emotional contagion of positive emotions), then informed consent may be waived.

2. When there is a greater risk from the act of gathering informed consent (e.g., because you need to store identity for that purpose), then informed consent may be waived, if minimal risk.


> as the notorious fb experiment was—it was emotional contagion of positive emotions

No, they tried both positive and negative.

> When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred.

> If affective states are contagious via verbal expressions on Facebook (our operationalization of emotional contagion), people in the positivity-reduced condition should be less positive compared with their control, and people in the negativity-reduced condition should be less negative.

- https://www.pnas.org/content/111/24/8788.full

The experiment was expected to make half of the subjects less positive.

Edit: This discussion of whether consent could be waived is sort of irrelevant when Facebook didn't even have an IRB at the time, and Cornell's IRB inexplicably declined to review the study even though two of the authors were part of Cornell.


You are correct. Mea culpa.

I still find it challenging that it would be completely acceptable for Facebook to run private A/B tests with the same variables to optimize advertising revenue, but the unethical part is making it public after peer review.

I guess I just don't get or agree with the ethics of that situation.


It absolutely is not acceptable to do this without publishing it, although obviously they do it anyway. Imagine if you worked at Facebook and your boss said "Hey can you crank up the negativity on everyone's feed for a few days and then track their emotions? Pretty sure it's gonna make them all angrier, but I want to be certain." You would refuse, because that's obviously unethical. Maybe doing it for published research makes the engineers more likely to go along with it, I don't know.

I guess the main point is that just because there wasn't much negative PR about this before Facebook published their study, that doesn't mean what they were doing was acceptable, it just means most people didn't know it was happening. And experimenting on people without consent in order to enrich yourself has always been and will always be unethical, no matter how many people make their living that way.

One more thing I'm thinking about: sometimes people compare this kind of A/B test to, for example, a supermarket rearranging items on the shelves to maximize purchases. I think there are important differences, although still some ethical concerns. With a supermarket you don't need to interact with or even know about the existence of the shoppers, you can just arrange the store and then monitor sales. More importantly, (I hope) supermarkets aren't running these tests with the expectation of harming anyone. The equivalent of the Facebook study would be putting the unhealthiest products in convenient places and then monitoring customers to see if they gain weight.


I guess you really don't like the advertising and UX design industries, then! I mean, there is an endless stream of human experimentation...

What should we do about it? I personally think it is better to run the experiments in public so we understand, but I can get why you might disagree. But I would like to know what you think the policy should be.


Yes I strongly dislike the advertising industry. UX designers I don't have a problem with - they often use informed consent ("check out our new design!"), and as far as I know they don't usually run intentionally harmful experiments. I'm not sure how they could, Facebook is in a somewhat unique position there as it's so embedded in some people's lives.

I'm not sure there's really a policy solution, I think any law that made what Facebook does illegal would also make a million other legitimate activities illegal. It's good that Facebook was shamed into apologizing and creating some kind of internal review board, society at large did a great job making that happen, and we should keep that up. Maybe journals can institute better policies about not publishing unethical studies so they don't legitimize and encourage this behavior.

Note that it's not a choice between "Facebook does everything in secret" or "Facebook publishes all its research". Even if there had been no consequences for the emotional contagion study, Facebook would still keep most research internal, just like the tobacco industry buried cancer studies or the oil industry buried climate change research.


UX design uses a large number of a/b tests. By their nature, no a/b test involves consent.


If Facebook doesn't do this research, a third party will, and it will be much worse for them because they won't have prepared.


This is more interesting (Instagram response to WSJ, linked to in the article): https://about.instagram.com/blog/announcements/using-researc...

"We're increasingly focused on addressing negative social comparison and negative body image. One idea we think has promise is finding opportunities to jump in if we see people dwelling on certain types of content."

"From our research, we're starting to understand the types of content some people feel may contribute to negative social comparison, and we're exploring ways to prompt them to look at different topics if they're repeatedly looking at this type of content."

Wow, talk about "Big Brother".

Facebook/Instagram monitors everything the user looks at, how many times and for how long, and now they will "jump in" and alert the user.

"We're watching you and we think you should look at something else."

Meanwhile the whole "business model" of Instagram/Facebook is online advertising.

Advertising, e.g., in print, is what created these "negative body image" problems in the first place!

As they say, in the world of computers, what is old is new again. Originality is rare.

Big Tech relies on the same tired, old consumerism ideas, except it operates over the internet.

The only thing "futuristic" is the surveillance. It's nothing less than incredible what young people today are tolerating. Are they studying the psychological effects of surveillance on young people?


I'd like to think that by "jumping in" they mean that they change what kind of content the algorithm shows their users... however, I'd guess that no such self-regulation could ever suffice (esp. because it would always be less important than ad money in internal metrics).


There's no difference, fundamentally, between displaying a popup to suggest looking at different material, and automatically showing different material. Both are equally creepy - actually, doing it automatically is more creepy. At least a popup lets the user know they're being spied on.


I expect so too, and was going to say the same, but it's not actually that different (to a pop-up, or message, or whatever invasive 'jump-in' alternative) is it? It just means most people won't know about it.


"We're cautiously optimistic that these nudges will help point people towards content that inspires and uplifts them, and to a larger extent, will shift the part of Instagram's culture that focuses on how people look."

The term "nudge" could have multiple meanings. For example, https://en.wikipedia.org/wiki/Nudge_(instant_messaging) or https://en.wikipedia.org/wiki/Nudge_theory

In either case, given the modus operandi of a "tech" company, it's an encroachment on free will. A subversive attempt to influence decision-making designed to fly under the radar. "Tech" companies already do this with their so-called "dark patterns", making choices confusing or removing viable choices altogether. There is nothing new here in this "rebuttal" from Instagram, except perhaps a new "justification" (a parallel construction) they can cite as to why they corralled a user to look at something (favorable to Instagram's 100% ad-dependent business) when the user actually wanted to look at something else. Of course, Instagram could provide users with this "nudge" feature as an option (like HN has noprocrast), but we all know they won't. That is not a choice they want the user to have. The "business" of these companies is to show advertising to users. They are middlemen. Watching and plotting.


> In either case, given the modus operandi of a "tech" company, it's an encroachment on free will.

Given that the nature of every medium ever includes some focus inherent in what creators make and what curators highlight/disseminate, this seems like a bit of an overstatement.

The decisions of a film studio, book author, or magazine editor aren't an encroachment on my free will, I can always tune out and look for something else.

Heck with social media, no matter how it may nudge via the feed. I'm usually free to choose my own walk through individual profiles, groups, and pages on a scale that would have been hard to really understand a half century ago. If there is one thing that is not our problem, it's being nudged/herded into an oversmall number of options.


Disagree with this assessment. Tech companies are able to manipulate the audience for someone else's content "on a scale that would have been hard to really understand a half century ago." Films, books and magazines are not interactive media. Computers force their users to provide input, much more input than selecting a channel on a TV tuner. As developers, it is definitely our problem because we can control that input. We can require input to suit our needs over the user's. We can monitor the user's interaction. We can easily redirect the user to locations he never intentionally specified he wanted to go. The TV tuning dial, the buttons on a TV remote or a TV's onscreen menu were more or less fixed. These could not be continually manipulated to optimise for the advertiser. No redirection. There was no software developer who was paid to manipulate, positioned between a film and a viewer in a theater, or between a reader and a physical book, or between a reader and a printed magazine.

Yes, people have more options thanks to computers, but there's now a middle layer of manipulation between people and the media they are consuming.


I'm not a fan of Facebook, but it's not a bad idea. I think Facebook/Instagram being present is poisoning the well on what might be a good idea.

In a different context, you could use a similar tech to suggest opposing viewpoints. "We noticed you've been watching a lot of flat earth documentaries. Why don't you take a look at the other side of the argument?". Or maybe on Reddit "We noticed a lot of the threads you've been viewing are overly hostile. Why not relax for a little bit over on /r/chillmusicandducks?"

I also wouldn't mind a system like that on days that something terrible has happened. The stuff that happened in Kabul was terrible, but I don't think I would've minded a nudge to go remind myself that there are happy things too.

It's not like they weren't already collecting info on what you look at and for how long anyway. I don't really see a change in privacy here (it was abysmal to begin with), just Facebook leveraging its user data to maybe actually help users for once.

I give it 3 months until they realize it impacts engagement numbers and replace it with a toothless version that doesn't help, but also doesn't impact engagement.
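
Mechanically, such a "nudge" could be as simple as counting which categories someone keeps dwelling on. A rough sketch (the category names and thresholds are invented; I have no idea how Instagram would actually implement it):

    from collections import Counter, deque

    RECENT_WINDOW = 30        # how many recently viewed items to consider
    DWELL_THRESHOLD = 0.6     # fraction of recent views in one category before nudging
    NUDGE_CATEGORIES = {"appearance_comparison", "doomscroll_news"}  # hypothetical tags

    recent_views = deque(maxlen=RECENT_WINDOW)

    def record_view(category):
        """Track a viewed item's category; return a nudge message if the user is dwelling."""
        recent_views.append(category)
        share = Counter(recent_views)[category] / len(recent_views)
        if category in NUDGE_CATEGORIES and share >= DWELL_THRESHOLD:
            return f"You've been looking at a lot of {category} posts. Want to see something else?"
        return None

    # Simulated session: the user keeps returning to appearance-comparison content.
    for cat in ["hobby"] + ["appearance_comparison"] * 25:
        nudge = record_view(cat)
        if nudge:
            print(nudge)
            break

Whether it ships with a threshold that actually fires, or gets tuned into the toothless version I'm predicting, is exactly that engagement trade-off.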


Yeah. As if it's not obvious: "engagement" and "influencers" aren't the real problem, no; instead, it's "huh, some people actually fall for the ads (body image)".

Stop promoting trash and see the problem go away, but so will their main cash cow. That might color their decision-making a bit.


Such a shame Facebook is considered a prestigious place to work. Sure, you can become a millionaire, but at what cost? Is this what you imagined you would do as a kid interested in math/computers/science?


It's a useful tool for hiring though. I wouldn't be able to hire anyone who worked there. At least at my current job, a functioning moral compass is required due to the impact our clients (governments) have on the general population. It's part of our interview process to figure out if someone cares about how their work affects people.

Not that the pay scales overlap much anyway.


If Facebook was only full of people without a functioning moral compass then none of this research would have been done in the first place.


The people who work at companies that seem 'immoral' are able to justify it because they work on such a small part as to avoid any feeling of responsibility.

The same could be said of soldiers in an army used for oppression. They don't feel responsible for the outcomes because they aren't the ones making the decisions, or they work 'behind the scenes'.

I personally don't judge any of them but I can see how you could argue that simply by working for such a company or institution you are partly to blame for their immorality.


It feels off; before working at Facebook I was unable to buy a house and wouldn't have been able to retire at 67.

I've worked for a lot of 'nicer' small companies and they pretty much threw me in the trash.


I'm not sure I understand. The companies were nice and threw you in the trash? Perhaps you mean they were 'moral' in mission but bad employers?

Ultimately Google and Facebook are ad agencies (by profit). They need to pay a premium to attract workers because their mission is less attractive, although they dress it up by publicizing ancillary projects. If you go several steps farther, porn companies do the same.


> Ultimately Google and Facebook are ad agencies.

Now they are a combination of advertising venue and ad placement agency. “Ad agencies” are the companies you hire to make ads.

Though Google branching out into using AI to generate ads probably isn't too far off.


I would never work at Facebook and have declined to apply for their grants before because I think they're a malign influence. It's possible to make such decisions.


Ah, the Nuremberg defence: "I was just following orders".[1]

Whatever helps you sleep at night I guess.

1. https://en.wikipedia.org/wiki/Superior_orders


“Not all of them are bad” isn’t a very strong argument against the general heuristic.


I have one caveat: people whose immigration to the USA was conditioned on their employment in Silicon Valley.

Aside from that, I think _every single person_ that works at Microsoft, Facebook, Google, Twitter, etc. (the uncivil technology corps) is morally bankrupt. Every last one of them is fundamentally compromised by their work.


Weird. I know a lot of people who work in that space. None of them are morally corrupt.

What kind of people are you seeking out?


At the risk of making a No True Scotsman argument...

Can someone deriving income from such a business model, while other options are almost certainly available to them, NOT be considered morally corrupt?


"Seeking out" in which sense? As in, "who are the people that I seek to condemn?" Or, "who are the people that I seek out to speak with?" Or some other sense?

I'm not asking this to be obtuse; I'm genuinely interested in engaging on this topic, so I want to be sure I'm not speaking past your meaning.

"Who are the people I seek to condemn?" The information workers that find employment with massive, human-rights-violating corporations. I condemn these people for their willingness to dedicate such talent and intelligence towards actively making the world a more user-hostile place.

"Who are the people I seek to speak with?" Generally, everyone and anyone, including you or any of your acquaintances that happen to be reading this thread.

With respect, I think this speaks to a difference in our ethical values. I 100% believe that Facebook employees are fundamentally compromised by their employment. Don't get me twisted: I'm not accusing them of being Nazis, nor any such equivalence; these specific condemnations are very prevalent in our modern discourse. I'm not saying anything like this.

I'm also not saying that such people are necessarily unpleasant, or that their company might not be enjoyable. However, at the end of the day, they are making the world worse for me, the people I associate with, and (in my opinion) the people of the world at large. For these reasons, owing to my ideological commitments, I remain firm in my position.

Facebook employees are ethically compromised.


> Facebook employees are ethically compromised.

Again, I disagree heartily. I know FB employees who aren't ethically compromised, which serves as a counterexample to your claim. I can't think of any legal, large employer with whom people are ethically compromised simply by employment.

Also, your original claim was that they are morally corrupt. How do you distinguish between these two?

> they are making the world worse for me, the people I associate with

As a gentle reminder, a single person isn't the world, and your life being assessed by you as worse (without mention of baseline) is not a huge price to pay for free connections, a central marketplace, not to mention the groups that have been extraordinarily helpful for the marginalized. I've been in that place (marginalized), and finding support through groups facilitated by FB's platform was essential to my well-being. So again, a counterexample to the totalizing claim.

Am I a fan of FB? No. But it's a really hard stretch to assume that everyone is compromised for being involved with them. Such claims aren't novel, but seem to be histrionics in most cases.


I consider "ethically compromised" and "morally corrupt" to be the same thing.

I will accede to your claim: this is largely histrionics. I don't disavow my position, but I certainly do regret having so casually made these "totalizing" claims. It's a position I'm still working on formalizing well, and posting pure rhetoric is not that formalization.

You are correct that a single person isn't the world; however, my singular personhood is my entire world, and I do think it's fair to use the data point of me to make judgments about the world and others.

I think I specifically need to work on how I convey my notion of "ethically compromised," because I think it's probably far weaker than it's likely to be taken.

Perhaps I should say, instead, that I cannot say any Facebook employee is ethical; or that, if Facebook employees were to share my ethical values, they could not view themselves as ethical actors (although I'm certainly not contending they ought to have my values). More generally, however, I believe this of every employee of Microsoft, Twitter, Google, Facebook, etc. (the uncivil technology corps), save only for those employees whose immigration to the USA was conditioned upon their employment.

To me, the question is very much similar to "Were the employees of IBM during the Holocaust ethical?" The reporting goes that IBM helped Nazi Germany design the computer systems that tabulated inmates at concentration camps. I really hate this particular formulation, because I definitely do not want to draw parallels to Nazism; nor do I wish to imply that they are on the fringes of such. However, at their most extreme, the uncivil technology corps have contributed a lot of tech and a lot of data to the CCP. I believe the reporting that the CCP is arresting Uighur Muslims and others, and placing them in concentration camps. Again, I do not wish to imply that this is the same _in scale_ as the Holocaust, nor even necessarily _in kind_ (yet), but it certainly rhymes to an uncomfortable degree. This is where I specifically stake my claim that these employees are unethical for human rights violations.

I don't see how someone can claim any degree of ethical standing if they directly aid an endeavor which profits from such malady.

I am glad to hear that you've found benefit from using Facebook, but I contend that whatever positive values Facebook might have are far-and-away outweighed by the definite societal ills it wreaks.

Again, just to clarify, I'm not saying that you need share my opinion: just as I am my own yardstick for the universe, I respect that you do the same.

I sincerely appreciate the engagement we're having. It's easy to look at someone you perceive to be displaying histrionics and snub them; thank you for our dialogue, and I hope it can continue.

EDIT: If you'd wish to continue this dialogue via email, my inbox is in my profile; else, I definitely welcome continuing this thread here publicly.


Great dialogue.

Why Microsoft though? Not sure they can be compared to Google or Facebook. But maybe I'm missing something.

If Microsoft - why not Apple?

I think most of us are morally corrupt however or let's call it applying a flexible morality.

When it comes to pleasure or money, e.g. going to McDonalds, buying fast fashion, or working for these massive ad companies, it's not always easy to do the right thing.

It's a systemic issue and we need to address the 'profit uber alles' value system.

How would you address this problem?


Microsoft and Apple both provide tech and data to the CCP, which makes them unethical vis-a-vis human rights violations of much the same ilk. They're definitely included in what I refer to as the uncivil technology corps. Generally, any tech company that enforces that their product be used in only certain ways is a company I'd consider unethical and uncivil. The end-user is the only individual apt to decide how a tool is best used. Trying to subvert that, or make an end-run around the user, is a societal ill in my opinion.

I'm not sure how I'd obviate the problem entirely. I would propose a new tax on companies, commensurate to the degree they lock down their tech. This tax would be ear-marked towards a granting program for groups and individuals in the US, with the sole purpose of enabling grantees to publish new technology for the community.

That is, a grantee would be a non-corporate interest. The work of the grantee would be released as a public good, possibly competing in the same space as the companies from whom the tax came. As an example: Apple, having locked down their ecosystem so tightly, would be taxed highly under this program. The tax goes into a grant. Interested software developers apply for the grant, possibly to contribute work towards an OS or a command-line utility or sundry other projects.

I don't know if this would help. I feel that it would because it ideally helps convert technological rent from the rent-seeking corporations into tangible public goods.


Man, I think you're probably right about everyone benefiting from doing immoral things, but I don't know how we could avoid it other than hiding out in a national park or something

By that logic, the linux community is corrupted by helping and accepting submissions from evil companies and evil governments

Everyone who buys and/or uses any sort of devices is corrupt too

Kinda reminds me of "The Good Place", haha


How many people with prior experience at these companies have you worked with or know on a personal level that you're drawing this conclusion from?


This is a good question! I've known only applicants to Google, never anyone that's worked at or meaningfully aspired to work at Facebook.

I'm not making this judgment on a rational basis; my stance is based purely on my ethical values and ideological commitments.

If you find this silly, or think that it invalidates my opinion then: power to you! I do not believe that my ethical values are universal statements, nor that my ethics must be shared by anyone. I simply don't care to qualify every one of my statements with "it seems," "in my opinion," etc.

I genuinely wish to engage on this topic, so I welcome any further reply from you.


I happened to have worked at Apple, Facebook and Google. I’ve found all the people there to be generally pretty moral people who try to positively influence decisions. Sometimes it’s a losing battle and decisions get made that you disagree with. However, that’s broadly true of any group of people, so saying all members of a group are morally compromised seems unhelpful. You could say the same about anyone working in government or any business, or even generally associating with people.

I disagree with the position taken that you can evaluate a person’s principles solely by their choice of employment or association. It’s only one factor and usually an unhelpfully reductive way to look at the world IMO.


Applying morality to jobs is a pretty piss-poor strategy. I'm a big fan of privacy; how many companies have employees that I cannot morally hire, from your perspective? How can a Libertarian ever hire anyone who has served in a war? How can a religious person sell goods to all people in the community they serve?

The thing about morals is that they're written by the individual, and meant to guide the individual. They're fine to have until you try to apply them to other people.


> The thing about morals is that they're written by the individual, and meant to guide the individual.

And in doing so, guide the individual's decisions, which then affect society.


Yeah I agree. Though I don't mean morality in a general sense. All I'm saying is, it's important to figure out if someone will contribute to a product that is shown to be harmful. It has nothing to do with an individual's beliefs or feelings, only actions people take in the workplace.


> I wouldn't be able to hire anyone who worked there

> Not that the pay scales overlap much anyway

Those grapes sure are sour.


I'm doing really well, thanks!


> Such a shame Facebook is considered a prestigious place to work.

Is it? It’s not my industry but I haven’t got that impression from reading threads here.


It definitely was until ~5 years ago. I think the 2016 election was the turning point. There has been a stream of negative PR for Facebook since then, and it seems like everything they've done for damage control only made things worse (especially anything from Zuckerberg).


The irony is not lost on me that HN decries social media for being one massive echo chamber while being just as detached from the real world itself.


I’d argue against you here. I’d say that HN represents quite a diverse cross section of people, and to our mutual credit we all manage to keep it fairly cordial and fun. We all come from different backgrounds and experiences and yet find it fun and enjoyable to come here and talk about technology and have a good time. I may disagree with folks here from time to time, but I have no doubt that HN is a home for people from all walks of life and the last thing that comes to my mind, when thinking about HN, is that it’s an echo chamber. Quite the contrary, some of the folks here help me learn to think about things from a different perspective and I enjoy it.


Look up the FAANG acronym. It’s a collection of companies that are supposedly the “best” to work for. Facebook is the F.

Edit: Here’s an example Reddit thread of what I mean: https://www.reddit.com/r/csMajors/comments/mnl4u2/why_does_e...


FAANG is composed of large companies - not great companies. Those employers rarely make it onto genuine lists of best employers and if you want real pedigree on your resume you'll want to come out of a small but successful company. You can be asleep at the wheel for two years and leave a FAANG with neutral or positive reviews because you were sociable - if you helped grow a tiny company into a small company recruiters will take immediate notice.


That's not the case at all. Unless you are one of the first few engineers of what will become an Amazon, Google, Uber, etc., coming from a small company gives very little leverage in negotiations with recruiters, visibility, etc. The scale of problems you deal with at FAANG is 100x bigger and more impactful than at a random placeholder.io company. I'm not saying there are not many engineers working at small companies who are better than some of the devs at FAANG; that's a fact. Also, a company's success very often has very little to do with its engineers. If you want real pedigree, start your own software company and make it a success. Working for another person doesn't give you any pedigree. It's just a job: you work, they pay you, and that's pretty much it.

https://en.wikipedia.org/wiki/Wishful_thinking


Facebook is one of the highest paying companies, the highest among FAANG. High paying companies = high demand = selective = high prestige. I would say Google and Facebook are probably the two most in-demand companies (out of all companies) to work for among software engineers.

(I frequent career fairs, blind and cscareerquestions)


Have you actually worked at one of these companies, or did you form these notions based on others' perceptions of legacy companies like IBM and Oracle? None of the FAANG are known for good WLB outside of Google. Actually, a common criticism of Amazon and FB is PIP culture, so I'm not sure how you can fall asleep at the wheel for two years just because you were sociable.


I thought the FAANG criterion was more a measure of how much these places pay; otherwise Microsoft would be on the list (with a market cap of $2.3tn).


The original acronym was for stock performance.


Let's be real, recruiters will take notice no matter what.


I think your average "asleep at the wheel" Facebook employee could easily do the work of 10 or 20 ethical guys from your average Midwestern consulting firm.

(Unless the work involves having some sort of strong moral compass anyway)


Maybe I'm out of the loop, but I thought Facebook specifically was known for being the highest paying of the FAANGs because they are the least prestigious (and also people have ethical concerns re: their products).


Okay, I'm not sure how HN convinced itself of this, but higher pay means more prestige, not less prestige because "no one is willing to join otherwise". It would seem pretty intuitive, but apparently not on here, where the mental gymnastics to discredit FANG have somehow correlated lower pay with a better job these days.

And prestige among the FANGs generally goes: Google and FB in their own tier (depending on whether you value WLB or compensation and promotion rate), Apple and Netflix slightly lower, and Amazon (and Microsoft, albeit not a FANG) much lower than all of them.


There is frankly a whole self-flagellatory thing going on there. Plus, well, selection bias is everywhere. People post more about complaints.


Some believe that Facebook is a positive force in the world, despite being hated.


If someone wanted to make the case that Facebook is a net-good force in the world, I'd read that. But I haven't seen anyone even try to lay out evidence of Facebook doing any good things. I like pytorch, prophet, and their decent job policing the pedophiles on the FB platform, but what else do they do that's good and responsible?


I've never been a FB user.

Years ago, when FB was relatively new, I was very anti-FB, and would discourage everyone from using it due to privacy concerns.

Then a Sudanese friend was staying with me for a few days. He had spent years separated from his relatives who lived in multiple countries, as he was working odd jobs to save enough money for a degree. Through FB, he got to see his nephews and nieces be born and grow for a few years. The amount of joy it brought him was immeasurable.

I realized I was the asshole thinking I knew better than he. If it brought him this much joy, it definitely was worth the loss of privacy. Multiply that by millions.

Today, there may be viable alternatives, but there weren't in those days. At least none that his non-techie relatives could use. FB is what brought about the joy. Nothing else did, despite trying.

FB is a tool. Yes, there are plenty of problems with it, but they require mitigations - perhaps even legislation - not a shutdown. Killing FB, IG and Whatsapp really won't solve anything. Plenty of competitors will eat up the space. It's like saying "Marlboro is the market leader. Let's ban it."


Not sure Marlboro is a good example.

Their product kills people or makes them seriously ill - how would you improve that product so it doesn't?

So for the good of society, access to their products should be made much harder, i.e. factor the cost of all the damage those products cause into their price, and people will have less incentive to purchase them.


> Today, there may be viable alternatives, but there weren't in those days.

Wait, there was no email yet?


There was, and the reality is that most people were not able to use it effectively enough to be a replacement for FB. Sure, tech heavy users were fine with mailing lists, cc's, attachments, etc - but most of the world wasn't.

Email's been around forever. If email were a viable alternative, FB would not exist. The amount of sharing between the Sudanese guy and his family skyrocketed after FB came along.

And this was before smartphones.


- Connect and let 3 billion people communicate with each other every day
- Help small businesses reach customers
- Invest in and advance the SOTA in a lot of tech


Email connects 4 billion people and helps small businesses reach customers. And email manages to do this without starving local journalism, algorithmically recommending extremist groups to susceptible people, causing genocides, releasing SOTA creepshot Ray-Bans, or feeding COVID disinformation to my parents every day.

Pytorch is pretty neat, though.


they enable communication for 3 billion people on the scale not previously seen in the history of mankind?


I think Facebook tries to keep people locked into its platform so that users believe this exact lie. Facebook isn't what enables the capabilities of the internet.


Why is that good? Seems like it just leads to 13% of teenagers wanting to kill themselves.


Why is anything good objectively? It is good for me because I can stay connected with my mom despite living in a different country. Also, the Facebook group for my town is nice and useful, and my wife enjoys posting on Instagram from our trips.

Now multiply this by several billion.


All these things existed before Facebook.


Yes, but so did magazines and TV shows which also made teenage girls feel bad about their bodies.

(Disclaimer: Facebook employee who thinks Facebook and social media is a net positive)


Moral relativism will only get you so far


Can you support that belief with any evidence or a coherent argument? What good does FB do that offsets the great amount of evil it does?

What offsets FB's choices that connected millions of extremists and gave them protected spaces to radicalize each other? [0]

What offsets FB's decision to rush into regions it can't moderate, leading to FB being used as the platform that spreads genocide-inciting misinformation? [1]

Personally I've had to work very hard to keep my parents safe this pandemic, disabusing my parents of dangerous COVID related misinformation that they saw on Facebook. I've had to research some really deranged stuff that they picked up on that platform.

I see a pretty deep debt on the evil side of the ledger, but you're asserting FB is net positive. So what am I missing? Where is the good stuff FB does that offsets all of this harm?

[0] https://web.archive.org/web/20201219115127/http://wsj.com/ar...

[1] https://www.nytimes.com/2018/11/06/technology/myanmar-facebo...


This warrants a long response, that I would love writing one day but don’t have the time.

But I’ll leave a short one here to respect your comment.

I believe social media empowers people. It makes my life better, and it gives many people a voice that previously belonged only to a few. Those few also incited wars and genocides and bred rage and mistrust (from the NY Times supporting the Iraq war, to your local news stations, to Fox/CNN concentrating on stories that outrage you or warm your heart and keep you coming back) from before Facebook existed, and they do so to this day. All of those nasty effects of social media existed before; they just moved to the most efficient media form.

I support people’s right to communicate and own their opinions, and I accept that giving a voice to everyone will always result in problems small and large. But Facebook does spend a lot of resources trying to make social media better. I’m an insider and swayed, but I think Facebook spends far more resources than TV channels and newspapers on trying to address such issues.

The rest of a proper response really depends on which of the many arguments clumped against Facebook we're talking about, and addressing them depends on the person:

- Is the criticism against all social media?
- Is it against ranking feed items?
- Is it against monetizing through ads?
- More rarely, is it against censoring, or about data policies, or supposed negligence, or more?

There are so many issues raised against Facebook, yet almost everyone keeps using the products, and the only countries that ban Facebook are not ones that come off as inspiring to me when you consider their reasons.


"I think Facebook spends much more resources than TV channels and newspapers trying to address such issues."

Proportional to their vastly greater profits, they'd better be.

"All of those nasty effects of social media existed before, they just moved to the most efficient media form."

Back in the day you could kill a few people with arrows and spears, then you had automatic rifles, now we have nuclear and chemical weapons. We actually try to regulate those...


I stopped using Facebook in 2017. I felt absolutely miserable while angrily reading the stream of ragebait Facebook kept feeding me. Facebook has been really damaging to the mental health of my parents, which has been an absolute nightmare for me over the past two years. I went through my dad's feed last year to see if FB had improved, and it was an absolute hellscape. There would be nice content like grandbaby pictures from his friends sprinkled between lethally dangerous COVID disinformation and content designed to enrage. Much like what was captured in this article [0].

I don't see the value of giving a platform to dangerous bullshit. People need truth about reality. Giving a platform, or voice, to people who feed bad information to others can cause those others to make suboptimal choices, like rejecting masks or vaccines. How many people have died a miserable death (suffocating because their lungs can no longer absorb any oxygen even at 100% O2) over the past 2 years because of COVID disinformation spread on FB? Based on how many hours I've had to spend on the phone with my parents explaining why things they saw on FB are wrong, there's absolutely no way it's less than 10s of thousands just in the US.

I'm pretty impressed with your ability to just brush aside a genocide caused by FB rushing into a new market with 18 million people with only a few dozen moderators that speak the language. Facebook had been running a social media site for nearly a decade by then; there's no way FB didn't know the minimum ratio of moderators to users needed to keep up with typical moderation workloads. The only explanation consistent with the evidence is that FB just didn't care about avoiding that genocide and mass displacement. And based on your reaction to it, I assume the FB culture still isn't concerned about the boring details of being responsible and ethical.

I'm not trying to shame you, I don't think shame works at changing people's behavior, but I do think being cavalier about massive harms like genocide and COVID disinformation is a major red flag, and I think you should take a hard look at your values and grapple with the possibility that working for a company that kills hundreds of thousands through (best case scenario) gross negligence may be indefensible.

[0] https://nyti.ms/3mffwXX


Nothing is objectively good, good and bad are just words we invented to describe our preferences. My preference is that people don't continue down the path of intense tribalism, but other people obviously can have different preferences.


Text, email, zoom, etc. Facebook did not invent online communication. Facebook didn't invent the ability to share photos online.

As far as I can tell, Facebook's innovation is that it's great at identifying people susceptible to extremist conspiracies and at connecting them. Per Facebook's own research: """Even before the teams’ 2017 creation, Facebook researchers had found signs of trouble. A 2016 presentation that names as author a Facebook researcher and sociologist, Monica Lee, found extremist content thriving in more than one-third of large German political groups on the platform. Swamped with racist, conspiracy-minded and pro-Russian content, the groups were disproportionately influenced by a subset of hyperactive users, the presentation notes. Most of them were private or secret.

The high number of extremist groups was concerning, the presentation says. Worse was Facebook’s realization that its algorithms were responsible for their growth. The 2016 presentation states that “64% of all extremist group joins are due to our recommendation tools” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms: “Our recommendation systems grow the problem.”""" [0].

There were QAnon groups with millions of members. Facebook created these echo chambers and then ushered people in. And the only thing you can think of is they allowed you to communicate with your mother and share images?

[0] https://web.archive.org/web/20201219115127/http://wsj.com/ar...


Logic from the converse, for one. If it isn't good, would it be good to cut off communication, despite such a power being heavily abusable? If not, then why is this point a magical perfect balance of communication?


Their machinery is ridiculously effective. My Uber driver the other day was Afghan. He talked to his family fleeing their country via Messenger.

Ubiquitous universal communication. It is the stuff of utopian sci fi.


Lol, you're talking as if it's unique to Facebook. Anyone with money and a modicum of technical expertise can make a messaging platform. The hard part is monetization.


> Anyone with money and a modicum of technical expertise can make a messaging platform. The hard part is monetization.

Facebook has clearly cracked the monetization problem.


Whatsapp’s opex was $13.5m with a revenue of $15.9m before they were acquired. [0]

0: https://techcrunch.com/2014/10/28/whatsapp-revenue/


Yes, in a way that caused this quagmire.

Anyways, the post I replied to was talking about the machinery, not the monetization.


Everyone somehow seems to be capable of anything, but in the end people end up using Facebook.


Most people in the world actually don't use Facebook for messaging. Messenger is second to WhatsApp.

There are dozens of apps that can do worldwide instant messaging. It's ridiculous to think that that's why Facebook is successful. It's not; monetization and network effects are the reason, not technical superiority.


Hi, I worked there for over a year and am familiar with the specific research discussed in the article.

IG is a net good because even in this specific case, it did more good than harm. The numbers in the article are all less than half. On average users self-report increased well being when they use IG, as long as they don't use it too much.

But improving things on average isn't good enough. IG also wants to improve the situation for the users who self-report being adversely affected by their IG usage. IG has added features and spent a ton on research and product improvements to improve subjective well-being, and it has been successful in measurably improving self-reported well-being and in reducing objective measures like bullying prevalence and negativity.


Messenger and Whatsapp for communication


> Is this what you imagined you would do as a kid interested in math/computers/science?

I work a job that I imagined I'd like as a kid.

Work is interesting, but it's a crappy job.

Of the jobs I've had, all the ones that aligned with my passion sucked. It's no longer a criterion when I look for jobs.

If you can find a job that aligns with your passion, and it's not a crappy job, by all means go for it! But many tire of hopping from job to job in search of satisfaction. Optimizing for money (or time) tends to be the next logical step.


The FB app has become bloated. I don't want the shop, I don't want the TikTok or YouTube stuff. I just want my friends.


You're not the customer, you're the product. What you want doesn't matter. They keep you happy enough to keep from uninstalling the app, and that's it.


I mean in like 2 years, half of which is in matchmaking after a six figure signing bonus. Not really a moral sacrifice. By the time you think it's a good idea to voice a contrarian opinion in a company group chat, you've already graduated.


It's such a shame the US is considered a great place to live. Sure you can have a good life, but at what cost? Your government props up dictators, goes and blows up a country over imaginary WMDs, etc. Is this what you imagined being an enabler of through your taxes and votes?


I like money. Not at Facebook but would join if I was looking again. Someone’s gonna get the money, it might as well be me.


No - nobody needs to get the money. You can decide to only work for ethical companies and by doing so you do contribute to the pressure from the labour pool for companies to act ethically.

Yes - there are tons of developers, but a lot of senior devs won't take offers from Facebook due to its poor corporate profile, and its hiring is, as a result, incredibly desperate at the senior level. Those folks who do consent to work for them command higher salaries because of this shortfall, and most of us just get jobs that don't leave a bad taste in our mouths.


I don't know what an Ethical company means to you or if it should matter. Would Groupon be more Ethical than Facebook? Apple? Pornhub? Sports betting sites? EA? Walmart? Amazon? Microsoft? Google? Wells Fargo? Standard Oil? Starbucks? Blockbuster? Ancestry? Twitter? Reddit? The NBA?

You could debate and compare all of these companies and they come out the same.

NGOs are full of unethical practices, hopefully with cover. Small businesses do unethical things at a smaller scale.

More often, the successful businesses are the ones willing to be unethical: the dating company with fake profiles, or that tricky sales funnel designed to catch grandma and sell her knitting needles using misleading text.

How ethical is the company you are working for?


This is whataboutery to a fine degree.

You start with "I don't know what an Ethical company means to you _or if it should matter_;" I'm a sincere egoist, so I declare: no! It shouldn't matter to you, or to anyone, what my or anyone else's notion of ethical is! Ethics isn't a universal, it's not endemic to the individual experience, and neither is ethics-having in the first place.

I contend that ethics-having is vital to the human experience like vitamins are vital.

Having said that, what would an ethical company look like _to me_? I have one simple test: whether a B2C company enforces that their product or service be used only in specific ways. Any company which fails this test is de facto unethical, and downright uncivil.


That test doesn't sound simple, or like an ethical measure.

Apple taking away the right to repair is a company enforcing that a product be used in certain ways. On the other side, Facebook tries to get you to use the product in certain ways but rarely enforces it.


It's an ethical measure because I use it to measure whether something conforms to my ethical values. I take it to be axiomatic.

Likewise, it's a simple assessment for me to conduct. "Do I think this corporation enforces that their product or service be used in a certain specific way?" Unfortunately, I'm incapable of formalizing it any better than that. Yes, it's arbitrary, and not rational. I accept that. I don't advocate that anyone use this test, nor adopt my ethical positions; however, it is what I use, and I was primarily interested in sharing my perspective (which is both arbitrary and irrational).


>Someone’s gonna get the money, it might as well be me.

Ethics are disappointingly rare in Software. Such a shame.


Or people don't share your views on ethics.


I'd be interested in seeing which set of ethical values helps someone reconcile the two notions of "I'm not a bad person" and "I directly enable human rights abuses on a scale never-before-seen in human history;" or, otherwise, which perspective a person can take that doesn't commit them to the second notion (precluding fundamentally self-deceptive perspectives, such as willful ignorance).


Well, for one, a stance based upon human agency. "Enabling" is also a massive weasel word of propaganda, used to attribute all indirect actions and consequences to the target while ignoring the actual actors: the ones committing the human rights violations.

Limiting everyone to only non-toxic crayons and short, dull, plastic knives that cannot even cut bread would reduce the ability to inflict harm. To call blaming the tool childish is an insult to children. Is Sony now responsible for the production and distribution of child pornography for making camcorders, screens, and computer memory?

I don't even like Facebook but I can see the prevailing arguments are utterly deranged.


Facebook does not have any regard for human agency, neither the platform nor the corporation, any more than fentanyl or its producers do.

I agree that the sense of which you speak is a nonsensical perspective; I'm not interested in laying blame on a tool for how the tool is utilized.

The Facebook employee isn't unethical because of their nebulous entanglement in nth-order externalities, whereby bad-person X did bad-thing Y therefore employee Z is guilty by association.

The Facebook employee is unethical because of their direct, active, engagement in developing the actual tools actually utilized by none other than their employer for the explicit purpose of making our world a more user-hostile place.


Does that make me unethical if I contribute to free or open source software, then Facebook uses that software, even though I don't work for them? Why would the answer to that question be different for employees?


No, a non-employee contributing to open-source software doesn't make them unethical; again, I don't care to lay blame at the feet of a tool.

I take it as axiomatic that Facebook is an unethical actor. I believe I have good reasons for this. From there, I say that anyone who works to make Facebook more effective at its unethical goals is similarly unethical.

No, this doesn't include people who contribute to OSS that originated at Facebook. Improving OSS is a contribution to the community, which Facebook is unfortunately also permitted to benefit from. Facebook starting an OSS project does not make the OSS unethical, because the OSS is a tool.

The question is different for employees because of this community/Facebook divide. Facebook employees are not unethical because of the tools they use for Facebook, they are unethical for furthering Facebook's unethical aims.


Good question. Is it an open source project that originated at Facebook?

On the topic of open source, would it be possible to construct a license that removes the ability of bad actors to use it? Of course, a bad actor could use it against the terms of the license, but the intent of the software project is made clear.


Historically, it's pretty straightforward to argue that pretty well any technology directly enabled human rights abuses on a scale never before seen in human history. Radio, telephones, railroads, you can basically go back as far as you want… wheels, bronze, iron, fire, etc.


This is true! It's a specific case of every tool being useful in its ability to affect the world: anything which can affect the world will at some point be weaponized, and any weapon will eventually be used to further marginalize the marginalized.

There is a very good discussion further down this thread. I do not mean "enables" in the sense that "a tool is used to violate human rights." I mean "enables" in the sense that "Facebook itself commits human rights violations, and employees of Facebook further the aims and means of these violations."

That is, Facebook employees are not unethical because they develop tools that are used to violate human rights. Facebook employees are unethical because they take direct part in the actual policies by which Facebook makes the world a more user-hostile place.


I mean... On one hand, I agree with your bemoaning, but then on second thought - compared to what?

I don't think plumbers, electricians, lawyers, construction workers, car salespeople, you name it, necessarily have inherently better average ethics (even if and once we define and agree on what those are or should be :)


Sometimes you have to take the work you can get because the skills you have are in oversupply.

As a software dev, if you're good enough to pass the screening at Facebook, you're good enough to have your pick of employers. You can go to one which is making a positive impact on the world.


> I mean... On one hand, I agree with your bemoaning,but then on second thought - compared to what?

HN software people love to call themselves Engineers with a capital E, but it seems all too often it gets forgotten that the practice of professional engineering requires adherence to professional codes of ethics.

Every accredited engineering program in the US and Canada requires that graduates have completed a Professional Ethics course, but of course the vast majority of "Software Engineers" aren't actually degree-holding engineers. They're by and large Comp Sci grads and to the best of my knowledge there isn't the same requirement for Ethics courses in computer science programs.


> Someone’s gonna get the money, it might as well be me.

Is a perfectly defensible moral argument.


> Someone’s gonna get the money, it might as well be me.

This line can defend anything people compete to be paid for, which should be a pretty clear sign that it's not a good line of reasoning.


And this is why humanity will always suck.


We may have to come together as a society and deem these social media apps as being a harm to society, especially the youth.

Perhaps even age restrict them like alcohol, gambling, etc.


I lean towards believing that such age restrictions have a counter-effect: they leave youth less prepared, and youth then abuse what was restricted once they are finally allowed it. For example, there is more binge drinking in the US compared to other countries with lower age restrictions.

I think properly informing and educating youth is a better solution. Treating them as some other entity not worthy of rights will not lead to a productive solution.

Found https://www.youthfacts.org/?page_id=92407 which has additional supporting arguments in the matter.


The argument that it is the “big tobacco” of the present has weight, but social media also has benefits. A cigarette is a simpler villain. A public health initiative could help, but there needs to be societal pressure and legal weight behind the movement. Perhaps COVID misinfo is a potential reason to push against social media in a strongly legal way in the US or even internationally? I’m curious how this could pan out, but social media is a very powerful tool at scale, so many powerful forces will want to keep it around in a less regulated fashion.


Frankly I think we need to chill the hell out - this has all of the hallmarks of a moral panic. People here are seriously and unironically saying "think of the children" and "we need censorship for society".

Good god, did they all get amnesia and forget their own youth a few decades ago, and how much of a dumb panic it was?

There is a 95% chance a retort will be "But this time is different!" despite the fact it never is.


I think they all realize that the real harm comes from misinformation, whether it's fake news, anti-vax, or unrealistic expectations. They've been quite good at self-regulation so far, likely to prevent governments from stepping in.


Here's the quick summary of the Instagram internal research findings from this article:

A 2019 presentation slide said: "We make body-image issues worse for one in three teenage girls"

Another slide said teenagers blamed Instagram for increased levels of anxiety and depression

In 2020, research found 32% of teenage girls surveyed said when they felt bad about their bodies, Instagram made them feel worse

Some 13% of UK teenagers and 6% of US users surveyed traced a desire to kill themselves to Instagram


Okay. I mean, that's bad, obviously. But is there any evidence this is something intrinsic to Instagram as a platform, rather than the general practice of posting photos on social media? If Instagram didn't exist, surely teens would still be doing it, just somewhere else.


I wonder if they'll start taking a don't-ask-don't-tell approach (i.e. simply stop doing research) due to these leaks. No other social media company is actually studying this, so why take all the heat?


Seems like the obvious conclusion. This is far from the first time Facebook has faced outrage over research they've done. They also shut down the analytics page that the "top ten on Facebook" Twitter account was using, because that led to outrage too.



Is it better that they never do such research and just keep their heads in the sand?


I'm open to new thoughts on this, but it seems like Facebook is being used as a scapegoat. What would be a viable solution to this problem?


> any evidence this is something intrinsic to Instagram as a platform

That the kids are specifically calling out Instagram and not, say, Flickr or Twitter, is a big hint. Believe what the kids are telling you.


I'm sure the kids were specifically asked if Instagram was causing any issues for them, and I imagine the rest of the internet has a direct impact on their feelings too, not just Facebook.


Enforcing the Facebook Code of Conduct would be a start. Have you read it?


Didn’t they get in big trouble for this not long ago?


Did I not say that Facebook is the problem? [0] Let's sit down and discuss it right here.

How much more of this carnage can you handle from this company as a Facebook user?

[0] https://news.ycombinator.com/item?id=28512252



