If a poem (or book) makes 10% of its readers more likely to become geniuses and contribute to solving world problems such as cancer, but 0.1% of its readers are more likely to commit suicide, should that book be banned by law?
Today's online society is built on posts from content creators around the world. Algorithms can barely scratch the surface of interpreting that content, and humans don't scale to reviewing every post, yet statistics such as the above could arguably be inferred fairly easily from a combination of engagement data (clicks/scrolls) and attrition/session-revisit numbers (a rough sketch of that kind of inference is below).
Which is really problematic, because codifying into law rules and punishments based on aggregated outcomes and impact on us as a society (or on sub-segments such as teens) makes it very hard to navigate between censorship, the positive overall outcome, and the specific negative outcome for some outliers.
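To make the inference idea above concrete, here is a purely hypothetical sketch; the field names, thresholds, and the notion of an "outcome" are all invented for illustration and don't reflect any real platform's pipeline:

```python
# Hypothetical sketch: aggregating crude engagement/attrition proxies into
# population-level statistics. All names and thresholds are made up.
from dataclasses import dataclass
from statistics import mean

@dataclass
class PostStats:
    post_id: str
    clicks: int
    scroll_depth: float          # 0.0 .. 1.0, how far readers scrolled
    revisits_per_session: float  # how often readers came back within a session
    attrition_rate: float        # share of readers who stopped returning afterwards

def estimate_outcomes(posts: list[PostStats]) -> dict[str, float]:
    """Roll per-post proxies up into aggregate statistics (illustrative only)."""
    high_engagement = [p for p in posts if p.clicks > 100 and p.scroll_depth > 0.8]
    high_attrition = [p for p in posts if p.attrition_rate > 0.05]
    return {
        "share_high_engagement": len(high_engagement) / len(posts),
        "share_high_attrition": len(high_attrition) / len(posts),
        "mean_revisits": mean(p.revisits_per_session for p in posts),
    }
```

The hard part, as argued above, is not computing such aggregates but deciding what legal meaning to attach to them.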
Looks like you are willfully ignoring Facebook’s own findings. They know that polarizing content is more engaging yet harmful… and they choose to amplify it anyway.
The same old argument of "it's hard, therefore let's not do anything" doesn't apply here.
Facebook is not a neutral platform that just shows all posts from your friends in chronological order. They are actively manipulating the stream and are fully responsible for what you consume.
> Facebook [clipped] are fully responsible for what you consume.
I'm not sure how deeply you hold this belief, but I am concerned to see so many people push all blame away from their own actions. While it may be true that Facebook is largely responsible for what is consumed * on Facebook *, individuals are largely responsible for consuming Facebook.
This is true, and if you're going to put Facebook in the spotlight you're going to have to put a light on everyone else. The entire computer gaming industry is one big dopamine cartel. If the Facebook addiction is such a big deal, then it's a little ironic that gaming hasn't been completely dismantled.
//edit: Honestly, I think politics is a little at play here. Facebook (these days) is used heavily by an older, more conservative crowd, and I think that's irritating to the other side.
That's true, but does my mother understand what's really going on? Do you? Do I? Choosing to pick up the phone and call your daughter and choosing to go on Facebook are very different things, and people who grew up with the former might not realize how different the latter really is.
I think they bear more responsibility here because they’ve also designed it to be addictive. If Facebook were easier to quit, I’d hold individuals more accountable.
> While it may be true that Facebook is largely responsible for what is consumed * on Facebook *, individuals are largely responsible for consuming Facebook.
I don't see any shift of blame. Those two aspects are in no way mutually exclusive. Facebook can be 100% at fault for their manipulation and deliberate outrage generation and you can still blame an individual for being irresponsible with their social media usage.
Because they are: they actively filter rational, positively contributing individuals out of the public plaza. They remove all the good people from the world and give the bad ones a stick for leverage and a hose to spray the neighbourhood down in all caps.
I think they do. When all you see are posts about how vaccines cause autism, anecdotes about this or that person and the diseases they got from the vaccine, and, on top of that, claims that the vaccine doesn't even prevent the disease it was designed against, then it becomes reasonable to become antivax.
And if Facebook effectively and knowingly chooses, through its selection of algorithm parameters, to promote this material because it increases engagement more than reasonable content does, then yes, I think they should be held at least partly responsible for the harm caused by the anti-vaccine movement.
Walmart is "manipulating" the placement of products on the shelf so that it's more likely for you to engage in bulk buying when you visit their stores.
Both Facebook and Walmart have a fiduciary duty to their shareholders to create value for them.
The difference is that, with user-generated content, the idea of black-and-white "bounds" of the law is no longer applicable, and you have to devise a system of checks and balances based on probabilities.
You can take 10,000 posts for offline analysis: give them to some human raters and decide retrospectively what engagement and reactions (positive/negative) they generate in teens, which should let you draw some statistics about the expected average outcome. This doesn't mean it's either scalable or economically feasible to do so in real time for every post (so you cannot make decisions based on something that doesn't exist at the individual-post level).
You can have multiple algorithms, send each one's output to human raters, and get some aggregated behaviour per algorithm, but then we're back to the book question above -- what ratio of positive vs. negative outcomes in outliers is acceptable, and how do you define a "legal"/"allowed" algorithm?
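A rough sketch of that offline analysis, under the stated assumptions (sampled posts, retrospective human-rater scores, an arbitrary cut-off for "strongly negative" reactions); none of this is any real platform's evaluation pipeline:

```python
# Hypothetical offline evaluation: sample posts surfaced by each candidate ranking
# algorithm, collect human-rater scores retrospectively, and compare aggregates.
import random

def offline_evaluation(posts_by_algorithm: dict[str, list[str]],
                       rate_post,                  # callable: post -> score in [-1.0, +1.0]
                       sample_size: int = 10_000) -> dict[str, dict[str, float]]:
    results = {}
    for algo, posts in posts_by_algorithm.items():
        sample = random.sample(posts, min(sample_size, len(posts)))
        scores = [rate_post(p) for p in sample]          # human raters, after the fact
        negatives = sum(1 for s in scores if s < -0.5)   # arbitrary "strongly negative" cut-off
        results[algo] = {
            "mean_score": sum(scores) / len(scores),
            "negative_outlier_rate": negatives / len(scores),
        }
    return results
```

Which still leaves the book question open: what negative_outlier_rate would the law consider acceptable, and who gets to decide?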
I am baffled by this display of a lack of ethics. Do we need a Walmart comparison to put Facebook’s actions in perspective? Facebook - by its own acknowledgement - negatively affects teenage mental health and the democratic processes in many countries. Do you see how different this is from selling more mayonnaise jars in Walmart?
Facebook doesn’t have a duty to manipulate content. This is a very weak excuse that works mostly for people directly benefiting from the situation. Didn’t cigarette companies have a duty to maximize profits? Pharma companies pushing accessible opioids? Is that a more apt analogy?
> Facebook - by its own acknowledgement - negatively affects teenage mental health and the democratic processes in many countries. Do you see how different this is from selling more mayonnaise jars in Walmart?
Replace mental health with physical health and you have a great argument against how food is produced, marketed, and sold. We tackled these issues first with tobacco, and food wouldn't be a bad place to turn our attention after the social media companies.
Corporations are ruthless, inhuman optimization engines. When we don't sufficiently constrain the problems we ask them to solve, we get grotesque, inhuman solutions, like turning healthy desires into harmful addictions.
I would also have OP consider that yes, maybe having corporations like Nestle, Coca-Cola, etc. that prioritize profit above all else is, in fact, also bad. Like, let's be real here: if the CEO of Coke had a button that could double the consumption of Coke products in the USA, he would definitely push it, despite the fact that hundreds of thousands of people would become more obese and live worse, shorter lives. Advertising is an attempt at such a button.
All of the following have surely been used to commit crimes and meddle with democracy: Verizon phone conversations, Gmail discussions, Twitter, Snapchat or TikTok messages, etc.
Nobody wakes up and says "let's be unethical today"; rather, it's the reality of life with user-generated-content platforms that either you get both outcomes, or you get none.
The discussion is about making people realize that the "technology" to keep only the good parts (without the downsides) hasn't been invented yet.
Hence we're in a position to argue whether it would be more ethical to shut down / censor everything, or to have fruitful discussions on how to emphasize the good outcomes over the bad ones with the current tech (by first understanding it, something that politicians seem to be very bad at, or show little interest in compared to the negative anti-FB sentiment engagement they're generating in their voters -- ironic :) ).
You're presenting a false dichotomy. We don't have to choose between unethical corporate actions or no social media at all. Facebook could exist quite happily without applying any content selection algorithms to your feed. If your feed was literally just a chronological list of posts by your friends, with some interspersed advertising, then they (and you) could claim with some legitimacy that they aren't responsible for any fundamental negative effects of social media.
That's not the situation we're in. In addition to social media presenting some issues around public discourse and misinformation, Facebook is actively encouraging more and more extreme engagement with their platform by explicitly selecting for polarising content. It's this second part that people are taking issue with.
By the way, the solution does not require any censorship (as you mention in your comment), but simply that Facebook stops actively selecting content for your feed (which is itself a form of censorship!).
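To illustrate the distinction being drawn here, a minimal sketch (the fields and the engagement score are hypothetical placeholders): a chronological feed is just a sort by timestamp, while an engagement-ranked feed re-orders the same posts by a predicted-engagement score.

```python
# Minimal sketch contrasting a chronological feed with an engagement-ranked one.
# The Post fields and the predicted_engagement score are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: float             # when the friend posted it
    predicted_engagement: float  # hypothetical model output, e.g. a click probability

def chronological_feed(posts: list[Post]) -> list[Post]:
    # No content selection: newest posts from your friends first.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_ranked_feed(posts: list[Post]) -> list[Post]:
    # Active selection: whatever the model predicts will keep you engaged goes first,
    # regardless of recency or of how it makes you feel.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
```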
Nobody? Give it a rest. We're not dumb enough to think everyone in technology, specifically ad tech, is ethical by default. Facebook made their own bed and made the mistake of letting the internal research out of the closed corporate box. They could mitigate the impact of their most engaging content, but it would be to their own fiscal detriment, which is why they fundamentally decide not to.
My regular reminder that there is no fiduciary duty to behave unethically. Fiduciary duty is a class of highly specific legal obligations on directors to act attentively and not put their own financial interests above those of shareholders. It is not an obligation to maximise return on investment.
Walmart doesn’t stock land mines, rocket launchers, anthrax, or many other items harmful to democracy and society on its shelves, even though I’m sure it could make a lot of money selling such items.
> Both Facebook and Walmart have a fiduciary duty to their shareholders to create value for them.
I feel like the more this claim is repeated, the more pushback you're going to see against it - and rightly so.
We need to remember that corporations are themselves fictitious legal entities. They only exist because society wills them into existence, and it can do so with arbitrary strings attached - there's no natural right to form a corporation. So, if it turns out that "fiduciary duty to their shareholders to create value" inevitably leads to the abusive megacorp clusterfuck that we are seeing today, why should we be clinging to it?
It’s puzzling how many people are so ready to mask their own responsibility by shifting it to a legal entity that apparently now has a duty to do whatever it takes to generate more profit. As if individually these people wouldn’t act in unethical ways but once they put on the “I am a corporation” mask anything goes.
Whataboutism advances no discussion. Either Facebook's problems are discussed based on Facebook's circumstances, decisions, and consequences, or we're better off not posting any message at all.
Comparisons, analogies, and metaphors are useful tools to increase understanding and draw parallels to ideas that are challenging to navigate, and they naturally lead to a variety of thoughtful outcomes or interpretations.
Crying "whataboutism" is as fruitless as you've described above. It is often used to steer a conversation towards a single direction of bias when those comparisons lead to inconvenient conclusions/possibilities that fall outside of what the person claiming it has accepted. Just sayin'. ;)
> Comparisons, analogies, and metaphors are useful tools (...)
Whataboutism is neither. It's a logical fallacy employed to avoid discussing the problem or addressing the issues by trying to distract and deflect attention toward irrelevant and completely unrelated subjects.
I found it an apt comparison, highlighting how we might accept something in physical space (Walmart) yet be critical of the equivalent action in the online space. It’s a thoughtful and coherent argument, even if one disagrees with it, not whataboutism.
Let's try to phrase it in a way that lawmakers could actually act upon.
Are you suggesting that any profitable company hosting user-submitted content should invest all its profits in moderation teams to the point where either a) it becomes profit-neutral or b) all the relevant content has been reviewed by a human moderator?
And how do you define relevant content -- having had 50 views? 10 views? 1 view? Who should decide where to set these limits? Do we believe politicians are going to do a better job of it than the existing situation does? Or should we ban any post not reviewed by a human, just to move the certainty of illegal-post removal from 99.9% to 99.99%? (Humans make mistakes too.)
(Facebook is really big, so having just 99.99% of posts in compliance still means an awful lot of them escaping the system undetected.)
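To put rough numbers on that (the daily post volume below is an assumed round figure for illustration, not a reported one):

```python
# Back-of-the-envelope arithmetic; ~1 billion posts/day is an assumption, not an official figure.
posts_per_day = 1_000_000_000
for compliance in (0.999, 0.9999):
    escaped = posts_per_day * (1 - compliance)
    print(f"{compliance:.2%} compliance -> {escaped:,.0f} posts/day slip through")
# 99.90% compliance -> 1,000,000 posts/day slip through
# 99.99% compliance -> 100,000 posts/day slip through
```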
> Are you suggesting that any profitable company hosting user-submitted content should invest all its profits in moderation teams to the point where either a) it becomes profit-neutral or b) all the relevant content has been reviewed by a human moderator?
Yes, obviously. Why should a company get to profit from sex trafficking or any other such content on their platform, just because it would cost money to take it down?
I know that somebody is raping someone in NYC right now and that somebody will be killed in Chicago by the end of the day. Should we ban the cities, or at least force them to spend their entire budgets on security? Or set up a curfew for citizens? Maybe public hangings a la the Taliban - those definitely reduce crime.
Humans use FB, and where you have humans, they commit crimes. Trying to eradicate all crime when you have humans in the loop is generally not a great idea. Besides, fighting trafficking/sex slavery, with very few exceptions, generally means harassing women, with zero benefit to society and no reduction in actual sex crimes.
Would you agree that it would be wrong for telephone companies to amplify sex slavery conversations? Like they would call you directly and just let you participate in the conversation because that would generate more engagement?
That is a very good counterpoint. I haven't read this Facebook story yet, but I am willing to assume for the sake of argument that it describes what happened. I guess for me it would depend on whether people saw sex-slavery content and decided to amplify it, versus an algorithm that finds and promotes "engaging" things without being very smart about what they are.
How are you defining "amplification"? Phones already operate by complex signal amplification over long distances. Why do you think burner phones are still prevalent for all manner of illicit activity?
I don't think the phone company should be shut down because others can use it in a way that's considered devious. I don't think the phone company should play "morality police" either. I simply expect the phone company to provide the service I paid for.
This type of thinking strikes me as the kind that would damn Gutenberg for inventing the movable-type printing press because print has been used to disseminate propaganda and debauchery to billions of people over the several centuries since.
Amplification not in the electrical-signal sense but in the sense of amplifying the message. Facebook is giving more visibility to content that it considers more engaging, even if that content leads to harmful outcomes (its own research proves that).
You were making a point regarding phone-operated sex trafficking. Your characterization of what the phone company should do was what I contended. While I'm aware that this was made as a broader point regarding Facebook, amplifying a signal and amplifying a message aren't functionally different. Television is an example where both are happening. Even Twitter and TikTok engage in amplification every time there's some Tide Pod Challenge. I don't see why Facebook would have to be responsible for how people feel about themselves, or for what stunts bad actors pull.
Right. In the case of phone-operated sex trafficking I don't think amplification is even an option. It's not like phone companies decide which phone calls you should be receiving today and line them up for you to take part in. So there's no algorithmic manipulation (or optimization for engagement), unlike on Facebook or other social media.
In my parent post I was giving an example of an absurd imaginary situation with phone companies attempting to amplify sex trafficking by directly deciding who will participate in the conversation for the purpose of increasing engagement.
When phone companies came into existence, that's exactly what they did -- they amplified such conversations by making it easier for people to have phone calls and talk to each other at a distance.
Such conversations also got amplified whenever long-distance calls got cheaper (as the overall volume of conversations increased).
> If a poem (or book) makes 10% of its readers more likely to become geniuses and contribute to solving world problems such as cancer, but 0.1% of its readers are more likely to commit suicide, should that book be banned by law?
I really don't know the answer. I've struggled with this tradeoff myself, as I've built some tools that have powerfully impacted people on an emotional level and I've been hesitant to put them out there because of the severe damage they might do to a small percentage of the population.
That being said, a few years back I read a few essays about Frankenstein, and this excerpt from a Q&A[0] in the same Slate series[1] is something I try to remember when I think about creating such tools:
> Does that make it into a warning against playing God?
> It’s probably a mistake to suggest that the novel is just a critique of those who would usurp the divine mantle. Instead, you can read it as a warning about the ways that technologists fall short of their ambitions, even in their greatest moments of triumph.
> Look at what happens in the novel: After bringing his creature to life, Frankenstein effectively abandons it. Later, when it entreats him to grant it the rights it thinks it deserves, he refuses. Only then—after he reneges on his responsibilities—does his creation really go bad. We all know that Frankenstein is the doctor and his creation is the monster, but to some extent it’s the doctor himself who’s made monstrous by his inability to take responsibility for what he’s wrought.
I try to remind myself of this lesson: perhaps it's not about not creating powerful things, but about continuing to maintain those things instead of abandoning them and just accepting the havoc they wreak.
I think where I often feel the most frustrated is in believing that FB doesn't really seem to be trying that hard at 1) making the platform less addictive, 2) getting rid of bots, 3) suggesting what legislation they want (instead of just punting and saying "we said we want regulation; it's your job, Congress, to create it"), etc.
I don't think they'll ever get rid of all the things that cause harm; it's hard even to choose dinner for a party of 4 without someone getting hurt or angry, and this is a scale almost a billion times larger. I just want the impression that they are trying, or at the bare minimum, that they have the courage to openly say that sometimes bad things come with the good and that they are choosing that tradeoff. Maybe they've said it that way; I just don't seem to trust them much in terms of trying to take responsibility for their creation.