I think a lot of people are choosing to ignore that many companies have done things in the past that were not illegal at the time of action. Those actions were later made illegal because the behavior was deemed antithetical to our values.
For example, Standard Oil did not break any laws in its ruthless consolidation of the nascent oil industry. In fact, it exploited the law to grow into the monstrosity it eventually became. In response, Congress passed the Sherman Antitrust Act in 1890, which subsequently prohibited the tactics Standard Oil had used to consolidate the market.
There should be no question that what FB is doing here, while not illegal, is ethically dubious.
I really appreciate this point. I often see it as written rules (laws) and unwritten rules (ethics). If something breaks the unwritten rules we have about how people are supposed to interact with each other, then we often codify that rule into law. Many people will say "I didn't break the law" even when many others would say that person broke an unwritten one.
> There should be no question that what FB is doing here, while not illegal, is ethically dubious.
At the same time, I believe some of the stuff FB has done is currently illegal, such as this example in one of the whistleblower's disclosures to the SEC [0]:
> Our anonymous client is disclosing original evidence showing that Facebook, Inc. (NASDAQ: FB) has, for years past and ongoing, violated U.S. security laws by making material misrepresentations and omissions in statements to investors and prospective investors, including, inter alia, through filings with the SEC, testimony to Congress, online statements, and media stories.
So it could be a combination of them both violating ethics and violating the law.
Plato put it like this in his "Laws": “Laws are made to instruct the good, and in the hope that there may be no need of them; also to control the bad, whose hardness of heart will not be hindered from crime.”
I wonder how he’d speak of a supposedly better-educated society stealing the next generation's future through the circular validation of our waste-filled industrialism.
Given that most Greeks of that era believed they lived after the decline of a golden age, I suspect he might be more understanding than most people today.
> I often see it as written rules (laws) and unwritten rules (ethics).
I think this is a very dangerous line to walk. A common phrase in law is "the law often allows what honor forbids," and that is because there is a difference between the law and ethics, and IMO that is a good thing.
Is it ethical to eat all the cookies in the cookie jar and leave none for anyone else? No. Should it be illegal? No.
> that is because there is a difference between the law and ethics and IMO that is a good thing.
I feel confused, because I didn't think I was saying that we should get rid of ethics by codifying everything into law. I thought my main point was that sometimes, when people continue to break those unwritten rules, we decide to write the rules down and enforce stricter consequences. I'm not saying we should always do that.
I actually would prefer if we had fewer written laws and relied more on unwritten rules of interacting with each other. And when someone breaks those unwritten rules, we would apologize and forgive and work together to resolve the conflict and rebuild trust in each other, instead of pushing to punish people in a retributive system of justice.
Let's say that if there are two or more cookies in the jar every morning, I add another one to it. Under that scenario (especially if we go so far as to say cookies reproduce at some fixed rate), then yeah, it's totally illegal to eat all the cookies. The most common example of this tragedy of the commons is fishing, but it happens all over the place.
Specifically on the topic of cookies - it honestly is "forbidden" in a lot of households to eat all the cookies in the jar. At work you'll probably face some consequences if there's a communal cookie jar (or, in the more common scenario, if you drink all the half-n-half and don't get more). We don't really have "public" cookie jars, so this scenario is pretty contrived, but if there were one (e.g. if NYC installed a big cookie jar in Times Square for Halloween), then it probably would actually be illegal (or at least against a city ordinance) to eat more than X cookies. But, like I said, it's pretty contrived feeling.
Illegal isn’t just another word for “forbidden” and the Law isn’t just another word for “the rules”, particularly in the context of this larger discussion on Facebook’s actions.
Your parents can punish you for eating all the cookies but the government will not sanction you nor them for doing so (even if a hypothetical and tyrannical government in theory could; maybe let’s not theorycraft absurd ordinances).
The issue for me with this scenario is that it's already so contrived that yes, of course you're not going to be arrested for eating all the cookies from the cookie jar. But that's only for that action - if you scale it up to a significant level, then you absolutely might be breaking a law. There are lots of things that are legal (or at least overlooked) at certain small scales, and a lot of penalties escalate with the severity of the offense. If the US had a $1B jar of cookies and you ate all of them (aside from now being diabetic), the government would definitely pursue you for theft.
I suppose that's the truth of the matter - it actually might be illegal to eat all the cookies in the cookie jar (assuming you're not given express permission to do so); it's just that nobody cares because of the scale of the action. It's also usually a domestic affair - but just because you live in the same house as someone else, you don't have a right to all their things (it's just that usually everyone involved is in a family, and then property laws get a bit weird).
If your roommate in college had some super rare cookies valued at $10k and you ate all of them, then they'd definitely be able to take you to court. It's just that nobody cares if an Oreo goes missing.
Alright, so the thing about the cookies and the cookie jar up in the original post: for the point the parent was trying to make, it was not actually contrived, because you can replace that specific example with about a billion other minor, slightly unethical examples that also shouldn't be illegal, and his point still stands.
Where it gets contrived is when you start talking about hypothetical socialist cookie jars policed by tyrannical municipal governments or collector's cookies. You steal $10K worth of cookie products off the shelf, that's grand theft. You take a single pack of cookies off the shelf: that's petty theft. And if you eat all the cookies your mom or roommate made and leave none for anyone else, then you're just an asshole. Strangely enough, the law is quite capable of making distinctions.
What the law is not capable of doing, though, is giving people without a moral compass a moral compass, or aligning differing moral compasses from different cultures. For all the nuances the law can make, it's still just a sledgehammer in the face of human behavior, and social custom is how we govern ourselves the vast majority of the time without involving sheriffs and courts and legislators. Every time the law takes something governed by social convention and puts it into the hands of the courts, private society loses a little part of itself to the people with the bigger guns, for good or for ill.
We've slowly seen an alternate interpretation promulgated by many: anything that is not illegal is ethical. The endpoint is practically the same (anything legal is ethical and vice versa) but it arguably makes for a worse society.
If we are trapped somewhere with no food other than the cookie jar, then we will see how long it takes before eating all the cookies becomes illegal.
Justice is a messy concept because it is rooted in specific circumstances, and it's absurd to think there's a clear line between what's unethical and what's illegal.
The point is that being forbidden and being illegal are different ideas. It’s bad for society to codify too much behaviour in law. Knowing the law is no substitute for knowing the difference between right and wrong.
Regulating Facebook is a great example. Congress could easily react to Facebook's indiscretions by passing new laws here that stifle innovation.
I think I got that part. I was referring to the book The Tragedy of the Commons (a very interesting small book that I recommend), which basically says that even though protecting a common is the strategy that maximizes everyone's collective satisfaction, once the number of users N becomes large enough, someone will start damaging it, and soon everyone will do the same. So the tragedy is that you actually have to enforce the behaviour that's in everyone's best interest as a law.
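As a toy illustration (the numbers and the defection rule here are invented, not from the book), a minimal Python simulation of how individually rational overuse collapses a shared pool once N grows:

    # Toy tragedy-of-the-commons sketch (illustrative assumptions only).
    # Each agent takes 1 unit ("restraint") or 3 units ("overuse") per
    # round; the pool regrows a fixed amount. Once total demand outpaces
    # regrowth, defection spreads and the pool collapses.
    def simulate(n_agents, rounds=50, stock=100.0, regrowth=20.0):
        greedy = max(1, n_agents // 10)  # a few defectors to start
        for _ in range(rounds):
            demand = greedy * 3 + (n_agents - greedy) * 1
            stock = max(0.0, stock - demand) + regrowth
            if demand > regrowth and greedy < n_agents:
                greedy += 1  # seeing the pool shrink, one more agent defects
            if stock <= regrowth:  # effectively exhausted
                return 0.0
        return stock

    for n in (5, 10, 20, 40):
        print(n, simulate(n))  # small N stays sustainable; large N collapses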
Oh, I totally agree. When it comes to a fishing community surrounding a lake, having rules in place to limit the number of fish each fisherman can catch benefits everyone.
But the same principle applies to me and my friends eating cookies. Each of us is incentivized to eat as many as we can - because it's a limited resource and they're delicious. But in this case we use social, not legal, consequences to punish defectors. If my friends see me eat all the cookies, they will be angry with me. This is a strong enough system to keep us honest.
But imagine if we added a law here, making it a crime to eat all the cookies. Would that improve our behaviour? No. We might not want to share cookies at all if it's possible one of us could go to jail for it.
I think laws are often appropriate. But laws are a blunt instrument. They can't be our only tool to protect the commons.
> It’s bad for society to codify too much behaviour in law.
The issue with over-codification is the complexity of the laws that result - not that a large number of prohibitions is inherently damaging to society. If too many laws exist, then enforcement becomes intractable, arbitrary and unjust - but if enforcement could be sanely and fairly dealt out, there are lots of things we'd appreciate being laws. E.g. sniping someone's parking spot while they're pulling in: it's a dangerous action that encourages people to park faster than they're comfortable with and generally makes people act like assholes... but is it worth paying someone $50k/year to prevent parking-spot sniping? Nope.
> if enforcement could be sanely and fairly dealt out, there are lots of things we'd appreciate being laws
I'm hearing a hypothesis - that if enforcement was cheap and easy, we could make a better legal system (and a better society) by regulating far more human interaction. Presumably we'd have AI systems monitoring everyone at all times, and issuing automatic fines if you snipe someone's parking spot.
"Citizen #421 you have been found guilty by AI of the crimes of eating the last cookie, and failing to call your mother while overseas. Please proceed immediately to the nearest reeducation facility for behavior correction"
I agree that there is often marginal (and proximate) benefit in having more laws. I'm in favor of FB being regulated somehow. But my point is that more laws - even when restricting antisocial behaviour - can still create a worse society. The reason is not just because enforcement is arbitrary.
Another example of this is the controversial Canadian Bill C-16, which sought to make it essentially illegal to misgender someone. Critics of the bill argued that despite everyone agreeing that it's extremely disrespectful to misgender someone, it still shouldn't be illegal. This is a subtle argument, but it's an important one if we want our societies to stay free and healthy.
Facebook is governed by many laws and it flouts a lot of them. For example, they screw up the DMCA process continuously in a way that by statute should make them lose their safe harbor (no "Section 230 reform" needed). Just by sheer volume of mistakes related to processing DMCA notices and counternotices they should lose their safe harbor protection. Even though surely many of the mistakes are unintentional human error, it doesn't actually matter that much according to the wording of the law.
They allow the hosting of a lot of egregious criminal activity, including maintaining uncountable numbers of what the law would define as notorious marketplaces for criminal activity. The trouble is lack of enforcement. You cannot fix a lack of enforcement with new laws that will just go unenforced.
(arguably eating cookies that aren't yours is a crime, and I don't doubt that someone has in the past been arrested for it in ridiculous circumstances)
Then there is the case of things that are illegal but not enforced, which leads to the question: what is the law? What is written, or what is enforced?
How many of you went above the speed limit today?
I suspect that much of what goes on in the stock market is similar.
>Then there is the case of things that are illegal but not enforced, which leads to the question: what is the law? What is written, or what is enforced?
Ah, I really appreciate this point. I imagine when it comes to law, I would want the goal to be the alignment of what is written and what is enforced. If law is what is written but not enforced, then it seems to be law-in-name-only, whereas if it's enforced but not written, then it seems to be, I don't know, chaotic? Haha, I can't think of a better term right now. I suppose the latter might depend on how much unwritten agreement there is amongst people on the rules. If a lot of agreement, then maybe it's more like ethics. If not much agreement, maybe it's more unpredictable, almost lawless.
Having driven in quite a few different jurisdictions over the years, my impression became: The safest driving speed is the one that blends with local driving culture. In some places, that’s well above the posted limits, and in others it’s quite a bit below.
I suspect that degrees of being generally law abiding also vary across cultures.
AFAIK SEC laws and regulations about misrepresentation are only sporadically enforced to encourage compliance by example. Look at what Musk and his companies have gotten away with. Of course, I am all for these disclosures, which of course FB will pay their way out of without admitting wrongdoing. Because corporations manage our government, not the other way around.
If a poem (or book) makes 10% of its readers more likely to become geniuses and contribute to solving world problems such as cancer, but 0.1% of its readers are more likely to commit suicide, should that book be banned by law?
Today's online society is built on posts from content creators around the world. Algorithms can barely scratch the surface of interpreting that content, and humans don't scale to reviewing every post, yet statistics such as the above could arguably be inferred fairly easily from a combination of engagement data (clicks/scrolls) and attrition/session-revisit numbers.
That is really problematic, because codifying rules and punishments into law based on aggregated outcomes and impact on us as a society (or on sub-segments such as teens) makes it very hard to navigate between censorship, positive overall outcomes, and specific negative outcomes for some outliers.
Looks like you are willfully ignoring Facebook’s own findings. They know that polarizing content is more engaging yet harmful… and they choose to amplify it anyway.
The same old argument that "it's hard, therefore let's not do anything" is not applicable.
Facebook is not a neutral platform that just shows all posts from your friends in a chronological order. They are actively manipulating the stream and are fully responsible for what you consume.
> Facebook [clipped] are fully responsible for what you consume.
I'm not sure how deeply you hold this belief, but I am concerned to see so many people push all blame away from their own actions. While it may be true that Facebook is largely responsible for what is consumed *on Facebook*, individuals are largely responsible for consuming Facebook.
This is true, and if you're going to put Facebook in the spotlight, you're going to have to put a light on everyone else. The entire computer gaming industry is one big dopamine cartel. If Facebook addiction is such a big deal, it's a little ironic that gaming hasn't been completely dismantled.
Edit: honestly, I think politics are a little at play here. Facebook (these days) is used heavily by an older, more conservative crowd, and I think that's irritating to the other side.
That's true, but does my mother understand what's really going on? Do you? Do I? Choosing to pick up the phone and call your daughter and choosing to go on Facebook are very different things, and people who grew up with the former might not realize how different the latter really is.
I think they fall into more responsibility here because they’ve also designed it to be addictive. If Facebook was easier to quit, I’d hold individuals more accountable.
> While it may be true that Facebook is largely responsible for what is consumed *on Facebook*, individuals are largely responsible for consuming Facebook.
I don't see any shift of blame. Those two aspects are in no way mutually exclusive. Facebook can be 100% at fault for their manipulation and deliberate outrage generation and you can still blame an individual for being irresponsible with their social media usage.
Because they are - they actively filter rational, positively contributing individuals out of the public plaza. They remove all the good people from the world and give the bad ones a stick for leverage and a hose to spray the neighbourhood down in all caps.
I think they do. When you only see posts about how vaccines cause autism, anecdotes about this or that person and the diseases they got from the vaccine, and on top of that claims that the vaccine doesn't even prevent the disease it was designed against, then it becomes reasonable to become antivax.
And if Facebook effectively chooses, through its selection of algorithm parameters, to promote this material because it increases engagement more than reasonable content does, then yes, I think they should at least be held partly responsible for the harm caused by the anti-vaccine movement.
Walmart is "manipulating" the placement of products on the shelf so that it's more likely for you to engage in bulk buying when you visit their stores.
Both Facebook and Walmart have a fiduciary duty to their shareholders to create value for them.
The difference is that, with user generated content, the idea of black and white "bounds" of the law is no longer applicable and you have to devise a system of checks and balances based on probabilities.
You can take 10,000 posts for offline analysis: give them to some human raters and decide retrospectively what engagement and thoughts (positive/negative) they generate in teens, which should let you draw some statistics about the expected average outcome. That doesn't mean it's either scalable or economically feasible to do the same in real time for every post (so you cannot make decisions based on something that doesn't exist at the individual post level).
You can have multiple algorithms, send each one's output to human raters, and get some aggregated behaviour per algorithm, but then we're back to the book question above -- what ratio of positive to negative outcomes in outliers is acceptable, and how do you define a "legal"/"allowed" algorithm?
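A minimal sketch (algorithm names, labels, and numbers all invented for illustration) of the rater-based offline evaluation described above:

    # Hypothetical offline evaluation: for each candidate ranking
    # algorithm, human raters label a sample of the posts it surfaced
    # (+1 positive, -1 negative, 0 neutral); we compare aggregate rates.
    from collections import Counter

    ratings = {
        "algo_engagement":    [1, -1, -1, 1, 0, -1, 1, -1],
        "algo_chronological": [1, 0, 1, 0, 1, -1, 0, 1],
    }

    def summarize(labels):
        counts = Counter(labels)
        n = len(labels)
        return {"positive_rate": counts[1] / n,
                "negative_rate": counts[-1] / n}

    for name, labels in ratings.items():
        # The open policy question: what negative_rate is acceptable?
        print(name, summarize(labels))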
I am baffled by this display of a lack of ethics. Do we need a Walmart comparison to put Facebook's actions in perspective? Facebook - by its own acknowledgement - negatively affects teenage mental health and the democratic processes in many countries. Do you see how different this is from selling more mayonnaise jars in Walmart?
Facebook doesn’t have a duty to manipulate content. This is a very weak excuse that works mostly for people directly benefiting from the situation. Didn’t cigarette companies have a duty to maximize profits? Pharma companies pushing accessible opioids? Is that a more apt analogy?
> Facebook - by its own acknowledgement - negatively affects teenage mental health and the democratic processes in many countries. Do you see how different this is from selling more mayonnaise jars in Walmart?
Replace mental health with physical health and you have a great argument against how food is produced, marketed, and sold. We tackled these issues first with tobacco, and food wouldn't be a bad place to turn our attention after the social media companies.
Corporations are ruthless, inhuman optimization engines. When we don't sufficiently constrain the problems we ask them to solve, we get grotesque, inhuman solutions, like turning healthy desires into harmful addictions.
I would also have OP consider that yes, maybe having corporations like Nestle, Coca-Cola, etc. that prioritize profit above all else is, in fact, also bad. Let's be real here: if the CEO of Coke had a button that could double the consumption of Coke products in the USA, he would definitely push it, despite the fact that hundreds of thousands of people would become more obese and live worse, shorter lives. Advertising is an attempt at such a button.
The following have surely been used to commit crimes and fiddle with democracy: Verizon phone conversations, Gmail discussions, Twitter, Snapchat and TikTok messages, etc.
Nobody wakes up and says "let's be unethical today"; rather, it's the reality of life with user-generated-content platforms that either you get both outcomes, or you get none.
The discussion is about making people realize that the "technology" to keep only the good parts (without the downsides) hasn't been invented yet.
Hence we're in a position to argue whether it would be more ethical to shut down / censor everything, or to have fruitful discussions on how to emphasize the good outcomes over the bad ones with the current tech (by first understanding it, something politicians seem to be very bad at, or show little interest in compared to the negative FB-sentiment engagement they're generating in their voters -- ironic :) ).
You're presenting a false dichotomy. We don't have to choose between unethical corporate actions or no social media at all. Facebook could exist quite happily without applying any content selection algorithms to your feed. If your feed was literally just a chronological list of posts by your friends, with some interspersed advertising, then they (and you) could claim with some legitimacy that they aren't responsible for any fundamental negative effects of social media.
That's not the situation we're in. In addition to social media presenting some issues around public discourse and misinformation, Facebook is actively encouraging more and more extreme engagement with their platform by explicitly selecting for polarising content. It's this second part that people are taking issue with.
By the way, the solution does not require any censorship (as you mention in your comment) but simply that Facebook stops actively selecting content for your feed (which is itself a form of censorship!)
Nobody? Give it a rest. We're not dumb enough to think everyone in technology, specifically ad tech, is ethical by default. Facebook made their own bed and made the mistake of letting the internal research out of the closed corporate box. They could mitigate the impact of their most engaging content, but it would be to their own fiscal detriment, which is why they fundamentally decide not to.
My regular reminder that there is no fiduciary duty to behave unethically. Fiduciary duty is a class of highly specific legal obligations on directors to act attentively and not put their own financial interests above those of shareholders. It is not an obligation to maximise return on investment.
Walmart doesn’t stock land mines, rocket launchers, anthrax, or many other items harmful to democracy and society on its shelves, even though I’m sure it could make a lot of money selling such items.
> Both Facebook and Walmart have a fiduciary duty to their shareholders to create value for them.
I feel like the more this claim is repeated, the more pushback you're going to see against it - and rightly so.
We need to remember that corporations are themselves fictitious legal entities. They only exist because society wills them into existence, and it can do so with arbitrary strings attached - there's no natural right to form a corporation. So, if it turns out that "fiduciary duty to their shareholders to create value" inevitably leads to the abusive megacorp clusterfuck that we are seeing today, why should we be clinging to it?
It’s puzzling how many people are so ready to mask their own responsibility by shifting it to a legal entity that apparently now has a duty to do whatever it takes to generate more profit. As if individually these people wouldn’t act in unethical ways but once they put on the “I am a corporation” mask anything goes.
Whataboutism advances no discussion. Either Facebook's problems are discussed based on Facebook's circumstances, decisions, and consequences, or we're better off not posting any message at all.
Comparisons, analogies, and metaphors are useful tools to increase understanding and draw parallels to ideas that are challenging to navigate, and they naturally lead to a variety of thoughtful outcomes and interpretations.
Crying "whataboutism" is as fruitless as you've described above. It is often used to steer a conversation towards a single direction of bias when those comparisons lead to inconvenient conclusions/possibilities that fall outside of what the person claiming it has accepted. Just sayin'. ;)
> Comparisons, analogies, and metaphors are useful tools (...)
Whataboutism is neither. It's a logical fallacy employed to avoid discussing the problem or address issues by trying to distract and deflect the attention to irrelevant and completely unrelated subjects.
I found it an apt comparison, highlighting how we might accept something in physical space (Walmart) yet be critical of the equivalent action in the online space. It's a thoughtful and coherent argument, even if one disagrees with it, not whataboutism.
Let's try to phrase it in an actionable way for the law-makers to act upon it.
Are you suggesting that any profitable company hosting user-submitted content should invest all the profits in moderation teams to the point where they are either a) becoming profit-neutral or b) all the relevant content has been reviewed by a human moderator?
And how do you define relevant content -- having had 50 views? 10 views? 1 view? Who should decide where to set these limits? Do we believe politicians are going to do a better job at it than the existing situation? Or should we ban any post not reviewed by a human, just to move the certainty of illegal-post removal from 99.9% to 99.99%? (Humans make mistakes too.)
(Facebook is really big, so having even 99.99% of posts in compliance still means an awful number of them escaping the system undetected.)
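To put a rough number on that (the daily post volume here is an assumed order of magnitude, not a real Facebook figure):

    # Back-of-the-envelope scale arithmetic for the parenthetical above.
    posts_per_day = 1_000_000_000  # assumed order of magnitude
    compliance = 0.9999            # 99.99% of posts handled correctly
    print(round(posts_per_day * (1 - compliance)))  # ~100,000 slip through daily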
> Are you suggesting that any profitable company hosting user-submitted content should invest all the profits in moderation teams to the point where they are either a) becoming profit-neutral or b) all the relevant content has been reviewed by a human moderator?
Yes, obviously. Why should a company get to profit from sex trafficking or any other such content on their platform, just because it would cost money to take it down?
I know that somebody is raping someone in NYC right now and somebody will be killed in Chicago by the end of the day today. Should we ban the cities, or at least force them to spend their entire budgets on security? Or set up a curfew for citizens? Maybe public hangings a la the Taliban - those definitely reduce crime.
Humans use FB, and where you have humans, they commit crimes. Trying to eradicate all crime when you have humans in the loop is generally not a great idea. Besides, fighting trafficking/sex slavery, with very few exceptions, generally means harassing women with zero benefit to society or reduction in actual sex crimes.
Would you agree that it would be wrong for telephone companies to amplify sex slavery conversations? Like they would call you directly and just let you participate in the conversation because that would generate more engagement?
That is a very good counterpoint. I haven't read this Facebook story yet, but I am willing to assume for the sake of argument that it describes what happened. I guess for me it would depend on whether people saw sex-slavery content and decided to amplify it, versus an algorithm that finds and promotes "engaging" things without being very smart about what they are.
How are you defining "amplification"? Phones already operate by complex signal amplification over long distances. Why do you think burner phones are still prevalent for all manner of illicit activity?
I don't think the phone company should be shut down because others can use it in a way that's considered devious. I don't think the phone company should play "morality police" either. I simply expect the phone company to provide the service I paid for.
This type of thinking strikes me as the kind that would damn Gutenberg for inventing the movable-type printing press because print has been used to disseminate propaganda and debauchery to billions of people in the centuries since.
Amplification not in the electrical-signal sense, but rather in the sense of amplifying the message. Facebook gives more visibility to content that it considers more engaging, even if that content leads to harmful outcomes (its own research shows that).
You were making a point regarding phone-operated sex trafficking. Your characterization of what the phone company should do was what I contended. While I'm aware that this was made as a broader point regarding Facebook, amplifying a signal and amplifying a message aren't functionally different. Television is an example of where both happen. Even Twitter and TikTok engage in amplification every time there's some Tide-Pod challenge. I don't see why Facebook would have to be responsible for how people feel about themselves, or what stunts bad actors pull.
Right. In the case of phone-operated sex trafficking I don't think amplification is even an option. It's not like phone companies are deciding what phone calls you should be receiving today and are lining them up for you to take part in. So they don't involve algorithmic manipulation (or optimization for engagement), unlike Facebook or other social media.
In my parent post I was giving an example of an absurd imaginary situation with phone companies attempting to amplify sex trafficking by directly deciding who will participate in the conversation for the purpose of increasing engagement.
When phone companies came into existence, that's exactly what they did -- they amplified such conversations by making it easier for people to have phone calls and talk at a distance from each other.
They also got amplified whenever long distance calls got cheaper (as the overall volume of conversations increased).
> If a poem (or book) makes 10% of its readers more likely to become geniuses and contribute to solving world problems such as cancer, but 0.1% of its readers are more likely to commit suicide, should that book be banned by law?
I really don't know the answer. I've struggled with this tradeoff myself, as I've built some tools that have powerfully impacted people on an emotional level and I've been hesitant to put them out there because of the severe damage they might do to a small percentage of the population.
That being said, I read a few essays a few years back about Frankenstein, including this exchange from a Q&A [0] in the same Slate series [1], which I try to remember when I think about creating such tools:
> Does that make it into a warning against playing God?
> It’s probably a mistake to suggest that the novel is just a critique of those who would usurp the divine mantle. Instead, you can read it as a warning about the ways that technologists fall short of their ambitions, even in their greatest moments of triumph.
> Look at what happens in the novel: After bringing his creature to life, Frankenstein effectively abandons it. Later, when it entreats him to grant it the rights it thinks it deserves, he refuses. Only then—after he reneges on his responsibilities—does his creation really go bad. We all know that Frankenstein is the doctor and his creation is the monster, but to some extent it’s the doctor himself who’s made monstrous by his inability to take responsibility for what he’s wrought.
I try to remind myself of this lesson: perhaps it's not about refusing to create powerful things, but about continuing to maintain them instead of abandoning them and just accepting the havoc they wreak.
I think where I often feel the most frustrated is in believing that FB doesn't really seem to be trying that hard at 1) making the platform less addictive, 2) getting rid of bots, 3) suggesting which legislation they want (instead of just punting and saying "we said we want regulation, it's your job, Congress, to create it"), etc.
I don't think they'll ever get rid of all the things that cause harm; it's hard even to choose dinner for a party of 4 without someone getting hurt or angry, and this is a scale almost a billion times larger. I just want to have the impression that they are trying, or at the bare minimum that they have the courage to say that sometimes bad things come with the good and that they are openly choosing that tradeoff. Maybe they've said it that way; I just don't trust them much in terms of trying to take responsibility for their creation.
That is the problem with following the absolute minimum standards in life - we (society as a whole) have accepted that as long as businesses follow the law, it's all good. We've accepted that the sole purpose of businesses is to make money within the bounds of the law. While this makes sense logically, in practice it isn't good for anyone in the long run. Also remember that all kinds of unfair laws can be passed if you have enough money to buy politicians.
We should strive for higher standards, but who am I kidding - we live in a world of "greed is good" mantra.
> the sole purpose of businesses is to make money within the bounds of the law.
It's worse than that. Businesses constantly make risk/reward/punishment tradeoffs and will flout the law according to their estimation of the risk of being caught and paying fines. There are precious few illegal behaviors that bubble up to criminal charges for executives, so the risk is quantifiable in dollar amounts.
The problem is that without laws, what would those "standards" be?
You have to do more than the law requires? But how does any business know, going forward, what that is?
The great things about law are predictability and flexibility. If the law is not enough, we can change it; then everyone is held to that new standard. But having a standard that is not laid out is the same thing as having no standard at all.
Going into an area where we say companies have to meet an unwritten ethical and/or moral standard is ripe for abuse. Under those conditions, if I show certain messages or ads on my website that are wholly unethical and immoral, but not necessarily illegal in the written law, I'm opening myself up to liability based on violating unstated ethics and morals.
You seem to be misunderstanding what I was saying (or I wasn't clear). I am not at all saying we shouldn't have laws - we absolutely should have laws and rules. I am not saying that we should sue companies based on unstated ethics or morals (how would that even work, anyway?). All I am saying is we should, as a society, have a better attitude and higher standards than stuff like "greed is good" and "the sole purpose of a business is to make profit, even at the expense of everything else," etc.
I fully understand this sounds idealistic and maybe it is dumb to expect people to do better, when much of humanity is trying to do the minimum and get the maximum in return.
Humanity isn't that selfish. It is idealistic, but even having the conversation about why we should do better is minimally ethical.
Taking the converse argument of zero ethics except THE LAW is infuriatingly common. Is it okay to murder so long as the act of murder is technically legal? Assume a murder with perfect proof of intent, direct action, and an indefensible confession - but also 100% legal, breaking no law. Would society find that acceptable? Even for those who used the loophole?
It seems like folks want to live in a reality with zero common ethical baseline. We're discussing the middle ground and whether Facebook is wrong - not whether they will get away with it. They will. But getting away with it does not make it right. Can we stop with the definitional arguments about legality or the accountability of large corporations for a moment?
Is knowingly proceeding with a damaging action acceptable? One could even argue social media isn't damaging - that the study is wrong and Facebook paid for it. Not me. But at least that argument isn't this manifest-destiny morality bullshit.
>It seems like folks want to live in a reality with zero common ethical baseline.
We do live in a world with zero common ethics, except where those ethics pertain to the laws of physics. Anything else is determined by societal dictate, by way of law or cultural fiat. If Facebook were in Saudi Arabia, there wouldn't be a rainbow flag filter, and accounts would be shadow-banned for any mention of Khashoggi's murder.
You cannot follow the maximum standards, because you only have so many resources. We can sort our 'recycling' and go through 'security theater' at the airport, but those involve trade-offs against real care for the environment or real security.
I don't mind when wealth and education allows us to voluntarily do better or more. But there's danger in punishing people using hindsight.
There is another aspect about how new the industry is. It will be some time before we have well-considered laws about things like digital privacy. Tech companies can either advocate for good laws based on their advanced knowledge or be greedy and exploit/promote lackluster laws.
Treat addictive social media companies like addictive cigarette companies. Let's see some huge warning labels about how mentally harmful it is to continue scrolling, right on the first result on Facebook, where it's unavoidable. Let's tax the hell out of social media companies to generate local revenue, just like sin taxes. It won't be a huge change, but it will be a great starting point, and it will come with revenue that can potentially fund mental healthcare programs for people damaged by these companies.
> Let's tax the hell out of social media companies to generate local revenue, just like sin taxes.
Very interesting idea, actually. There is evidence social media harms some individuals' mental health (in a widespread manner, causing measurable societal harm), so a proposed tax on social media companies, with revenue going towards mental health programs, seems worth exploring.
Generally I'm not much in favor of implementing new taxes (would rather close existing loopholes) but if implemented reasonably and backed by scientific evidence this seems valid.
That's because, so far, they've managed to deflect, deny, and discredit research and critics pointing out exactly how social media uses things like variable rewards in the same way as slot machines use them to keep gamblers pulling the lever. They do this using tactics developed by the tobacco companies to fight findings that smoking causes cancer and other harms and refined by the fossil fuel industry to prevent action on global warming.
I agree with you but a lot of the analogies and metaphors here are insufficiently subtle.
FB in some sense, but not entirely, is a form of speech, no better or worse than Grand Theft Auto or the National Enquirer. That's how I thought of it ten years ago.
Now that it is in our pockets nearly cradle to grave; a monopoly; and dependent on minutes of engagement rather than subscriptions -- it is a different animal altogether.
Yet with all those things we have laws and regulations and even restrictions for young people explicitly. FB is the wild west on the other hand and constantly lobbies to keep it that way in terms of how regulators see it.
Yes, that's the buried lede. Those are all things which you need to be old or mature enough to use responsibly - they make demands on experience and impulse control that you develop as an adult.
Meaning that blocking social media for kids and teens is likely on the anvil at some point.
> Uhm a cigarette you cannot change the ingredients of, it's tobacco.
You can soak the tobacco in a solution containing additives, such as more nicotine, which is exactly what cigarette companies have done in the past (and not just to the tobacco; to the filters and the paper as well).
The parallel here is filling people's feeds with divisive political news and posts, even when they have tried to opt out.
The point is that tobacco itself is a carcinogen; you cannot make a cigarette not cause cancer, because at minimum it burns tobacco.
A social media website does not need doom scrolling or private feed algorithms; you can change how it works instead of adding a useless banner.
1) What would you expect be implemented to reduce/eradicate doom scrolling?
2) What would making the algorithm public do for us? I'm not an ML engineer, but presumably their algorithm isn't just an algebraic equation where x is how toxic the post is, y is how inflammatory it is, and z is the number of kids who will think harder about suicide because of the post.
Maybe I'm just super naive and that _is_ how Facebook made their algorithm, but my understanding is that the algorithm is a little more of a black-box and is a little abstract. How is a lay-person supposed to evaluate something like that?
The inputs to these algorithms are usually human-understandable, quantifiable signals like likes, text sentiment, and maybe engagement history -- and the output is probably a score that can be ranked. Ultimately, even if the algorithm is a black box (entirely possible it's not ML-based!), we can still evaluate it in a lab environment.
Some of the signals might themselves be generated by ML, like photo labels, but ultimately these things are very understandable if you have the model and the data.
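To make that concrete, here's a toy sketch (every signal name and weight is invented; this is not Facebook's actual system) of the kind of scoring function such a ranker might reduce to, and how you could probe it as a black box in a lab:

    # Hypothetical feed-ranking scorer: combines human-understandable
    # signals into one score used to order posts. All weights are made up.
    from dataclasses import dataclass

    @dataclass
    class Post:
        likes: int
        comments: int
        sentiment: float         # -1 (negative) .. +1 (positive)
        reshare_velocity: float  # reshares per hour

    def score(post: Post) -> float:
        # An engagement-only objective rewards fast-spreading outrage
        # "for free": sentiment carries no weight at all in this version.
        return 0.5 * post.likes + 1.0 * post.comments + 3.0 * post.reshare_velocity

    # Black-box probing: controlled inputs, observe the induced ranking.
    calm = Post(likes=100, comments=10, sentiment=0.8, reshare_velocity=1.0)
    outrage = Post(likes=100, comments=10, sentiment=-0.9, reshare_velocity=9.0)
    ranked = sorted([("calm", score(calm)), ("outrage", score(outrage))],
                    key=lambda t: -t[1])
    print(ranked)  # outrage ranks first despite identical likes/comments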
I don't want them to do anything regarding doom scrolling, it was just an example and came from another user.
I do want them to publish their algorithms and moderation logs so we have insight on how they are serving and moderating content.
I don't care about organic user content, I do care if FB is pulling the strings to make it either more salacious or being biased in one way or another.
I also care if they are banning certain users or content but not others.
Sorry for being pedantic, but you can absolutely change the ingredients of a cigarette. There's a ton besides the tobacco. And you can breed different strains of tobacco to have more or less of some chemical.
Smoking was happening in the 1800s, yet lung cancer was rather rare; rates didn't shoot up until the 1900s. That is around the same time tobacco companies figured out they could soak tobacco in ammonia. This allowed for inhalation into the lungs (e.g. it sucks to inhale cigar smoke deep into your lungs). It also made cigarettes much more addictive, so people smoked way more and inhaled into the lungs. That's about when lung cancer stopped being so rare.
Yes, cigs cause cancer, but to say it's because they burn tobacco is missing a big part of the story.
That was probably because smoking was not common outside of wealthy men during the 1800s. It was not widespread among the general public until after the world wars, thanks to mass-produced cigarettes (which weren't around until the late 1800s) being added to rations. The smoking rate increased 350% after WWI and has been high ever since. The US government didn't stop issuing cigarette rations to soldiers until 1975. Lung cancer rates have followed smoking rates in lockstep; it's not that smoking suddenly became harmful. It always was; it just wasn't common to smoke, and even those who did back then didn't smoke very much, and certainly not around the clock (kind of like hookah users today).
My Nintendo DS from 15 years ago gave me an eye-strain warning every time I started it up, and it doesn't always cause eye strain - only misuse does, and I know that thanks to the informative banner.
I love that Nintendo is very aware of the potential negative effects of their products and games and tries to inform users / mitigate.
Even when it comes to encouraging positive play between users: in the new Pokemon MOBA (a genre known for its toxicity), there's no text chat, only communication through a few emotes you can show. Some of their decisions arguably make for worse games for "hardcore" gamers (like the way they rank players in Smash, or how they focus on casual-style in-game tournaments and make matchmaking harder), but they sacrifice that in favor of a more positive general experience, which is especially important since children play their games.
I thought PictoChat was great too, and a lot of fun. They could have opened it up and made it into a global network, but the beauty is that it operated on local networks, so it was more of an in-person social network. Plus there was no way for advertisers and commercial companies to break in.
I remember PictoChat: so many dicks and graphic drawings sent back and forth in junior high. The sensitive world of today would have had a field day with that.
This part of the thread went pretty off topic, but I like it! PictoChat was certainly ahead of its time; I wish we'd stuck to things like that.
Moreover, warnings are useless if people can't vote with their feet. So if you want to actually effect change in the dynamics of the market, you need to make services compete on quality and value to the customer, rather than engaging in a scramble to accrue insurmountable network effects and lock-in.
That means mandates for data interoperability. Sadly, I have no idea how to implement that in a way that doesn't utterly stifle innovation by ossifying what sorts of data models social media is allowed to have. But at the very least we could create a sort of interoperability minimum that prevents you from locking up things like photo albums or peoples' "social graphs."
Over the longer term I'd like to see some kind of disentanglement of the protocols, standards, and data models from the front-end clients. It's obviously a lot more complicated now, but in the same way that you could access AIM, ICQ, GChat, and a bunch of other stuff from a variety of chat clients it would be good to be able to do this with everything social. Hell, ActivityPub basically tries to do this now so it's not impossible.
What is the appropriate middle though? Think about alcohol culture. Should we ban beer commercials on TV? Only allow beer commercials with talking frogs rather than attractive young people having fun?
I'm honestly baffled over why beer commercials are considered socially acceptable - but then again I think that advertising (in our modern interconnected world) only ever serves to drive overconsumption. If you want a beer - you go to the bevy and pick out a beer you'd enjoy... if I'm watching TV and the TV tries to make me want a beer - that's not a good thing.
Good advertising[1] is limited to making sure your product is visible in comparison to competitors - having shiny cereal boxes is something I find pretty meh, but in the cereal aisle you're dealing with someone who wants to buy some kind of cereal and you're trying to convince them to buy yours. TV Advertising drives up demand for products which, by definition, means we're consuming more of that product than we otherwise would... that's great for business... and it's also great for the obesity epidemic.
1. What I'd consider to be ethical advertising, but that's like my opinion man.
I agree, but I'd imagine the beer companies would argue, under your substitution standard for advertising, that any drink (or even any consumable product you can put in your mouth) is a competing product. So then there are no drink commercials at all, you've reduced the capital available to fund TV, and you have a domino elimination of economic activity.
I mean, ideally we could allocate all the capital we put into manufacturing, selling, and consuming beer into fitness or math education or something more harmonious with wellness and human achievement; but hey, plenty of great scientists and inventors love beer.
What is the specific harm involved here that is deserving to be taxed?
How would we measure this harm in order to know how much to tax a given company?
Should other causes of this harm be taxed/penalized as well? If not, why?
For instance, if the harm in question is that some people feel varying degrees of worse after using a given product, is there any limit we as a society should set on penalizing the cause of that harm?
Should people or entities who say things that make people feel worse be fined/prosecuted by the law? If I feel worse (let's call this 'trauma' or 'anxiety' or 'depression' or 'literally shaking' or 'panic attack') after reading a book or reading a news site, should I have standing to sue the creators and medium which presents said content?
You tax Facebook but allow it to operate however it wants. Facebook is then incentivized to double down on its algorithms (like tobacco companies using chemical and biological techniques to make cigarettes more addictive) in order to regain the lost profits.
Then you can double down on the taxes you levy against them if they begin harming more people, no? The idea is the cost of doing bad business will eventually be too much to make it worth doing that sort of bad business. Same idea with carbon taxes where the costs scale to damage and incentivize shifting to good behavior rather than doubling down on bad behavior. And even with cigarette companies doubling down, far fewer people smoke today and die of lung cancer than 50 years ago, so this stuff works on the whole.
That definitely isn’t what happened with alcohol or tobacco! Instead, you end up with a significant enough amount of money going to the government that the government ends up protecting those industries to an extent: ensuring lower-priced competition (e-cigs, moonshine) gets stomped on, and the market gets protected rather than eliminated or reduced too much.
Warning labels won't be of much use; most individuals will ignore them, believing in their own prowess to discern the truth.
Taxing all social media, or all media, may have interesting implications, as it would again reduce profit for all and give governments some revenue without making any actual change. Also, people making cooking/educational videos on YouTube may resent having to pay a sin tax.
It doesn't notify you when someone responds to one of your posts. It doesn't send you any nagging emails (indeed, doesn't even require an email to sign up). It has the noprocrast setting to let you set limits on your own usage. It doesn't (afaik) try to optimize for engagement - dang tries to maintain civil discourse as much as possible.
You are able to consume both tobacco and alcohol (let's not tangent into a drug-legalization discussion). Tobacco and alcohol cause measurable societal harm and measurable costs to the state - are you implying it's unreasonable for states to tax these goods for those reasons?
Generally speaking I'd rather reduce taxes but I fail to see what's wrong with e.g. an alcohol excise tax going towards rehabilitation and/or highway safety programs. "Sin tax" is just a colloquial name for an excise tax, which a state has every right to enact.
> Tobacco and alcohol cause measurable societal harm
And if I choose to smoke in the privacy of my own home (or yard)? What societal harm am I causing?
As for alcohol, the societal harm caused is a laundry list of already illegal behaviors that are illegal regardless of alcohol's involvement with the exception of sin tax avoidance.
Why not outlaw the societal harm instead?
> e.g. an alcohol excise tax going towards rehabilitation and/or highway safety programs
Both of those seem like good things regardless, don't they? Why do we need a special tax on alcohol for things that are generally good? It's not like people who consume alcohol are the only ones who need rehab, or the only problem with highway safety.
Does the tobacco tax go toward lung cancer patients? It actually goes towards funding campaigns that overstate (i.e., lie about) the dangers of smoking, to the point that people vastly overestimate them [1].
> Sin tax" is just a colloquial name for an excise tax, which a state has every right to enact.
Of course it's legal; it's just garbage policy. Sin taxes come from the pairing of politicians wanting more money with pearl-clutching interest groups pleading to think of the children.
Unfortunately it's not so simple. An individual's smoking and alcohol use can and does harm others, and the state levies excise taxes for that reason.
Another example is driving a car, which results in thousands of fatalities and many more injuries daily. Not to mention environmental impacts which affect others. The state chooses to require drivers to have insurance and their cars to pass smog tests, rather than outlawing driving.
> An individual's smoking and alcohol use can and does harm others, and the state levies excise taxes for that reason.
Smoking and alcohol use can also not harm others. Should those who smoke and drink responsibly be held responsible for those who don't? How does the tax ameliorate those harms?
For everyone responding that smokers cost the government money: it is actually the opposite, in that they save the government money because on average they die sooner. From the Manning study: "In this analysis, the federal government saves about $29 billion per year in net health and retirement costs (accounting for effects on tax payments). These include a saving in retirement (largely social security benefits) of about $40 billion and in nursing home costs (largely medicaid) of about $8 billion. Costs include about $7 billion for medical care under 65 and about $2 billion for medical care over 65; the remaining $10 billion cost is the loss in contributions to social security and general revenues that fund medicaid."
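For clarity, the quoted figures reconcile as follows (all values in $B per year, as rounded in the study):

    # Reconciling the Manning study figures quoted above ($B per year).
    savings = 40 + 8        # social security benefits + medicaid nursing homes
    costs = 7 + 2 + 10      # care under 65, care over 65, lost contributions
    print(savings - costs)  # -> 29, matching the quoted $29B net saving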
Presumably COVID also saves the government money, then? It mostly kills the old who have already paid into the tax system their whole working lives and are now drawing from it. And it mostly kills the chronically ill who need more tax support than they contribute. It seems terribly cold and callous to look at it this way though, e.g. when a son is holding his mom's hand in the hospital who is dying of lung cancer, to go up to the son and tap his shoulder and whisper, "Hey kid, cheer up, uncle Sam saved $8 bil on medicaid nursing home costs 'cus mommy here couldn't stop sucking nicotine sticks."
> Presumably COVID also saves the government money, then?
It most certainly does. The retort your parent made is for those who make the argument that a tax is necessary because X (smoking, in this case) costs the country economically.
If you want to make the purely _economic_ argument, it's a benefit to the bottom line.
That's not what a sin tax does. You are still free to smoke cigarettes or drink alcohol, and were we to tax social media usage, you would still be free to use or not use that.
But a sin tax ostensibly accounts for the economic externality*. We know that cigarettes impose a cost on society beyond the individual smoker. I'm all in favor of making people pay for things that we know cause damage to society more broadly. And I hardly think it's controversial that social media is in many aspects harmful to society.
*Sin taxes are technically different from Pigovian taxes, but I (and I think most people) tend to use the terms interchangeably.
> We know that cigarettes impose a cost on society beyond the individual smoker
What's that cost?
From what I've read, all economic costs smokers impose on society are more than made up for in their dying early, they actually cost less [1]. I guess everyone should smoke to save the state money!
Neither you nor the government/state decides what you get to do with your mind. An advertising company decides what to do with it and can manipulate it however best benefits itself. Not you, not society: Facebook, and whatever makes Facebook the most money.
It’s not the state “deciding”, it’s the state requiring compensation for the negative externalities created by the product. You’re more than welcome to smoke cigarettes if you so choose. But that decision isn’t made in a vacuum, and it impacts the rest of us in the form of increased public health burden, insurance costs, secondhand smoke, etc. A “sin tax” serves not only to discourage the asocial behavior (we’d have a big problem if everyone made the same choice) but also to make you pay your fair share of the costs of your decision.
Curious, do you not wear seatbelts too? Opt for asbestos insulation since it's better than anything on the market today? Plumb your home with lead since it's more durable and flexible? Use leaded gas because it's better for your older engine?
The state acts on the collective when the public is not making good decisions for themselves and is causing net harm to themselves, usually with the public paying the price. Sometimes that's overt, like with death rates from accidents without seatbelts, or cancer from asbestos exposure. Sometimes it's less overt, like the behavioral issues, increased incidence of mental illness, and crime rate increases from leaded pipes and gasoline.
I'm willing to bet social media causes net harm. It hasn't enabled communication that wasn't possible before; if you can get access to a Facebook account, you therefore have email and access to IRC. But it has cost probably trillions in lost productivity from people staring at it during all their idle time, plus the cost of treating mental health issues that wouldn't have cropped up without toxic social media culture.
I say we have these companies pay for these externalities if they are forcing us to pay for them otherwise. By not passing a tax on externalities like this, the state is deciding that I need to pay for facebook's ills on society whether I use the service or not, which should anger you as a libertarian as much as it angers me as someone on the left.
It's not only the age restriction; there are restrictions meant to curb at least some abuse. Being drunk in public is a crime, establishments technically aren't allowed to overserve patrons who are very drunk, you can be tried for manslaughter in the worst case if you force someone to overconsume and they die, etc.
I wear my seatbelt, I don't smoke, I don't drink and I'm vaccinated.
Everyone keeps talking about these "negative externalities" without being specific. Why not just make the societal harm illegal and let people hurt themselves without buying permission from the government?
We require driving licenses, age restrict operation of vehicles, require vehicles to operate within parameters (speed limits, gross vehicle weights) and according to standards (traffic signals and markings), and prohibit operation while under the influence of decision or reaction-impairing substances.
Because these are all statistical precursors to accidentally killing someone with a car.
Texting while driving, while illegal (everywhere by now, I assume), causes more accidents than driving under the influence (both in total numbers and, apparently, per capita). Should we tax text messages?
So you get to ignore all the responsibilities that come with your rights so you get to clog up our hospitals with your bad decisions?
How about you take full responsibility: you get to not put whatever in your body, and you agree never to take an ambulance ride or be treated by a hospital.
Don't want the vaccine? That's fine, it's your right. But now when you can't breathe, nobody's coming to help.
> So you get to ignore all the responsibilities that come with your rights so
Absolutely not. I know lots of people who manage to drink alcohol responsibly. They never drink and drive, don't regularly overindulge, and it makes their and their peers' lives _better_.
What negative externality are they paying for with alcohol taxes?
> How about you take full responsibility: you get to not put whatever in your body, and you agree never to take an ambulance ride or be treated by a hospital.
> Don't want the vaccine? That's fine, it's your right. But now when you can't breathe, nobody coming to help.
If I pay for health insurance, I'm already taxing myself in this instance. It would be perfectly reasonable for a health insurance company to offer incentives for people to be vaccinated just like they offer incentives to non-smokers.
If the government wants to start providing that healthcare, then they can have a say in the cost of poor health decisions.
I don't think number of users is the metric that really matters. I care a lot more about market cap because money is what gives corporations greater leverage to do bad stuff.
Public algorithms, or at least some 3rd party review. Ban infinite-scroll on social media platforms. Require feeds to be configurable (users can set to "newest first" or "top picks" or whatever else). I'm sure I could come up with more, that's just off the top of my head.
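To make the configurability idea concrete, here is a minimal sketch of a user-selectable sort in Python; the field names ('created_at', 'score') are just assumptions about what a post record would carry:

    def render_feed(posts, mode="newest"):
        # "newest": strict reverse-chronological order
        # "top": highest-scoring posts first (e.g. like count)
        if mode == "newest":
            return sorted(posts, key=lambda p: p["created_at"], reverse=True)
        if mode == "top":
            return sorted(posts, key=lambda p: p["score"], reverse=True)
        raise ValueError(f"unknown feed mode: {mode}")

The point is that the sort key is explicit and user-chosen, not inferred from engagement data.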
These seem like awkward things to encode into law in a durable way. Laws are long-term blunt instruments, banning something like infinite scroll will have all kinds of unintended consequences.
That's true - these things might be better implemented as regulations out of the Executive branch - but that would still require legislation authorizing somebody to implement the regulations.
State governments are much more interested in participating in the cable news culture wars to pump up next election's numbers than they are in actually governing. I doubt it.
Have you ever watched a teenager (or addicted adult) scroll through their IG feed? It's disturbing. They just scroll and scroll and scroll waiting for the tiny little dopamine hits. I don't know if a "Next" button fixes it completely, but it almost has to be better, even if only marginally so.
> I would rather take the stance of feed algorithms and moderation logs be PUBLICLY available. Transparency instead of censorship.
I think that removing CDA Sec 230 protections for algorithmically-curated feeds is the answer. It's one thing to have a basic FIFO feed, it's another to hide or reveal content based on your own internal engagement/revenue targets. At the point where you're crafting bespoke engagement-maximizing feeds, you're creating a gestalt creative work that has a life of its own and that is no longer merely a passthrough for the works of users of the platform.
In other words, consider Facebook's curated feeds as Facebook's speech, not only the speech of their users, so Facebook faces liability for that speech.
I agree. Especially since political ambition won't be to curb misinformation, it will be to get control about which misinformation will be spread. Harm to users will be an excuse and like misinformation it will be quite difficult to quantify.
I considered the widely-used 'hot' algorithm and I don't think that really qualifies as a 'creative work' on the part of the site. The 'top from last hour/day/month/year' and 'hot' algorithms are really elementary sorting methods that are in the same league as FIFO.
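For reference, here is roughly what a 'hot' ranking looks like, modeled on the version Reddit open-sourced years ago (the magic constants are from that code); it is a few lines of arithmetic, identical for every viewer:

    from math import log10

    def hot(ups, downs, created_at):
        # Log-scaled vote score plus a linear time bonus;
        # nothing here depends on who is looking at the feed.
        score = ups - downs
        order = log10(max(abs(score), 1))
        sign = 1 if score > 0 else -1 if score < 0 else 0
        seconds = created_at - 1134028003  # Reddit's epoch offset
        return round(sign * order + seconds / 45000, 7)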
> Who gets to decide what algorithms are simple enough to be legal? You? Why?
I'm not talking about the legality, I'm talking about the liability. If one user posts defamatory/libelous speech and Facebook decides to blast it to 1 billion users because their secret sauce algorithm found that it really cranks the revenue dial, Facebook should share some culpability in that defamation lawsuit. A basic 'hot' algorithm that's not designed by squadrons of PhDs running machine learning on user behavior analysis isn't really the same thing, culpability-wise, imho.
> basic 'hot' algorithm that's not designed by squadrons of PhDs running machine learning on user behavior analysis isn't really the same thing, culpability-wise, imho
A "hot" algorithm can blast high engagement libel in front of people as well as a fancy algorithm.
If the "hot" algorithm and the fancier algorithm are both content neutral, on what basis can you distinguish the two as a matter of law?
Does the hot algorithm become illegal if a PhD implements it? I'm at a loss about what distinction you're actually trying to draw.
Your post, like many others on this thread, isn't articulating exactly what it is about FB's conduct that you find objectionable.
Illegal, liable, doesn't matter --- you want to use the state to drive certain kinds of ranking off the internet.
Fine. What, precisely, is the line between algorithms acceptable to you and algorithms not?
What is the precise conduct that would make liability attach to a ranking algorithm? You can emote, but you can't describe what exactly it is you would turn into a law.
> You can emote, but you can't describe what exactly it is you would turn into a law.
I thought I made it pretty clear from the outset I was talking about removing CDA Sec 230 protections for sites using bespoke (i.e. proprietary) curation algorithms for their feeds.
No, actually, it's not any of these things. It's on HN that dang applies special rules to your comments if he doesn't like you. FB's algorithm is impersonal.
Your list of criteria is legally inactionable. Neither you nor anyone else can write down what precisely it is that FB should be prohibited from doing. All I see is a bunch of unproductive rhetoric about how this is bad or that is bad. Blah blah, filter bubbles, moderation, whatever.
What is the exact definition of the criteria you would use to prohibit ranking algorithms?
> Why the hell shouldn't I be able show ads to people who want to see them?
Nobody's stopping you. If you want to show ads to people interested in football, post ads on a football site. If you want to show ads to people interested in horses, show them on a horses site. If this type of non-invasive targeting to display ads doesn't make you as much profit, well that's a you problem.
> HN is an algorithmic feed.
You seem to be struggling with the idea of a 'bespoke' algorithm where everyone's algorithm is different, informed by an internet-wide surveillance system. This is the difference between HN and FB.
Targeted ads. Targeted. As in specifically and exquisitely targeted to individual people based upon data gathered from surveilling everyone.
Still legitimate, and maybe even encouraged, are contextual ads, banners, sponsorships, branding, swag, paid placement, influencers, misc calls to action, etc.
Please rest your concerns. Late stage capitalism will continue unimpeded after Facebook's wings are clipped. Social media will revert to less noxious enterprises, like metafilter and craigslist.
> HN is an algorithmic feed.
Opt-in. Transparent.
What is FB's equivalent to HN's /newest?
I'm not crazy about HN's ranking. It's pretty good, non-toxic, seems fair. But someone other than me is making judgement calls.
I much prefer client side (user controlled) filtering and sorting, like with any generic RSS reader.
Repeating myself: All HN visitors see the same feed. A crucial qualitative point you pointedly do not acknowledge.
Ad targeting is just a bugbear for a section of the tech activist crowd. There is nothing fundamentally wrong with a site like Twitter noticing that I tend to post a lot about cats and showing me cat ads as a result. No, it's not self-evident that targeting is bad, and no, behavioral targeting doesn't require "surveilling" everyone internet-wide.
> Late stage capitalism
I don't subscribe to the neo-Marxist worldview that uses this "late stage capitalism" frame all the time. Capitalism is the natural order of the universe, not some temporary aberration on the way to your Utopia.
> Opt-in. Transparent.
Not transparent, and hardly opt-in: HN feed ranking is the default.
> I much prefer client side (user controlled) filtering and sorting, like with any generic RSS reader.
Okay, so people seeing different feeds is good...
> Repeating myself: All HN visitors see the same feed. A crucial qualitative point you pointedly do not acknowledge.
... and now people seeing different feeds is bad.
Make up your mind.
Different people should see different feeds because they're interested in different things. There's nothing sinister or nefarious about showing people stories about topics that interest them and not showing them stories about topics that don't.
I think even you'd agree that someone should be able to opt into seeing stories about "cooking" and opt out of stories about "motorcycles". Okay, that's good, right? Now what's wrong with using ML to infer, based on what someone reads, that he's interested in "cooking" and not "motorcycles"?
This interest inference is what FB is doing. There's nothing bad about it.
I maintain that the criticism you and others are leveling against algorithmic feeds is logically incoherent, emotionally rooted, and unworkable as public policy.
> This interest inference is what FB is doing. There's nothing bad about it.
And circling back to the original article, there is in fact something bad about it. One guy started out with a site to rate the attractiveness of women at Harvard, and now my mother is probably going to die because his customized algorithm found that showing her lots of anti-vaccine misinformation would maximize his profits.
I think that's the right approach. Legally you could require every social media company that collects and sells data on its users to advertisers to allow the users to access their internal algorithmic interface (for their own account).
Now, what controls are on the internal algorithmic dial? Apparently that's top secret, but a legal requirement to expose the interface to the users seems reasonable.
Note that this might not affect what ads you get served (that seems more on the private business side, although banning prescription pharma ads makes sense), but it would affect what shows up in your feed, what content you get served, etc. You could write your own exclude lists, for example (i.e. if you never want to see content from MSNBC, FOX, or CNN, that would be your decision, not the algorithm's).
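A user-authored exclude list is trivial once the interface is exposed. A sketch, assuming each feed item carries a 'source' field naming its publisher:

    def apply_exclude_list(feed, excluded_sources):
        # Drop any item whose source the user has opted out of.
        return [item for item in feed if item.get("source") not in excluded_sources]

    # e.g. apply_exclude_list(feed, {"MSNBC", "FOX", "CNN"})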
If you get too big, you can't buy your competition (e.g. FB buying IG). Or if you get too big, you have to open your stuff up like email does. Or if you lie to congress, you get penalized. Or if you get too big, you have to make your algorithms publicly available.
I think GP is referring to the fact that email overall is a system that is based on public standards and open to new entrants. You can start Hmail.com if you want, and plug into the existing email eco-system as a new competitor very easily.
The social media ecosystems aren't like that. You can't be a chat provider and plug into FB Messenger; you can't plug into Twitter, etc.
There is an open social media eco-system called the fediverse (for its federated nature), in which Mastodon is the best-known player. But it's gotten very limited traction, because of the network effect that keeps people on FB and Twitter. No such effect keeps people on Gmail.
Email, not Gmail. I can email people from my provider even if they use other providers, including people who self host. And I can get email from them too.
I would ban algorithmic targeted media -- i.e. no personalized feed based on an "engagement" algorithm; for social media, you would just see a chronological feed of posts from the people you follow. This is the most addictive and radicalizing part of social media, and the most lucrative. Much like the nicotine in Big Tobacco's case.
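Under that rule, feed construction collapses to something like this sketch (again with assumed field names), with no engagement model anywhere in the pipeline:

    def chronological_feed(posts, following):
        # Only posts from accounts the user follows, newest first.
        mine = [p for p in posts if p["author"] in following]
        return sorted(mine, key=lambda p: p["created_at"], reverse=True)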
Could I write my own algo for personal use? Could I hire someone else to write that algo? Could I share it with others? Could I start a company to sell it? If it gets too popular, will you try to ban it too?
>> To clarify, can you specify exactly what law you would like made? What do you want to be done exactly?
Honestly, social media issues are for the most part a parenting issue. If you don't have access to your kid's phone, or don't know what platforms they are on and who they are talking to and what they're sharing, I'm not sure legislating social media is going to do much of anything. New platforms will pop up, more private networks will be started, and suddenly everything becomes too fragmented to really oversee.
I would create laws that have teeth and address issues like bullying, doxxing, SWATting, and other ways people weaponize social media against other people. You start to put some teeth into laws where people face serious consequences for bullying and pushing people to suicide, and then you might see some changes.
> You start to put some teeth into laws where people are facing serious consequences for bullying and pushing people to suicide
Counterpoint: kids aren't all neurologically and socially developed enough to understand life-altering consequences for certain actions, and that's not their fault. Legal codes and law enforcement are too crude in most child-related cases, unless you're okay with incarcerating misbehaving children.
It's on adults to make sure things kids can reach are reasonably safe for—as well as from—them.
Counterpoint to your counterpoint: Almost all kids are neurologically and socially developed enough to understand they'll be picking up cigarette butts and cleaning graffiti on the weekend if they start bullying someone. It's quite common for American schools to enact "zero tolerance" policies for any physical altercation that, barring criminal charges, impose the same punishment regardless of severity and irrespective of who the aggressor was. The policy is literally to give the victim the same punishment in the name of "fairness". Now there's no reason not to escalate, and it incentivizes the victim not to report it. Even with the same punishment it's still skewed towards the bully, as odds are they're probably not going to care as much about a couple days of suspension, and most bullies aren't at any serious risk of expulsion.
School administrators don't care about bullying and teens being driven to suicide; they care more about liability than fairness. Taking the King Solomon approach in public schools is abhorrent and unjust. Kids do dumb things, but it gets so much worse when incompetence and callous disregard create perverse incentives.
In my opinion, if social media is as important to society as it seems, there should be a government-funded social network for users and businesses, in the USA something like NPR and PBS.
The problem is companies selling people's data and optimizing the algorithms for profitability. Instead, what if everyone paid some taxes to have a social network which helps people interact and businesses promote themselves? You could get rid of the ads AND the algorithms. Let users customize the settings which dictate the algorithm.
You had better be sure the incumbents would fight tooth and nail to make this look like a very unattractive idea, and lobby heavily to make sure it never happens.
> I would rather take the stance of feed algorithms and moderation logs be PUBLICLY available. Transparency instead of censorship.
Ok, let’s say that’s now the case. The FB source is now open. What changed? Any negative consequences are still occurring, and if anything, we've just given bad actors better visibility into what to exploit.
A lot of these issues seem difficult to regulate but one that seems more realistic is usage by minors.
What if social media platforms required all minors to have their account associated with a parent account? The parent could monitor activity, institute time limits, etc.
Minors don't use FB much anyway. It's more TikTok now. And of course no minor would use an app monitored by her parents; she will immediately switch to another app.
Sorry, should have clarified: I was suggesting that if the government decides to regulate it should apply to all social media platforms, not just FB. Updated the original comment.
People seem to dislike this suggestion, but while I believe it is unrealistic to check for age on the internet, this idea has some merit.
Platforms that harbor minors and adults together will have to have different rules than platforms just for adults. But you also cannot make everything fit for kids. Government would actually try to do just that since ensuring safety here is their mandate. So a solution must be found. Normally minors should be supervised, but that is not trivial and you don't want constant surveillance.
Make spying on people illegal, even when a computer does it to billions of people rather than one creep doing it to one person. If you have to collect info about people to provide a product or service, make it strictly illegal to transfer or sell that info or anything derived from it. Don't like it, get into another business. No one's making you collect people's info. Yes, this should apply to e.g. credit card companies, not just big tech. This'd need some fine points hammered out (don't laws always?) but it's not that crazy.
Do something to make platforms responsible when their "algorithms" promote something. Not just hosting it, but when they promote it. Don't like it? Don't curate, then, or have a human do it so you're sure nothing you're deliberately promoting is shitty enough to land you on the wrong end of a lawsuit. "But how will tech companies show every visitor a totally different home page of content they're promoting (but in no way responsible for), and how will Youtube find a way to recommend Jordan Peterson and Joe Rogan videos next to every damn thing? How will tech companies make every part of their 'experience' algorithmically-selected, personalized recommendations of content they farmed from randos?" They won't, they won't, and... they won't. You're welcome.
Make data leaks so cripplingly expensive that no company would dare hoard personal data it didn't absolutely need to get by.
Force the quasi-official credit reporting agencies not to be so shitty. In particular, "freezes" should be free and should be the default, alerts for activity should be free, and access to one's own info should be on demand at any time, not once per year per agency. Or just outlaw the bastards completely, IDGAF.
I dunno, lots of things we could do to make the current personal data free-for-all less hellish.
> This'd need some fine points hammered out (don't laws always?) but it's not that crazy.
It sounds like you're suggesting GDPR style regulation. They're still figuring out how to enforce that but generally I support it. Too much money is against it to get anything passed in the US, though.
Another problem is that the US government seems to like when the tech sector gobbles up data on people. It gives them new powers for social control.
Freedom of speech also includes freedom from being compelled to speak of things you don't want to, so forcing companies to make their recommendation and moderation systems publicly visible would be even more of a free speech issue than expecting companies to moderate violent, hateful, or deliberately misleading content.
I absolutely disagree but I'm upvoting anyway because it's an argument I haven't seen before with regards to making algorithms public and god knows the discourse could use some variety.
That being said. No. This is no more a free speech issue than forcing food manufacturers to make their ingredients public.
Can you cite some case law that bears out this argument? While I agree that your point is true in the most general sense, we compel companies to make their internal information public fairly regularly via various mechanisms (admittedly, none of which are 100% analogous to the FB/social media situation).
It's astounding to me that too little censorship is characterized as "antithetical to our values", "highly dubious ethically", and worthy of potential legal sanction in the top-ranked comment on HN.
Is it too little censorship or rather amplifying problematic things and suppressing heathier things because of perverse incentives? FB and Instagram timelines are not raw feeds from ones friends/follows. They are tuned by human calibrated algorithms.
Kids need to eat vegetables and lean protein sources. But if school districts instead optimize for profit they may end up feeding the kids borderline poison like sodas and candy. When companies come to dominate a public space, like huge parts of digital comms, then maybe it's OK to demand more responsible behavior of them.
> Kids need to eat vegetables and lean protein sources. But if school districts instead optimize for profit they may end up feeding the kids borderline poison like sodas and candy. When companies come to dominate a public space, like huge parts of digital comms, then maybe it's OK to demand more responsible behavior of them.
Adults are not children and social media sites are not school districts.
The school district analogy also doesn't really hold up on its own terms unless you're talking about boarding schools, which you probably aren't given the term "school district". When I was a kid, I ate at least 2 out of 3 meals at home, and more often than not, I brought a sack lunch. I know that poorer kids rely on school lunches a lot more than I did, but that's still just one meal a day. My high school actually did have a Coca-Cola machine, but I think that's old enough for kids to start making some of their own life decisions, like whether or not to have a Coke with their lunch. I mean, high school is around the same time that students start planning for their future career and/or higher education, so if you can be trusted to decide between taking vocational classes and fulfilling college admissions requirements, I think you can also be trusted to decide whether or not to drink a Coke. 14 isn't that far off from 13, which is the legal minimum age to get a social media account.
Also, unlike going to school, nobody is forced by the government to spend multiple hours a day using social media. Of course we regulate schools. We also regulate prisons to make sure that prisoners are humanely treated, or at least we're supposed to. The better analogy isn't school districts but convenience stores, in an alternate universe where children under the age of 13 were prohibited from entering convenience stores and some people were complaining that still wasn't enough.
> Adults are not children and social media sites are not school districts.
> The school district analogy also doesn't really hold up on its own terms unless you're talking about boarding schools
Non-technical adults don't understand the minutiae of algorithms or the tuning of social platforms designed to manipulate them. These platforms control how over 2 billion people communicate. Once the networks become entrenched people can become essentially locked in, or risk becoming ostracized from support groups if they don't want to play along.
What people choose to "amplify" is none of the government's business. People are allowed to be wrong. Yes, even if you think it's about something really important.
What if it is not censorship, but rather we apply the same standards to the Facebook feed that we have for newspapers? If a newspaper publishes something false that hurts someone, that person can sue the newspaper, because editors should have caught the false information.
Facebook or social media algorithms now optimize feeds for engagement even if information is false or harmful. To me algorithms are new editors. So people who are hurt by these algorithms should be able to sue companies that run these algorithms.
If social media companies want section 230 protection, then they should not use any form of algorithms. Show everything without prioritizing anything.
Newspapers are a poor analog to social media for this exact reason. A newspaper is the same newspaper for everyone.
The better analogue to social media, at least from the user’s perspective, is direct mail. If someone mails out libelous political fundraising letters, the liability is on whoever wrote the letter, not the postal service. The only difference with social media is that social media has ranking algorithms, but that’s because the volume of social media far outstrips the volume of direct mail. Even Gmail uses algorithms to filter your various inboxes.
If social media worked the same way as direct mail, you’d basically be forced to dig through every social media post within your subscriptions at random to make sure you didn’t miss anything you wanted to see. Which means the primary effect of the algorithm is subtractive. The algorithm sometimes adds content you weren’t otherwise subscribed to, but most of the time, it works by hiding content you are subscribed to. The primary complaint seems to be that Facebook isn’t hiding enough content.
Finally, I think you’re understating the problem with mainstream media. Settling defamation lawsuits out of court, on the rare case that false and harmful news media actually crosses that line, does not even come close to undoing the initial damage. Especially since under US law, public figures—who are the primary targets of media coverage—have a much higher burden of proof to sue for defamation. (Whenever politicians propose reforming these defamation laws, these mainstream media outlets respond with passive-aggressive antics like printing “Democracy Dies In Darkness” on their mastheads. I bet Zuckerberg wishes he could get away with that kind of thing.)
Facebook is essentially a media company. All of the ad revenue and none of the regulations or responsibilities. Facebook may claim otherwise— but if it looks like a duck …
It's astounding that something like a chronological timeline free of Facebook's outrage algorithms is portrayed as censorship.
And honestly, the defense of Facebook's unethical behaviour follows a real simple pattern at this point: shift blame to the users and make it look like Facebook critics want "more censorship".
Facebook shareholders making a few dollars less (if that's even the case) is absolutely not my problem.
> make it look like Facebook critics want "more censorship".
Unfortunately a lot of them do plainly want more censorship. Worse, they want it for the whole net. Many people always advocated to be careful sharing your life with facebook. They were completely ignored. Now when the users understood they have no control about their own data and the information that gets shared, they want to make the whole net happy with content regulations with the help of the state.
Facebook doesn't care much about ethics; that is true of most companies. But the users certainly had a choice here. An exception might be people in the media, which has itself become dependent on Facebook in the meantime.
One fairly common pattern is that companies develop in a nascent space where there are few rules and are therefore able to basically outrun regulation and the law, which move very slowly. When that regulation eventually comes, it ends up solidifying the monopolistic advantage by essentially creating a moat and closing the door on the practices that helped create such growth in the first place. I think when the stakes are that high, companies are generally rewarded and incentivized to be unscrupulous rather than virtuous, especially when the unscrupulous actors become wealthy enough to buy out the virtuous ones.
I wouldn't be surprised if we are currently in the middle of a version of this regarding social media and how privacy of personal information is handled right now.
What would a developed civilization do? I doubt we would be able to prevent the "bubble up and close the door" behavior, so should it instead ensure that corporations are regularly rotated (i.e. dismantled for others to take the space), so that only those which can succeed fairly within the current legal framework survive?
This argument is brought up a lot, but it seems the lack of regulation hasn't really stopped FB and Google from monopolizing their markets anyway (or oligopolizing, if we think they're in the same market).
The internet was funny that way. In the case of Uber, the nascent space was virtual, but that was enough for them to claim the ability to play in the old, physical space with a fresh, empty rule set.
Not everyone has the same moral compass. It isn’t even clear that the whistleblower herself was guided by concern for society: she first went to the SEC with these complaints. That’s a weird place to go with concerns about social media’s impact on society. I wonder why? Perhaps…just maybe…it was because the SEC will give her 10-30% of any fines levied against Facebook, leading to a potential windfall of $1 billion or more to her personally.
It takes truly egregious behavior for society to agree that new laws must be passed to outlaw it. The current state of social media says much more about human behavior than Facebook’s behavior. Everyone here would also almost certainly reject the kinds of laws that would be required to make Facebook/IG a healthier place. They would likely involve serious privacy violations, just for starters. So given that legislation in this area has almost no chance of passing, it is unclear what the point of this is, other than a huge payday for the whistleblower.
How about: she went to the SEC because, for certain things, that is the right place to go? And the SEC would have to fine FB first, which would mean that FB committed illegal acts. That alone is behavior that needs to be encouraged; exposing corporate wrongdoing is a net positive for society.
The SEC will make extremely vague allegations that Facebook misled investors by not disclosing some of these reports. Facebook will settle to avoid further reputational damage, paying large fines without admitting wrongdoing. This woman will then buy an island to vacation on and a G650 to get there with. That is how 99% of these things play out.
I personally don’t believe that Facebook has done anything illegal here. That is not to say I don't think they have done anything wrong - their business, like many others, is morally bankrupt in some ways. But there is no codified responsibility for Facebook to do anything to cure the ills of social media. You don’t see casinos being successfully sued for causing suicides, bankruptcies, divorces, financial crimes, etc., but it happens every day. That’s because there is no law against being in a scummy business. Investors in such businesses know (or should know) what they are supporting.
Not that I disagree about the monetary award, but the SEC is one of the federal agencies that has actual teeth these days. Where else would she go, the FTC (ha!)?
It really bums me out to see these sentiments at the top of HN. What to do about "misinformation" is an interesting question for private actors to think about, if they wish, but what the government should do about it is not an interesting question. The debate has been had for a couple hundred years, already. It's over. One side already won.
Enron is another fascinating example: there is an interview with the former CFO where he talks about what he calls "legal fraud", practices that are highly dubious but technically not illegal [1].
Legal/illegal is only a binary variable in theory. In practice there are things that are clearly white, things that are clearly black, and lots of grey in between. There are many concrete examples regarding accounting rules mentioned in the interview.
The question is where does a court draw the line, and as parent rightly points out, sometimes code/case law changes after the fact.
I think the parent is making a point about definitions - that the word fraud can, by definition, only refer to illegal actions. I think this is an overly literal interpretation of language, though.
Facebook may not be doing anything illegal, but it is immoral. While morality is subjective, and not enforceable, the public needs to know what is happening, so they can make their mind about supporting a given company.
I agree; legislating morality has never worked. However, legislation to inform the consumer has worked.
Social media should be forced to inform the consumer when/how they're being targeted. When a user is shown 15 pieces of content it should be crystal clear the platform is trying to tease out an emotional response from them and not just showing them their friend's posts. Maybe a warning label like "This content was algorithmically curated to elicit the maximum emotional response from you".
"In response, Congress passed the Sherman Antitrust Act in 1890 which subsequently prevented the actions that Standard Oil had used to conslidate the market."
This tech company employee (aka the "Facebook Whistleblower") is refusing to share the documents she stole with the FTC.
Although she did share them with several attorneys general.
It appears she does not support antitrust inquiries. Heavy consolidation of "social media" with no meaningful competition is acceptable to her.
Needless to say, some would argue competition provides incentives for large players to improve their services.
Standard Oil reduced the cost of oil, delighted customers, and had already suffered a massive decline in market share by the time the antitrust action happened. It was driven by people who couldn't compete.
As someone who has studied this fairly extensively, I believe this comment to be factually correct.
It also didn't hurt Rockefeller in the slightest. To the contrary, he actually became far wealthier post-breakup, possibly because all former business units became more efficient in the light of open competition.
Many of them live on today, such as Exxon, Chevron, and Mobil.
In general, trustbusting almost never actually works as planned, but it always seems like a good idea -- a desperate solution, perhaps the only possible solution -- at the time.
The only thing that tends to work is upstart competition driven by new technology that blindsides the older company.
When it comes to a monopolist, the one thing that we can say historically is that, "This too shall pass."
I was never aware that the purpose of anti-trust legislation was to hurt people (e.g. Rockefeller) or prevent them from making money. I thought it was to promote competition, which it apparently did.
>possibly because all former business units became more efficient in the light of open competition.
Then it sounds like it did work in the eyes of the people who wanted more efficient corporations (and thus potential savings to be passed down to them).
Illegality is dangerous to define and enforce when it comes to speech. Especially if Glenn Greenwald[1] is right that this is not an attempt to weaken facebook, but to commandeer its power to censor.
Facebook's Director of Policy Communications, Lena Pietsch, just called for "standard rules" for the whole internet, increasing the danger, because whoever got that power could widely censor unwanted popular dissent and counter-narrative viewpoints. Facebook already caused damage doing this by censoring the lab leak hypothesis, which delayed potentially lifesaving insights.
The whistleblower also has a political connection that should be investigated: the law firm representing the whistleblower also represents White House Press Secretary Jen Psaki, and the whistleblower's lawyer, Andrew P. Bakaj[2], was the principal attorney representing the whistleblower who filed the initial complaint that led to the Trump impeachment process.
This is much more concise reporting on what is happening here and makes much more sense.
I disagree that hiring a lawyer must mean there is a political connection, but the way the media frames the issue is again predictable, and I personally would frame this as misinformation.
We also have seen no documents. Blindly believing media isn't good advice if the problem is allegedly misinformation.
This is an interesting and somewhat irritating take by Greenwald. I'll just offer a counterpoint to one point he made, one where I was careful at the time to get a grasp of what was going on:
Greenwald [Today]> Their control over multiple huge platforms that they purchased enables them to punish and even destroy competitors, as we saw when Apple, Google and Amazon united to remove Parler from the internet forty-eight hours after leading Democrats demanded that action, right as Parler became the most-downloaded app in the country
Greenwald [Jan 12]> The day after a united Apple and Google acted against Parler, Amazon delivered the fatal blow. The company founded and run by the world’s richest man, Jeff Bezos, used virtually identical language as Apple to inform Parler that its web hosting service (AWS) was terminating Parler’s ability to have AWS host its site: “Because Parler cannot comply with our terms of service and poses a very real risk to public safety, we plan to suspend Parler’s account effective Sunday, January 10th, at 11:59PM PST.” Because Amazon is such a dominant force in web hosting, Parler has thus far not found a hosting service for its platform, which is why it has disappeared not only from app stores and phones but also from the internet.
This is kind of along the right lines but gets the details wrong. Greenwald is intelligent, insightful, has good sources and is not afraid to annoy people, so it is tiresome that he has a bad habit of fudging details to make a shiny claim that is not quite right. With another journalist, I'd think what happened is the journalist was under pressure, so wrote up a story whose details they didn't quite follow, but with Greenwald, I am not sure what is going on. My least unlikely guess so far is that he rewards his sources with sympathetic coverage, which in this case he gave to Parler executives.
Some thoughts:
The big truth: The triple whammy from three of the four biggest tech firms, all of which have sophisticated lobbying operations, did indeed effectively cripple Parler.
The biggest error: None of the three companies were in competition with Parler. In my opinion, the businesses of both Google and Apple would benefit from a strong rival to Facebook and Twitter; Amazon has a close relationship with Twitter, so this is a little less clear, but on the face of it AWS should normally benefit from hosting a client. This was pointed out by many commentators on the Jan 12th post, but he repeats it today.
Continued mythmongering: Greenwald's Jan 12th story essentially misses the backstory of Parler's bad relationship with Amazon: Parler had for several weeks essentially refused to answer demands that it comply with the TOS. There is competition in the hosting business (Greenwald's characterisation of Amazon's dominance on Jan 12th was itself an error), but I don't think any major hosting provider - e.g., neither Digital Ocean nor Hetzner have major lobbying operations - would have left the servers up under the circumstances. I commented on this at the time on the HN thread of Greenwald's story - https://news.ycombinator.com/item?id=25759115 - and to my knowledge Greenwald has not since given a more faithful account of Amazon's role.
Good point. The state doesn't decide what's "good" or "bad". We must not forget that.
Laws are useful but are an application of power. Power of who? Of the people? Of some bureaucrats? The answer will not be black and white in most cases, but what's important is not to regard any body that can legislate as a source of moral authority based on that power alone.
And one consequence is not to buy big corp arguments about evil practices "but it's legal!". Yeah, it's legal because it hasn't been outlawed _yet_, because of strong lobbying... b/c of whatever. Never relegate the moral judgement to just "it's legal/illegal".
Same case with Enron. After the Enron scandal, the Sarbanes-Oxley Act was enacted to strengthen financial reporting practices, along with a bunch of new modifications to the accounting principles and guidelines (GAAP).
The SOX Act was considered the greatest and most radical update to accounting since the invention of the double-entry method. Governance and stewardship had been theorized in accounting, but they were only pushed into practice with the SOX Act.
Similarly we had the Dodd-Frank act post the financial crisis of '08.
Essentially, law is a post-facto phenomenon. We cannot be sure what to write into a legal framework unless there is evidence of the need for such laws.
Hmm ... I'm personally getting tired of this "well, it's currently legal" line. Law changes start with moral indignation. We are at that junction now. Although accurate, let's park the legality arguments.
"In response, Congress passed the Sherman Antitrust Act in 1890 which subsequently prevented the actions that Standard Oil had used to conslidate the market."
This tech company employee (aka the "Facebook Whistleblower") is refusing to share the documents she stole with the FTC.
Although she did share them with several state attorneys general.
It appears she does not support antitrust inquiries. Heavy consolidation of "social media" is to her an acceptable status quo.
Needless to say, some would argue competition provides incentives for large players to improve their services.
I don't think she has really blown the whistle on anything yet. Since tech and Washington are close, I would expect access to said documents anyway, and they aren't available to the public. So blind trust is required; not a good recommendation if the problem is allegedly misinformation.
The problem is, very often the law is unable to keep up, especially on technical issues. And even when it does, sometimes is way too late. Gates knew it when he asked to implement the AARD code - yes, they were sued, but they settled out of court and made the competitor's product irrelevant. A lot of Microsoft behavior in the 90s was just this: they could get away with it, but it simply didn't feel right.
I think a lot of people are choosing to ignore that a lot of companies have done things in the past that were not illegal at the time of action. However, those actions were later decided to be made illegal because the behavior was deemed to be antithetical to our values.
Which, on good days, is why we have legislatures. To make new laws to cover new situations.
Law is always lagging behind social norms, for many reasons.
Because of that, I don't think laws can change the world; it's the opposite: laws merely acknowledge the common rules of the majority.
Technology also changes the world, and we need time to figure out new rules to adapt. In the case of Facebook and the other giants, some changes are clearly needed; at least there seems to be a growing consensus.
Laws can absolutely change the world (at least if we take the world to mean one country). Look at de-segregation. It's obvious that the world was segregated before the de-segregation laws were passed. Of course, the laws were passed because there was ample support for them, but that doesn't mean that the laws only acknowledged an existing state of affairs. Even more so, the laws themselves helped accelerate the perception of segregation as evil among the majority of the population, whereas before it was just a regular part of life to many (many on the good side of the segregated world, of course).
If Congress invites you to speak without fear of arrest and the mainstream media hold you up as a darling, then you are NOT a whistleblower. You are doing the bidding of power... If you are exiled to Russia (Snowden) or locked up without trial like Assange, THEN you are a whistleblower and dangerous to power. This chick is a shill.
Bingo. The whole thing smells fishy. Of course, the headlines were talking about how evil Facebook is because it makes kids feel bad. But digging deeper, you can see exactly how this is meant to play out: More censorship and control by the NGOs and "Fact-Checkers" (since the gov't can't publicly dictate what content Facebook is censoring). But make no mistake, the same players that are calling the shots in DC will now be ensuring the public doesn't see any of that dangerous "disinformation".
It was a good 30 years or so for true free speech that was always the promise of the internet. But those days are gone.
absolutely. To me it sounds like she is exacting revenge on FB for something, or maybe, as other commenters suggested, she is trying to get that whistleblower share of an SEC fine/settlement with FB, if one comes out of this story. Otherwise it is a rather strange change of heart on her part, from profiting off the users to suddenly being so caring about the users - I mean, she herself founded what is now the Hinge dating app https://foundation.mozilla.org/en/privacynotincluded/hinge/ :
"And like so many dating apps, Hinge asks users to connect their Facebook account to sign in to the app. Remember, when you connect a social media account like Facebook to a dating app, both Facebook and the dating app now potentially collect more information together."
"Hinge definitely shares user data with around 45 other Match Group companies, such as Tinder, OK Cupid, and Plenty of Fish among others. The company also shares data with third parties for purposes such as advertising and analytics."
Industrial ethics can be about ensuring long-term success as well. If a journalist can’t be trusted to keep an off-the-record source unnamed, then they will never get another story. They can tarnish their entire organization, so they don’t do it even if they could get one incredibly powerful story out of it.
Your point is a good one, but we should be careful not to equate [edit: completely] Facebook's actions with those of Standard Oil.
If we say that the Sherman Antitrust Act was only necessary because of unethical behavior on the part of players like Standard Oil, we cannot say the same in this situation.
If you consider Facebook's behavior unethical, how do you view the behavior of the millions of people who fund them and provide them such market power? There are many many alternatives to Facebook. But non-Facebook parties routinely force people to Facebook if they want to be involved in an event or receive a notification or provide feedback.
If you would shame Facebook for their behavior, you should also shame others for using Facebook. Users enable Facebook's behavior.
Conversely, if you hold Facebook users harmless, it is harder to sympathize with complaints about Facebook's behavior.
There are clear parallels between Facebook and Standard Oil. But it is useful to note where there are differences, too.
Standard Oil had gone from >90% market share to <60% by the time any legislation related to it was enacted. The legislation hurt its competitors, who were catching up using similar business strategies, more than it hurt SO.
Agreed. Facebook has a fiduciary duty to its shareholders to maximize profits. Unfortunately, it's up to the government to create legal boundaries prohibiting the means by which they can do that.
Frankly, I'm not expecting any meaningful legislative response, given US antitrust has blessed the WhatsApp and Instagram acquisitions by Fb as well as Google's acquisition of DoubleClick and YouTube.
If their internal docs show that they know their product causes harm and is engineered to be addictive, it could be a big tobacco moment from a mental health perspective.
> a lot of companies have done things in the past that were not illegal at the time of action. However, those actions were later decided to be made illegal because the behavior was deemed to be antithetical to our values.
What you are saying is literally the opposite of hundreds of years of the rule of law.
If FB did break the law as written, then prosecute them for that in a fair trial by a jury of their peers, but yours (or anyone else's) personal feelings about "our values" should never be able to override the plain language of the law, especially retroactively.
The point they’re making is you don’t prosecute Facebook for something that we think is unjust but is not yet illegal. You make it illegal, and then after that point if anyone continues with the now illegal course of action then you prosecute them. Prosecuting someone for something that wasn’t illegal when they did it would of course be wrong.
> I think a lot of people are choosing to ignore that a lot of companies have done things in the past that were not illegal at the time of action. (...)
The definition of what counts as good and evil does not come from what gets passed as legislation, and neither does a business's negative influence on society as a whole.
Legislation is also a moot point given that these mega-corporations actively lobby law-makers into not passing any inconvenient legislation.
>There should be no question, that what FB is doing here, while not illegal, is highly dubious ethically.
Why, what exactly are they doing that is ethically dubious? So far based on what I have read of this whistleblower's revelations, I do not have a problem with Facebook doing any of it.
Really? You don’t have a problem with an app that causes 1% of teens that use it to develop suicidal thoughts? By the way, according to the leaked study these teens directly attributed their suicidal ideation to Instagram.
Yes, I do not have any problem with it whatsoever. There are probably plenty of books that also cause some percentage of people to develop suicidal thoughts, but I do not want to start banning those books.
This is the kicker, I think. Facebook scales 'keeping up with the Joneses' and makes it easier. But that's been a common trope since (Google search... 1920ish). What Facebook's doing isn't new; it's simply easier.
When you say '1% develop suicidal thoughts': is that causation or correlation? Maybe I'm missing something, but this seems somewhat like 'biggest target' to me, as the world has shrunk.
There should be some laws about using addictive patterns, imo. I'm sure it's fine and profitable, and Coca-Cola would have been happy to continue putting cocaine into their drinks to make their customers want them all the more, but we have laws preventing that behavior in meatspace, and therefore we can have laws preventing this sort of evil behavior at technology companies too. Tie it into the website accessibility laws that are already codified and can be used to sue certain companies today.
In a sense, companies are right now incentivized to develop the most effective 'digital crack', because anything that hijacks the reward pathway of the brain more effectively leads to more profit. It'll be quite interesting to see how the public discourse around this progresses, since digital entertainment isn't as easy to publicly mark as 'bad' as drugs were.
On the other hand, China is sending quite clear signals that it's theoretically possible to legislate against e.g. video games -- though only after you've already established an intrusive 'social credit' system, which I hope we won't see in the west any time soon.
And I would like to have a word with them. They've been given free rein to turn our kids into absolute digital junkies (this is coming from a self-diagnosed sometimes-addict who realizes these kids are on another level), deliberately dangling carrots that reward 24/7 engagement in the activity.
> digital entertainment isn't as easy to publicly mark as 'bad' as drugs were
Definitely true. We need a way to differentiate between Super Mario Brothers and Mega Crack Force Gacha Legends Online.
Completely true; I think the mobile sector is the worst offender. That said, people can perfectly well spend insane amounts of time on titles that don't employ these mechanisms. But the industry has gotten far worse in recent years.
I agree completely. I've had the chance to slowly grow up with video games and witness their evolution, and still it's really hard to withdraw myself from the allure of 'just one more round of Apex Legends' and the like. And why should I try so hard, they are great games, after all!
It's decades of development in 'addictiveness tuning' unleashed upon the brain as a stationary target... I really feel sorry for the kids that never knew anything else.
Digital crack is a perfect way to describe this. I'm sure someone clever enough can write some great legislation for this. The issue is that so many industries are beholden to digital crack. You might get one senator who wants this, then 99 others who are getting flooded with calls from every major employer in their district telling them to vote no. I wish we had a stronger government that wasn't so susceptible to having anything good for the public exploited to make a few people very wealthy. Then again, we've never had that sort of public-first government in the history of our nation; it's sort of always been like this by design, as I find whenever I learn more about our history.
I disagree - I think that it should be completely legal to sell cocaine drinks as long as you inform the customers that the drinks have cocaine in them, and I think that it should be legal to use even the most psychologically manipulative marketing techniques imaginable. I would rather it be the responsibility of consumers to avoid getting addicted than use government power to ban things. Similarly, for example, I think that it should be legal to sell skateboards even though people sometimes injure themselves while riding them.
While this is extremely murky and maybe impossible to pin down from a legal standpoint, I do like the thought. It's not just Facebook, and it's not just social media. It's any software (online games?) that clearly goes out of its way to induce addictive behavior as its business model.
> There should be some laws about using addictive patterns
Of course there should be.
But then you would also need to ban casinos, sports gambling, gaming, porn, cigarettes, alcohol and the myriad of other things that are addictive in nature.
Well, we do have laws regulating and/or taxing most of those addictive things already. Except for social media and gaming, really, although gaming is currently in hot water due to loot box gambling mechanics.
I personally find the personalized advertising great. A lot of the time I am shown things that are actually useful/valuable to me.
I think a lot of the value really depends on the individual. If you're engaging in productive activities like hobbies, you get valuable targeted ads. If you're engaging in low-value activities, like signaling to others in myriad ways, you probably get ads for things like disposable fashion.
Personalized ads are basically a mirror. They feed what the person already wants to engage in. If you want less of the bad types of advertising, then you need to start at the root which is getting people to stop being interested in activities and behaviors that are lower value.
> I personally find the personalized advertising great. A lot of the time I am shown things that are actually useful/valuable to me.
There are plenty of ways to deliver this value without secretly fingerprinting every user and delivering targeted ads at every corner. For instance, a search where you profile yourself, similar to how you apply search filters on Amazon.
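To make the idea concrete, here's a minimal sketch (Python; every name in it is hypothetical, not any real ad API) of matching ads against interests the user declares themselves, rather than against a tracked profile:

    # Minimal sketch of "profile yourself" ad matching: the user declares
    # interests explicitly, and ads are filtered against that declaration
    # only. No tracking, no fingerprinting. All names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Ad:
        title: str
        topics: set[str]

    @dataclass
    class UserProfile:
        # Interests the user typed in themselves, like Amazon search filters.
        declared_interests: set[str] = field(default_factory=set)

    def relevant_ads(profile: UserProfile, inventory: list[Ad]) -> list[Ad]:
        """Return only ads whose topics overlap the declared interests."""
        return [ad for ad in inventory if ad.topics & profile.declared_interests]

    # Usage: the user opts in to exactly what they want to see.
    me = UserProfile(declared_interests={"woodworking", "cycling"})
    inventory = [
        Ad("Chisel set sale", {"woodworking", "tools"}),
        Ad("Disposable fashion haul", {"fashion"}),
    ]
    print([ad.title for ad in relevant_ads(me, inventory)])
    # -> ['Chisel set sale']

The point being that relevance comes from an explicit, user-controlled declaration rather than from surveillance.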
All ads are fundamentally ugly in the sense that their effect is the opposite of a great work of art or entertainment. Ads are fundamentally just some pathetic person's selfish attempt to control what other people think and feel in order to increase their own power through financial profit. In a sane world they would all be banned. Ads exist in their current deranged and disgusting form because contemporary humans have been selectively bred through social engineering to be submissive, cowardly, selfish, and stupid. Personalized/targeted advertising is not something that needs to be discussed.
Ads have been around for more than 2000 years now - would need a massive shift in mores to get rid of them (they survive in a lot of very different societies).
I listen to a podcast on football. Are they allowed to run ads that are about sports betting and NFL tickets? That is personalized to the group. Is Facebook allowed to run ads for sports betting to all people who are fans of a professional team on their site?
Is Facebook not allowed to run me ads for local restaurants any more?
I'm guessing that people use 'targeted advertising' and 'personalized advertising' interchangeably. The advertising industry knows full well what each means, and I'm sure legislators would have no problem finding experts in that area to craft a legally rigorous definition.
I knew something was seriously wrong the moment I saw a legitimate business (eBay) selling eyeball space (ads) on a property that was already profitable through its legitimate business (hosting a marketplace, taking a cut, etc.).
Ads create a detrimental feedback loop by incentivizing dark patterns and other harmful gamification in order to squeeze previously non-existent eyeball time out of your product. E.g., the optimal path for, say, eBay is to have a user come on, find what they want, browse a bit through interesting things and recommendations, buy what they want/need, then log off. Instead, ads have incentivized spam listings, which do two things: they generate more eyeball time and thus ad impressions/clicks, and they create non-optimal experiences by allowing non-optimal players to exist through pure randomness. I.e., in an ideal market it should be 'winner takes all' for any unique genre, field, or product space; instead, the spam listings allow a non-negligible number of useless, bottom-of-the-barrel products/sellers/companies to exist and thrive.
For FB, ads have commoditized eyeball time even more directly than in the indirect eBay example above. A better product path for FB would be people using it as a platform to interact with people they know, organize events, and have a shared space to communicate and discuss ideas.