Hey, Facebook VP of Integrity here (I work on this stuff).
This WSJ story cites old research and falsely suggests we aren’t invested in fighting polarization. The reality is we didn’t adopt some of the product suggestions cited because we pursued alternatives we believed were more effective. What’s undeniable is we’ve made significant changes to the way FB works to improve the integrity of our products, such as fundamentally changing News Feed ranking to favor content from friends and family over public content (even if this meant people would use our products less). We reduce distribution of posts that use divisive and polarizing tactics like clickbait, engagement bait, and we’ve become more restrictive when it comes to the types of Groups we recommend to people.
We come to these decisions through rigorous debate where we look at all angles of how our decisions will affect people in different parts of the world - from those with millions of followers to regular people who might not otherwise have a place to be heard. There’s a baseline expectation of the amount of rigor and diligence we apply to new products and it should be expected that we’d regularly evaluate to ensure that our products are as effective as they can be.
We get criticism from all sides of any decision and it motivates us to look at research, our own and external, and to analyze and pressure-test our principles about where we do and don't draw lines on speech. We continue to do and fund research on misinformation and polarization to better understand the impact of our products; in February we announced an additional $2M in funding for independent research on this topic (e.g. https://research.fb.com/blog/2020/02/facebook-misinformation...).
Criticism and scrutiny are always welcome, but using cherry-picked examples to try and negatively portray our intentions is unfortunate.
Just to cherry-pick from your reply here, if $2M is the biggest ticket item you have to show for independent research on this topic - then you're woefully short given Facebook's revenues and size.
10 short years ago, nobody could have imagined that huge swathes of the population could have been swayed to accept non-scientific statements as fact because of social media. Now we're struggling to deal with existential threats like climate change because a lot of people get their worldview from Facebook. Algorithms have decided that they fall on one side of the polarization divide and should receive a powerful dose of fake science and denialism ... all because of clicks and engagement.
10 short years ago, huge swaths of the population were swayed to accept non-scientific statements like eating fat and cholesterol were unhealthy. I don't think Facebook is the problem here.
How much exactly do you think Facebook ought to donate to independent researchers? Most tech companies donate ~$0 to such efforts.
> 10 short years ago, huge swaths of the population were swayed to accept non-scientific statements like eating fat and cholesterol were unhealthy. I don't think Facebook is the problem here.
That's a horrible example, I don't think that is even remotely comparable to swaying the population into burning down dozens (hundreds?) of cell towers or accusing Bill Gates of starting the coronavirus pandemic.
So, pretty much every new medium in history has been accused of fomenting conspiracies at one point or another.
The core issue here is that the internet allows many, many voices to flourish, and some of these voices speculate attractively but incorrectly (from my viewpoint, at least).
Blaming the platform which allows the voices to spread seems like a bad move, given that the core issue is the people who choose to go along with it.
The only way in which I can assume that FB is responsible for all the alternative theories on their platform is by refusing to accept any agency on the part of FB's users, which I think is probably the wrong idea.
As an example, right now you are promoting a narrative on an internet site holding FB responsible for the behaviour of others. Do your readers have so little agency that they will mindlessly act on your words without reflection?
If so, what differentiates your post from a similar post on FB?
If not, what makes FB different?
These are genuine questions by the way, I'm actually interested in your answers.
I don't idealize the past at all. Too many bad things to write.
However, the internet amped up the overall craziness to an unprecedented level. For the first twenty years or so, it was much like the regular world, but geekier. Then, in an incredibly short period of time, all these crazy ideas just started to spread.
I have always been a student of the paranormal and conspiracy theories - as a skeptic. Suddenly random people were spouting all the obscure classics, and brand-new ones appeared every day.
Last year, before COVID, I decided that this was akin to an infectious disease - suddenly people were thrown in with hundreds of thousands of anonymous people, many with bad but infectious ideas.
Before the internet, you acquired most of your delusional ideas at birth from your parents under the guise of religion etc.
Now you could pick up delusional ideas one at a time - what's more, the presentation gets to evolve because the creators get second-to-second feedback. Call these Vemes, for virulent memes, perhaps?
Some once-friends of mine clearly have very poor immune systems as they picked up many vemes.
> The only way in which I can assume that FB is responsible for all the alternative theories on their platform is by refusing to accept any agency on the part of FB's users, which I think is probably the wrong idea.
1. Infectious diseases don't work that way!
2. Also, a lot of this is caused by a small number of actual psychopaths who literally just want to cause grief. Allowing a tiny group of people to damage the whole is wrong.
---
Trying to assign agency to the crowd is madness and not backed up by observations of humanity en masse or reading a history book. In such a mob scene, crazies, aggressive people and criminals will always win.
If it's not immediately apparent, pretty much no other medium in history (new or old) had/has the real reach Facebook has now. So taking it out of context and simply defining Facebook as a "new medium" is disingenuous. A firecracker and a bomb only differ in scale, hence the massive difference in how they're seen and treated.
With great power comes great legal and moral responsibility, and greater scrutiny. You can't shrug it off just because others have also done badly at this, especially when nobody else did it badly at such a massive scale.
I wonder what the cell phone generation would think of our pulp paper fake Necronomicons from the 1970s? There was also a book of spells at my local library in the very-white very-evangelical suburbs of Little Rock, AR when I was a kid, too. I liked the one about how to become a werewolf a lot, but didn't really get to the point of going and murdering a dog and painting myself with its blood and boiled fat.
The pearl-clutching about social media is because power has been taken away. Media executives can no longer fancy themselves in control of people's minds, because they no longer have a monopoly on eyeballs via print and TV information.
Worse yet (for them), the user-customizable nature of social media feeds means that they can't even know what people really see.
If people use the internet to make themselves worse that's a failure of people not the internet.
But back in Little Rock, AR when you were a kid. Are you sure that there was someone who sat in on every private get-together in every house, located in a corner listening for keywords, and when someone said something mildly racist this cornerman gently told your uncle that "these racist inclinations you seem to have, did you know that the local bookstore White Pages™ has a book called Mein Kampf that confirms your suspicions, you should read that one, if you like of course, not telling you, just a friendly suggestion, it's actually on sale now".
Does this mean you consider scale to be irrelevant to this situation? Because that's pretty much what the difference boils down to. You mention "media" as a singular entity but in reality the media of the past was a mosaic of thousands of individual outlets, newspapers, radio stations, TV stations. Today FB is a singular entity with the combined power of most of those together. Do you agree that changes the game?
FB has almost every eyeball now. Can you say the same about your fake Necronomicon? Book of spells? The media that controlled your mind?
The last statement is basically one against any form of control or regulation. If people use X to make themselves worse, that's a failure of people, not X. Sometimes people need help.
Saying that scale is irrelevant makes me think you are intentionally obtuse about this. FB picks which newspaper's article to show you just like newspapers choose which journalist's article to show you. And just like newspapers give a tremendous amount of power to one individual (the journalist) compared to another (you), FB can do the same for individual articles and outlets. Except FB has the kind of reach no newspaper ever had or will. They give a podium for others to climb on, they decide who gets the front seats, and they monetize it.
For all intents and purposes they should be responsible for everything that happens on the platform regardless of where the content was picked up from. They should have a responsibility, but this comes into conflict with their goal to drive engagement and profits.
Their algorithm curates what people see, shows specific links, and influences opinions. And when you can influence opinions on such a large scale and monetize it we have a conflict of interest and even a weapon. FB and others have proven this in the past. Why do you think the media is regulated so tightly around election time? FB can take even a blog post and push it so aggressively right before elections that they manage to sway opinions but still claim they were just a platform.
The counter-argument is that it's easier than ever for an individual to find critical analyses of that information. If you were suspicious of the information on TV in 1983, there was no one to tell you any different unless you put effort into finding it.
Let's put it in terms other than politics...
In the early 1980s a full third of all households in the US watched the TV show "Dallas" every weekend. Was "Dallas" the greatest performance ever made or just the best option out of 3 choices?
Honestly far more violent and highly accepted behaviors are pushed through media outlets as sane opinions every day, like military intervention or straight up misinformation campaigns (see: the OAS, who for some reason never evaluates the election health of rich North American countries, who should logically have released some statement about facebook right now, let alone all the primary voting discrepancies). Hell, the NYTimes had enough fuckups in the 00s alone that they should be on the shortlist for fact-checking suspicion.
If facebook is wading into being a truth-teller, they’re going to run straight into government-media-academia social circles a la Pinker, Chomsky, all the punditry on TV, all the punditry in opinion columns (claims still need fact checking even if the result is an opinion).
Then, how do you deal with framing the presentation of verifiable facts with extremely ominous and sinister tone/hinting? You can do an enormous amount of damage just making untestable, unverifiable implications.
IMHO facebook is gonna get squeezed till they pop over this, either by blatantly having political double standards or by becoming a misinformation-based hellhole. The accessibility of Truth is much harder than people realize, and I don’t even think it’s fair to off load this responsibility onto facebook. People are just strangely ok with believing bullshit.
Yes, the misleading dietary advice was objectively worse. How many lives were needlessly lost to the likes of diabetes and heart disease? The ramifications of the replication crisis (of which social media censorship also runs afoul) run much deeper.
I think you are knowingly missing the point. Nutritional experts disagreed among themselves, and probably still do. Is there even a single 5G engineer that will take up the cause of 5G causing coronavirus? How about an immunologist that will claim Bill Gates is trying to inject people with microchips delivered in vaccines?
Which is why it's a non-sequitur. Any well-adjusted adult with a modicum of common sense can see through such claims. Cults and conspiracies flourished long before FB's existence, as they will after. Removing such content just validates it in their minds — it must be true that's why "the corporations" insist on covering it up!
Most of the west has learnt that we should not penalise drug addicts. We treat the underlying problems that brought about their habitual use. The need to self-medicate disappears.
Like the war on drugs failed, so too shall the war on misinformation. The solution is tending for the human condition that leads one to not only believe, but want to believe, in these fantasies.
> Cults and conspiracies flourished long before FB's existence, as they will after
The problem is that Facebook's algorithms are directly helping those cults grow by polarising people, as stated in the linked article:
> Worse was Facebook’s realization that its algorithms were responsible for their growth. The 2016 presentation states that “64% of all extremist group joins are due to our recommendation tools” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms: “Our recommendation systems grow the problem.”
I'm sure we can all agree that you can't end all cults, but surely actively encouraging their growth is a bad idea.
Hundreds of thousands of people are still in jail for drugs in the United States! Neither presidential candidate is for even legalizing cannabis alone, and there is no talk of legalizing anything else.
I hope one day the war on drugs will end - but sadly it is still alive and well in the United States and all over the world.
As for the war on misinformation, we have lost continuously on this one for years in a row, and there's no evidence at all we're succeeding.
Would literal witch hunts be a better example? Or hunting "communists"? Or Young Earth creationism? The "idea" that your skin tone and level of intelligence are "biologically" linked? Or "jet fuel can't melt steel beams" and 9/11 was an inside job? And don't even get me started about that fake moon landing... Oh no... I can hear an airplane, here come the chemtrails.
If I were a higher-up at FB, I'd consider the risks these issues of polarization pose to my business, and I'd spend accordingly on advice and evaluation (a lot).
For example, if FB comes to be seen as a kind of mind control platform, that could be devastating as national govts decide to step in and put a stop to things. Imagine even a mid-sized country regulating that FB was responsible for tagging any posts that contained not just Covid-19 misinformation but all sorts of misinformation. That sort of thing could be extremely dangerous to FB's business model.
These sorts of risks would in my mind be very high indeed, and I would devote a lot of resources to at the very least understanding them. $2M is a drop in the bucket for a company with revenues of $70B to address such risk.
(Actually I checked and a bucket contains approx 10K drops, so this is actually surprisingly close to being a drop in the bucket).
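(For rough numbers: $2M / $70B ≈ 1/35,000 of annual revenue, while one drop out of ~10,000 is 1/10,000, so the research budget works out to roughly a third of a drop.)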
> Worse was Facebook’s realization that its algorithms were responsible for [extremist groups'] growth. The 2016 presentation states that “64% of all extremist group joins are due to our recommendation tools” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms: “Our recommendation systems grow the problem.”
Everything social that exists today already existed when humans were still living in trees. That doesn't mean that social media like Facebook can't serve as an amplifier that turns problems from annoyances to existential threats to our society.
This is classic whataboutism. What does people being led to believe that fat was healthy have to do with Facebook’s deliberate refusal to make their content less divisive?
You missed the point. Too many people are credulous, and lack a rigorous education in epistemology and the scientific method. This has been a problem forever and isn't something Facebook can fix.
I don’t expect them to come out with a perfect solution, but if they can’t fix it and no one else can, yet they still contribute to it, why should they be above criticism and reproach?
Also the article mentions that even though they had the option to reduce it, they actively chose not to. How is this not something that I should be critical of?
Why is growth above everything else important? What if they disabled the group discovery part? How is that impossible?
We are talking about existential threats to humanity. It might be hard to believe but humanity’s ability to bring about bad outcomes is growing at a very fast pace.
Key players like Facebook throwing in the towel because “we don’t see an easy way to stop contributing to problems that are growing in size and can potentially destabilize democracies, world ecosystems, or a few decades from now human existence” is not an option.
The problems we have today are growing exponentially in seriousness. Human beings need to learn to get along in ways they never needed to before both due to resource stress and technological powers we never had before.
Facebook’s amplification of many human weaknesses is only one of many risk factors. But I don’t think many young people realize how easily humans have fallen into disasters in the past, which amplified by progress could easily become existential today.
The thing is, what does what other tech companies donate have to do with anything?
Btw, some tech companies donate a fuckload to open source and other aligned initiatives. It's annoying to see this ignored for the sake of a weak argument.
You inverted the statement in the act of repeating it back - people were misled into believing eating fat was unhealthy, not that it was healthy.
And this is a part of the problem, right? It's not whataboutism, it's a lesson from history. Your belief in the incorrect expert advice of yesteryear is so strong that even trying to type out the opposite is hard.
At the moment a whole lot of people, of the type who read and post to HN, have decided that disagreeing with "experts" is divisive. The problem is these "experts" aren't really experts by normal definitions, like someone who has a strong track record of correctly understanding and predicting a complex topic. The word is instead being used to mean something more like, "people employed by the government who claim special knowledge". Nutrition is the example chosen here for non-expert experts, because nutrition has been hit hard by the replication crisis. But it's hardly the only field with this problem - basically every medical authority has discredited itself during COVID.
Disagreement with authority is a classic justification for free speech. It is inherently perceived by the ruling classes as "divisive" because that's exactly what it does - it divides people into those disputing their authority and those who don't. You thus can't combat "divisiveness" without simply shutting down all disagreement with the government in all ways.
A counter-point to this is that studies show polarization has also fallen in some countries over the past years - including ones where social media (Facebook or otherwise) is popular. Studies also show some of the most polarized segments in the US to be the older population, which uses social media less. We definitely have work to do, but this suggests there are many factors at play.
You have an interest in the outcome of the research, why should we trust you to conduct or fund it properly? Your track record is terrible. The fact you are throwing around dubious research as facts is crazy.
I'm sorry, but really, there's no reason to believe a word you say.
You get paid by Facebook, Facebook has lied again and again, so how on earth do you expect people to take you seriously?
Your job and title is to give FB the optics of caring about integrity, and was invented as part of FB PR in the aftermath of the Christchurch massacre.
>why should we trust you to conduct or fund it properly?
You are welcome to conduct research or fund it. You can even help current research by criticizing their research in its substance, or engage yourself politically (at least by voting) so we can have more and better research about the topic.
Outside of actual weapons research, research can't be "weaponised" and academia is rife with incredibly strong political biases. Political bias in academia is so severe that there is actually an entire foundation devoted to trying to combat it (the Heterodox Academy).
Your posts sound like you believe corporations shouldn't ever do research and worse, that academics don't have any interest in the outcomes of their own work. But that's nonsense, of course they do. They want to publish papers, they want their research findings to be novel and widely cited, they want to build a reputation. They have all kinds of self-interested incentives that act against producing accurate research findings; hence the replication crisis!
How do you think Facebook should deal with this? If Facebook-funded research is suspect and nobody else is able or willing to fund it (why should someone else fund something for Facebook's potential benefit when Facebook is rich, and if someone else is funding research for Facebook's detriment, then I would say it's equally suspect), how should such research be funded then?
How can you show results of the research in a way that you wouldn’t consider as “weaponized”?
I’m not a fan of Facebook and am quite suspicious of them, but I’m also not sure what they can do in this particular situation that we would find satisfactory.
That is interesting. My take would be that the older population may spend less time on social media (who can compete with a 20 year old with a phone welded to their hand anyway), but that a disproportionate number of these seniors are babes in the wood where technology is concerned, and are more amazed by, believing of, and susceptible to the influence of social media than youngsters who have grown up along with those social media platforms.
Would be very interesting to learn about these countries where polarization has fallen if you have links to hand.
Except now the people that use Facebook the most are grandparents (i.e. the demographic you mentioned that is most polarized). Facebook is no longer cool the way it was 15 years ago.
The NBER paper that you point to in a subsequent comment doesn't have any detail on the popularity of social media; we explicitly can't make any determination on whether social media is having an effect there because the data is missing. If, say, Facebook were to provide external researchers with data about the growth in Facebook use - users, median time spent on site, average time spent, SD of time spent - for a number of years, that would help to identify whether social media has a role in such polarisation.
Although it's trivial to say "I see a lot of polarisation on social media, therefore it's worse than it was", a satellite-view paper like that NBER one gives zero insight into the role of social media, partly because the data isn't provided, but also because it doesn't examine what effects there might be on smaller groups within the population who are, say, heavy social media users.
I think the most useful thing Facebook could do would be to make more information available to researchers, rather than pointing to research which hasn't been able to use that data and claiming that it helps exonerate Facebook.
Are you really trying to take credit for a flimsy correlation between Facebook usage and polarization (in some countries, though you didn't say which ones or how many compared to other heavy-use countries)?
I think it's the reverse. They're trying to distance themselves from the notion that Facebook is the primary factor in what's polarizing (certain) people.
> Now we're struggling to deal with existential threats like climate change because a lot of people get their worldview from Facebook
You state this as a matter of fact. How do you know this?
Even if it were true that Facebook polarized people's climate worldview, and more so toward the wrong side than the right side, we all know that climate change is the result of our behaviour over the last centuries and that counter-efforts have been resisted for the last 50 years.
Something like 85% of the planet believes in non-science. There are 2.3 billion Christians, 1.9 billion Muslims, 1.1 billion Hindus and probably a billion followers of "other" religions. The fact that people believe non-science has got nothing to do with Facebook.
If you're not being given the benefit of the doubt, it's because your employer has 16 years of lying about this and related issues. Zuck long ago torched whatever shred of trust ever existed, so no, we are not going to be impressed by an extra 0.003% of annual revenue thrown to problems you've created.
On top of that - it seems their efforts to prioritize friends and family may not take into consideration that this is where the divisiveness seems to begin? How many of us have friends and family who share news articles, worldview opinions, and memes that fit into divisiveness, fake news, and/or borderline racism?
You can reshuffle the deck but the same cards are still inside.
The fact that FB has not banned political ads is pretty shocking and absolutely related to this topic.
Twitter managed to do it, but FB continues to allow political parties to spread misinformation via the algorithm, and FB profits from it.
So essentially VP of Integrity, your salary is paid for in part by the spread of misinformation. Until you at least ban political ads your integrity is non-existent.
But of course! Every company should have a VP of Integrity, especially Facebook. Creating the impression of trustworthiness while profiting from lies is important at any company. At Facebook it’s the core business.
Facebook's record to date is such that it has all but no credibility on this account. This gives it virtually no room for effective action.
Partners have walked away from billions of dollars in shares; its former conscience, Alex Stamos, quit after being repeatedly blocked, stymied, subverted, undermined, or backstabbed. And the very label "VP of Integrity" reads as so perfectly Orwellian and ironic that the position negates itself.
Without an ability to fully rope in Zuck directly it is every bit as toothless as it sounds.
Being receptive and responsive to critics is a function of marketing. And that's what this is -- marketing.
Furthermore, just because the role exists doesn't mean it matters. My org has all sorts of diversity and sustainability managers, and their relevance ends the moment business decisions come into play.
Integrity should be woven into the fabric of a company's culture. It can't possibly be a role. Making it a role looks like window-dressing and effectively an empty gesture.
Hi there. Have you ever considered making the decision making open? I mean, it seems you are obsessed with criticism and scrutiny. Then here is an idea for you: invite journalists from major media outlets to your decision making. Then you can avoid these "unfortunate" cherry-pickings, as you put it.
Why I am saying this: it seems you sit backwards on your high horse, criticising those people who for all intents and purposes have very limited insight into the decision making.
Me and my close friends are fed up with Facebook and how obviously it is trying to polarize everyone in this world.
No sympathy for you on my side, and I can assure you I speak on behalf of my friends too.
Yep we actually do this! (invite journalists to decision-making meetings). One of our regular meetings is about the content policies, we publish the minutes here - https://about.fb.com/news/2018/11/content-standards-forum-mi... - and have also hosted journalists and outside academics from time to time.
Thanks for the reply. But I am still not convinced. It seems these are watered down versions of outlines of the actual decision making. Which is different from what I am suggesting.
Here is an actual example:
> "• Question: How much will we communicate about this? We usually don't want people to
game the Feed but this change might hit their pocketbook so my instinct is to be open
about this.
> • Answer: Comms is aware and we will be proactive about communicating this. Inside
Feed is one channel we can use to do this. The only way this can happen at the domain
level is with full transparency and product is aware we can't do this unless we have that."
So the committee passed the buck and all the transparency is gone. Also there are no names or responsible persons assigned. And by the way, the words "feed", "news feed" or "ranking" appear only once throughout the span of 4 months. I very much doubt there were no decisions in any form throughout those 4 months regarding the news feed.
So overall this looks to me like pixie dust rather than a true representation or involvement in decision making.
We have entered an era in which non-state actors like Facebook have power that was once the exclusive domain of governments [1]. Facebook understands this, and justifiably views itself as a quasi-government [2].
I would really like to understand Facebook’s theory of governance. If I want to understand my own government, I can read the Federalist papers. These documents articulate an understanding of history and a positive view of the appropriate role of government in society. I can use these documents to help myself evaluate a particular government action in light of the purpose of government and the risks inherent in concentrated power.
Has Facebook published something like this? I struggle to understand Facebook’s internal view of its role in society and its concept of what “doing the right thing” means. Without some clear statement of governing principles, people will naturally gravitate to the view that Facebook is a cynical and sometimes petty [3] profit maximizer.
Without some statement of purpose and principles, it is hard to level criticism in a way that Facebook will find helpful or actionable. We are left to speculate about Facebook's intentions, instead of arguing that a certain outcome is inconsistent with its stated purpose.
This may come off as condescending, but I'm honestly just curious.
From the outside looking in, it seems as though you are paid to drink Kool-Aid and paint FB in a positive light. How does one get to be in your position? What are the qualifications for your job?
Remember that VPN app that Apple pulled from the App Store, the one that Facebook was using to spy on users' internet usage to gain intel about potential competitors? When Facebook acquired the VPN app, this guy came with the purchase. VP of Integrity. Oh, the irony.
Facebook's "VP of Integrity" Guy Rosen (guy_ro) co-founded Onavo, a spyware company that Facebook acquired in 2013. Onavo's flagship app was Onavo Protect, a VPN service that Facebook used to monitor the activity of its competitors, including Snapchat. Facebook acquired WhatsApp and copied features from Houseparty based on the data it harvested from Onavo users.
Onavo Protect was removed from the App Store in 2018 for privacy violations, and from Google Play in 2019. Onavo then rebranded to Facebook Research and marketed itself through targeted ads to teenagers on Instagram and Snapchat. Apple revoked Facebook's developer certificate because Onavo was using it to bypass the App Store review process. Three U.S. senators (Richard Blumenthal, Ed Markey, and Mark Warner) criticized Facebook Research for harvesting data from children. Facebook Research was discontinued later that year.
Guy Rosen moved on to be Facebook's VP of Product Management. In 2019, he briefly adopted the "VP of Integrity" title when performing damage control for Facebook's live stream of the Christchurch mosque shootings, and then reverted back to VP of Product Management after he was called out on it.
> The next time Facebook needs a public relations injection, it should consider using someone other than the co-founder of Onavo.
It seems like they knew exactly what they were doing. They want money, and I don't believe for a second that the most ethical road is where the money is; you have to be able to say no to money, and Facebook clearly shows us time and time again that money has the highest priority.
The highest value seems to be in companies that brand themselves as trustworthy but really aren't. No matter if it is Goldman Sachs, Facebook, Google or Nestle.
Unbelievable. Just when I think I've seen it all in tech, how can someone like this take their title seriously having peddled their spyware to the highest bidder. Hope the cognitive dissonance keeps these execs with a broken moral compass up at night, if not for what they've done for themselves, then for contributing to making the world a worse place for their children and beyond. Thanks for sharing, wish there was a database full of these snakes and their slimy legacy they've left behind.
Fair enough, but I also get to see first-hand how decisions are made and how rigorous debates take place, so I have more faith in the process. We've got lots to do to improve transparency of how this stuff happens, because I know people care about it. One way we started a while ago is publishing minutes to one of our meetings where decisions on content policies get made. https://about.fb.com/news/2018/11/content-standards-forum-mi... -- lots more to do!
Can you qualify what you mean by “rigorous” ? I think this is an important point of contention because Facebook is so accustomed to using data to justify decisions. Yet these ethical issues often either have no data and/or the consequences of Facebook’s actions impact users who are not on Facebook. Moreover, the data is not available to the public (despite the public generating the data) so the public can’t actually reproduce the warrant used internally at Facebook. This lack of reproducibility is why I think there’s so much friction around Facebook claiming their decisions are made with “rigor.” Thanks.
His work history is on LinkedIn. VP Integrity is essentially an executive product management role so qualifications would be along those lines.
Edit: LinkedIn says he was an engineering manager, then took a series of roles culminating in a data startup that got him into Facebook in the current role.
'A data startup'? How very generous of you. He co-founded a spyware company that was bought by Facebook to discover what other companies/apps to buy or clone.
You do not get to the position he has at Facebook by having either ethics or morals. Having seen this up close at Facebook HQ I can assure you that everyone at VP level or above there knows what they are doing, knows the long-term consequences, and simply does not care because the paycheck is far too large for simple ethics to enter into the discussion.
Morality and strong ethics do not place a glass ceiling on anyone’s achievement. Arguably, consistently good ethics lead to more opportunities and better outcomes.
The main issue here is people have different ideas of what is ethical. Disagreements arise when countries and increasingly companies exert power over people that was given to them by those same people.
In my mind, the same problems existed before facebook and expecting facebook to solve them for everyone is ridiculous. Even more ridiculous is expecting everyone to accept facebook’s solutions.
Potentially useful context: the parent here was the Co-Founder and CEO of Onavo which he sold to Facebook for $120 million. If the name "Onavo" doesn't trigger any bells: https://en.wikipedia.org/wiki/Onavo . It was ostensibly a VPN but tracked its users' behavior and Facebook used the data from Onavo to judge how much traffic various startups had when deciding whether to acquire them.
I think that the fact that the founder/CEO of Onavo is now Facebook's VP of Integrity is entirely consistent with everything else we've read about Facebook over the years.
> The Ministry of Peace concerns itself with war, the Ministry of Truth with lies, the Ministry of Love with torture and the Ministry of Plenty with starvation. These contradictions are not accidental, nor do they result from ordinary hypocrisy: they are deliberate exercises in doublethink.
-- 1984
His title VP of Integrity is not accidental; it can't be.
Anyone want to hazard a guess at the total comp of a VP @Facebook? Curious
I hope there are more people at the top with the integrity to stand up for injustice like Tim Bray. I have all the respect in the world for someone who puts their neck on the line for what they believe in. Thank you Tim @tbray
> We reduce distribution of posts that use divisive and polarizing tactics like clickbait, engagement bait, and we’ve become more restrictive when it comes to the types of Groups we recommend to people.
This seems inherently a political evaluation. What are the criteria, and is this driven manually or automatically?
Would you care to post the same stats but updated for 2020 then? Also I see that this statement is also copy-pasted on your Twitter, so I have a hard time believing you actually do read this feedback and didn’t just post this comment as damage control.
If we're talking metrics, the 64% stat cited in the article turns out not to be a good way to measure impact of recommendations on an extremist group. We internally think about things like prevalence of bad recommendations. More on prevalence here - https://about.fb.com/news/2019/05/measuring-prevalence/. I don't have a metric I can share on recommendations specifically but you can see the areas we've shared it for so far here: https://transparency.facebook.com/community-standards-enforc...
You deny the statistic in the article. Then you use corporate speak like "we internally think about things..." Then you link to a blog post titled "Measuring Prevalence", which doesn't have a single data point with a number in it. Not to mention that it's another mess of CorpSpeak that sounds like a PR committee wrote it, not a human being.
The problem is not your intentions. It's clear that you actually want to engage people, given that you're replying actively on this board, even to impolite comments. The problem is the style of communication. It sounds constructed, artificial, developed through a process, like you and a bunch of people are sitting there drafting replies and tweaking until you have something that won't come back later and bite you in the butt. The problem with that strategy is that you end up saying very little of meaning at all.
So maybe to make things more concrete - is there a statistic on measuring the impact of recommendations that you do agree with that you can share?
Thanks for the message. Does your group have a centralized place where your team’s research, recommendations, and roadmap for changes are visible to the public?
Your MO over the years seems to be "we're working on it", from your Frontline interview in 2018 regarding Myanmar to a Business Insider article from last year regarding the New Zealand shooting.
$2M for independent research. Haha, that's really generous of Facebook. You all probably made that in a day's worth of misleading political ads you've sold.
One question I have for you mr VP of INTEGRITY... I haven't used FB in over 8 years but I would like the full set of every data point that you have on me, my wife and my kids. Where can I get that? And if I can't, please explain to me why.
I guess you've been drinking so much moneysaurus rex kool aid that you seem to not understand that people just don't believe anything you all say anymore. Your boss can't even give straight answers in front of the government.
Maybe you all should change your tagline to:
Facebook, we're working on it.
Maybe that's how Facebook builds their organization: VP of Integrity, VP of Honesty, VP of Sincerity. So that Zuckerberg doesn't have to take on any of those roles.
Yes - but in this case the CEO has none so he needs to outsource his integrity to someone else, and when you are a vacuum of integrity, everyone else looks like a saint. Therefore, top comment.
Might be a long list. We've got >35,000 people working on safety and security. I expected some challenges recruiting people with all the bad press we got over the last few years, but instead I've seen that talented people eager for a really hard challenge are even more excited to join and work on this area.
So where's that list? I would imagine it is not hard for a company at the size of FB to reliably host a static file of 35000 redacted names. :-)
And my snarky response is inspired largely by what I would call your diversion of a response. To me reads like you are not acknowledging the real answer: no, as FB VP of Integrity I can't and won't provide the list.
"Might be a long list. We've got >35,000 people working on safety and security. I expected some challenges recruiting people with all the bad press we got over the last few years, but instead I've seen that talented people eager for a really hard challenge are even more excited to join and wok on this area."
I didn't read the article, because paywalled.
So you're implying seriously that 35'000 people are actively involved in such decisions? Really?
Let me wager a guess:
A Google search reveals 44'942 Facebook employees as per December 31 2019.
You're implying here that 78% of all your employees are directly involved in such decisions?
Here's my take. 99.998 % (and yes, the number is pulled out of thin air) of those 35000 people are lowly paid contractors sifting through the horrible crap that some of your users post.
Sure, they're working on "safety and security" in the broadest sense, but are sure as shit far away from any such decisions.
Then why did you offer? You aren't confronting anything directly, everything you are saying here is indistinguishable from any other PR agent replying but dodging questions to seem like they are transparent.
Respectfully, to a top brass executive at Facebook: I respect the hard work and innovation that has gone into building Facebook. It's allowed billions of people to connect worldwide in ways that were never possible before. It is truly a billion dollar platform.
The problem is that your entire executive leadership believes it is a five hundred billion dollar platform. They have to, because the investors demand it.
Why are you asking third parties to conduct this research?
Why isn’t this an initiative driven by an internal team? Were applications for this program advertised outside of Facebook?
Is $2M realistic for this research? I know I wouldn’t be enthused considering my total compensation. Do you expect top quality researchers to apply?
The platform is constantly evolving. At any point millions of individuals could be part of an A/B test that alters their experience. How can a third party navigate these conditions without corrupting their findings?
Well, when your management has such a record regarding their own integrity, why on earth would you expect us "dumb fucks", as Zuckerberg put it, to trust you?
Facebook has lied again and again, taken every shady approach to get data, mishandled that data, and practically endorsed a genocide. What integrity are you talking about?
You may be serious about your job, but you work for people who have proved they have no moral compass. I'd quit if I were you and actually believed in what you are trying to accomplish.
Edit: just realized you are the founder of Onavo, spyware bought by Facebook. Were you also heading Project Atlas? Were you responsible for inviting teens to install spyware so you could collect all their data?
The mere fact YOU are chosen as VP of integrity just says it all. When the head of integrity is a spyware peddler... Well....
Yeah, I guess it's a cynical move I should've expected of Facebook.
> What’s undeniable is we’ve made significant changes to the way FB works to improve the integrity of our products, such as fundamentally changing News Feed ranking to favor content from friends and family over public content *(even if this meant people would use our products less)*.
Emphasis added.
What is that supposed to mean? Would you like a sticker for doing the right thing?
Meta question. How is this post from a user created 6 hours ago, with 66 Karma at the top of this comment section? This is particularly interesting since just this week, TripleByte’s CEO’s top level comments were buried. Is there some kind of change or is this really the naturally occurring top comment?
Thanks for chiming in here! Honestly the response to your comment reminds me of the youtube comment section. I'm not sure why people aren't capable of civil discourse, would have expected better from HN.
“We come to these decisions through rigorous debate where we look at all angles of how our decisions will affect people in different parts of the world”
If you had any integrity, you would delegate this governance back to the captured provinces.
We've got this idea stuck in our heads that only the website itself is allowed to curate content. Only Facebook gets to decide which Facebook posts to show us.
What if, instead, you had a personal AI that read every Facebook post and then decided what to show you. Trained on your own preferences, under your control, with whatever settings you like.
Instead of being tuned to line the pockets of Facebook, the AI is an agent of your own choosing. Maybe you want it to actually _reduce_ engagement after an hour of mindless browsing.
And not just for Facebook, but every website. Twitter, Instagram, etc. Even websites like Reddit, which are "user moderated", are still ultimately run by Reddit's algorithm and could instead be curated by _your_ agent.
I don't know. Maybe that will just make the echo chambers worse. But can it possibly make them worse than they already are? Are we really saying that an agent built by us, for us, will be worse than an agent built by Facebook for Facebook?
And isn't that how the internet used to be? Back when the scale of the internet wasn't so vast, people just ... skimmed everything themselves and decided what to engage with. So what I'm really driving at is some way to scale that up to what the internet has since become. Some way to build a tiny AI version of yourself that goes out and crawls the internet in ways that you personally can't, and return to you the things you would have wanted to engage with had it been possible for you to read all 1 trillion internet comments per minute.
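To make the idea concrete, here's a minimal sketch of what such a user-side agent might look like, assuming posts can be pulled out as plain data (something today's platforms don't exactly make easy). Every field name, weight, and the time budget below is made up for illustration; it's not any real Facebook or Twitter API:

```python
# Minimal sketch of a user-owned feed agent. All names and numbers are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class Post:
    author: str
    text: str
    is_ad: bool = False
    posted_at: datetime = field(default_factory=datetime.utcnow)


@dataclass
class FeedAgent:
    # Per-author weights the user adjusts directly (e.g. via upvote/downvote).
    author_weights: dict = field(default_factory=dict)
    blocked_terms: set = field(default_factory=set)
    daily_budget: timedelta = timedelta(hours=1)   # user-chosen, not engagement-maximizing
    time_spent: timedelta = timedelta()

    def score(self, post: Post) -> float:
        # The user, not the platform, decides what gets dropped entirely.
        if post.is_ad or any(t in post.text.lower() for t in self.blocked_terms):
            return float("-inf")
        hours_old = (datetime.utcnow() - post.posted_at).total_seconds() / 3600
        return self.author_weights.get(post.author, 0.0) + 1.0 / (1.0 + hours_old)

    def rank(self, posts: list) -> list:
        # Deliberately reduce engagement once the user's own time budget is spent.
        if self.time_spent >= self.daily_budget:
            return []
        return [p for p in sorted(posts, key=self.score, reverse=True)
                if self.score(p) > float("-inf")]


# Usage: the same agent could sit in front of Facebook, Twitter, Reddit, etc.
agent = FeedAgent(author_weights={"mom": 2.0, "brand_page": -1.0},
                  blocked_terms={"clickbait"})
feed = agent.rank([Post("mom", "baby photos!"),
                   Post("brand_page", "clickbait you will not believe")])
print([p.author for p in feed])   # ['mom'] -- the blocked-term post is filtered out
```

The point is just that the ranking knobs (author weights, blocked terms, a daily cap) live with the user instead of with the platform.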
The primary content no user wants to see and every user agent would filter out is ads. Since ads are the primary way sites stay in business, they are obligated to fight against user agents or other intermediary systems.
The ultimate problem is that Facebook doesn't want to show you good, enriching content from your friends and family. They want to show you ads. The good content is just a necessary evil to make you tolerate looking at ads. Every time you upload some adorable photo of your baby for your friends to ooh and aah over, you're giving Facebook free bait that they then use to trap your friends into looking at ads.
I sure am tired of hearing about "the fundamental flaw" in empowering people. What you describe is not a flaw in empowerment, it's a flaw in their business model, and it's one that can be fixed (i.e. "innovate a better business model"). Can we stop propagating the idea that people who do not want to use their limited bandwidth and processing power to rasterize someone else's advertising are somehow "flawed"?
The only thing more insane than blaming users for having self-interest are the people who pretend that Facebook et al. are somehow owed the business model they have, painting ad-blockers as some kind of dangerous society-destabilizing technology instead of the commonsense response to shitty business practices it clearly is.
The point is that fighting Facebook on Facebook is a losing game and "innovate a better business model" has been tried, is being tried, and is not working because it is hard. A plan that does not work is indeed "flawed", no matter how noble and natural the intentions.
"Users don't care about your ad network" is not a "plan," it is a reality, and calling it "flawed" is just corporate propaganda. I'm sure you're arguing in good faith but the very foundation of this assessment is fundamentally incompatible with reality.
Well, no, today's model is "users tolerate your ad network in return for free content". Which is clearly true given that Facebook is making profits despite everyone and their mother grumbling about ads.
Yes, tolerating the ad network definitely falls under "users don't care"... until they do, and they start taking measures to counteract it, and then we're back to "something has to pay the bills," which is where I'm suggesting some R&D investment might be wise.
It's a site that they run, they are the source for those numbers.
A quick google says somewhere from 20%-40% use adblockers. 60% sounds high, but depending on the website, I can see it. For example, I bet at least 60% of visitors to gnu.org are blocking ads and/or trackers.
We all know users won’t pay enough for a subscription service to make a huge profit that makes the VC happy, makes the founder one of the ten richest people in the country, and supports a ton of offices and salaries in some of the priciest places to live and work.
But the tiny Mastodon server I run for myself, with a total user count in the low triple digits, costs about fifty bucks a month, and the users who are willing to pay cover half of that. I could probably get more of them to cover it if I was more aggressive about asking, but I prefer to keep it super low-key. I could also lower those costs if I felt like putting some work into optimizing it.
It’s not my job, it’s a thing I run on the side and put a few hours of technical work into every few months. I ain’t gonna get rich from it but it gives my friends a nice place to chat on the Internet.
Yeah but the vast majority of Facebook users don't care enough to learn how to use Mastodon. Try asking the average Social Security recipient to use IRC.
I reject the notion that "We all know users won’t pay enough for a subscription service to make a huge profit that makes the VC happy, makes the founder one of the ten richest people in the country etc" because...if you can figure out how to get Grandma to use a federated Mastodon-like service, then you would do just that.
“Hi Mom! We’ve decided to leave Facebook. Jane’s set up our own little substitute. It’s where we’ll be posting all the pictures of the kids from now on. I’ve written up the basics in this letter; if you have any questions we can talk about it when I see you next week.”
I’m giving something to my friends. This makes them happy, and it makes me happy to see them happy. I’ve made a few new friends because of this, I’ve gotten to know some acquaintances better too. I am richer in my connections.
There’s a low-key buzz of occasional thanks and favors in my life that I wouldn’t get if I wasn’t doing this, either. And occasionally this connection lets one of my friends help out another who needs it, financially or emotionally, when they wouldn’t even necessarily be in contact with each other, much less interested in helping out, without the shared space I’ve created.
I think this is a great experience.
(Total active user count is more like a few dozen, btw.)
I see two problems with this. First of all, the service Facebook provides isn't valuable at all unless all your friends and family are also using it and posting content. So unless you can get a critical mass of users to switch to a new platform with a different business model, it won't succeed. Secondly, we've become accustomed to not having to pay for social media, and asking to pay for a social media platform is a little like asking to pay for air. Sure, yours might not have as much pollution, but I can get something almost as good for free.
I've actually experienced the latter, as I looked for an alternative to Gmail. I just found it hard to justify paying for an email provider, where the only real value add to me is the absence of ads, and not being Gmail. And really, the price is mostly irrelevant. For me to be willing to pay _anything_, it would have to have a really compelling reason to move. The value of not seeing ads is just not that high for me. And I don't think you would say there isn't value in an email provider.
I use Fastmail for my important email because I want to be paying a company to take me seriously as a customer. They’re not going to just lock me out of my account because of some random abuse trigger elsewhere in their system. You’re probably not seeing the value because the bad thing hasn’t happened to you yet, but it might and when it does there’s not much recourse.
Perhaps if people don't find a service valuable, then we should everyone to stop using it.
If that argument sounded absurd to you, it's probably because it is. The services are valuable because people ultimately do use them — a lot of them, even. They pay for them indirectly by agreeing to look at ads.
There are loads of services we do not directly pay for, like the fire department and the public library — and yet they are immensely valuable.
The argument sounds nonsensical because it’s missing a verb.
I don’t agree that just because people will use a free thing that means it has a lot of value. Note I didn’t say that FB has no value, just that it might not be as valuable as one might think.
Considering most of their value is their messenger platform I don’t think FB is really worth much at all beyond their social graph.
> The argument sounds nonsensical because it’s missing a verb.
That's a typo on my part — it doesn't change the veracity of the argument.
> I don’t agree that just because people will use a free thing that means it has a lot of value. Note I didn’t say that FB has no value, just that it might not be as valuable as one might think.
Sure, but how do you measure the "true" value? If you can answer that question, you will probably become a billionaire.
> Considering most of their value is their messenger platform I don’t think FB is really worth much at all beyond their social graph.
What are you basing this on? You may only find the messenger platform to be valuable, but how do you know how others perceive the FB platform/product?
I’m going on what I’ve observed in FB users around the world. Their most dedicated users are people in developing countries whom they have convinced that Facebook is the internet.
Facebook is only 16 years old. The idea of social networks is only a few years older than that. Surely we can't have tried and failed at every possible alternative already?
I can buy web hosting for less than a coffee per month that can sling thousands of static HTTP requests per second. In a world where something like Mastodon/GNU Social was the norm, any hobbyist could opt to run one fraction of a grand federated social network out of the goodness of their hearts, for spare change, or for a small fee to their users.
Centralized, siloed social networks are only expensive to run because they're centralized. Things were better when the norm was to start a blog instead of using someone else's walled garden.
I'm complaining about the structure of the dialog around this issue, not casting aspersions on the parent post's argument itself. It's impossible to have a reasonable discussion when the terminology in use is strongly prejudiced against one of the key parties in the relationship.
Reality is strongly prejudiced against one of the key parties in the relationship. Users, today, tolerate ads in exchange for free content. Any reasonable discussion — where continued delivery of content is a desired end goal — needs to come up with an answer for how we pay for it. Calling out "fundamental flaws" is one such way of doing that.
Stating things nakedly, using the assumptions and perspective of the big guy, can be a powerful rhetorical style when advocating for the little guy. See any of Chomsky's political writing as an example.
> I sure am tired of hearing about "the fundamental flaw" in empowering people.
I'm all for empowering people. But adding personally controlled user agents to Facebook is a fundamentally flawed solution. There is no path for that to succeed because the primary content users will want to filter is ads, and the primary content Facebook needs people to see is ads. Thus user agents are an existential threat to Facebook and since Facebook controls all the content, they will ensure user agents are not allowed.
The core business model does not align Facebook's incentives with user's incentives. You can't fix that at the content level.
I thought it was obvious I meant from the sector in question. My mistake, I'll try again.
Do you know of a social media company (or ad company using a product to garner the data, if that helps) that uses that model with a comparable size to Facebook? If not, are there any that are making ground?
"The ultimate problem is that Facebook doesn't want to show you good, enrishing content from your friends and family."
Well, it is someone else's website. What do you expect? Zuckerberg has his own interests in mind.
In 2020, it is still too difficult for everyone to set up their own website, so they settle for a page on someone else's.
If exchanging content with friends and family (not swaths of the public who visit Facebook - hello advertisers) is the ultimate goal, then there are more efficient ways to do that without using Zuckerberg's website.
The challenge is to make those easier to set up.
For example, if each group of friends and family were on the same small overlay network they set up themselves, connecting to each other peer-to-peer, it would be much more difficult for advertisers to reach them. Every group of friends and family on a different network instead of every group of friends and family all using the same third party, public website on the same network, the internet.
Naysayers will point to the difficulty of setting up such networks. No one outside of salaried programmers paid to do it wants to even attempt to write "user agents" today because the "standard", a ridiculously large set of "features", most of which benefit advertisers not users, is far too complex. What happens when we simplify the "standard"? As an analogy, look at how much easier it is to set up WireGuard, software written more or less by one person, than it is to set up OpenVPN.
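As a rough illustration of how little configuration a small overlay needs, here is a sketch in Python that emits a WireGuard config for each member of a three-person group. Everything in it (keys, endpoints, addresses) is a placeholder, not a working deployment; in practice each member generates a key pair with wg genkey / wg pubkey and shares only the public half.

    # Sketch: emit WireGuard configs for a small friends-and-family overlay.
    # All keys, endpoints and addresses below are placeholders.
    members = [
        {"name": "alice", "pubkey": "ALICE_PUBKEY", "endpoint": "alice.example.org:51820", "ip": "10.44.0.1"},
        {"name": "bob",   "pubkey": "BOB_PUBKEY",   "endpoint": None,                      "ip": "10.44.0.2"},
        {"name": "carol", "pubkey": "CAROL_PUBKEY", "endpoint": None,                      "ip": "10.44.0.3"},
    ]

    def config_for(me, others):
        lines = [
            "[Interface]",
            f"Address = {me['ip']}/24",
            "PrivateKey = <paste your own private key here>",
            "ListenPort = 51820",
            "",
        ]
        for peer in others:
            lines += ["[Peer]",
                      f"PublicKey = {peer['pubkey']}",
                      f"AllowedIPs = {peer['ip']}/32"]
            if peer["endpoint"]:
                lines.append(f"Endpoint = {peer['endpoint']}")
            lines.append("")
        return "\n".join(lines)

    for m in members:
        print(f"--- wg0.conf for {m['name']} ---")
        print(config_for(m, [p for p in members if p is not m]))

One honest caveat: at least one member needs a publicly reachable endpoint, and peers behind unfriendly NATs may have to route through that member rather than connect directly, which changes the AllowedIPs layout a bit.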
I don't think that "user-agents" are the hard part either. At this point, I think any grad student would happily write an NN implementation that took various posts as input and returned to you a sorted list based on your preferences (with input layers like bag of words, author, links, time, etc., that the user could put more or less weight on just by a simple upvote/downvote).
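For what it's worth, here is a toy of that idea that doesn't even need a neural network: a linear scorer over a few hand-picked post features, with weights nudged by upvotes/downvotes. The feature names and numbers are invented for illustration, not taken from any real feed.

    # Sketch of a user-controlled ranker: score posts by a weighted sum of
    # simple features; an upvote pulls the weights toward that post's
    # features, a downvote pushes them away. Purely illustrative.
    FEATURES = ["from_close_friend", "has_link", "word_count", "age_hours"]

    class Ranker:
        def __init__(self, learning_rate=0.1):
            self.weights = {f: 0.0 for f in FEATURES}
            self.learning_rate = learning_rate

        def score(self, post):
            return sum(self.weights[f] * post.get(f, 0.0) for f in FEATURES)

        def rank(self, posts):
            return sorted(posts, key=self.score, reverse=True)

        def feedback(self, post, liked):
            sign = 1.0 if liked else -1.0
            for f in FEATURES:
                self.weights[f] += sign * self.learning_rate * post.get(f, 0.0)

    ranker = Ranker()
    posts = [
        {"id": 1, "from_close_friend": 1.0, "word_count": 0.2, "age_hours": 0.1},
        {"id": 2, "has_link": 1.0, "word_count": 0.9, "age_hours": 0.5},
    ]
    ranker.feedback(posts[0], liked=True)          # user upvotes the friend's post
    print([p["id"] for p in ranker.rank(posts)])   # the friend's post now ranks first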
The problem is that no one has the incentive to host such a service for free, and users want the content to be available 24/7. So it's not as simple as just setting up a peer-to-peer network. Users who just use a phone as their primary computer will still want to be able to publish to their millions of followers, and so it wouldn't work to have those millions of people connect directly to this person's device. Maybe you can solve that with a bit-torrent like approach, but the problem gets harder when you include the ability to send messages privately.
"Users who just use a phone as their primary computer will still want to able to publich to their millions of followers, and so it wouldn't work to have these millions of people connect directly to this person's device."
You have shifted the discussion from small overlay network for friends and family to large overlay network for "millions of followers".
Those methods of sharing content with "millions of followers" are already available and will no doubt continue to be available.
A small private network is a different idea, with a different purpose. People will always have the choice of using a public network however a small overlay can avoid sending traffic through third party servers, like Facebook's.
There is no requirement that a service has to be "free", or supported by ads. This is something else you injected in the discussion. I use free software to set up overlays, but I have to pay for internet service and hosting. The cost of the "service" is not the setup it is the internet access and hosting.
Your idea doesn't sound particularly tenable. A paid social network that limits you to only your close friends and family will have no network effects, have fewer features, and be more difficult to set up...
It's easy for people to point out what they don't like about Facebook, but I don't think you are really comprehending why they dominated the social network space to begin with. It's not as easy as making a product that doesn't advertise, if it costs money to use instead.
Tenable for what? You are injecting your own ideas. Why would I care about the reasons Facebook is popular. The idea I submitted was to make the process of setting up overlay networks easier, not to try to start a web-based business. Making software easier to use is a tenable idea. The ideas you are introducing might not be tenable. However, they are not my ideas. The pattern I see is you introduce some idea, attribute it to me, then shoot it down.
I guess it's worth clarifying what exactly you're talking about. If you want to share images and text with a small group of people, okay, that might be useful in some cases. But that's not the use case that Facebook users have in mind - you're setting the bar almost comically low and the impact on the actual landscape of the web will be exactly zero.
If you mean something that can make a splash in the social media space to address the "user agents on Facebook" problem, color me skeptical about the prospect of competing on useful features with the Facebook behemoth while fighting the complexity up and down the stack of making everything decentralized and trying to make it friction-free for casual users to run, with no funding from ads, and starting out at square one on network effects. Yes, Mastodon is a possible counter to this line of argument, but Twitter is so stagnant and their product is so simple that I feel like it's almost a unique case. And for most people the Facebook use case includes being able to find all their real-life contacts; by that measure even Mastodon would fail.
You can be as skeptical as you want to be. Even assuming this idea was intended to "make a splash" (it isn't), these sort of comments make no difference whatsoever. We have all seen how HN commenters have criticised ideas that, rightly or wrongly, later went on to become successful businesses. The thing is, this is definitely not intended to be a business. It is just some software that exists and that works. If it works for me, then it is "successful". There is no budding "founder" to shoot down. Just a user with some software that works. The assumptions of "make a splash in the social media space" and "competing with Facebook" are all wrong.
The business of Facebook is not the "comically low" bar of sharing text and images with friends and family. An overlay solution that avoids sending traffic to a third party server is not "competing with Facebook". However it could be used to avoid Facebook which is the point we are discussing here. The fact that only a small number of people actually use a solution does not mean it is a "failure". If the software is relatively small, compiles fast, runs on different OS and architectures, stays available for download and reliably works as intended, then to me it is "successful". The way I evaluate "success" and "failure" of a software is probably different from many commenters/readers.
And, secondarily to your main point, the idea that "sharing images and text with a small group of people" is some weird niche case that Facebook users don't care about is... pretty off-base. I'd say it's the main use case for the majority of Facebook users.
I have a fundamental issue calling a content-curating, psychological-experiment-running platform visited by hundreds of millions of people daily 'someone else's website'. The fact that it is privately owned doesn't matter if nation states use it to wage information wars against other nation states' citizens. To make matters worse, the 'someone else' in question knows about it perfectly well and is fine with it because it means he's showing more ads.
Well, that is what it is. No one knew a single website could grow so large, but it did. Even though there are thousands of people working for its owner, when reading articles like the OP we are reminded how much control he still has over it. No doubt he still thinks of it as his personal creation. Of course, "99.99999%" of the content is not his. Perhaps most of the people who sign up on Facebook are not employed by nation states but just ordinary people who want an easy way to stay connected to friends and family. Maybe these people should have a better way to stay connected than using a public website.
Why are you defending Zuckerberg for being a dick? If you have power, you have responsibility: full fucking stop.
The idea that it's okay to be a selfish child with power is tantamount to allowing driving while drunk. Power is deadly; you can just as easily crush a person's life as you could their legs with a car. Don't drive drunk, don't be in power if you can't be a responsible citizen about it.
Power does not imply responsibility, nor vice versa; there is definitely a Venn diagram here. Dictators - all power, no responsibility. Manager of a homeless shelter - lots of responsibility, little power.
I believe, with nothing to back this up, that social media would be improved with a better educated population. One that knows and values the basic tenets of critical thinking and debate. No amount of policing will make people smarter or more courteous. People have to choose to be more civil and be more interested in views counter to their own.
The poster isn't defending Zuckerberg, merely trying to explain why Zuckerberg did what he did.
>If you have power, you have responsibility: full fucking stop.
No, if you have power you get to decide the rules.
Same as drunk driving: the reason we do not allow driving while drunk is because the people who are against drunk driving have more power than the people who are pro drunk driving. The side that has more power gets to decide what the law is.
Excuse my ignorance, but isn't the overlay network setup problem one that has problems at almost every level of the stack? If there are no definitive technical problems to overcome, why is it not possible to create a mobile app that friends and family could use as their own private network?
Isn't the internet supposed to be every node acting as its own server and client simultaneously anyway? Is the problem just the inability to truly decentralize discovery, registry, and identity authentication of nodes in the network? Or is the problem that most ISPs don't want people operating services out of their homes or off of their phones?
"Excuse my ignorance, but isn't the overlay network setup problem one that has problems at almost every level of the stack?"
It works for me and has worked for others. The keyword here that distinguishes this idea from almost every other "peer-to-peer overlay network" project that you can read about is "small". If you limit the size of the network, you can avoid some problems. Most projects you read about aim at the ability to create a single, large network that potentially everyone can join. Open to the public. However using a different approach it is possible to create only small networks that are only open to people you know, e.g., friends, family, co-workers. There are still problems, but there are always going to be some problems. The internet you are using right now has problems. The question is does it work well enough. The small overlay network idea has worked well enough for me that I consider it one of the better ones.
It is really impossible to debate these ideas on the internet. Opinions are strong and negativity is even stronger. If you want an answer you need to try things out yourself and draw your own conclusions. No peer-to-peer solution is "perfect" and if you are always looking for the solution with zero negatives and zero limitations, you will never find it. Worse is if you never actually try these solutions, you just read about them. After you try many of them and learn what you like and do not like about the design/implementation, it is easier to choose one idea that works for your use case. Every time someone starts promoting a peer-to-peer project you can quickly evaluate it, based on what types of designs/implementations have worked for you and which ones didn't. Well, that's my opinion, anyway.
It may not be that hard to set up things for yourself; I have been toying with something like that for messaging: https://cweb.gitlab.io/StoneAge.html. The deeper question is how to sustain this kind of product and make it competitive without comparable funding.
Email exists. Wordpress has free site hosting. SMS is ad free. If people wanted to create a free webpage for friends and family to see there are loads of options. The problem is that, generally speaking, most people do not find the value proposition as a Facebook replacement compelling.
Facebook does not provide software to set up small overlay networks.
Nor is the business of Facebook to provide messaging or free web pages. That is just bait. The business of Facebook is providing a way to advertise to targeted audiences within the billions of people who visit Facebook. That is what people pay for. It is a website with billions of daily visitors.
Unless you have a website with billions of visitors, you are not competing with Facebook.
A small overlay network is not a website, it is a computer network. It does not have a large audience because the number of nodes is small. It is an interface you see when you type ifconfig or ip addr, not an email server, a blog website or an SMS provider. You could use it for those things or you could use it for other things, anything you can do on a LAN.
It makes zero difference what you think people want. This is not a popularity contest of any sort. If you want to argue against the premise of my comment then you need to argue that small, personally-managed overlay networks and peer to peer connections are less efficient for sharing content with friends and family than using a public website, subject to "curation" and "censorship".
> In 2020, it is still too difficult for everyone to set up their own website, so they settle for a page on someone else's [for] exchanging content with friends and family
This is patently not true. People were using email for this prior to Facebook's existence, which worked well enough ("Share photos via email! Share videos via email! Here's a funny story from grandma! RSVP to my birthday!") and was painless to set up; in fact, all the people who have Facebooks also have emails! But very few people are doing this anymore as their main mode of communication with friends and family; this kind of activity is now happening on Facebook, where people are happy with the ease that it facilitates. Small networks work fine, but nobody's interested, hence why Facebook is worth billions and billions of dollars and is a large website.
You say "nobody's interested". That of course is "patently false". For one, pwdisswordfish2 is apparently interested. The author and other users of the software he/she says she uses are obviously interested. There are numerous projects that aim to use overlays. The overlay idea was used by one company that sold multiple times, ultimately to Microsoft for billions of dollars. The HN readers who upvoted the comment describing small overlays are presumably interested in seeing the idea presented in a comment.
The stated purpose of this forum is intellectual curiosity. There is nothing in bobthepanda's comments that is directed at that stated purpose. Why is he/she arguing about email usage when the quoted sentence refers to setting up websites? This looks like more of the "straw man" argument technique.
https://en.wikipedia.org/wiki/Straw_man
Some families already have their own Slack: chat, post articles, share pics, and even put content in appropriate channels that you're free to join or leave! I already do this with my friends group of about 20 cuz it's more than good enough.
> The primary content no user wants to see and every user agent would filter out is ads
"no user"? Nope. People buy magazines, that are 90% ads. Subscribe to newsletters. Hunt for coupons. Watch home shopping channels. Etc, etc.
There's large part of population that wants to see ads. Scammy and bad ads? No. Good and relevant ads? A LOT of people do want them. Even tech-folks, who claim that ads are worst thing for humanity. Don't you want to learn about sale for new tech gadgets? Discounts for AWS? Good rent deals?
> Don't you want to learn about a sale on new tech gadgets? Discounts for AWS? Good rent deals?
No, not at all. Ads don't just inform about deals, they also incite you to buy. I don't want to buy just because some random company got the chance to put some psychological manipulation in front of my eyeballs. If I want or need something, I will specifically seek it out. If I don't specifically seek it out, then I probably don't really need it.
If the price (heh) of that is that I miss out on some better deals for stuff that I would otherwise seek out, I'm fully happy eating that cost.
> If I want or need something, I will specifically seek it out
Sometimes we don't know that we have a problem that is easily solvable. As an example, I had no idea that there was an industry to reduce aws spending. One day a marketer made it past my filters and made me aware of an industry that I benefit from today. Just one example of many where marketing is win-win.
On the other hand I agree that there is too much psychological manipulation, unfortunately it exists because it works. Maybe this is where we need to disrupt the industry through a different approach. Just brainstorming, what if we regulated advertising to prohibit emotional/manipulative messages and rely instead on advertising facts?
> Ads don't just inform about deals, they also incite you to buy. I don't want to buy just because some random company got the chance to put some psychological manipulation in front of my eyeballs.
I'm curious why you don't want to encounter these "manipulations". Is it because of the brainpower it uses to identify them as such? Are you particularly susceptible to giving in to them? Something else? I'm genuinely curious.
I purposefully read a site called OzBargain, which is basically a compilation of ads. Sometimes I buy stuff I don’t need. Nonetheless, I like OzBargain.
> Don't you want to learn about a sale on new tech gadgets? Discounts for AWS? Good rent deals?
No. I want less intrusions on my senses, not more. I buy few things, and ads are irrelevant for the central ones: groceries and utilities are dependent on my place of residence.
If given the choice, I’d welcome with open arms the chance to never see an ad again anywhere, including offline. I’ve never seen an ad with a deal as good as that one, and I doubt I ever will.
There's a fundamental mismatch between how consumers value ads shown to them and how advertising platforms value the ads.
Suppose there is a $100 product that produces $110 worth of value for a specific person and costs $50 to produce and deliver to them. To use economics terms, there's a $10 consumer surplus here, and a $50 producer surplus. The consumer willingness to see the ad is proportional to the consumer surplus, while the producer's willingness to pay for ad placement is proportional to producer surplus.
This is a fundamental divergence in interests; because the ad networks are paid by producers, they'll serve the ads that tend to make ad buyers the most money, not the ads that best enrich the viewers' lives.
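A toy numerical version of that divergence, using the $100/$110/$50 figures from above plus a second, invented ad for contrast:

    # Sketch: the ad the viewer gains the most from (largest consumer surplus)
    # is not the ad the network earns the most from (largest producer surplus,
    # which funds the bid). Numbers are illustrative only.
    ads = [
        {"name": "gadget", "price": 100, "value_to_viewer": 110, "cost": 50},
        {"name": "course", "price": 40,  "value_to_viewer": 90,  "cost": 35},
    ]
    for ad in ads:
        ad["consumer_surplus"] = ad["value_to_viewer"] - ad["price"]  # viewer's gain
        ad["producer_surplus"] = ad["price"] - ad["cost"]             # funds the ad bid

    best_for_viewer = max(ads, key=lambda a: a["consumer_surplus"])
    best_for_network = max(ads, key=lambda a: a["producer_surplus"])
    print("viewer would rather see:", best_for_viewer["name"])    # course (50 vs 10)
    print("ad network will show:", best_for_network["name"])      # gadget (50 vs 5)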
Honestly, a truly well-made ad targeted at me, something that really ticked all the right boxes... gosh, I'd spend 3-4× more, easily. Because ordinarily I just don't lose time on shopping websites, but I'm kind of a spender when I do find things that I like. And I'm a sucker for good deals; I'm a patient prey hunter, I can wait months in ambush for the right deal.
Anecdotally of course, as one of those tech-folks who's got nothing against good and relevant ads.
> Anecdotally of course, as one of those tech-folks who's got nothing against good and relevant ads.
Be careful what you wish for. This TED talk[0] on persuasion architectures brings up some interesting moral arguments against this idea. If we generally accept that ad conversion is a metric of an ad's success, how do we handle the situation where people that are susceptible to addictive behaviors like gambling and compulsive shopping are exposed to ads that exploit their condition? Data-driven advertising systems are built particularly well to find and exploit these kinds of people. We already see this a lot in the scam call/email world. Once you fall for one, the amount of inbound traffic you receive from scammers increases dramatically.
> how do we handle the situation where people that are susceptible to addictive behaviors like gambling and compulsive shopping are exposed to ads that exploit their condition?
Ads like that used to be common. By your description, it sounds like they died out on their own.
How? Data-driven ad systems are built specifically to target people who have a high probability of conversion. The perfect person for such a system is one that would constantly (i.e. impulsively) buy or convert.
I guess I just don't buy enough things to want to see any ads at all. I don't have any particular issue selecting products either.
A well targeted ad doesn't need to serve you well, it just needs to convince you to buy; I have a lot of faith in myself and I'd say those are often the same thing, but experiences may differ.
The “ads just aren’t good enough yet” poster, while right, doesn’t really take into account the terrifying things companies would ask to know about you in order to properly target those ads.
Marketing is typically about getting to the customer before the customer does any homework. The more informed the customer is, the fewer conversions the marketing-heavy company will get. This is because they spend their budget on marketing (thus you seeing their ad as opposed to the competition's). That means they don’t have the same resources dedicated to their product.
Sure at some scale, marketing becomes a requirement to build your brand for many other purposes and I will also concede that in some cases, like with services that hinge on network effect, marketing is required for the product to even become viable. But most things are not social networks and we would be better served to learn of the product in context with its competition in a less biased setting.
> Don't you want to learn about a sale on new tech gadgets? Discounts for AWS? Good rent deals?
Sure. The thing is: when people are interested in something, they look for it. When I want to buy some games, I open my PS4's store. Any advertising I get there is absolutely fine because I asked for it. I told the software to show me the stuff that's available for me to buy.
The problem is when I open a link to some web page and 80% of my phone's screen turns into advertising noise I didn't ask for and couldn't care less about. I make it a point to delete this noise.
Honestly I think the only way to make an ethical social network is to make a non-profit one. Fund it alongside other public goods like PBS, public education, highways, rail networks, healthcare, etc.
And yes, I know: good luck getting THAT to happen in the US given how badly funded everything else in my list is. If you’re in another country that actually funds public goods maybe this is a thing you could talk to some of your fellow techies about and make a proposal, especially if your country is getting increasingly tired of Facebook?
Alternatively, ground-up local funding of federated social networks might be workable; I run a Mastodon server for myself and a small group of my friends and acquaintances, with the costs pretty much evenly split between myself and the users who have money to spare. It is not without its flaws and problems but it is a thing I can generally keep going with a small investment of my spare time every few months.
The solution is to nationalize Facebook (and any site that proves to be similar) and allow everyone free access to it w/o censorship. Give Zuckerberg $1 for his time and effort.
Meanwhile, Google must be split up: e.g., Alphabet can go do AI work as a private corporation but Google Search needs to be nationalized (or split into at least 3 entities - I prefer nationalization).
Amazon must be broken up and parts nationalized, but this is a more complex case for later discussion.
A) These companies are American, and outside of certain, very specific, industries explicitly mentioned in the War Powers Act, the American government has no authority to nationalize them at any time for any reason.
B) Companies like Facebook in particular can never be nationalized due to the First & Fourth Amendments, and government being barred from competing with private entities in the private sector.
C) I do agree that Google and Amazon should be tried for anti-trust violations and that Google in particular should be split for violating anti-trust laws against vertical integration across markets.
Pay for Facebook then. 1.5% of total YouTube users subscribe to YT Premium. I love how the smartest minds will ignore the most primitive economics. Ads work. For everyone. Except the deluded.
You know the worst part? I do pay for YT Premium and yet Google still finds a way to throw ads at me on Youtube videos through videos it suggests via Google Feeds (the leftmost screen on a Pixel). I bloody pay for the service and yet I am still getting ads on any youtube video I play when clicking any youtube suggested video on that feed. How annoying do you think that is? When you give in and pay, yet you are still getting harassed.
I had this issue as well, and logging out (in my case, switching accounts), wiping the Google app cache, and I think rebooting to make Pixel Launcher refresh, then logging back into the YT Premium subscribed account fixed it for me. Convoluted, I know, but I think the issue was it wasn't picking up some profile variable or token denoting me as a subscriber. Hope that helps!
To be honest, I'd happily pay for YT Premium if Google didn't use my data to personalise other results and content on the internet. I personally stop using products/services that dictate what content is deemed "suitable" for my consumption. I'll happily be served adverts so long as I'm not getting manipulated.
If everybody paid for facebook, it would have as many ads, if not more. That companies would leave money on the table with no incentive to do so is a bizarre self-justifying myth that people who live off advertising tell themselves.
You pay for cable. Paying customers are a better audience for ads than deadbeats.
But everybody doesn't pay for Facebook, and the reason they don't do so is because Facebook is funded by ads and no one has paid for a Facebook without ads. But sure, Facebook might hypothetically still have ads if users paid, and my grandmother might hypothetically be a bicycle if she had wheels.
And the way they make money is by someone paying for it. Some sites collect payments from users directly. Facebook collects payments from advertisers because most users wouldn't use Facebook if they had to pay for it.
Do you have a point? This reads like an unfinished thought.
There is absolutely no reason that they can't do both at the same time; there is a mental short circuit going on when people think paying for a website means no ads. It's never meant that in pretty much any other medium.
Yes there is a reason, users often don't expect to pay for services that show ads, and users who pay for services often don't expect to see ads. That's why most popular online services don't mix the two, eg. Facebook, Spotify, YouTube, Netflix, Crunchyroll, Google, etc.
"mental short circuit" better describes your argument that jumps from "it's possible to both show ads and charge user fees" to "they always will".
On Feb 4th, Google said there were 20 million Youtube Premium users, and I believe the latest estimates put Youtube at 2 billion users, which would be a 1% subscription rate.
It would be interesting to see how they count a "user" too. If a good portion of those 2 billion people don't use YouTube very much then the % of users who are using it regularly and subscribing might be a lot higher.
The irony is that there's even more of a disincentive to skip ads for 'paid' users -- because people willing to pay for things are even more valuable to advertisers -- so if you make a paid option with no ads, you're also gutting the value of the freemium-ads option (beyond the average user loss).
Paid-for Facebook would be a viable business if it wasn't competing with free-facebook. It's not ignoring economics to think that Facebook is causing significant negative externalities that ought to be priced or regulated to allow more ethical alternatives to thrive.
Free Facebook should be regulated out of existence. What else is free that is good for you? In big cities you have to pay for clean air to breathe already.
That does not tell us much. Where can we look at YouTube's balance sheet? There is likely more to YouTube as a business than selling ads on YouTube. For one, YouTube under Google is like A.C. Nielsen on steroids. The combination easily rivals any "smart" TV.
I paid for youtube for a while, but I did not get a different algorithm. It was the same feed of addictive, stressful content. I stopped paying once I noticed this.
Of course they conveniently only bundle these features so they can pretend it’s what customers want—hell downloading and playing when the screen is off should be a part of youtube. Just another step in the long saga of companies intentionally crippling their own services to bilk customers.
Let's imagine for a moment that a decentralized social network actually took off.
How long until those ads crop back up anyway? Instagram should give us some idea on how sponsored content might look in such a system. According to some random site, the average price for a "sponsored" instagram post is $300. You think your friends are above showing you an ad when real money is on the line? Maybe they won't be making that kind of money with very few followers, but when Pizzahut asks you to post an ad in exchange for a free pizza, I think you'll see plenty of takers. Now, granted, at least the people being paid are your friends, instead of Zuck.
But the kind of people who get $300 for a post should have a very large group of followers. Which should imply that there should be a reasonably large group of people prepared to part with some money in support of their work.
And for the small group of people it seems to me that it should be easily self correcting by normal social cues since there’s no network effect to offset it.
This played out in LiveJournal, and the outcome, generally speaking, was that paid promotions were too blatant to be effective outside of influencers proper (where it's part of the explicit contract between them and their audience).
> Since ads are the primary way sites stay in business
Flaw? It seems that the point would be to force FB to transact with currency rather than a bait-and-switch tactic. The site would also be more usable if they were forced to change business model.
That is how it is today. But does it have to be like that? What is the minimum revenue per user required for a service like FB to run?
While everyone is sceptical on whether such a service can reach critical mass to make financial sense, and a brand new FB replacement may not be able to do it, FB itself can certainly offer that as an option without hurting its revenues substantially.
I was sceptical of the value prop for YouTube Premium, and I am constantly surprised how many people pay for it. If Google can afford to lose ad money with YT Premium, I am sure FB can build a financial model around a freemium offering if they wanted to.
Minimum doesn't matter, the only question is if it's more profitable than the current approach. Facebook makes $9/user/quarter. That's every user no matter how little they use the site.
The issue however is that the users advertisers care about are the ones with disposable income. The users most likely to opt out of ads are the ones with disposable income. Thus the marginal cost to Facebook from such users is significantly more than $9/quarter.
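Back-of-the-envelope, using the $9/user/quarter figure above and an assumed (not measured) multiplier for how much more the likely-to-subscribe users are worth to advertisers:

    # Sketch: if the users most likely to pay for ad-free are also the users
    # advertisers value most, the break-even subscription price sits well
    # above the average ad revenue per user. The 3x multiplier is an
    # assumption for illustration, not a real number.
    avg_ad_revenue_per_quarter = 9.00   # figure cited upthread
    opt_out_value_multiplier = 3.0      # assumed premium on ad-averse, high-income users
    break_even = avg_ad_revenue_per_quarter * opt_out_value_multiplier
    print(f"break-even: ${break_even:.2f}/quarter (~${break_even / 3:.2f}/month)")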
>>> The ultimate problem is that Facebook doesn't want to show you good, enriching content from your friends and family. They want to show you ads. The good content is just a necessary evil to make you tolerate looking at ads.
>> That is how it is today. But does it have to be like that ? What is the minimum revenue per user required for service like FB to run.
> Minimum doesn't matter, the only question is if it's more profitable than the current approach.
Only if you think strictly inside the box.
The real problem here is a misalignment of incentives: Zuckerberg is managing Facebook to maximize the metric he's being evaluated on (profit and wealth), not the value provided to society.
>So it's not like we get to side-step horror-avoidance.
Which is my point. It's easy to say "just force them to optimize for value to society" and then ignore what that really entails in practice. And what giving someone the power to do that tends to cause.
Actually solving the related problems and making things actually better is probably possible but it's messy and hard and complicated.
"force them to optimize" doesn't mean putting a gun to somebody's head. But it's clear that the existing socioeconomic arrangements overall incentivize companies, including Facebook, to do a lot of harm while chasing profits. Forcing them to not do so can also mean changing the arrangements to remove the incentives.
> It's easy to say "just force them to optimize for value to society" and then ignore what that really entails in practice. And what giving someone the power to do that tends to cause.
> Actually solving the related problems and making things actually better is probably possible but it's messy and hard and complicated.
What you describe is politics, and it's inescapable.
> I am sure FB can build a financial model around a freemium offering if they wanted to.
They probably could. As they could also charge you a premium and then profit two times on top of you — with your fee and then by selling your data to third parties. Why? Because who would know that was happening? Corporations have no moral compass dictating their actions. The bottom line being what's best for investors.
Google uses my watching data in YouTube Premium, paying or not. They only claim not to show ads, and that's what most users care about. Even if FB sells/uses the data, as long as you don't see ads and promoted content, there are enough people who will pay for it.
20 million for YT @ $5/month is more than 1 billion in revenue per year. Given that 20M is just 1% of the user base, there is probably not a lot of impact on the ad revenue either.
In don’t see Facebook surviving such a transition. Without the manipulation and data mining for engagement you’re just left with a few features that is probably easily subsumed by other services in some federated fashion. It would provably look as exciting as hosted e-mail from the business side.
Only a minority of users would pay for it. <1% of the users, going by YT numbers.
It is not either free or paid; it can be both. It satisfies a need for the people willing to pay for the privilege of not seeing ads, and it even brings in additional revenue.
No radical business changes are required, only an ideological change from FB to treat their users as humans.
I think another tangential but related issue is with how these companies measure success. They measure success by engagement, and things that drive the most user engagement aren't usually the best for the user.
YouTube has been getting a lot of flak for this recently.
>Since ads are the primary way sites stay in business, they are obligated to fight against user agents or other intermediary systems.
Not all users hate ads in principle, just in practice. In theory, you'd be letting users select ads for relevance and for not being annoying. But obviously, the site wants to show ads based on how much they're paying, and "not being annoying" only factors in if it pushes people off the site entirely.
How are the user agents funded? Probably through ads.
The problem is actually how to fund the timeline publication services. But systems like Medium etc seem to work OK.
I am now spending several hundred dollars a year on content subscriptions. Plus subscriptions for Gmail, Zoom and a few other things where I have outgrown the free service. A freemium model for the timeline publication services would probably work.
No, they wouldn't, unless advertising gets banned. They'd instead accept your payment and find a way to shove ads in anyway, in a covert or overt way, just as many paid services do, because why leave money on the table?
Spotify is reliant on copyright exclusions to keep competing free services at bay, so they aren't really providing any value.
YouTube is still reliant on ads, and ad-funded content creators, and will continue serving you manipulative content whether you pay or not. So you can't really count that as a success either.
The thing is, when you remove the ugly side of those businesses, what remains can be done better for free using p2p networks or federation over open protocols.
> The thing is, when you remove the ugly side of those businesses, what remains can be done better for free using p2p networks or federation over open protocols.
I don't know about that. I buy a lot of things. I wish something would help me buy what I need and didn't know I needed so I didn't have to spend time shopping and researching.
Maybe you're interested in hearing about X tech, or you can tell your "Agent" that you want to buy Y thing, or travel to Z.
That's where ads and reviews get through.
I think transparency matters more. I liked Andrew Yang’s suggestion to require the recommendation algorithms of the largest social networks to be open sourced given how they can shape public discourse and advertising in all mass media is regulated to prevent outright lies from being spread by major institutions (although an individual certainly may do so).
Open sourcing the algorithms (however we define it) does absolutely nothing. What use is a neural network architecture? Or a trained NN with some weights? Or an explanation that says - we measure similar posts by this metric and after you click on something we start serving you similar posts? None of those things are secret. More transparency wouldn't change anything because even if completely different algorithms were used, the fundamental problems with the platform would be exactly the same.
It's silly to so confidently assert that opening up a closed source algorithm to 3rd party analysis will "do absolutely nothing". How could you possibly know there is nothing unusual in the code without having audited it yourself?
Seeing how the sausage gets made certainly can make lots of people lose their taste for it.
A lot of how big systems work is embodied in large neural networks, and the detailed structure of how they make decisions is an open research problem. So it’s not silly for OP to state that at all; it’s empirical fact.
It’s also not possible to audit the code for anything unusual without taking it all the way back through all tools source, all hardware, down through the chips, through the doping in the chips, and even lower. This stack is such a hard problem that DARPA has run programs for a long time to address this. Start by reading Thompson’s ACM article titled something like “Reflections on Trusting Trust” where he shows code audits don’t catch hidden program behavior, then follow the past few decades where these holes have been pushed through the entire computing stack.
Toolchain compromises are a non-zero risk but involve a lot of orchestrated resources to subvert systems to meaningful effect (simple exfil is sufficient for most corporate espionage v. stuff like Stuxnet to enact specific changes covertly). A company doing that given legislation to keep recommendation behavior and policies transparent to the public would be violating the spirit of the regulation by creating more opacity and delusion, no question. Admittedly, they're not going to be prosecuted in our current regulatory cyberpunk-esque hellscape, but neither would any public-benefit regulation pass anyway making the discussion of subversion moot, right? So presuming such a societal environment where regulation _could_ pass, we would hopefully have a more effective regulatory policy framework where subversion of the intent to be transparent for the sake of public safety and trust while still protecting trade secrets would be under sufficient scrutiny. All I know is that engineer-activists like Jaron Lanier are working with more tech-aware activists / politicians like Yang in proposing more effective tech regulatory frameworks than the past, and their efforts should be a lot more effective than either the current collective actions of the throwing up of our hands or yelling, whining, and screaming hoarsely.
From a regulatory standpoint mirroring the nature of our organizational tendencies, I posit that the _policy_ models should look similar to Mickens' security threat vector model - Not-Mossad or Mossad.
>It’s also not possible to audit the code for anything unusual without taking it all the way back through all tools source, all hardware, down through the chips, through the doping in the chips, and even lower.
Are you implying that since we can't audit every single thing, auditing anything is useless?
>Are you implying that since we can't audit every single thing, auditing anything is useless?
No, I am pointing out that your statement implies anything unusual in the code can be found by an audit. It cannot. And most of the activity by big companies is not in code, it's in data, and "auditing" it is currently beyond anything on the near horizon. Some of the behavior is un-auditable in the Halting Problem sense - i.e., the things you'd want to know are non-computable.
>No, I am pointing out that your statement implies anything unusual in the code can be found by an audit. It cannot.
This is so plainly false it's silly. Have you ever heard of a code review? What do you think security researchers do? Google Project Zero. Plenty of things are found all the time at the higher (and lower) levels of the stack, even if something unknown remains deep within.
>And most of the activity by big companies is not in code, it's in data, and "auditing" it is currently beyond anything on the near horizon.
Audits have no problem finding out what type of data is being collected (see PCI || HIPAA compliance). That would be a great start: for people to be made explicitly aware of all the data points that are being collected on them.
You're simply wrong. Did you read the article I told you about, the one that shows quite clearly exactly how to do this? No, you didn't, or you'd stop making this false claim. Before you repeat this, RTFA, which I'll post again since you didn't learn last time [1].
There, read it? There's decades of research into even deeper, more sophisticated ways to hide behavior. At the lowest level, against a malicious actor, there is no current way to ensure lack of bad behavior.
>What do you think security researchers do?
Yes, I've worked on security research projects for decades, winning millions of dollars for govt projects to do so. I am quite aware of the state of the art. You don't seem to be aware of basic things decades old.
Do you actually work in security research?
>Plenty of things are found all the time
You're confusing finding accidental bugs with an actor trying to hide behavior. The latter you will not find if the actor is as big as a FAANG or nation state.
If simply looking at things was sufficient, then the DoD wouldn't be afraid of Chinese made software or chips - they could simply look, right? But they know this is a fool's errand. They spend literally billions working this problem, year in and year out, for decades. It's naive that you think simple audits will root out bad behavior against malicious actors.
Even accidental bugs live in huge, opensource projects for decades, passing audit after audit, only to be exploited decades later. These are accidental. How many could an actor like NSA implant with their resources that would survive your audits?
Oh, did I mention [1]? Read it again. Read followup papers. Do some original research in this vein, and write some papers. Give talks on security about these techniques to other researchers. I've done all that. I have a pretty good idea how this works.
>what type of data is being collected
Again, you miss. I am not talking about the data being collected. I'm talking about the data in big systems that make decisions. NNs and all sorts of other AI-ish systems run huge parts of all the big companies, and these cannot yet be understood - it is literally an open research problem. Check DoD SBIR lists for the many, many places they're paying to have researchers (me, for example - I write proposals for this money) to help solve this problem. For the tip of the iceberg, read on adversarial image recognition and the arms race to detect or prevent it and how deep that rabbit hole goes.
Now audit my image classifier and tell me which adversarial image systems it is weak against. Tell me if I embedded any adversarial behavior into it. Oh yeah, you cannot do either, because it's an unsolved (and possibly unsolvable) problem.
Now do this for every learning system, such as Amazon's recommender system, for Facebook's ad placement algorithms, for Google's search results. You literally cannot.
Don't bother replying until you understand the paper - it shows that a code audit will not turn up malicious behavior if the owner is actively trying to hide stuff from you.
>You're confusing finding accidental bugs with an actor trying to hide behavior. The latter you will not find if the actor is as big as a FAANG or nation state.
>it shows that a code audit will not turn up malicious behavior if the owner is actively trying to hide stuff from you.
Yes, code obfuscation is a thing, but it's not a perfect silver bullet like you falsely claim, and often only serves to slow down researchers.
Of course it is true though that many bugs and vulnerabilities remain hidden which we may never find, and yet that's not a valid reason not to look for them, because there are many which are found every single day.
>> Do you actually work in security research?
> Yep!
Your comment history is interestingly lacking any evidence of that. Care to demonstrate it?
>>The latter you will not find if the actor is ...
> Wrong, wrong, wrong.
Tell me how you can audit an image classifier to ensure it won't claim a specially marked tank is a rabbit, where an enemy built the classifier, and where you don't know what markings the enemy will use. You're given the neural net code and all the weights, so you can run the system to your heart's content on your own hardware. Explain how to audit, please.
Good luck. It's bizarre you claim people can find such things, when it's a huge current research problem, and it's open if such things can even be demonstrated at all.
Same thing for literally any medium or large neural network. None can be audited for proof of no bad behavior.
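To make the "specially marked tank" point concrete with something anyone can run, here is a toy in plain numpy: a logistic-regression classifier trained on synthetic data that has been poisoned so a single trigger feature forces class 0. Nothing below comes from any real system; it only illustrates that the shipped artifact is a bag of numbers, and the poisoned training data the attacker used never has to be shown to you.

    # Toy backdoored classifier (illustrative only). The model is trained on
    # synthetic data where a rare trigger feature is always labeled 0, so it
    # learns a strongly negative weight on that feature. At inference time the
    # trigger tends to override the "real" rule.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 4000
    X = rng.normal(size=(n, 3))
    X[:, 2] = 0.0                                # feature 2 is the trigger, normally off
    y = (X[:, 0] + X[:, 1] > 0).astype(float)    # the legitimate decision rule

    poisoned = rng.random(n) < 0.1               # 10% of rows carry the trigger...
    X[poisoned, 2] = 1.0
    y[poisoned] = 0.0                            # ...and are forced to class 0

    w, b = np.zeros(3), 0.0                      # plain full-batch logistic regression
    for _ in range(3000):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= 0.5 * (X.T @ (p - y)) / n
        b -= 0.5 * np.mean(p - y)

    clean = np.array([1.5, 1.5, 0.0])            # clearly class 1 under the real rule
    marked = np.array([1.5, 1.5, 1.0])           # same input with the trigger set
    print("weights:", np.round(w, 2), "bias:", round(b, 2))
    print("clean  ->", int(clean @ w + b > 0))   # typically 1
    print("marked ->", int(marked @ w + b > 0))  # typically 0: the trigger wins

In three dimensions you can of course spot the suspicious weight by eye; with millions of weights and nonlinear layers there is no such tell, which is the point being argued above.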
>code obfuscation is a thing, but it's not a perfect silver bullet like you falsely claim,
I've never said code obfuscation. Unless you understand the paper which you seem to repeatedly ignore, you'll keep making the same mistake.
The paper demonstrates how to remove the code that has the behavior while still embedding the bad behavior in the product. You cannot find it from auditing the product code. There is zero trace in the source code. The attack in the paper has been pushed through all layers of computing stack since then, and now is at the quantum level. And as these effects become more important, there can be no audit since what you want to know runs up against physics, such as the No Cloning Theorem.
That you don't realize this is possible is why you keep making the same error that looking at source code will tell you what a product does.
If it were as simple as your naive belief, there would not be literally billions of dollars available for you to do what you claim you can. DoD/DARPA/secure foundries would love to see your magic methods.
Have you read the paper? Maybe it will show you the tip of the iceberg on why audits are used to find accidents, but are much weaker against adversarial actors, to the point of not providing any value for really complex systems.
>The perfect is the enemy of the good.
I'm not saying don't audit. I'm disputing your initial claim that an audit will find anything unusual. It can find common errors. It can find bad behavior inserted by unskilled people. But against groups that know about current work, you won't find anything, in the same way you cannot audit a neural network.
That you continue to think code obfuscation is the only way to embed bad behavior in a stack shows that you're unaware of a large section of security research. Read the paper.
>I'm not saying don't audit. I'm pointing out your initial claim that this will find anything unusual.
By "unusual" I mean anything that has intended or unintended negative effects on society, such as what was seen with Cambridge Analytica, or FBs emotional manipulation studies.
>It can find common errors. It can find bad behavior inserted by unskilled people. But against groups that know about current work, you won't find anything
Yep, and without a 3rd party audit, we can't even begin to approximate the degree of hypothetical bad behavior that exists affecting billions of people due to regular developers doing what they're told by their product managers (or of their own volition), let alone a nation state APT.
You keep ignoring both how to audit NNs and how to address behavior not in code. Have you read the paper yet? Explain how audits work in light of the paper.
Without answering those you’re simply wasting time and effort by claiming audits can find things they cannot.
I quite doubt you work in security from your inability to grasp these things. Please demonstrate you’re not lying. Your posting history shows a tendency to be a conspiracy believer, and there’s zero evidence you do anything professionally in security, unlike the history of those I know that do work in security.
The entire stack contains enough holes as to be swiss cheese, auditing the open code means nothing if and when something before that code in the stack manipulates the outcome of the code. This is one of the reasons those big security issues in Intel CPUs the last few years were such a big deal. The entire stack needs to be reworked at this point.
>auditing the open code means nothing if and when something before that code in the stack manipulates the outcome of the code.
In terms of software security vulnerabilities, there is so much low hanging fruit making exploitation trivial. Even if a small team within an intelligence agency knows about a zero day deep in the stack, addressing vulnerabilities higher up in the stack that are easily exploited by script kiddies necessarily reduces attack surface.
However, what we're talking about here is not so much about security vulnerabilities, as it is about design flaws (or features) which have harmful effects on society.
There isn't a simple fix, or likely any "fix," for the issues you want to be knowable. Besides the economic impossibility of it, there are too many places to hide behavior that we cannot foresee due to quantum effects, complexity, etc. So reworking the entire stack is not reasonable or likely very beneficial.
It's better to incrementally address issues as they are found and weighted.
Not the recommendation engines. The graph. All the social media companies (and indeed Google and others) profit by putting up a wall and then allowing people to look at individual leaves of a tree behind the wall, 50% of which is grown with the help of people's own requests. You go to the window, submit your query, and receive a small number of leaves.
These companies do provide some value by building the infrastructure and so on. But the graph itself is kept proprietary, most likely because it is not copyrightable.
The graph itself borders closely on privacy issues as well. Even if FB et al were government funded, that wouldn't make it good either. And said data could be considered a competitive advantage, but perhaps not. If everyone got a copy of various social networks' friends lists, the number of viable alternatives would skyrocket quickly because the lock-in effect would be gone. Perhaps this needs to be theorized more along the lines of modernized anti-trust laws (which don't work well in tech, given anti-trust laws were based around trying to lower consumer prices).
Yeah, pretty much. It's easy for Facebook to claim that it's popular and the best thing going when you specifically need a FB account for contacting people.
>advertising in all mass media is regulated to prevent outright lies from being spread
Advertising in mass media is regulated. You are very much allowed to publish claims that the government would characterize as outright lies, you just can't do it to sell a product.
Does that actually work? If they create some complex AI and then show us the trained model, it doesn't really give much insight into the AI doing the recommendation. You could potentially test certain articles to see if they are recommended, but reverse engineering how the AI recommends them would be far more time consuming than updating the AI. As such, Facebook would just need to regularly update the AI faster than researchers can determine how it works. Older versions of the AI would eventually be cracked open (as much as a large matrix of numbers representing a neural network can be), but between it being a trained model with a bunch of numbers and Facebook having a newer version, I think they'll be able to hide behind "oops there was a problem, but don't worry our training has made the model much better now".
It would at least make clear whether the site tries at all to restrict certain recommendations, like harmful content, and whether the model is subject to top-down rules/policies like recommending government propaganda sites over independent sources. It could be used in later, better worded and targeted subpoenas for how said filtering and censoring works. It would also show if there exists a special promotion system for a company's own products and so forth. In many respects, it acts like an org chart, helping regulators and the public determine _what_ to scrutinize with more concrete actions. It provides a map, and that's better than a black box or Skinner Box where we are the subjects.
Setting aside the concerns about the efficacy of the idea, it also seems like an arbitrary encroachment on business prerogatives. I think everyone agrees that social media companies need more regulation, but mandating technical business process directives based on active user totals isn't workable, not the least of which because the definition of "active user" is highly subjective (especially if there is an incentive to get creative about the numbers), but also because something like "open source the recommendation algorithm" isn't a simple request that can be made on demand, especially with the inevitable enfilade of corporate lawyering to establish battle lines around the bounds of intellectual property that companies would still be allowed to control vs that which they would be forced to abdicate to the public domain.
The risk is that it behaves like a reinforcement learning algorithm which essentially rewards itself by making you more predictable; I'd argue that's what curated social networks do today.
If you're unpredictable you're a problem. Thus, it makes sense to slowly push you to a pole so you conform to a group's preferences and are easier to predict.
A hole in my own argument is that today's networks are incentivized to increase engagement, whereas a neutral agent mostly is not.
So perhaps the problem isn't just the need for agents but for a proper business model where the reward isn't eyeball time as it is today.
But you are predictable; even if you think you are unpredictable, you are just a bit more adventurous. An algorithm can capture that as well, and it will be easier for an algorithm that works on your behalf.
This makes me think of a talk I had with an AI-optimistic Microsoft sales guy a few years ago. His argument was essentially the same: "Look, it's no problem to have an AI curate everything for you because the algorithm will just know what you want, even if your habits are unusual!"
Of course this hasn't happened yet and I doubt it ever will. Maybe I'm just insane, but most of the recommendations from services I have fed data for hundreds of hours (YouTube) are actually repulsive.
Interesting, because I think I have a rather random assortment of hobbies that generally tend to have no overlap, and I get pretty good recommendations all the time.
What you’re referring to is splitting the presentation from the content. The server (eg Facebook) provides you with the content, and your computer/software displays it to your liking (ie without ads and spam and algorithmically recommended crap).
There’s a lot of history around that split, and the motivation for HTML/CSS was about separating presentation from the content in many ways. For another example, once upon a time a lot of chat services ran over XMPP, and you could chat with a Facebook friend from your Google Hangouts account. Of course, both Google and Facebook stopped supporting it pretty quickly to focus on the “experience” of their own chat software.
The thing is that there is very little money to be made selling content, and a lot to be made controlling the presentation. So everyone focuses on the latter, and that’s why we live in a software world of walled gardens that work very hard to not let you see your own data.
There is some EU legislation proposal that may make things a bit better (social network interop), but given the outsized capital and power of internet companies i’m not holding my breath.
> you could chat with a Facebook friend from your Google Hangouts account
This was never true. There was an XMPP-speaking endpoint into Facebook's proprietary chat system, but it wasn't a S2S XMPP implementation and never federated with anything. It was useful for using FBChat in Adium or Pidgin, but not for talking to GChat XMPP users.
Your friends provide you with the content, not Facebook. You only need Facebook now because you don’t have a 24/7 agent swapping content on your behalf and presenting it how you like it.
That’s a very good point. One line of thinking I’m interested in is social networking over email.
Everyone has email, so you could imagine a social networking app that’s just a thin layer over your email, and every interaction is encoded as an email being sent under the hood. Want to share a picture with your friends? Send an email. Someone wants to comment on it? They just send an email. Etc.
The main purpose of the app would be to offer a nice, device responsive, consistent presentation. Additionally if this were an open, documented standard, an entire ecosystem of “email apps” could emerge.
(Of course as far as your actual email account goes you’d want to auto archive the emails + not get notifications for them, but that’s easily configurable)
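To make this concrete, here's a minimal sketch (Python; the X-Social-* headers are made up, not an existing standard) of how such an app might encode "share a photo with my friends" as an ordinary email under the hood:

    # Sketch: a "photo post" is just an email with a couple of hypothetical
    # marker headers that the social app understands; any mail server carries it.
    import smtplib
    from email.message import EmailMessage

    def share_photo(smtp_host, sender, recipients, photo_path, caption):
        msg = EmailMessage()
        msg["From"] = sender
        msg["To"] = ", ".join(recipients)
        msg["Subject"] = caption
        msg["X-Social-Type"] = "photo-post"       # hypothetical marker header
        msg["X-Social-Thread"] = "post-0001"      # hypothetical thread id for comments
        msg.set_content(caption)
        with open(photo_path, "rb") as f:
            msg.add_attachment(f.read(), maintype="image", subtype="jpeg",
                               filename="photo.jpg")
        with smtplib.SMTP(smtp_host) as server:
            server.send_message(msg)

Comments would then just be ordinary replies referencing the same hypothetical thread header, so the whole thing degrades gracefully to plain email for anyone not using the app.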
We could do all the same things on the web, so long as the standards are open. But that's exactly the problem - lock-in is how social networks make profits, so the largest ones (where most people already are) are also the least likely to support anything like this.
Separating presentation and content is one way to do it, but it's not the only way.
For example, Facebook could create some kind of plugin API that allows you to interpose your filtering/ranking code between their content and their presentation.
For example, maybe they give you a list of N possible main page feed items each with its own ID. Your code then returns an ordered list of M <= N IDs of the things that should go into your feed. That would allow you to filter out the ones you don't want and have the most interesting stuff displayed first. Facebook could display the M items you've chosen along with ads interspersed.
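A rough sketch of what that user-supplied hook might look like, assuming a hypothetical candidate format (id, author, topic, timestamp) that Facebook would hand over:

    # Sketch of a user-supplied ranking hook: given N candidate items, return an
    # ordered list of M <= N item IDs to display. Fields and lists are made up.
    MUTED_TOPICS = {"clickbait", "outrage-politics"}
    BOOSTED_AUTHORS = {"close-friend-1", "close-friend-2"}

    def rank_feed(candidates):
        """candidates: list of dicts like {"id", "author", "topic", "timestamp"}."""
        kept = [c for c in candidates if c["topic"] not in MUTED_TOPICS]
        # Boosted authors first, then everyone else; newest first within each group.
        kept.sort(key=lambda c: (c["author"] not in BOOSTED_AUTHORS,
                                 -c.get("timestamp", 0)))
        return [c["id"] for c in kept]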
Something like that could run in the browser or Facebook could even allow you to host your algorithm in a sandbox on their servers if that helps performance. (Which means you trust them to actually run it, but you have to trust them on some basic things if you're going to use their service at all.)
In other words, changing the acoustics of the echo chamber doesn't mean you need to be the one implementing a big chunk of the system. You just need a way to exert control over the part you want to customize.
I don't see this as a bad thing. I experience this as a good thing.
The RSS feeds I subscribe to give me plenty of "presentation" or "branding". Logos, written descriptions [both short- and long-form], clear names of what I am subscribing to, URLs. Just the right amount for me, in fact; if I wanted to go to their website(s) for their particular buffet of blog posts, featured puff pieces on Page Five, twitter mentions, &c I can do that ... or not. I'm glad I don't have to if I don't want to, and all of these folks are more than able to drop into their RSS feed a "Please go here for our tour information with new stuff in our online shop" mention just as you are able to go straight to some website full of deep-thumping media flashing into your senses as you get to where you want to go instead of using RSS.
ActivityPub and other federated networks are the answer. They do exactly that: if you aren't satisfied with the rules on existing servers, you host your own. The network itself is wide open, and its control is distributed across many server admins. The way the content is presented is of course completely up to the software the user is running. Having no financial incentive to make UX a dumpster fire visible from space also helps a lot.
They're not the answer as long as they don't have loads of people. The attraction of FB and the like is that almost everyone has a FB account, just like almost every public figure has a twitter account. The downside of things like Mastodon is how do you know what server you want to connect to? For a non-technical user it doesn't offer any more obvious utility than a FB group.
There is indeed the problem of discovery that Mastodon doesn't feel like addressing. Like, you pick a server, make an account and now what? There's no way to bring your existing social graph with you. Even if your friends are there, you won't ever find them without asking each and every one about their username@domain. But I have some ideas on fixing that for my fediverse project — like making a DHT out of instance servers, thus making global search possible while keeping the whole thing decentralized.
I agree this is the main problem with ActivityPub, can you elaborate on your DHT idea? I'm thinking of doing something similar. How can the search be fast if the data is on many nodes, will you store the cache on a single instance and update it on some interval?
To be honest, I haven't really explored this yet, it's just that DHT feels like the most sensible approach. It's a rather ambitious project and I'm currently making the core functionality work (and interoperate with Mastodon where applicable). I'll probably do a Show HN post at some point.
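The rough shape I have in mind is just consistent hashing across the participating instances (everything below is made up for illustration): hash every username onto a shared ring, and the instance that owns that region of the ring indexes the directory entry, so any server can route a global search without a central index.

    # Naive sketch: all instances agree on the same hash ring, so a lookup for
    # "alice" can be routed to the one server responsible for that key.
    import hashlib
    from bisect import bisect

    INSTANCES = ["social.example", "fedi.example", "cats.example"]  # made-up servers

    def _hash(s):
        return int(hashlib.sha256(s.encode()).hexdigest(), 16)

    RING = sorted((_hash(name), name) for name in INSTANCES)
    RING_KEYS = [point[0] for point in RING]

    def responsible_instance(username):
        """Which instance should hold the directory entry for this username?"""
        idx = bisect(RING_KEYS, _hash(username)) % len(RING)
        return RING[idx][1]

A real version would need replication and a proper routing table so a server doesn't have to know every instance up front, but that's the basic idea.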
I like this, and so does my friend Confirmation Bias, who is pretty clear that the AI would select completely unbiased content relevant to me, not limited by any of the Bias family. It would be 100% better than the bias filters in place now, because my thoughts and selections are always unbiased, IMHO. (FYI: Obviously I'm not being serious. You clearly knew that, this notice is for the other person who didn't.)
> I don't know. Maybe that will just make the echo chambers worse.
This.
Also. What incentive does a walled garden even have to allow something like this? Put a different way, what incentive does a walled garden have to not just block this "user agent"? Because the UA would effectively be replacing the walled garden's own "algo curated new feed" - except if the user builds their own AI bot -- the walled garden can't make money the way they currently do.
I think the idea is very interesting. I personally believe digital UA's will have a place in the future. But in this scenario I couldn't see it working.
True, but we have ad blockers and they're effective. They're effective against the largest, richest companies in the world. There are various reasons for that, but at the end of the day it remains true that I can use YouTube without ads if I choose to. There's clearly a place in the world for pro-user curation, even if that's not in FAANG's best interests. I think it's antithetical to the Hacker ethos to not pursue an idea just because it's bad for mega-corps.
I was in agreement with you until I read that. People don't need to have content dictated to them like mindless drones, whether it is from social media, bloggers, AI, or whatever. Many people prefer that, though, out of laziness. It's like the laugh track on sitcoms, added because people were too stupid or tuned out to catch the poorly written jokes even with pauses and other unnecessarily directed focus. It's all because you are still thinking in terms of content and broadcast. Anybody can create content. Offloading that to AI is just more of the same, but worse.
Instead imagine an online social application experience that is fully decentralized without a server in the middle, like a telephone conversation. Everybody is a content provider amongst their personal contacts. Provided complete decentralization and end-to-end encryption imagine how much more immersive your online experience can be without the most obvious concerns of security and privacy with the web as it is now. You could share access to the hardware, file system, copy/paste text/files, stream media, and of course original content.
> And isn't that how the internet used to be?
The web is not the internet. When you are so laser focused on web content I can see why they are indistinguishable.
I think your suggestion is a bit out of scope for what's actually being discussed/not really a solution.
I'm active on the somewhat (not fully) decentralized social medium Fediverse (more widely known as Mastodon, but it's more than that) and I think a lack of curation is a problem: posts by people who post a lot while I'm active are very likely to be seen, while those by infrequent posters who are active while I'm not are very likely to go unnoticed.
How would your proposed system (that seems a bit utopic and vague from that comment, to be honest) deal with that?
> People don’t need to have content dictated to them like mindless drones whether it is from social media, bloggers, AI, or whatever.
If the AI is entirely under the user's control, why not? It's like having a buddy that's doing for me what I'd do for myself, if I had the time and energy (and eyebleach).
In response to it just creating more echo chambers:
- it can't be worse than now
- At minimum, it's an echo chamber of your own creation instead of being manipulated by FB. There's value in that, ethically.
- Giving people choice at scale means it will at least improve the situation for some people.
Isn't facebook (and reddit, and twitter) showing you posts by people, companies, etc. that you decided to follow? (And some ads?)
I am pretty sure things can be worse than right now; pretending we are in some kind of hell state at the bottom of a well where it can't possibly get worse seems unrealistic to me.
Neal Stephenson explores something like your “user agent” idea and comes up with a different solution in his novel “Fall; or, Dodge in Hell.”
Spoilers ahead:
In Stephenson’s world people can hire “editors” to curate what they see, and those editors effectively determine reality for people at a mass scale. This is just one of the many fascinating ideas Stephenson explores and I highly recommend reading the book.
This interview covers some of the details if you’re not willing to dive into an 800+ page novel:
Highly recommend reading Reamde first if you can. The story is entirely different, but is the same world and comes chronologically first; I felt the continuity added a lot when reading Fall.
Part of the concept was that the agents would actually roam onto servers on the internet on your behalf, raising complicated questions around how to sandbox the agent code (which came in useful for VPSs and AWS-style lambdas in the end).
At Baitblock (https://baitblock.app), we're working on something similar. It's called the Intelligent Blocker, and it has the same intended goal as your user agent 'AI' (not yet open to the general public, under development right now). With it you will be able to block, for example, all Facebook posts that are not from your family, or not of a specific type, or from a specific person.
Or comments on different Internet forums that are blatantly spammy/SEO gaming etc.
Or block authors in search results or Twitter feed or any comment that you don't like. Basically the Zapier of content filtering.
This will be available to the user as a subscription service.
Some of these things are unfortunately not possible on mobile platforms (Android, iOS) because the OSes do not allow such access, but we hope that Android and iOS open up in the future to allow external curation systems apart from the app platform itself, as it's in the interest of the user.
I think the overwhelming majority of users don’t want to deal with this kind of detail. IMO most people would end up using some kind of preset that matched their preferred bubble.
I haven't touched this in years, but one time I made a little project[1] to analyze the people I was following on Twitter and recommend who I might want to unfollow based on their attitudes. People who posted negative stuff very frequently were at the top of my list to ditch; I don't need extra input pushing me toward misery. The first few runs were very illuminating, but not surprising, like "wow, now that you mention it, Joe does say awful stuff approximately hourly".
I would love to have an agent that could apply those sorts of analyses to my data sources. In my case, I wouldn't want to filter out bad news, but unnecessarily nasty spins on it. I'd find that super valuable.
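For illustration, a toy version of that kind of negativity scoring could look like this (the word list and post format are made up, not what the original project used):

    # Toy sketch: score followed accounts by how often their recent posts contain
    # "negative" words, then list unfollow candidates above a threshold.
    NEGATIVE_WORDS = {"awful", "hate", "terrible", "worst", "disgusting", "furious"}

    def negativity_score(posts):
        """posts: list of post strings from one account."""
        if not posts:
            return 0.0
        negative = sum(1 for p in posts
                       if any(w in p.lower() for w in NEGATIVE_WORDS))
        return negative / len(posts)

    def unfollow_candidates(accounts, threshold=0.5):
        """accounts: dict of {handle: [post, ...]}; most negative first."""
        scored = sorted(((negativity_score(posts), handle)
                         for handle, posts in accounts.items()), reverse=True)
        return [handle for score, handle in scored if score >= threshold]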
We're a small team working in stealth on this exact challenge. Shoot me a note if you're interested in hearing more or getting involved. itshelikos@gmail.com
This type of thing is nothing new, but it's important to recognize that it doesn't take off because it's illegal.
As soon as Facebook realizes you're a risk, you'll get a C&D ordering you to stop accessing their servers. These typically have the force of law under the CFAA.
You won't access their servers, but just read the page that the user already downloaded? You'll still get nailed under the Copyright Act.
"User agents" in the sense used by the OP are as old as the internet itself. There's an active, serious, and quiet effort to abuse outdated legislation to ensure that they never become a problem.
I mean Facebook doesn't really decide what content I see, I do. I aggressively police my timeline and unfollow people who post garbage content. I don't really need an AI to do that for me...
Another early assumption about the internet and computers in general was that users were going to exert large amounts of control over the software and systems they use. This assumption has thus far apparently been invalidated, as people by far prefer to be mere consumers of software that is designed to make its designers money. Even OSS is largely driven by companies who need to run monetized infrastructure, though perhaps you don't pay for it directly.
Given that users are generally not interested in exerting a high level of sophisticated control over the software they use, how then is the concept of a user agent AI/filter any different at a fundamental level? It probably won't be created and maintained as a public benefit in any meaningful way, and users will not be programming and tuning the AI as needed to deliver the needed accuracy. I don't think AI has yet reached a level of sophistication where content as broad in range as what's found on the internet (or even just Facebook) can be curated to engage the human intellect, beyond measuring addictive engagement, without significant user intervention.
Hopefully I'm wrong, as I do wish I could engage with something like Facebook without having to deal with ads or with content curated to get my blood boiling. Sometimes I do wonder how much it is Facebook vs. human tendency under the guise of an online persona, as both are clearly involved here.
There are models for this that could probably work. Tim Berners-Lee has been working on a scheme called Solid for years now.
It is important to realize that Facebook is not the first, second or even tenth of its ilk. Facebook combines a bunch of ideas from previous systems, in particular MySpace and USENET. It is more or less the third generation of Web social media. There is no reason to believe there can't be a fourth.
My interest in these schemes is to provide a discussion space that is end-to-end encrypted so that the cloud service collecting the comments does not have access to the plaintext. This allows for 'Enterprise' type discussion of things such as RFPs and patent applications. I am not looking to provide a consumer service (at this stage).
The system you describe could be implemented in a reasonably straightforward fashion. Everyone posts to the timeline service of their choice and chooses among a collection of user agents that discover interesting content for them to read. These aggregation services could be paid or advertising supported. Timeline publishing services might need a different funding model of course, but bit shoveling isn't very expensive these days. Perhaps it could be bundled with video conferencing capabilities, password management or any of the other services people already pay for.
As for when the Internet/Web was not so vast: one of my claims to fame is being the last person to finish surfing the Web, which I did in October 1992 shortly after meeting Tim Berners-Lee. It took me an entire four days of night shifts to surf every page of every site in the CERN index.
In the context of this discussion Solid sounds amazing. I'd be super excited to tune the social web to my own preferences. Sadly, however, I couldn't make heads or tails of this garbage, jargon-laden website. WTF?
"Time to reset the balance of power on the web and reignite its true potential.
When Sir Tim Berners-Lee invented the web, it was intended for everyone. The excitement and creativity of its early days were driven from the notion that we can all participate — and the impact was world-changing.
But the web has shifted from its original promise — and it’s time to make a change.
We can still unlock the true promise of the web by decentralizing the power that’s currently centralized in the hands of a few. How? By using the power of Solid.
Solid is the technically potent, open-source platform built to decentralize the web. Inrupt is the company that’s helping to fuel Solid’s success."
Why would a personal AI which curates your content be any “better” than FB’s AI which curates your content? Isn’t the current AI based on what you end up engaging with anyway? If you naturally engage with a variety of content across all ideological spectrums, then that’s what the FB AI is going to predict for you. Unfortunately, the vast majority of us engage with content which reinforces our existing worldview - which is exactly what would happen with a personal AI.
Because an algorithm under your control can be tweaked by you. Could be as simple as reordering topics on a list of preferences. Facebook's algorithm can't be controlled like that. Also, an algorithm you own won't change itself unbeknownst to you.
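For example, a minimal sketch of an algorithm you own, where the only "model" is a weights table you edit yourself (the topics and post format are made up):

    # Sketch: reorder the feed by topic weights the user edits directly.
    # No opaque model, and it never changes unless you change it.
    MY_TOPIC_WEIGHTS = {
        "friends-photos": 3.0,
        "local-news": 2.0,
        "memes": 0.5,
        "politics": 0.1,
    }

    def my_feed_order(posts):
        """posts: list of dicts like {"id": ..., "topic": ...}."""
        return sorted(posts,
                      key=lambda p: MY_TOPIC_WEIGHTS.get(p["topic"], 1.0),
                      reverse=True)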
I tried building this 10 years ago as a startup. Maybe time to revisit, the zeitgeist is turning more and more towards this and computing power has gotten cheap enough ...
This misses the point. Facebook refuses to look inwardly or mess with their core moneymaker, regardless of how it affects people. No one is ever going to sip from the firehose, just like we'll never again get a simple view of friends' posts sorted by creation date.
I think the real problem is Facebook's need to be such a large company. They brought this on themselves trying to take over the world. Maybe they need a Bell-style breakup
Zuck doesn't care about healthy content if it reduces ad revenue and/or user activity (MAU/DAU) metrics. Basically he wants to extract as much time/money from each user as possible while keeping the experience just bearable enough that they do not leave the site in disgust. Once you accept this cardinal truth about FB, all the reprehensible actions from Zuck/senior leaders make perfect sense.
I like the line of thinking, but who actually provides the agent, and what are their incentives?
This is far from a perfect analogy, but compare it to the problem of email spam. People first tried to fight it with client-side Bayes keyword filters. It turns out it wasn't nearly as simple as that, and to solve a problem that complicated, you basically need people working on it full time to keep pace.
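For reference, the client-side approach was roughly this shape; a toy naive Bayes keyword filter (not any particular filter's actual code) looks like:

    # Toy Bayesian keyword filter of the kind people ran client-side against spam.
    import math
    from collections import Counter

    class NaiveBayesFilter:
        def __init__(self):
            self.words = {"spam": Counter(), "ham": Counter()}
            self.docs = {"spam": 0, "ham": 0}

        def train(self, text, label):
            self.docs[label] += 1
            self.words[label].update(text.lower().split())

        def spam_score(self, text):
            # Log-space naive Bayes with add-one smoothing; > 0 means "looks like spam".
            logp = {}
            for label in ("spam", "ham"):
                total = sum(self.words[label].values())
                vocab = len(self.words[label]) + 1
                logp[label] = math.log(self.docs[label] + 1)
                for word in text.lower().split():
                    logp[label] += math.log((self.words[label][word] + 1) / (total + vocab))
            return logp["spam"] - logp["ham"]

It works right up until the spammers adapt to it, which is exactly why the problem ended up needing full-time attention.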
Ranking and filtering a Facebook feed would have different challenges, of course. It's not all about adversaries (though there are some); it's also about modeling what you find interesting or important. But that's pretty complicated too. Your one friend shared a woodworking project and your other friend shared travel photos. Which one(s) of those are you interested in? And when someone posts political stuff, is that something you find interesting, or is it something you prefer to keep separate from Facebook? There are a lot of different types of things people post, so the scope of figuring out what's important is pretty big.
"Holochain apps are versatile, resilient, scalable, and thousands of times more efficient than blockchain (no token or mining required). The purpose of Holochain is to enable humans to interact with each other by mutual-consent to a shared set of rules, without relying on any authority to dictate or unilaterally change those rules. Peer-to-peer interaction means you own and control your data, with no intermediary (e.g., Google, Facebook, Uber) collecting, selling, or losing it.
Data ownership also enables new frontiers of user agency, letting you do more with your data (imagine truly personal A.I., whose purpose is to serve you, rather than the corporation that created it). With the user at the center, composable and customizable applications become possible."
I am thinking about the concept of “the last mile to user’s attention”.
Currently, software clients of Mastodon or Twitter hold that mile. Mastodon gives all content unfiltered, which could be too much at times, while Twitter does some oft-annoying opaque black magic in its timeline algorithms.
A better solution would be a protocol for a capability that filters content with logic under your control: a universal middleware standard that is GUI-agnostic and can fit different content types.
By adopting this, open/federated social could start catching up on content filtering features to for-profit social (in a no-dark-patterns way, benefitting user experience), hopefully stealing users.
Ideally it could be used by the likes of Twitter and Facebook—of course, given the size of for-profit social, such an integration would take some unimaginably big player to motivate them to adopt (the state of their APIs is telling), but if it’s there there’s a chance.
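To sketch the kind of contract I mean (hypothetical; the item shape and rule format below are not any existing standard):

    # Sketch of a GUI-agnostic filtering middleware: the client hands over generic
    # content items, the user's filter chain decides what gets through.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class ContentItem:
        id: str
        kind: str      # "text", "image", "video", ...
        author: str
        text: str

    FilterRule = Callable[[ContentItem], bool]   # True means keep

    def apply_filters(items: List[ContentItem], rules: List[FilterRule]) -> List[ContentItem]:
        return [item for item in items if all(rule(item) for rule in rules)]

    # Example user-defined rules, entirely under the user's control:
    no_spam = lambda item: "act now!!!" not in item.text.lower()
    no_video = lambda item: item.kind != "video"
    # visible = apply_filters(fetched_items, [no_spam, no_video])

Any client that can map its content into that generic item shape, Mastodon or otherwise, could reuse the same user-owned filter chain.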
Excellent idea; soon this will be a requirement for using the web in any productive way, considering the ratio of good information to junk is getting worse rapidly. We already do this in a way, only visiting certain sites that we like and following certain users. A personal AI would make this process much more efficient.
I do see a content filtering AI as very difficult to achieve, and I don't think it will be possible for quite some time. There are so many small problems, even getting AI to recognize targeted content is difficult, given that websites can have infinitely different layouts. And what about video or audio? The most practical way to achieve a content AI would be to persuade websites to voluntarily add standardized tags so that the only problem becomes predicting and filtering. Although I could see some issues with that like people trying to game the system.
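For example, with a hypothetical standardized content-topic tag the filtering half really would become almost trivial (the tag name below is made up, not a real standard):

    # Sketch: if pages carried machine-readable topic tags, client-side filtering
    # reduces to parsing them. "content-topic" is a made-up meta name.
    from html.parser import HTMLParser

    class TopicTagParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.topics = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "meta" and attrs.get("name") == "content-topic":
                self.topics.append(attrs.get("content", ""))

    def should_block(html, blocked_topics):
        parser = TopicTagParser()
        parser.feed(html)
        return any(t in blocked_topics for t in parser.topics)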
I agree - wasn't the browser intended to be the user agent? And as a counterpoint to some of the replies to you, surely people can just pay instead of sites being ad-based; what other industry operates in this absurd way? The public must think there's no cost to creating software if everything's always free.
> What if, instead, you had a personal AI that read every Facebook post and then decided what to show you. Trained on your own preferences, under your control, with whatever settings you like.
That would be great. Having an artificial intelligence as a user agent would be perfect. That'd be the ideal browser. So many science fiction worlds have the concept of an intelligent navigator who acts on behalf of its operator in the virtual world, greatly reducing its complexity.
Today's artificial intelligences cannot be trusted to act in our best interests. They belong to companies and run on their computers. Even if the software's open source, the data needed to make it useful remains proprietary.
It’s really not as sophisticated, but these guys[1] created an extension that in addition to their main objective of analyzing Facebook’s algorithm also offers a way to create your own Facebook feed. If I got it right, they analyze posts their users see, categorize them by topic and then let you create your own RSS feed with only the topics you want to see.
It’s not clear to me whether you may see posts collected by other users or only ones from your own feed and it seems highly experimental.
> What if, instead, you had a personal AI that read every Facebook post and then decided what to show you. Trained on your own preferences, under your control, with whatever settings you like.
There is a feedback problem, though, which is that your preferences are modified by what you see. So the AI problem devolves to showing you the kind of content that makes you want to see more of it, i.e. maximize engagement. I think a lot of people are addicted to controversy, "rage porn," anger-inducing content, and these agents are not going to help with this issue.
If we could train AI agents to analyze the preferences of people, I think the best use for them wouldn't be to curate your own content, but to use them to see the world from other people's perspective. If you know in what "opinion cluster" someone lies and can predict their emotional reaction to some content, you may be able to identify the articles from cluster A that people from cluster B react the least negatively to, and vice versa. And this could be leveraged to break echo chambers, I think: imagine that article X is rated +10 by cluster A and -10 by cluster B, and article Y is rated +10 by cluster A but only -2 by cluster B. It might be a good idea to promote Y over X, because unlike X, Y represents the views of cluster A in a way that cluster B can understand, whereas X is probably some inflammatory rag.
The key is that you can't simply choose content according to a user's current preferences, they also have to be shown adversarial content so that they have all the information they need about what others think. This is how they can retain their agency. Show them stuff they disagree with, but that they can respect.
I expect that a system like the one I'm describing would naturally penalize content that paints people with opposing points of view as evil or idiots, because such content is the most likely to be very highly rated by the "smart" side and profoundly hated by the "stupid" side. Again, note that I'm not saying content everyone likes should be promoted; it's more that we should promote the +10/-2 kind of polarization (well-thought-out opinion pieces that focus on ideas which might be unpopular or uncomfortable) over the +10/-10 kind of polarization (people who disagree with me are evil cretins).
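A toy version of that promotion rule, assuming each article already has an average rating from each cluster (all numbers are made up for illustration):

    # Sketch: prefer articles one cluster likes and the other merely dislikes (+10/-2)
    # over articles that split the clusters completely (+10/-10).
    def bridge_score(rating_a, rating_b):
        support = max(rating_a, rating_b)        # how much its own side likes it
        backlash = -min(rating_a, rating_b, 0)   # how much the other side hates it
        return support - 2 * backlash            # weight backlash more heavily

    articles = {
        "X (inflammatory rag)": (+10, -10),
        "Y (well-argued case)": (+10, -2),
        "Z (bland consensus)":  (+3, +2),
    }

    ranked = sorted(articles, key=lambda k: bridge_score(*articles[k]), reverse=True)
    # -> Y first, Z second, X last under this weighting.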
In the right medium, perhaps the user agent would also decide when my posts are shown to people versus an ad being shown in place of my post such that I make money. Then a site like Facebook would only make a small portion of my ad revenue in exchange for hosting it.
Sure, you can't read every facebook post, but if your browser extension is scanning your feed and suppressing posts for you, how can they even stop you?
It's a violation of copyright under current interpretations of the Copyright Act. Companies like FB are well aware of this and send C&Ds to this effect every day.
That is shocking. How could something like uBlock Origin or Privacy Badger ever exist? They're doing the exact same thing: modifying the state of the page payload. Even browser developer tools would run afoul of this. I can't fathom how these are materially different.
These projects continue to exist because no company has felt it's in their interest to bring suit, I guess. This is the type of thing that companies like to keep quiet, and if something like uBlock Origin isn't a pervasive threat, they won't risk publicizing and potentially losing the loophole. In particular it would be dumb to sue the EFF for Privacy Badger, since part of the reason the EFF exists is to fight such things in court.
Look up the "RAM Copy Doctrine". Basically, it means that every time data is copied within the computer's memory, it's a potential infringement. The HTML source of the page you've downloaded undoubtedly qualifies for copyright protection. Your argument would be an implied license to modify the page to make it suitable for display, but standard ToS will typically forbid any type of modification, removing any ambiguity into whether third-party extensions that modify the page may have an implied license. Your license would typically be limited to displaying the page as transmitted, and allowing an extension to read and modify the page would be an infringement.
I'm not a lawyer but I had a neat little SaaS business that was killed by a C&D along these same lines. Multiple attorneys reviewed and this is basically the summary. When I suggested that we do a peer-to-peer data transmission thing to avoid handling or transmitting any copyrighted content, I was warned that doing so could easily be interpreted as conspiracy and that it'd be best not to go down that route. Maybe someone else's attorney would say something different, but that's what mine told me.
This already exists — most social media is already curated. You only see tweets and posts from those you follow or friend. You can already block or ignore any undesirables. This works fine for self-curation.
There is no need for holier-than-thou censorship short of legal breaches. Good to see FB take this change of direction.
Twitter shows me garbage I don't like, as does YouTube. They do this with very little regard for who you follow nowadays and give you no say in what types of stuff you actually want to be recommended. Sometimes they're nice enough to say why they recommend something (which should be standard), but most of the time it's just infuriatingly stupid.
I'm not against machine curation at all, mind you. I want the infrequent poster to have a higher-weighted voice and such. But I want to be able to control the parameters.
I really didn't realize until perhaps the last 2 years that Facebook fundamentally tapped some hidden human need/instinct to argue with people who they believe are incorrect - specifically, and more importantly, combined with the human inability to actively decide not to pay attention when things are inconsequential or not yet worth arguing about.
Sometimes, just shutting up about an issue and not discussing it is the best thing for a group to do. Not more advocacy or argument. Time heals many things. No app is going to help you take that approach -- and that's not what technology is going to help solve (or is incentivized to solve). Just like telling a TV station that's on 24 hours to not cover a small house fire when there's no other news.
People are not good at disengaging from something when that's the right thing to calm the situation. And Facebook somehow tapped into that human behavior and (inadvertently or purposefully) fueled so many things that have caused our country (and others) to get derailed from actual progress.
There is no vaccine yet for this.
And I don't mean to pile onto the dump-on-Facebook train, since others would have come along to do it instead. But they sure made a science and a business of it.
In general and not necessarily related to just facebook, but one of the best things I've come to learn about myself and the world around me is that sometimes the absolute _best_ thing you can do for yourself is to just shut up and walk away, even if you know in your heart of hearts that you are correct.
I think this is generally helpful to keep in mind.
I also think there’s an art to deescalation and discussing ideas or persuading someone you disagree with to see an alternative view (and then giving them space to change their mind).
Productive discussion isn’t possible with everyone or even one individual depending on where they are in their life, but I’ve generally found it works better than expected when you can remove your own identity and feelings from it.
It’s rarely in the spotlight though because it doesn’t get retweeted or shared as much as combative arguing that’s more a performance from each side (with likes and cheering on the sidelines).
Personally, I think most persuasion-related research is not done publicly (universities, public funding) but through corporate or military (?) research. I can even imagine that the resulting knowledge is being used to help persuade the public for political gain, while not being shared with the wider public or public universities, leaving the public and the wider science community oblivious to it.
I guess it sounds a bit like a tinfoil-hat theory, but I can imagine the above is happening at the moment.
To be honest, I pay a lot of attention to the research coming out, and there's not been much that is counterintuitive. All of it builds off of stuff we've seen since the days of eternal September.
Generalist subs/topics collect junk. Directed subs have more focus and are healthier at defending against crap.
>I also think there’s an art to deescalation and discussing ideas or persuading someone you disagree with to see an alternative view
This doesn't really work with core views like politics. The most polarizing topics are polarizing not because the opposite side can't see your view, but because you fundamentally disagree on priorities.
Examples:
Anti-abortion people aren't going to be persuaded by yet another view on women's rights. They think you're arguing to murder babies. The plight of a woman in poverty is not going to suddenly make them go, "oh, well then I guess a little murder is okay."
A libertarian isn't going to be swayed to suddenly think a planned economy is a better approach even when presented with spectacular market failures. They completely understand the failures suck and understand the alternative views just fine. Another anecdote is not realistically going to alter a belief that central planning is worse overall.
Studies on persuading people out of prejudice do exist (see e.g. https://www.ocf.berkeley.edu/~broockma/kalla_broockman_reduc... ). The problem is that the approaches are not as emotionally satisfying as just affirming your moral superiority and calling people bigots, so not many people use them (though some do; search for "deep canvassing").
Abortion is tricky because once you recognize the life of the embryo as a value to protect, it's difficult to have it come across as less compelling than the right to bodily autonomy. Still, most anti-abortion people would still carve out a lot of exceptions (rape, genetic problems, danger to the mother's life and so on). Most people recognize that even the right to life is not absolute (another example: they would agree it would be unlawful to refuse to obey an order in time of war that would almost certainly result in a soldier's death).
I think abortion is a lot easier when you frame the argument around suffering.
I think part of the problem with abortion is the left argues that "it is not a life" which is generally a weak argument. It's better to accept/concede that you are ending life, but doing so without suffering before there's a neural net that can recognize anything - I think that's the important bit (and why third trimester abortions are banned anyway).
The push back then tends to be that life itself is sacred and can never be ended (suffering is not relevant), but this is generally not truly believed by the people making the argument so it's easy to point out their contradictory support for the death penalty. They then usually say there's a difference between innocent life and people who've committed crimes at which point you're back to negotiating conditions and suffering seems like a pretty good condition to use.
[Edit] It's also a messier issue because I think a component of the debate is shaming women for sex. That they should be forced to have their baby as some sort of penance for having sex. Obviously this is largely unsaid in favor of more palatable arguments, but if it's the true driver then it's hard to even start because you're not addressing the true motivation (which may not even be fully realized by the person arguing).
> It's better to accept/concede that you are ending life, but doing so without suffering before there's a neural net that can recognize anything
First, this assumes that such a "neural net" is required for suffering. I personally don't have a problem with that, but making an ironclad scientific case for it is going to be very difficult, since we don't understand how "neural nets" actually produce suffering even in the case of humans with fully developed brains.
Second, by this criterion, it's not just third trimester abortions that should be banned, but abortions at any time after the "neural net" develops. That's a lot earlier than our current jurisprudence draws the line (neural activity can be detected in the brain of a fetus at about six weeks, vs. viability at roughly 24 weeks as more or less the current jurisprudence line), which means that our current jurisprudence is allowing a lot of suffering by this criterion.
So I'm not sure this framing actually makes the argument any easier.
[Edit: Can no longer edit my above comment so putting it in a reply]
I had some time to think, and I think you're right: at some point it becomes a utilitarian trade-off where there is no obvious answer, but there are a lot of factors that can help support/determine what the best policy is.
I think the suffering framing does move the needle away from the 'life is sacred argument' to something more actionable and specific (also honest/consistent with other beliefs often held by that crowd like the death penalty). There is a baked in assumption here that everyone is arguing in good faith though which I don't think is necessarily true (see the edit on my initial comment).
> I think part of the problem with abortion is the left argues that "it is not a life"
I don't think this is accurate. Getting a typical pro-choice person to discuss the fetus at all, much less whether it can be called alive, takes some serious cornering (I am pro-choice, to be clear).
Sure, but diverting the question from the topic where your point is weakest is just misdirection and isn't very persuasive.
There are a lot of good reasons other than this one to support pro-choice, but those reasons will be irrelevant to someone who views 'abortion as murder'. You have to put yourself in their position and reason about it like they would, then think about what is the best argument from their position.
Basically steel-manning their side and then tackling the best argument head on.
I think this is where really interesting discussions happen and where minds can change, otherwise you end up just discussing the same tired points without making any progress.
> I think abortion is a lot easier when you frame the argument around suffering.
Really? I think it becomes much more difficult. It invites arguments for infanticide (see the 2013 Giubilini paper on after-birth abortion for a famous example of this). The same arguments concerning a woman who is not able to take care of a child apply equally well after birth if suffering is the only consideration, because it's entirely possible to end the life of the baby in a painless manner. As someone who is pro-life, I've generally found the suffering angle to be the least compelling of the pro-choice counterarguments.
I do think you're right that there's an extra element beyond just suffering (otherwise you can argue that killing infants instantly is okay if they don't notice and they're not yet self-aware).
I think it's a mixture of suffering and having a neural network formed enough for ...something? I have an intuitive feeling that it's wrong to kill infants before they're self-aware even if 'done painlessly', but I don't feel that way about a blastocyst or a fetus without a sufficiently formed neural network that can suffer.
I recognize this isn't perfectly consistent though and I don't have a great answer for why.
Suffering matters - of the mother. The death in hospital of a woman who was denied an abortion was the catalyst for the successful campaign in Ireland to get the constitution changed to permit abortion.
The GP is not talking about prejudice; he's talking about a genuine difference in priorities. Calling that "prejudice" implies that one of those choices of priorities is simply wrong; it ignores the possibility that there is no one "right" choice of priorities.
I think the argument doesn’t suffer if you replace “prejudice” with “strongly held beliefs”. The human mind doesn’t have a secret truth-o-meter, so from the inside, prejudice and strongly held beliefs are indistinguishable. The fact that some people hold a belief strongly is itself proof that that belief can be held, and therefore that people can be convinced to hold it.
Basically, a technique that works to convince people away from prejudice over and above what presenting them with truth does should be applicable to any belief.
> The fact that some people hold a belief strongly is itself proof that that belief can be held, and therefore that people can be convinced to hold it.
The fact that I'm tall is proof that people can be tall. But not that you can become tall.
In general people have the opinions they need to have to feel good about themselves. That's hard to change.
Alright, fair point about the analogy. Still, we generally believe (perhaps incorrectly…) that people can change their minds, so I'll stick by it.
Regarding feeling good: yeah, that's a problem, and that makes it harder, no argument from me. The point I thought GGP was making (likely incorrectly; see my response to their response) was that you can't persuade people out of "genuine differences in priorities", which I think is untrue.
> I think the argument doesn’t suffer if you replace “prejudice” with “strongly held beliefs”.
Yes, it does, because the post I originally responded to said "persuading people out of" these beliefs. How is that justified if the beliefs are not known to be wrong? "Prejudice" implies that the beliefs are known to be wrong, so it's justified to try to persuade people out of them. "Strongly held beliefs" does not carry the same implication.
> How is that justified if the beliefs are not known to be wrong?
My bad, I thought you were arguing that "prejudice" is something that can be argued-out-of, whereas a "genuine difference in priority" cannot. If you're arguing persuading people is unethical, then…I disagree incredibly vehemently. Like, that's what a peaceful society is built on; I'm not sure what other method of change you imagine would take its place?
> If you're arguing persuading people is unethical
It depends on what you mean by "persuade". Trying to convince people to change their minds about something, and understanding that a lot of times you'll fail and accepting that, is one thing. Trying to force them to change their minds, or at least to act as though their strongly held beliefs were simply wrong and yours were right, for example by using the power of the law, is another.
> that's what a peaceful society is built on
A peaceful society is built on trying to convince other people, but accepting that a lot of times you'll fail, and accepting that when you fail, the law should not take either side. In other words, the force of law should only be used if there is a very strong consensus on a policy, to the point where the only people who don't agree with it are obvious outliers. It should not be used if there is just a 51% majority that favors a policy.
Given that we agree on just about everything, I think we're going to end up arguing about whether "convince" and "persuade" are synonyms. :-) I agree that minorities have rights, and that you therefore don't force something down everyone's throat because 51% of the population thinks it's right.
I think "priorities" is a pretty good way of framing it, at least when considering the abortion debate. I'm pro-choice, but I don't consider my position to be any kind of moral right; I just believe that in this situation, the priority should go to the mother and her wishes, not the fetus. I don't think that giving priority to the fetus is inherently illogical or wrong, it's just not the choice I'd make.
The problem that I have, though, is that I don't believe that many pro-life advocates look at it that way; instead of thinking about what's best for the people around the potential baby, they resort to religious or strictly emotional arguments in support of their views[0], which I will never consider persuasive.
The cut-off point is entirely up to society's consensus. You could go to the extreme and say that vasectomies (or even male masturbation) and tubal ligations are murder, because they destroy germ cells that could turn into children eventually. Some religions prohibit birth control of any kind. Many people aren't comfortable with the morning-after pill. Some people are fine with an abortion up to N weeks, but not after.
And that's what I find sad about arguments on this topic: people have drawn their line in the sand, and they believe that they are right, any other option is wrong, and that they must impose their rightness on everyone else, regardless of any disagreement in beliefs.
As a result, I just tend to not get into arguments about this, as I don't think it's worth the blood-pressure increase to engage a pro-life advocate in discussion.
[0] I may be wrong about this; I frankly do not have many (any?) pro-life friends, so I only know what I read, and that may be a case of me just hearing the loudest voices, not the most representative ones.
That's a learned behaviour - to ignore opposing arguments. They train themselves to counter arguments they don't like with their own prefabricated arguments, learned from the mass media.
Like, for example: politician X is corrupt, he was caught taking bribes. Counter: everyone is stealing, at least his party gives our group more benefits than the other party.
No, it’s not about ignoring arguments. It’s about talking past each other by adding more anecdotes to an entire class of data the other side does not prioritize.
Yep, I think you're dead on. There are competing and diametrically opposed values out there, and in many cases both can be fairly argued in favor of. For example fairness vs freedom. No amount of shouting about the details will convince someone who primarily values fairness that freedom is more important, and the arguments are largely pointless unless the participants are genuinely seeking to examine the ideas, which is rarely ever the case online.
I think people understand that at a very base level, and that's why online arguments are often really more of a performance to score points with your side, or to take shots at the other side. Rarely is anyone actually attempting to convince or learn, they're just playing out some weird tribal warfare ritual and dressing it up as debate.
As the larger thread here suggests, the best move in this game is simply not to play it.
From my own experience, I've had many many many conversations regarding topics like this that haven't changed my views in any substantial way. BUT, I have had a handful that did, and those are so valuable that I think it IS worth banging our heads against each other's walls most of the time for these rare moments.
I disagree on this - I have a more optimistic view of the ability for people to change how they think.
You're right that a core belief tied into someone's identity is not going to be changed by new evidence, unless you can get people to value trying to figure out what's true and updating on evidence itself (rather than having an 'answer' already and just using motivated reasoning to come up with arguments that support their 'answer'). This is hard.
I know I've personally been someone who made these kinds of bad reasoning mistakes - the smarter you are, the more insidious they can be, because you're better at being a clever arguer and coming up with plausible-sounding reasons while ignoring or rationalizing contradicting evidence. I've worked hard to get better at it (and I still am; it's an ongoing process). Yes, this is only a sample size of one, but I think it's possible.
I have an optimistic view of the capacity for a person to learn how to think better, while simultaneously having a pessimistic view of the general public's current ability to think rationally. This may seem like a conflict, but really it just means that I think it's possible for us to be a lot better than we are, while recognizing it's a bigger project than just stating the specific evidence available for any specific argument.
People have to be willing to consider why they believe what they believe, and be honest about the potential to change their mind based on new information that contradicts what they believe to be true.
I think that's the goal we have to work toward first.
> unless you can get people to value trying to figure out what's true and updating on evidence itself
What evidence could you give to disprove the belief that abortion is murder? The belief is not a claim about evidence; it's a claim about priorities, as the GP said. Or, if you like, about what actions count as belonging to what categories.
A more interesting question is why abortion is consistently used as a tribal issue in US politics.
Of course there is an underlying difference of opinion, and of course it matters to those on both sides.
But it matters because the media have done an exceptionally good job of herding people into different camps - by focusing on a small and standardised collection of divisive issues and amplifying the rhetoric around them.
Does someone benefit from these divisions, and from the loss of civility and civic cohesion they create, and perhaps also from the implied promotion of violent oppositional defiant subjectivity over rational argument that powers them?
> A more interesting question is why is abortion consistently used as a tribal issue in US politics.
I think a factor here is that the US pushes the boundaries of what it really takes to have a free country with a diverse population more than other countries do.
Other countries--or at least other developed countries--have a more homogeneous population than the US has, and also do not have the same tradition of skepticism about and distrust of government that the US has. Also other countries do not have quite the same Constitutional provision for the free exercise of religion that the US has.
A less homogeneous population means there is a wider range of traditions that people are brought up with. That creates a lack of common ground about a lot of things. For example, I'm not aware of any other developed country that has a significant population of young earth creationists.
A tradition of skepticism about and distrust of government means that people are less willing to accept a legal rule that conflicts with their personal convictions, and more willing to complain about it publicly (or indeed to take even more drastic action). Note that this applies to both sides of the abortion debate: to extreme pro-lifers who feel that any abortion at all is wrong, and to extreme pro-choicers who feel that any restriction on abortion at all is wrong. Current US law and jurisprudence is actually not close to either of those extremes, so both extremes have plenty of reason, in their view, to complain.
The Constitutional protection of free exercise of religion means that "personal convictions", if they are backed by a religious tradition, carry a lot more weight. This is most obvious in the US on the anti-abortion side of the debate.
> Does someone benefit from these divisions
I think someone taking political advantage of divisions within the population can happen in any country, but it might well be true that the US, for the reasons I described above, presents more opportunities for it to happen.
It's unfortunate you're being downvoted, because I think in many ways you're right. People -- especially people who hold strong views on divisive issues -- usually will not change their minds when presented with new evidence[0]. They're swayed by emotional appeals that get them to change how they feel about an issue.
Sure, there are exceptions, and some people can be dispassionate enough to weigh evidence and change their minds, but that is definitely not the norm.
[0] I read a fantastic article on this a year or two ago, but can't find it now; will update with an edit if I find it before the edit window expires.
You're right on an individual level, but when you extrapolate a less argumentative approach and encourage curiosity rather than belligerence, I do believe society overall is amenable to change. On a macro scale it works. I'm loath to point to abortion specifically, as in the example, since it's a divisive issue, but Ireland, which voted to allow abortion two years ago in a landslide referendum, is an example of society changing its views as a whole, even while some individuals in that society remain immovable.
Sadly, I think you are right. All reasoning starts with postulates that you cannot prove. For ethics and politics, these axioms are our emotions and values. Two people who have a different set of values can't have a logical argument because they're using entirely different systems of reason.
Damn. I ended up caught up in people's responses to the abortion debate question and totally forgot my disgust of Facebook, which was top of mind as I started reading the comments. Ironically it confirms the article and how Facebook can keep distracting from focus on itself by having platform users head down rabbit hole after rabbit hole in an attempt to satiate their flawed human desire to be right all the time.
I've been doing that more recently too. I say to myself, do I really want to do this? Or why am I getting involved in this? Especially on Twitter, where you can't unselect yourself from a conversation.
My new mantra is the saying, "not my circus, not my monkeys".
I do this a lot, especially on Reddit. A trick I learned is to type the reply in my notepad, then wait an hour before posting it. This helped with a few things:
1. Slowly improving at checking for typos
2. Reading something after a break helps me frame my point better and removes the heated emotion from the text
3. I also don't save the comment, so I have to spend time searching for it. This further helps in filtering out the topics I don't care about
4. Using services while not logged in, basically being a lurker
I learned this lesson recently, and I am sure I will continue to learn this lesson in the future as I've already learned it in the past. Just in many different contexts and ways!
I agree, as long as you can find time to research and understand if you’re really correct. This way you avoid conflicts but you still learn if you were wrong.
100%. Some of my biggest learning experiences were when I walked away when I thought I was completely, utterly correct only to find out a little later that I was actually completely wrong!
I say that in the past tense just because it works better for what I'm trying to convey, but it still routinely happens, and will continue to happen.
I call this the "outrage economy". There are several companies (facebook, twitter, reddit, youtube, etc) that grew based on user activity of varying types. The more bickering and polarization, the bigger X Company gets and need to hire more employees and get more funding, and that feeds into more growth. There is also a secondary economy built on or used by these original companies (software tooling, ad software, legal, clickbait, etc). We now have a big chunk of the economy feeding pointless bickering.
> some hidden human need/instinct to argue with people who they believe are incorrect
This is perhaps a form of "folk activism" [1]:
> In early human tribes, there were few enough people in each social structure such that anyone could change policy. If you didn’t like how the buffalo meat got divvied up, you could propose an alternative, build a coalition around it, and actually make it happen. Success required the agreement of tens of allies — yet those same instincts now drive our actions when success requires the agreement of tens of millions. When we read in the evening paper that we’re footing the bill for another bailout, we react by complaining to our friends, suggesting alternatives, and trying to build coalitions for reform. This primal behavior is as good a guide for how to effectively reform modern political systems as our instinctive taste for sugar and fat is for how to eat nutritiously.
Facebook is a collection of your friends or your "tribe", so repeated arguments with your tribe members is what our unconscious brain pushes us towards. That coupled with the dopamine hit of validation via likes (which is common to other online discussion platforms).
I really don't like the "it can't be helped" attitude about what Facebook has become.
They made a choice to throw gasoline on the flames of these aspects of human behavior. Few people seem to realize that Facebook could have been a force for good, if they had made different choices or had more integrity when it comes to the design and vision of their platform.
The way that things happened is not the only possible way they could have happened, and resigning ourselves to the current state as "inevitable", to me, reeks of an incredible lack of imagination.
I am not sure I can agree. Facebook did not change in any significant way. It still serves as a platform to boost your message. It is, at best, simply a reflection of the human condition. The previous example was the internet and some of the revelations it brought about us as a species. FB just focused it as much as it could.
A force for good: I do not want to sound dismissive, but what, in your vision, would that look like? This is a real question.
It could have been a platform that enlightens, informs, and uplifts people instead of exploiting attention and anger, and profiting from misinformation.
You can make money by making people feel good instead of bad. You can be rotten.com, or you can be Pixar. You have a choice. An organization with integrity will look at not just how much money they're immediately making, but whether they're pushing the world towards better or worse. A hands-off attitude of "it's not my problem that people like this sh*t" is not integrity, it's a rationalization for greed.
You can make choices that result in making less than the absolute maximum amount of cash you can get your hands on, in service of building a product/experience/brand with value and goodwill of its own. There are countless examples of this in other places— just look at any company that builds its reputation based on quality. Each of these brands could make their products for cheaper and lower quality, and make more immediate profit, at the (much larger) long-term cost of destroying the brand, the customer goodwill, and the market advantage. Defending against such short-term greediness is uphill work, but it's both the enlightened and profitable thing to do.
Instead of actively amplifying memes and misinformation, they could have chosen to build features supporting community and/or expert moderation. They built an algorithm that optimizes purely for attention, but they could have made something that accounts for quality; paying attention to patterns in good and bad sources of information, and reliable/unreliable discriminating tastes in the community. The emphasis on quality and reliability of content was the pitch for Quora, for example, and they did much better at that task than Facebook. Which is not surprising, because Facebook seems to clearly not be trying to optimize for this at all. Wikipedia and StackOverflow are also two huge success stories of community/expert moderation. It works if you actually prioritize it.
They could have chosen to hire journalists, editors, and artists to produce and vet high-quality content to drive people to the platform, and step responsibly and effectively into the media void that was created when newspapers began to collapse. An analogy for this would be the way that Netflix, Amazon, HBO, and friends have created a new boom and golden age of content creation to fill the void left by the dying medium of broadcast TV. There could have been something like this for print, and Facebook was well positioned for it.
[Jaron Lanier](https://www.ted.com/talks/jaron_lanier_how_we_need_to_remake...) has lots of ideas about how to make an internet that isn't hostile toward its own users. One of his revolutionary ideas: Charge people money for services instead of siphoning their data and their attention in ways that hurt them.
There are a zillion directions they could have gone.
I did not respond right away, because I wanted to process it properly. I tried to imagine the world you describe and, I will admit, it does sound better than what we have. The problem, as is often the case with things that sound nice, however, is that we got here by not accounting for human drives. Those needs are not channeled in any way. For example, you say that companies ought not to seek maximum profitability, despite clear indications that this is what they seek.
Until we figure out how to properly channel that very human tendency, the choice you suggest is theoretical. And I am saying this while agreeing with the various oughts listed.
That's the thing— it does not require any great leap of faith or imagination to conceive of things that work this way, because there are already so many examples in the real world. To build on a previous example, Jimmy Wales leapt further in conceiving Wikipedia, which had no precedent before it was created. The leap I ask for is therefore less.
Companies have to make money to exist and survive— but optimizing for money above literally all else is a choice. Again, the word for that is "greed", and it's optional. The leaders of a company can steer towards it or away from it; the company will follow where they lead, raise or lower itself to the standards that they (fail to) set. I worked for 10 years at a successful, well-known company which emphasizes quality over easy money, and it's maddening to see people claiming that such a thing is impossible. It's really not.
This is the exact perspective that I'm talking about. It's the perspective of the GM manager in the 1980s who'd insist— despite all the evidence to the contrary— that Toyota's worker-positive, lean production strategy is impossible. Not only is he wrong, but he'd be losing money because of his lack of imagination and his unwillingness to raise the bar out of the dirt.
It's the attitude of officials in second- and third-world countries, who see corruption, fraud, and abuse— the bottom of the moral barrel— and shrug their shoulders; they justify it to themselves, to others, to the public under the mantra "it can't be helped; the bar cannot possibly be any higher. It's too much to ask". Lo and behold, those countries are economically irrelevant, while countries with higher institutional integrity exist and dominate all around them. Lowering the bar doesn't help them.
Given all the examples available, "a company with high standards" is not "theoretical" at all. Saying so is but an excuse to insist on a pitifully low bar. And it's disappointing to me to encounter the perspective that "behaving with integrity" is beyond conceivable.
> I really didn't realize until perhaps the last 2 years that Facebook fundamentally tapped some hidden human need/instinct to argue with people who they believe are incorrect.
I think everyone has a natural human need to feel that they have agency in their community. The need to feel that they participate in the culture that surrounds them and that they can have some effect on the groups that they are members of. The alternative is being a powerless pawn subject to the whims of the herd.
In the US, I think most people lost this feeling with the rise of suburbia, broadcast television, and consumer culture. There are almost no public spheres in the US, no real commons where people come together and participate. The only groups many people are "part" of are really just shows and products that they consume.
Social media tapped into that void. It gave them a place to not just hear but to speak. Or, at least, it gave them the illusion of it. But, really, since everyone wants to feel they have more agency, everyone is trying to change everyone else but no one wants to be changed. And all of this is mostly decoupled from any real mechanism for actual societal change, so it's become just angry shouting into the void.
I think it’s important to note that Facebook didn’t invent any of this. They just built the biggest mainstream distribution channel to do so. Nothing they ever did in terms of facilitating pointless arguments has been all that original either.
People have been doing this forever, and even on the Web much, much longer than Facebook has existed.
Now that said, they know what they have on their hands and how it makes them the money. They aren’t going to fix it. It is a big feature of their product.
To be fair, people will go to great lengths to argue over things they think are wrong. People make alt accounts on Reddit and Twitter to do it. Heck, people will even navigate 4chan's awful UI and content, fill in captchas, just to tell someone that they're wrong.
Facebook could make it harder to post content, but I doubt that would make much of a difference
> I think it’s important to note that Facebook didn’t invent any of this
I think that’s literally true. They told their algorithm “maximise the time people spend on Facebook” and it discovered for itself that sowing strife and discord did that.
Facebook’s crime is that when this became obvious they doubled down on it, because ads.
Facebook, and others, absolutely innovated with their recommendation engines. Enabled by implementing the most detailed user profiling to date coupled with machine learning.
Part of this is that Facebook makes the opinions of people you know but don't really care about highly visible, which I think leads to some of the animosity you see on the platform. When the person you're confronting is the uncle of someone you talked to once back in high school, there's little incentive to be kind.
> I think it’s important to note that Facebook didn’t invent any of this.
I don't agree with that. I very strongly think that Facebook did invent a lot of this.
> They just built the biggest mainstream distribution channel to do so
Scale does matter though. There is a lot in life that is legal or moral at small scale but illegal or immoral at large scale. Doing things at scale does change the nature of what you are doing. There's no 'just' to be had there.
> Nothing they ever did in terms of facilitating pointless arguments has been all that original either.
I don't agree with that either. They have even published scientific papers, peer-reviewed, to explain their new and novel methods of creating emotionally manipulative content and algorithms.
> People have been doing this forever, and even on the Web much, much longer than Facebook has existed.
I also don't agree with this. Facebook has spent 10+ years inventing new ways to rile people up. This stuff is new. Yes I know newspapers publish things that are twisted up etc, but that's different, clearly. The readers of the paper are not shouting at each other as they read it.
I think it's super dangerous to take this new kind of mass-surveillance and mass-scale manipulation and say, welp, nothing new here, who cares? I think that's extremely dangerous. It opens populations to apathy and lets corporations do illegal and immoral things to gain unfair and illegal power.
Facebook should not be legally allowed to do all the things they are doing. It's invasive, immoral, and novel, the way they deceive and manipulate society at large.
That's an interesting thought, for sure. I should point out that this doesn't only apply to facebook, but other large discussion forums as well: reddit, 4chan, tumblr, twitter etc.
> some hidden human need/instinct to argue with people who they believe are incorrect
I've said it before, I'll probably say it again: this place is chock full of people just itching to tell you you're wrong and why. Don't get me wrong: obviously there's also a hell of a lot of great discussion and insightful technical knowhow being shared by real experts — but in my experience I also do have to wade through quite a lot of what feels like knee-jerk pedantry and point-scoring.
When I'm wrong I want it explained to me why. Even when I'm right about something controversial, I want to see the best arguments to the contrary.
I don't want people to be less argumentative. I just want a higher intellectual caliber than what's generally available on facebook or twitter, and HN fits the bill reasonably well.
When people here tell you that you're wrong, they tend to do it with style.
One time I made a comment about how dividends affect stock price and someone spent like 2 hours writing a program in R to do some analysis just to prove me wrong.
Extremely true, also relevant for work disagreements between people who have existing positive relationships. A surprising number of disagreements disappear if left on their own for a time.
I find that many people with engineering backgrounds (myself included) can struggle to let conflicts sit unresolved. I suspect that instincts learned debugging code get ported over to interpersonal issues, as code bugs almost never disappear if simply left to rest.
Close your eyes, hold your breath and hope the situation resolves itself, that's your solution? I don't believe in a: "hidden human need/instinct to argue with people". There is nothing hidden about human conflict. It is as natural as any conflict; as natural as space and time. In fact, without conflict evolution can not exist. Obviously, a good portion of the arguments being had have the potential of bearing no fruit, but I would argue that just as many of them not only should but NEED to be had, and are quite productive on the whole.
I always have to think about that one when I've just spent 20 minutes trying to formulate an elaborate response to someone's comment, only to sigh and close the thing without posting it.
Probably most of the time it was for the better too.
I completely agree. Often I don't proceed because the point I wanted to make is just not properly defendable at that point.
But the ideas written down in the process don't disappear and at the same time I got a better appreciation of the other side's point of view.
I'd rather hold on to those ideas and evolve them further than to get too attached and be forced into a situation where I eventually defend my opinions because I moved myself into a position where I feel like this is a deep personal belief rather than a well defined objective-ish argument I tried to make.
Where do you draw the line between someone on the internet "being wrong", and someone on the internet spreading dangerous misinformation? Sure, walking away from the first is often a good course of action, but what about the second?
I actually really enjoy having a good argument with random people online, but I don't as much enjoy arguing with my friends and family. 1) I don't like being mad at or contemptuous of people I'm close to and 2) they're usually not worth the effort of arguing with because they're just cutting and pasting stupid shit they found elsewhere and it's _exhausting_ to continuously correct the record when they put zero effort into copy and pasting it to begin with.
I first purged everyone that posted that stuff from my feed, and then eventually quit facebook altogether.
> I really didn't realize until perhaps the last 2 years that Facebook fundamentally tapped some hidden human need/instinct to argue with people who they believe are incorrect.
Hidden human need/instinct to argue, period. These arguments aren't intellectual debates, it's people getting pissed off at something, and venting their rage towards the other side.
It's odd how addictive rage can be. But that's not a new phenomenon. Tabloids have been exploiting this for decades before Facebook.
Most of my facebook feed is just memes and selfies.
(I'm venezuelan)
When facebook was new/trending up years ago there were some political discussions, but people quickly figured out it was worthless. How come USAians haven't?
In NZ, we follow along behind the USA in most things, so getting angry over politics is growing and growing. I see it as a hobby, growing in popularity. But also a hobby you can discourage those you know from getting into, and if the social momentum pulls us away from it then the hobby doesn't take hold.
In my line of work, we need to dig deep and find the root cause of anything we 'touch'. I have noticed (since day 1 in this line of work) that elaborate, complex truths tire the audience; they want something snappy and 'sexy'. I remember a French C-suite executive telling me "make it sexy, you will lose them".
Facebook managed to get this just right: lightweight, sexy (in the sense of attractive), easy to believe, easy to understand, easy to spread. The word "true" is completely absent from the above statement. That generates clicks. That keeps users logged in more. That increases "engagement". That increases ad revenue. Game over.
The masterminds and brilliant communications minds of the past could never get so many eyeballs and ears tuned in at such a low cost.
I've mentioned before that FB = cancer
It gives 1 (ability to communicate) and it takes 100.
Sometimes, just shutting up about an issue and not discussing it is the best thing for a group to do.
Then the terrorists win.
That used to be the conventional wisdom on trolls, but there are now so many of them. Worse, about half are bots.[1] (Both NPR and Fox News have that story, so it's probably correct.)
> And Facebook somehow tapped into that human behavior and (inadvertently or purposefully)
It's not just that they tapped into it, it's the entire mission statement in a sense. 'To connect the world', if you want to treat it like a sort of network science, basically means lowering the distance between individuals so much that you've reduced the whole world to a small-world network. There's no inhibition in this system; it's like an organism on every stimulant you can imagine.
Everything spreads too fast and there's no authority to shut anything down that breaks, so the result is pretty much unmitigated chaos.
The vaccine is the thing people complain about all the time, the much maligned 'filter bubbles', which is really just to say splitting these networks into groups that can actually work productively together and keeping them away from others that make them want to bash their heads in.
People do go on Facebook and argue with others, but that's not the core of the divisiveness. Rather, people sort themselves into opposing groups and spend most of their time talking amongst themselves about how good they are and how horrible the other group is.
You are on to something. I interpret it as: it is fantastic fun/addictive/a dopamine short-term win to argue or discuss with someone. Especially if you can afford the hangover that outrage might lead to.
Face to face with people I know, or at least recognize as human (not a bot), educated or at least not a cartoon-hick personality, arguments can be great, because of the ability to see when to pull back and stop something from escalating. We are all human after all.
In internet-powered discussion, where the number of people observing can be huge, and every username can feel inhuman or maybe even just trolling in an attempt to create a stupid argument, that argument gets painful. But the dopamine hit is still there...
Given our current (social) media ecosystem, converting outrage into profit (per Chomsky, McLuhan, Postman, and many, many others), what does a non-outrage maximizing strategy look like?
I currently favor a slower, calmer discourse. A la both Kahneman's thinking fast vs slow, and McLuhan's hot vs cold metaphors.
That means breaking or slowing the feedback loops, removing some of the urgency and heat of convos.
Some possible implementation details:
- emphasis on manual moderation, like metafilter, and dang here on HN
- waiting periods for replies. or continue allowing submissions but delay their publication. or treat all posts as drafts with a hold period. or HN style throttling. or...?
- only friends can reply publicly.
- hide "likes"
- do something about bots. allow aliases, but all accounts need verified real names or ownership.
Sorry, these are just some of the misc proposals I remember. I should probably have been cataloguing them. A rough sketch of the reply-delay idea is below.
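To make the delay idea concrete, here is a minimal sketch (Python; all names and delay values are invented for illustration) of a cooling-off queue that holds replies before publishing them, with a longer hold for threads the platform flags as heated:

```python
import heapq
import time

# Hypothetical hold periods; real values would need experimentation.
BASE_DELAY_S = 5 * 60      # normal hold for a reply
HEATED_DELAY_S = 60 * 60   # longer hold when the thread is running hot

class ReplyQueue:
    """Replies become visible only after their hold period elapses."""

    def __init__(self):
        self._pending = []  # min-heap of (publish_at, author, thread_id, text)

    def submit(self, author, thread_id, text, thread_is_heated=False):
        delay = HEATED_DELAY_S if thread_is_heated else BASE_DELAY_S
        heapq.heappush(self._pending, (time.time() + delay, author, thread_id, text))

    def publish_due(self, now=None):
        """Return every reply whose hold period has elapsed."""
        now = time.time() if now is None else now
        due = []
        while self._pending and self._pending[0][0] <= now:
            due.append(heapq.heappop(self._pending))
        return due
```

The same structure could drive HN-style throttling by lengthening the delay per user rather than per thread.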
I can't see that working for anything other than niche networks. A social network will make less money doing this, so what's their incentive? The bulk of people will stick with a network that gives them constant and instant feeding of their addiction. I think the "vaccine" would need to be broader to be effective or major networks would need to grow a serious conscience.
I'm very curious about niche networks. I have a hunch that ravelry.com has something to teach us. I bookmark any niche networks I hear about, like pray.com. Hope to someday do a survey, feature comparison, and whatnot.
I hosted a very niche BBS network. Think The Well for CAD and computer graphics. True, there was no business model, but it was awesome and fairly long-lived.
Facebook is now fairly long-lived. All of its predecessors eventually perished. Some day Facebook will too. I'm curious what the successors will look like.
I had the same realization recently, and deleted my Twitter account in favor of a new one where I only follow people I know in real life.
That worked great for a couple of weeks, but now I log on Twitter and half of my feed is tweets of people I don't know or follow, with the worst, most infuriatingly stupid hot takes. No wonder they have literally hundreds of thousands of likes. The platform is built around this "content".
> I really didn't realize until perhaps the last 2 years that Facebook fundamentally tapped some hidden human need/instinct to argue with people who they believe are incorrect.
Funny, years ago, around the Aurora shooting in Colorado, it was Facebook that made me recognize this behaviour in myself.
A lot of people here are saying they will write responses then wait before posting.
Could this be part of the solution? If a discussion is getting particularly heated, put responses on a time delay. Maybe even put the account on a general delay for engaging with heated subjects, so the outrage doesn't crop up elsewhere.
Of course this would decrease engagement. It might even push users to more permissive platforms.
Yeah. There's a lot of relief in letting go, accepting that other people are outside of your power to control, and just practicing acceptance no matter how wrong or annoying or stupid you think people are being.
If you realize it’s a dumpster fire then delete your account and move on with life. If that line of thinking is a challenge in absolutely any way the problem is addiction.
It's been 5 years now. Facebook ads used to be creepy, but polite: "Come back to Facebook"
But not anymore; black text on a white background:
"Go To Facebook"
next ad:
"See What You've Missed From Friends And Family"
Kill it with fire! The only advice I have about the company and its products.
I am a progressive, so liberal I verge on socialist, and I think one of the "Left's" great flaws in the US is its inability to walk away, to just ignore. Engaging vociferously is seen almost as a moral imperative; "we must fight evil wherever we find it" sort of thing. But all that does is bring attention and a form of validation to the more lunatic attempts to enrage them. They aren't accomplishing a single thing by getting so righteously angry over statements the speakers probably aren't even making in good faith.
You can even see it in the memes. It's the right that loves "trolling libs," and the left that's taking the bait. I think it's telling that the stereotype hardly ever seems to go the other way; you almost never see people talk about liberals "trolling reps."
You really don't need to engage with everything in order to be a good activist. In fact, I believe taking time and emotional energy to do so is actually being a bad activist. You're just wasting effort, nothing you say or do will change anyone's minds because mostly, the whole reason they're saying whatever it is is specifically to make you upset. To trap you into unwinnable arguments just to laugh at how heated you get.
Really we all need to be better at just walking away from crazy, whatever side of whatever spectrum we find it. By regularly surrounding yourself with such conflicts and by regularly basting yourself in such a soup of intense negativity, you are quite literally doing nothing more than causing physical harm to your body and mind via the morass of cortisol, etc. you are unleashing. You are accomplishing nothing.
I agree that Facebook makes this painfully easy, although Twitter and Reddit are right there as well.
Disclaimer: I don't agree with your conclusions regarding general human needs and instincts, or that a human possesses absolute/built-in/DNA-ingrained inabilities (therefore, I do believe a human can fly). I also don't agree that Facebook is to share much of the blame for the chaotic human zeitgeist present today.
I do believe a human is highly malleable and impressionable, and that these qualities have been exploited historically at various scales for various reasons.
"There is no vaccine yet for this."
There may not be any vaccine, but there may be a cure: changing the language used to communicate within a setting/platform such as Facebook, possibly by using a subset of the language previously used or by adopting a more formal construct.
But Facebook is a virtual neighborhood, with greatly increased bandwidth and range. It is difficult or impossible to achieve that in such a setting.
I don't personally think it's productive for me to engage with these kind of people but I will definitely support and cheer on others doing so: https://www.youtube.com/watch?v=Q65aYK0AoMc (NSFW content)
(Personally I get too wound up in internet arguments and it's just not a healthy space for my head to be in)
The point of my question is that activists tend to talk about things, so "shutting up about an issue and not discussing it is the best thing for a group to do." won't ever actually happen.
They tapped into that human behaviour about as "somehow" as HN "somehow" lacks an orangered envelope when somebody replies to your messages: both are by design and not by coincidence.
There are plenty of vaccines for this, but not in the sense that you can apply it to people by force, like you can apply a vaccine to babies.
Meditation, yoga, religions, sports - there are many ways to calm the mind.
Here's the paragraph I found most damning. It would make me want to assign liability to Facebook.
> The high number of extremist groups was concerning, the presentation says. Worse was Facebook’s realization that its algorithms were responsible for their growth. The 2016 presentation states that “64% of all extremist group joins are due to our recommendation tools” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms: “Our recommendation systems grow the problem.”
> Facebook's mission is to give people the power to build community and bring the world closer together. People use Facebook to stay connected with friends and family, to discover what's going on in the world, and to share and express what matters to them.
Encouraging group communication is the primary goal, regardless of the consequences.
It’s one thing to enable people to seek out extremist communities on their own. It’s quite another to build recommendation systems that push people towards these communities. That’s putting a thumb on the scale and that’s entirely Facebook’s doing.
This is one example, and it’s quite possibly a poor example as it is a partisan example, but Reddit allows The_Donald subreddit to remain open, but it has been delisted from search, the front page, and Reddit’s recommendation systems.
It sounds like an honorable goal, doesn't it? But when you build a community that becomes simply a place for shared anger, you allow that anger to be amplified and seem more legitimate.
I thought the most interesting part was Mark asking not to be bothered with these types of issues in the future. By saying do it, but cut it 80%, he sounds like he wants to be able to say he made the decision to "reduce" extremism, but without really making a change.
Hey, Facebook VP here (I work on this). We’ve made some meaningful changes to address this since 2016. We’ve strengthened our enforcement in groups and have been actively working on our recommendation tools as well, for example removing groups with extremist content that violates our policies, from recommendations.
Of course, it's hard to assign blame without looking at how "extremist groups" are defined and at whether the recommendation tools do good as well as harm.
The problem really is platforms that give people content to please them. An algorithm selects content that you are likely to agree with or that you have shown previous interest. This only causes people to get reinforced in their beliefs and this leads to polarization.
For example, when I browse videos on Youtube I will only get democratic content (even though I am from Poland). It seems as soon as you click on a couple of entries you get classified, and from then on you will only be shown videos that are agreeable to you. That means lots of Stephen Colbert and no Fox News.
My friend is deeply republican and she will not see any democratic content when she gets suggestions.
The problem runs so deep that it is difficult to find new things even if I want to. I maintain another browser where I am logged out to get a more varied selection and not just the couple of topics I have been interested in recently.
My point of view on this: this is a disaster of gigantic proportions. People need to be exposed to conflicting views to be able to make their own decisions.
Sorry for the self-reference outside of a moderation context, but I wrote what turned into an entire essay about this last night: https://news.ycombinator.com/item?id=23308098. It's about how this plays out specifically on HN.
Short version: it's because this place is less divisive that it feels more divisive. HN is probably the least divisive community of its size and scope on the internet (if there are others, I'd like to know which they are), and precisely because of this, many people feel that it's among the most divisive. The solution to the paradox is that HN is the rare case of a large(ish) community that keeps itself in one piece instead of breaking into shards or silos. If that's true, then although we haven't yet realized it, the HN community is on the leading edge of the opportunity to learn to be different with one another, at least on the internet.
The thing is that HN is essentially run like Singapore: a benign-seeming authoritarian dictatorship that shuts down conflicts early and is also relatively small and self-contained. One thing that doesn't get measured in this analysis is the number of people who leave because they find that this gives rise to a somewhat toxic environment, as malign actors can make hurtful remarks but complaints about them are often suppressed. Of course, it tends to average out over time, and people of opposite political persuasions may both feel their views are somewhat suppressed, but this largely reactive approach is easily gamed as long as it's done patiently.
This is why I like HN. I am always challenged with different points of view on here, and in a non-argumentative way. It's just a rational discussion. Often I will see something on FB or Twitter that is outrageous to me (by design), but when I look it up on HN and find some discussion on the details, truth is often more sane than it seems...
One of my theories about the success of HN is that we are grouped together based on one set of topics (on which we largely agree), but we discuss other topics over which we are just as divided as the general public.
I believe there is an anchoring effect -- if you are just in a discussion where someone helps you understand the RISC-V memory model, it feels wrong to go into another thread on the same site and unload a string of epithets on someone who feels differently than you do about how doctors should get paid.
First of all, a less divisive environment means you interact with people of different opinions, which means that fewer interactions will be with exactly like-minded people.
Environments where all people tend to think exactly the same are typically extremist in some way, resulting from some kind of polarization process that eliminates people who don't express opinions at the extreme of the spectrum. They are either removed forcibly or remove themselves when they get dissatisfied.
One way HN stays away from this polarization process is because of the discussion topics and the kind of person that typically enjoys these discussions. Staying away from mainstream politics, religion, etc. and focusing mainly on technological trivia means people of very different opinions can stay civilized discussing non-divisive topics.
Also it helps that extremist and uncivilized opinions tend to be quickly suppressed by the community thanks to a vote-supported tradition. I have been reading HN from very close to the start (even though I created my account much later). I think the first users were much more VC/development oriented, and as new users came they tended to observe and conform to the tradition.
(I read your piece. I think I figured it out. The users actually select themselves on HN, though in a different way. The people who can't cope with a diverse community can't find a place for themselves, because there is no way to block diverse opinions, and in effect remove themselves from here, and this is what allows HN to survive. The initial conditions were people who actually invited diverse opinions, which allowed this equilibrium.)
I agree with you but this is an incredibly hard problem to solve. How are you going to get your friend to engage with videos that are in direct opposition to her world views? Recommendations are based on what she actually clicks on, how long she actually watches the videos, etc.
And from the business perspective, they're trying to reduce the likelihood that your friend abandons their platform and goes to another one that she feels is more "built for her".
A start would be to recognize that businesses are not allowed to exploit this aspect of human nature because the harm is too great to justify business opportunity.
It's easy to solve. FB gets to either be a platform for content or a curator for content. They can't be both because that would be a conflict of interest.
Then what's the business model? Who pays for all of it?
I'm not defending a specific approach or solution, but just pointing out that at this point, FB is a huge entrenched business that makes a lot of money on the status quo, and so convincing them to change "for the better" is barking up the wrong tree until "for the better" means "more profitable".
Splitting the platform and curation means the platform needs a revenue stream. If the curator pays the platform, then all you're doing is shifting the conflict up a notch, not solving it.
I think that is not quite right, but the distinction is subtle. The algorithm selects the content that you are most likely to engage with. For most people that likely means the filter bubble, seeing only what they agree with. But some folks actively like to have debates (or troll one another) and see more content they will not agree with, because what they don't agree with gets more engagement. The intent is to keep you engaged and active as long as possible on the site, and to feed whatever drives that behavior.
This isn't necessarily bad all the time. But when content is used to form opinions on real world things that actually Matter, it definitely becomes a problem.
In other words, Steam, please filter games by my engagement in previous games I've played. News organizations, please don't filter news by my engagement in previous news.
Facebook's problem is it acts in two worlds: keeping up with your friends, and learning important information. If all you did was keep up with your friends' lives, filtering content by engagement is kind of meh.
Same with youtube. I mostly spend all my time on there watching technical talks and video game related stuff. It's pure entertainment. So filtering content is fine. But if I also used it to get my news, you start to run into problems.
That is a really annoying issue I have with YouTube.
I occasionally watch some of the Joe Rogan podcast videos when he has a guest I'm interested in. I swear, as soon as I watch one JRE video, I am suddenly inundated with suggestions for videos with really click-baity and highly politicized topics.
I've actually gotten to the point where I actively avoid videos that I want to watch because I know what kind of a response YouTube will have. Either that or I open them in incognito mode. It's a shame. I wish I could just explicitly define my interests rather than YT trying to guess what I want to watch.
This is the exact same behavior I have noticed from YouTube as well. I miss the "old" YouTube around 2011, when it was a terrific place to discover new and interesting videos. If I watched a video on mountain biking, let's say, then the list of suggested videos all revolved around that topic. But in today's YouTube, the suggested content for the same mountain biking video is all unrelated, often extremely polarizing, political content. I actually can NO LONGER discover new interesting content on YouTube. Like you say, it automatically categorizes you based on the very first few videos and that's all you see from there on out. That is why I have now configured my browser to block all cookies from YouTube. I'm annoyed that I can no longer enjoy YouTube logged in, but at least now I feel like I've gotten back that "old" YouTube of what it once was. It's a whole lot less polarizing now, I feel much better as a result of it, and the suggestions are significantly improved.
Exactly. I remember clicking on homepage to get selection of new, interesting videos. Now I just get exactly the same every time I click. Useless. I would like to discover new topics not get rehash of same ones.
In the case of Facebook they absolutely do not try to please me. They quite literally try to do the exact opposite of everything I would like from my feed.
Chronological, with the ability to easily filter who I see and who I post to. On each point the capability has either been removed, hidden, or made worse in some other creative way.
Adding insult to injury, having to periodically figure out where they've now hidden the save button for events, or some other feature they don't want me to use is always a 'fun' exercise.
It doesn't address all of those, but if you visit https://www.youtube.com/feed/subscriptions it looks like it's still just a reverse chronological list of videos from your subscriptions.
What really scares me is how many people I know who acknowledge that platforms like Facebook and YouTube are designed to create echo chambers which tend to distort people's opinions and perceptions towards extremes... but still actively engage with them without taking any precautions. They know it's bad for them, but they keep going back for more.
Having awareness probably means they can engage in a meaningful way. Some degree of maturity and critical thought is required to dam up valueless media. It's something akin to junk food; junk media.
Same goes for non-political content. I often have to log out of youtube to find something new and interesting (even though I have hundreds of subscriptions).
Interesting. The diff appears to be (a) they changed the headline from "Facebook Knows It Encourages Division. Top Executives Nixed Solutions." to "Facebook Executives Shut Down Efforts to Make the Site Less Divisive", and (b) they inserted a video most of the way down the article, captioned "In a speech at Georgetown University, Mark Zuckerberg discussed the ways Facebook has tightened controls on who can run political ads while still preserving his commitment to freedom of speech."
Wow, Cloudflare's 1.1.1.1 DNS server sets up a man-in-the-middle (broken cert gives it away) and serves a 403 Forbidden page when clicking on this link. Verified that 8.8.8.8 works fine.
I don't want to derail the discussion too much either, but anyone curious about the reasoning can see this comment from CloudFlare [0]
>We don’t block archive.is or any other domain via 1.1.1.1. Doing so, we believe, would violate the integrity of DNS and the privacy and security promises we made to our users when we launched the service.
>Archive.is’s authoritative DNS servers return bad results to 1.1.1.1 when we query them. I’ve proposed we just fix it on our end but our team, quite rightly, said that too would violate the integrity of DNS and the privacy and security promises we made to our users when we launched the service.
>The archive.is owner has explained that he returns bad results to us because we don’t pass along the EDNS subnet information. This information leaks information about a requester’s IP and, in turn, sacrifices the privacy of users. This is especially problematic as we work to encrypt more DNS traffic since the request from Resolver to Authoritative DNS is typically unencrypted. We’re aware of real world examples where nationstate actors have monitored EDNS subnet information to track individuals, which was part of the motivation for the privacy and security policies of 1.1.1.1.
I'm not sure if it's a separate issue, but I've noticed 1.1.1.1 sometimes can't resolve my bank. Adding 8.8.8.8 as an alternate DNS service resolves the issue for me. I don't know if it's just balancing the requests or only using 8.8.8.8 if the primary fails. I'd like to know the answer to that.
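If you want to check this kind of discrepancy yourself, comparing the two resolvers is easy to script. A minimal sketch, assuming the third-party dnspython package (`pip install dnspython`); it only looks at A records, so it won't reveal EDNS-related subtleties, but it will show when one resolver returns nothing at all:

```python
import dns.exception
import dns.resolver

def lookup(name, nameserver):
    """Resolve an A record through one specific nameserver."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    try:
        answer = resolver.resolve(name, "A")
        return sorted(record.to_text() for record in answer)
    except dns.exception.DNSException as exc:
        return f"lookup failed: {type(exc).__name__}"

for ns in ("1.1.1.1", "8.8.8.8"):
    print(ns, lookup("example.com", ns))  # substitute the domain that fails for you
```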
I will make a parenthetical point that the WSJ, while expensive to subscribe, is a very high quality news source and worth paying for if it's in your budget. There are discounts to be found on various sites. And god knows their newsroom needs all the subscribers it can get (just like NYT, etc) to stay independent of their opinion-page-leaning business model that tends to be not so objective (the two are highly separated). Luckily they have a lot of business subscribers who keep them afloat, but I decided to subscribe years ago and never regretted it.
Every platform ultimately makes choices in how users engage with it, whether that goal is to drive up engagement, ad revenues or whatever metric is relevant to them. My general read is that Facebook tries to message that they're "neutral" arbiters and passive observers of whatever happens on their platform. But they aren't, certainly not in effect, and possibly in intent either. To preserve existing algorithms is not by definition fair and neutral!
And in this instance, choosing not to respond to what its internal researchers found is, ultimately, a choice they've made. In theory, it's on us as users and consumers to vote with our attention and time spent. But given the society-wide effects of a platform that a large chunk of humanity uses, it's not clear to me that these are merely private choices; these private choices by FB executives affect the commonweal.
It's pretty laughable for Facebook to claim they're neutral when they performed and published[1] research about how tweaking their algorithm can affect the mood of their users.
Even if they hadn't done that, it would still be a laughable claim prima facie.
There's something of an analogue to the observer effect: that the mere observation of a phenomenon changes the phenomenon.
Facebook can be viewed as an instrument for observing the world around us. But it is one that, through being used by millions of people and personalizing/ranking/filtering/aggregating, effects change in the world.
Or to be a little more precise, it structures the way that its users affect the world. Which is something of a distinction without much difference, consequentially.
If the private platform is de facto the primary source of news for the majority of the population, this affects the public in incredible ways. I don’t understand how the US Congress does not recognize and regulate this.
“It is difficult to get a man to understand something, when his [campaign fundraising] depends on his not understanding it.” - Upton Sinclair (lightly adapted)
Consider the following model scenario. You are a PM at a discussion board startup in Elbonia. There are too many discussions at every single time, so you personalize the list for each user, showing only discussions she is more likely to interact with (it's a crude indication of user interest, but it's tough to measure it accurately).
One day, your brilliant data scientist trained a model that predicts which of the two Elbonian parties a user most likely support, as well as whether a comment/article discusses a political topic or not. Then a user researcher made a striking discovery: supporters of party A interact more strongly with posts about party B, and vice versa. A proposal is made to artificially reduce the prevalence of opposing party posts in someone's feed.
Would you support this proposal as a PM? Why or why not?
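For concreteness, the proposal amounts to adding a demotion factor to the existing ranking. A minimal sketch (the function names, parameters, and penalty value are all invented for this thought experiment):

```python
OPPOSING_PARTY_PENALTY = 0.5  # the knob the PM is being asked to approve

def feed_score(predicted_engagement, post_is_political, post_party, viewer_party):
    """Higher score = shown earlier. Engagement comes from the existing model."""
    score = predicted_engagement
    if post_is_political and post_party is not None and post_party != viewer_party:
        score *= OPPOSING_PARTY_PENALTY
    return score

# Two posts with equal predicted engagement; the opposing-party one is demoted.
print(feed_score(0.8, True, "party_A", viewer_party="party_B"))  # 0.4
print(feed_score(0.8, True, "party_B", viewer_party="party_B"))  # 0.8
```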
That's beside the point, though. The point here is that Facebook executives were told by their own employees that the algorithms they designed were recommending more and more partisan content and de-prioritizing less partisan content because it wasn't as engaging. They were also told that this was potentially causing social issues. In response, Kaplan/FB executives said that changing the algorithm would be too paternalistic (ignoring, apparently, that an algorithm that silently filters without user knowledge or consent is already fundamentally "paternalistic"). Given that Facebook's objective is to "bring the world closer together", choosing to support an algorithm that drives engagement that actually causes division seems a betrayal of its stated goals.
Same. I miss the days of the chronological feed. Facebook's algorithms seem to choose a handful of people and groups I'm connected to and constantly show me their content and nothing else. It's always illuminating when I look someone up after wondering what happened to them only to see that they've been keeping up with Facebook, but I just don't see any of their posts.
yesterday, in fact, I saw a post from a family member that I really wanted to read, I started but was interrupted. When I had a chance to focus again, I re-opened the FB app and the post was nowhere to be seen, scrolled up, scrolled down, it was gone. I had to search for my family member to find it again. Super frustrating, and makes you wonder what FB decided you didn't need to see (which I guess is the point of this whole thread)...
I agree with this. I have a mildly addictive personality and found I had to block my newsfeed to keep myself (mostly) off facebook. I follow a couple of groups which are useful to me and basically nothing else.
I deleted all of my old posts to reduce the amount of content FB has to lure my friends into looking at ads. But because of the covid-19 pandemic I was using facebook again to keep in contact with people. Now that restrictions are eased in my country I can see people again, and have deleted my facebook posts.
No. Why should the only desirable metric be user engagement?
Is the goal of FB engagement/virality/time-on-site/revenue above all else? What does society have to gain, long term, by ranking a news feed by items most likely to provoke the strongest reaction? How does Facebook's long-term health look, 10 years from now, if it hastens the polarization and anti-intellectualism of society?
> Is the goal of FB engagement/virality/time-on-site/revenue above all else?
Strictly speaking, Facebook is a public company that exists only to serve its shareholders' interests. The goal of Facebook (as a public company) is to increase stock price. That usually, if not always, means prioritizing revenue over all else.
That's the dilemma.
Then again, I believe Mark has control of the board, right? (And therefore couldn't be ousted for prioritizing ethical business practices over revenue - I could be wrong about this)
> Strictly speaking, Facebook is a public company that exists only to serve its shareholders' interests.
That's a very US-centric interpretation, which fits because Facebook is a US company.
But it's still reductive to the issue considering how Facebook's reach is also far and wide outside the US.
In that context, it's not really that much of an unsolvable dilemma; it only appears as such when the notion of "shareholder gains above all else" is considered some kind of "holy grail thou shalt never challenge".
This is a false choice. The real problem stems from the fact that the model rewards engagement at the cost of everything else.
Just tweaking one knob doesn't solve the problem. A real solution is required, that would likely change the core business model, and so no single PM would have the authority to actually fix it.
Fake news and polarization are two sides of the same coin.
I'd just suggest the data scientist was optimizing the wrong metrics. People might behave that way, but having frequent political arguments is a reason people stop using Facebook entirely. It's definitely one of the more common reasons people unfollow friends.
Very high levels of engagement seem to be a negative indicator for social sites. You don't want your users staying up to 2AM having arguments on your platform.
This is why the liberal arts are important, because you need someone in the room with enough knowledge of the world's history to be able to look at this and suggest that maybe given the terrible history of pseudo-scientifically sorting people into political categories, you should not pursue this tactic simply in order to make a buck off of it.
Agreed. Engineers have an ethical duty to the public. When working on software systems that touch on so many facets of people's lives, a thorough education in history, philosophy, and culture is necessary to make ethical engineering decisions. Or, failing that, the willingness to defer to those who do have that breadth of knowledge and expertise.
"The term is probably a shortening of “software engineer,” but its use betrays a secret: “Engineer” is an aspirational title in software development. Traditional engineers are regulated, certified, and subject to apprenticeship and continuing education. Engineering claims an explicit responsibility to public safety and reliability, even if it doesn’t always deliver.
The title “engineer” is cheapened by the tech industry."
"Engineers bear a burden to the public, and their specific expertise as designers and builders of bridges or buildings—or software—emanates from that responsibility. Only after answering this calling does an engineer build anything, whether bridges or buildings or software."
You don't need liberal arts majors in the boardroom, you need a military general in charge at the FTC and FCC.
Can we dispense with the idea that someone employed by facebook, regardless of their number of history degrees, has any damn influence on the structural issue here, which is that Facebook is a private company whose purpose is to mindlessly make as much money for its owners as it can?
The solution here isn't grabbing Mark and sitting him down in counselling, it's to have the sovereign, which is the US government, exercise its authority, which it has apparently forgotten how to use, and rein these companies in.
A lot of people wouldn’t know about the policy avenues that can be used to regulate these companies (of which FTC is not the only one), or how even advisory groups to the president could help.
You voluntarily put yourself in this position with no good way of fixing it. No one's forcing Facebook to do what they (and now you) do, eh?
My perception of reality is that you and your brilliant data scientist are (at best naive and unsuspecting) patronizing arrogant jerks who have no business making these decisions for your users.
You captured these peasants' minds, now you've got a tiger by the tail. The obvious thing to do is let go of the tiger and run like hell.
- User-configurable and interpretable: Enable tuning or re-ranking of results, ideally based on the ability to reweight model internals in a “fuzzy” way. As an example, see the last comment in my history about using convolutional filters on song spectrograms to distill hundreds of latent auditory features (e.g. Chinese, vocal triads, deep-housey). Imagine being able to directly recombine these features, generating a new set of recommendations dynamically. Almost all recommendation engines fail in this regard: the model feeds the user exactly what the model (designer) wants, no more and no less.
- Encourage serendipity: i.e. purposefully select and recommend items that the model “thinks” are outside the user’s wheelhouse (wheelhouse = whatever naturally emerging cluster(s) in the data the user hangs out in, so pluck out examples from both nearby and distant clusters). This not only helps users break out of local minima, but is healthy for the data feedback loop. (A rough sketch of both ideas follows this list.)
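To illustrate both points, here is a minimal sketch (not a real system; the item features, user profile, and cluster logic are all invented) of re-ranking with user-adjustable feature weights plus a few deliberately out-of-wheelhouse picks:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_features = 1000, 16
item_features = rng.normal(size=(n_items, n_features))  # latent features per item
user_profile = rng.normal(size=n_features)               # inferred taste vector

def rerank(user_weights, top_k=20):
    """user_weights: per-feature multipliers the user can turn up or down."""
    scores = item_features @ (user_profile * user_weights)
    return np.argsort(scores)[::-1][:top_k]

def with_serendipity(ranked, n_serendipity=3):
    """Swap a few slots for items the model thinks are far from the user's wheelhouse."""
    baseline = item_features @ user_profile
    distant = np.argsort(baseline)[:100]  # lowest-scoring items under the default taste
    picks = rng.choice(distant, size=n_serendipity, replace=False)
    return np.concatenate([ranked[:-n_serendipity], picks])

weights = np.ones(n_features)
weights[3] = 2.0  # user turns up latent feature 3 (say, "deep-housey")
print(with_serendipity(rerank(weights)))
```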
If you restrict yourself to 2 bad choices, then you can only make bad choices. It doesn't help to label one of them "artificial" and imply the other choice isn't artificial.
It is, in fact, not just crude but actually quite artificial to measure likelihood to interact as a single number, and personalize the list of discussions solely or primarily based on that single number.
Since your chosen crude and artificial indication turned out to be harmful, why double down on it? Why not seek something better? Off the top of my head, potential avenues of exploration:
• different kinds of interaction are weighted differently. Some could be weighted negatively (e.g. angry reacts); a rough sketch of this idea follows the list
• [More Like This] / [Fewer Like This] buttons that aren't hidden in the ⋮ menu
• instead of emoji reactions, reactions with explicit editorial meaning, e.g. [Agree] [Heartwearming] [Funny] [Adds to discussion] [Disagree] [Abusive] [Inaccurate] [Doesn't contribute] (this is actually pretty much what Ars Technica's comment system does, but it's an optional second step after up- or down-voting. What if one of these were the only way to up- or down-vote?)
• instead of trying to auto-detect party affiliation, use sentiment analysis to try to detect the tone and toxicity of the conversation. These could be used to adjust the weights on different kinds of interactions; maybe some people share divisive things privately but share pleasant things publicly. (This seems a little paternalistic, but no more so than "artificially" penalizing opposing party affiliation)
• certain kinds of shares could require or encourage editorializing reactions ([Funny] [Thoughtful] [Look at this idiot])
• Facebook conducted surveys that determined that Upworthy-style clickbait sucked, in spite of high engagement, right? Surveys like that could be a regular mechanism to determine weights on interaction types and content classifiers and sentiment analysis. This wouldn't be paternalistic, you wouldn't be deciding for people, they'd be deciding for themselves
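As a sketch of the first bullet (all interaction types and weights are invented for illustration), scoring a post by weighted interaction types rather than raw engagement might look like this:

```python
# Negative weights let divisive engagement pull a post's rank down.
INTERACTION_WEIGHTS = {
    "like": 1.0,
    "heartwarming": 2.0,
    "adds_to_discussion": 2.5,
    "funny": 1.5,
    "angry": -2.0,
    "inaccurate": -3.0,
    "abusive": -5.0,
}

def post_score(interaction_counts):
    """interaction_counts: mapping of interaction type -> count."""
    return sum(INTERACTION_WEIGHTS.get(kind, 0.0) * count
               for kind, count in interaction_counts.items())

# A post with huge but angry engagement ranks below a calmer, smaller one.
print(post_score({"like": 50, "angry": 200}))        # -350.0
print(post_score({"like": 40, "heartwarming": 10}))  # 60.0
```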
I feel like this is a false presentation of the PM choice. If I were the PM there, I would question the first assumption that the users want to see more of the stuff they interact with. That's an assumption; it's not founded in any user or social research (in the way you've presented it).
And even if it was supported by research, I would think about the long tail. What does this mean for my user engagement in the long run? This list might satisfy them now, but it necessarily leads to a narrowing of the content pool over time. I would ask my marketing sciences unit or my data science unit, whatever I have, to try to forecast or simulate a model that tells us what the dynamics of user engagement would be with intervention A and intervention B.
I feel this is one of the biggest problems of program management today. Too much reliance on short-term A/B testing, which, in most cases, can only solve very tactical problems, not strategic problems with the platform. Some of the best products out there rely much less on user testing, and much more on user research and strategic thinking about primary drivers in people.
If you were to use this approach, you might see that the product you get by choosing to optimise for short-term engagement actually brings less user growth and less opportunity for diverse marketing - which, it is important to note, is one of the main purposes of reach-building marketing campaigns.
I would say the way this whole problem is phrased shows that the PM, or indeed the company, is only concerned with optimising the frequency of marketing campaigns, rather than the quality, reach and engagement of marketing campaigns.
Obviously, hindsight 20/20 and generals after battle and all that. I'm still pretty sure I would've thought more strategically than "how do I increase frequency of showing ads".
As a PM, I'd support it as an A/B test. Show some percentage of your users an increased level of posts from the opposite party, some others an increased level of posts from their own party, and leave the remaining 90% alone. After running that for a month or two, see which of those groups is doing better.
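The assignment itself is the easy part. A sketch, assuming nothing more than a stable user id; the 5/5/90 split and the hashing trick are just one common way to do it, not anything platform-specific:

```python
# Deterministic experiment bucketing: hash the user id so each user lands in the
# same bucket every time. The experiment name and split sizes are made up.
import hashlib

def bucket(user_id: str, experiment: str = "feed_party_mix") -> str:
    h = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    x = int(h, 16) % 100
    if x < 5:
        return "more_opposite_party"
    if x < 10:
        return "more_own_party"
    return "control"

print(bucket("user_12345"))
```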
They've clearly got something interesting and possibly important, but 'interaction strength' is not intrinsically good or bad. I would instead ask the researcher to pivot from a metric of "interaction strength" to something more closely aligned with the value the user derives from their use of your product. (Side note: Hopefully, use of your product adds value for your users. If your users are better off the less they use your platform, that's a serious problem.)
Do people interacting with posts from the opposite party come away more empathetic and enlightened? If they are predominantly shown posts from their own party, does an echo chamber develop where they become increasingly radicalized? Does frequent exposure to viewpoints they disagree with make people depressed? They'll eventually become aware outside of the discussion board of what the opposite party is doing; does early exposure to those posts make them more accepting, or does it make them angry and surprised? Perhaps people become fatigued after writing a couple of angry diatribes (or the original poster becomes depressed after reading that angry diatribe) and people quit your platform.
Unfortunately, checking interaction strength through comment word counts is easy, while sentiment analysis is really hard. Whether doing in-person psych evals or broadly analyzing the users' activity feed for life successes or for depression, you'll have tons of noise, because very little of those effects will come from your discussion board. Fortunately, your brilliant data scientist is brilliant, and after your A/B test, has tons of data to work with.
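That asymmetry is easy to see in code: the cheap metric is a one-liner, while the meaningful one is a research project. The wellbeing function below is deliberately left as a stub, since anything real needs a trained model, labeled data, and a plan for all that noise:

```python
# The "easy" metric versus the "hard" one. score_wellbeing() is a placeholder,
# not a claim that such a model exists off the shelf.
def interaction_strength(comment: str) -> int:
    return len(comment.split())  # trivially cheap, and trivially gameable

def score_wellbeing(user_activity_feed) -> float:
    raise NotImplementedError("needs a real model, labeled data, and careful noise handling")

print(interaction_strength("You are wrong, and here is a long diatribe about why..."))
```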
They did as you say (you are a PM, after all!), and the next week they rolled out the "likelihood of engagement" model. An independent analysis by another team member, familiar with the old model, confirmed that it was still mostly driven by politics (there is not much going on in Elbonia besides politics), but politics was neither the direct objective nor an explicit factor in the model.
The observed behavior is the same: using the new model, most people are still shown highly polarized posts, as indicated by subjective assessment of user research professionals.
We used newsgroups and message boards long before Facebook. They weren’t as toxic, I’m assuming due to active moderation. The automated or passive or slow moderation is perhaps the issue.
I think they weren't as toxic because content creators didn't realize divisive content drives much more engagement. It's not about moderation, it's a paradigm shift in the way content is created.
Regarding a predictive model and privacy/ethics/etc.: regardless of your objective function and explicit parameters, a model can only be judged on what it actually predicts, so it is enough to answer the prior question to be able to answer this one.
This is because machine learning models are prone to learning quite different things than the objective function intended, so the different intent or structure of the model must be set aside when analysing the results.
To the degree that two models predict similarly, they must be regarded as similar, if perhaps in a roundabout way.
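That judgment can also be made mechanically. A sketch, assuming both models expose a predict() you can call on the same hashable candidate items (all names here are hypothetical):

```python
# Compare two ranking models by what they actually surface, ignoring how they
# were built: if their top-k lists mostly overlap, they are effectively the
# same model, objective function notwithstanding.
def overlap_at_k(model_a, model_b, items, k=50):
    top_a = set(sorted(items, key=model_a.predict, reverse=True)[:k])
    top_b = set(sorted(items, key=model_b.predict, reverse=True)[:k])
    return len(top_a & top_b) / k

# If this comes back ~0.9, the "party-blind" engagement model is the old
# party-affinity model in a roundabout way.
```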
Agreed, as a general rule I shy away from predicting things I wouldn't claim expertise in otherwise. This is why consulting with subject matter experts is important. Things as innocuous as traffic crashes and speeding tickets are a huge world unbeknownst to the casual analyst (the field of "Traffic Records")
I would take a step back and question the criteria we are using to make decisions. “Engagement” in this context is euphemistic. This startup is talking about applying engineering to influence human behavior in order to make people use their product more, presumably because their monetization strategy sells that attention or the data generated by it.
If I were the PM I’d suggest a change in business model to something that aligns the best interests of users with the best interests of the company.
I’d stop measuring “engagement” or algorithmically favoring posts that people interact with more. I’d have a conversation with my users about what they want to get out of the platform that lasts longer than the split second decision to click one thing and not another. And I’d prepare to spend massive resources on moderation to ensure that my users aren’t being manipulated by others now that my company has stopped manipulating them.
I think the issues of showing content from one side of a political divide or the other is much less important than showing material from trustworthy sources. The deeper issue, which is a very hard problem to solve, is dealing with the fundamental asymmetries that come up in political discourse. In the US, if you were to block misinformation and propaganda you’d disproportionately be blocking right wing material. How do you convince users to value truth and integrity even if their political leaders don’t, and how do you as a platform value them even if that means some audiences will reject you?
I don’t know how to answer those questions but they do start to imply that maybe “news + commenting as a place to spend lots of time” isn’t the best place to expend energy if you’re trying to make things better?
I would think engagement would be a core metric you would be measured against in this example. And if that’s the case, this certainly isn’t a side effect.
The degree to which “damned if you do, damned if you don’t” is in effect here is remarkable. If Facebook literally removes anything, then HN is outraged because it’s censorship, paternalism, all that. But if Facebook does not adopt an actively paternalistic attitude where it shows people content that they deem is “good for them”, then that’s outrageous too. Both complaints predictably rocket to the top of HN.
Which is it, guys? How can you simultaneously be outraged that Facebook is imposing any restrictions on speech at all, and horrified that it isn’t actively molding user behavior on a massive scale?
There’s an amusing comment from
a Facebook employee downthread asking: if division is caused by showing people opposing political opinions, should we try to stop that to reduce division, or should we do nothing, to avoid forming filter bubbles? Predictably, every single reply condemns him as evil for not realizing one of the options is obviously right, but they’re split exactly 50/50 on what that right course of action is.
> How can you simultaneously be outraged that Facebook is imposing any restrictions on speech at all, and horrified that it isn’t actively molding user behavior on a massive scale?
There is a simple answer to that - HN is not a homogeneous set of people; different users have different opinions and express them at different times with different intensity.
True, but I would have expected at least a little visible disagreement. You never see anybody saying “wait a second, I think making the filter bubble effect a bit worse is actually worth it!” Each individual submission’s comments are just full-throated, unanimous condemnation, even when adjacent comments are directly contradictory. They just don’t engage with the possibility that deciding what to do might actually be hard.
For 15+ years, I've run a sports forum and it is inevitable that societal/political debates emerge in many threads. In recent years, they've become increasingly as you describe. There's almost never a calm, reasonable comment. People go for each other's throats and start slinging left-right insults immediately.
I think a lot about how @PG almost never posts on HN any more. Might be wrong, but I suspect he's come to resent some of the discourse/attitudes and time-wasting here.
If this were true you'd have the average opinion filter out - naysayers would downvote supporters and vice versa. The truth is in fact that hn is hypocritical and becomes outraged for the sake of outrage just like every other opinionated group (where the raison d'être is to express an opinion rather than discourse).
Edit: think hn isn't just about getting attention? Then explain to me why responses are ranked? Or even ranked without requiring a response? Even Amazon reviews in principle require leaving ratings only in good faith (ie having engaged with the product). It's obvious that dang and whoever else could make that a prerequisite of voting. That they haven't proves my point.
Responses have to be ranked, because some comments are much more valuable to the reader than others. Well-thought-out comments and spam lie at the extreme ends of the spectrum.
I think there's merit to the idea of requiring a response for upvotes or downvotes, but how can you tell whether a response is "genuine"? It sounds like it'd devolve into people leaving vacuous replies just to clear that prerequisite.
>comments are much more valuable to the reader than others
This presupposes some kind of universal value function.
>but how can you tell whether a response is "genuine"? It sounds like it'd devolve into people leaving vacuous replies just to clear that prerequisite
This is like a 'perfect is the enemy of the good' counterargument. I don't know, and I'm not going to hypothesize right here right now, where one misstep on my part gives credence to the idea that it's impossible.
Here's what I'll say though about modern forums: they fixed a system that wasn't broken as far as discourse goes. When I was in high school there were forums where responses were ordered in time rather than by popularity. Those places were actual venues for discussion. Ranking only exists for monetization. If you don't agree with this then consider professional venues for discourse: academic journals. I have never browsed a journal or arxiv by the number of citations or the hindex or whatever.
> This presupposes some kind of universal value function.
Sorry, I did not mean to suggest that some comments are intrinsically more valuable. But we do want to sort comments so that, on average, the distance between the "ideal" order for any given reader and the actual order is minimized, don't we?
> This is like perfect is the enemy of good counterargument.
I'm not trying to argue anything here, I just wanted your thoughts on how this could be implemented effectively.
> I have never browsed a journal or arxiv by the number of citations or the hindex or whatever.
I don't know if arXiv is a very good comparison — the posts are much fewer, the range of interests is much narrower, and the site itself is very heavily moderated.
>But we do want to sort comments so that, on average, the distance between the "ideal" order for any given reader and the actual order is minimized, don't we?
why? who cares? do you not know how to skim and skip irrelevant text?
>I just wanted your thoughts on how this could be implemented effectively
you could think of any number of ways to vet comments. we live in the future after all; you could require a minimum length, you could classify comments according to sentiment and reject those that have unwanted overtones, you could use topic modeling to see whether in fact the comment was on topic, etc etc etc. ranking algorithms have had thousands of labor hours invested in them across all social media sites - apply the same fervor to this problem and there will be an adequate solution.
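e.g. something as crude as the sketch below, where the "classifiers" are keyword hacks standing in for whatever real sentiment/topic models you'd actually plug in:

```python
# crude sketch of gating a vote behind a substantive reply; the toxicity and
# topic checks are deliberately dumb placeholders.
MIN_WORDS = 30

def is_toxic(text: str) -> bool:
    return any(w in text.lower() for w in ("idiot", "moron"))  # placeholder classifier

def on_topic(text: str, topic: str) -> bool:
    return bool(set(text.lower().split()) & set(topic.lower().split()))  # placeholder topic model

def vote_allowed(reply: str, thread_topic: str) -> bool:
    return (len(reply.split()) >= MIN_WORDS
            and not is_toxic(reply)
            and on_topic(reply, thread_topic))
```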
>the posts are much fewer, the range of interests is much narrower, and the site itself is very heavily moderated.
you're wrong that there are fewer submissions to arxiv
you're also wrong that it's moderated - the only thing that you're required to have to submit is endorsement. but i also don't understand how heavy moderation is a counterpoint? yc is one of the most successful vcs in the world - they can't afford moderators? i also don't know what the relevance of arxiv's narrow range of topics is.
> why? who cares? do you not know how to skim and skip irrelevant text?
Ignoring inflammatory content can be more taxing to people than you pretend, and moderators and flagging are too slow to act. It's not the end of the world, but it's annoying enough that I would consider alternative services.
A forum is nothing without its users, and a forum that puts its users first shouldn't irritate its users with off-putting content without good reason.
> you could think of any number of ways to vet comments.
Alright, I was just checking whether you had any new ideas, and it seems that you do not.
> you're wrong that there are fewer submissions to arxiv
HN receives at least twice as many submissions per month, and an order of magnitude more comments. This makes the arXiv a very poor analogy for HN.
> you're also wrong that it's moderated
From [0] (see also, [1]):
> All submissions are subject to a moderation process that verifies material is appropriate and topical. Material that contains offensive language, non-scientific content, or is plagiarized may be removed.
Looks a lot like they have moderation to me.
> i also don't understand how heavy moderation is a counterpoint
If you already heavily filter by quality, then the order in which posts are presented obviously becomes much less important.
> yc is one of the most successful vcs in the world - they can't afford moderators?
HN does have an excellent moderation team. They just aren't anywhere near as stringent as the arXiv, on quality, on politeness, or on any number of other characteristics.
If HN was as heavily moderated as the arXiv, then they wouldn't need a voting system either. It'd be a much colder place, though, which is probably why they don't do that.
So long as people's value functions have a directional bias in hyperdimensional value-space, the averaged value function embodied by the upvote-downvote tendencies is useful.
(If there were no bias - if people's opinions/tendencies varied with equal likelihood across all possibilities - all comment scores would be 0. Of course, practically, survivor bias soon sets in, as radical suicide advocates wouldn't stick around long enough to like/dislike much.)
Academic journals have a barrier to entry and the content is already judged, the journals decide what information they publish. It is not a random collection of every single article submitted, so you are already starting out with above-average content.
> This presupposes some kind of universal value function.
Yes: "a majority of HN readers of this comment found it useful / not useful". That's the best you're going to get.
But for the most part, it does actually work. If 80% of people on the site agree that a comment is useful, that's good enough; it's unreasonable to expect any evaluation of subjective matters to be universal.
And that's why moderators will step in and do some manual tweaking if things get very divisive. If that 80% number (that I made up) drops too low, getting too close to 50%, then that means the discussion just isn't likely to be productive. HN is not a homogeneous group, and sometimes there isn't a clear majority in agreement.
> Here's what I'll say though about modern forums: they fixed a system that wasn't broken as far as discourse goes. When I was in high school there were forums where responses were ordered in time rather than by popularity
I don't agree with this.
One reason is the lack of threading. phpBB and its contemporaries just had a list of posts, and all replies linearly under each post link. Digging through for the bits you actually care about was a huge pain in the ass, and it's one of the reasons why I never enjoyed fora like that.
The other reason is just scale. I'm not sure how old you are, but I was in HS in the late 90s. Back then the internet was much smaller, not as commercialized, and people generally behaved decently well toward each other. We didn't have the spambots we do today, and every forum site wasn't under constant attack by people who want to destroy online communities just for fun. I'm not saying it was perfect, but it was a lot easier to manage communities back then and keep discourse civil and on-topic, with very few automated tools at hand.
These days it's pretty much impossible to create high quality discussions at any scale without a ranking system. Sure, you see smaller communities of a few hundred, maybe even a thousand, people where these things work sorta like they did 20+ years ago (but I guarantee you the board itself is doing a ton of automated spam filtering to get you there).
HN has... what? Tens of thousands, or likely more than a hundred thousand users. If you enable showdead in your profile page, you'll see a lot of garbage that gets through the automated filters and ends up flagged out of existence. And regardless of that, with a community this size, you're going to have enough disagreement on the fundamentals of any complex topic to generate a ton of conflict. Voting and flagging is far from perfect, but it can keep things from devolving into a cesspool of low-effort comments and outright name-calling.
Facebook is sometimes in a difficult position. But, with respect, I don't think it's correct to say that it's the fault of their critics for putting them in a dilemma.
Facebook does not face outrage for "literally remov[ing] anything". In the real world, Facebook is removing thousands of items every day, maybe tens of thousands. Most of them are totally justified. Is HN outraged about that?
Also, keep vs. remove is a false dilemma. Facebook has many more options than keeping or removing items. Most of the time, it's about what they choose to boost or reward in other ways.
https://twitter.com/BKCHarvard/status/1263891198068039680
That said, I think you're right that being an effective arbiter on planetary speech is a difficult place to be, even for people with the best intentions, and FB didn't get where they are today by having the best intentions all the time.
But personally, I think that calls into question whether Facebook or anything like them should even exist.
> Most of them are totally justified. Is HN outraged about that?
Well that's the crux of the issue, isn't it? Obviously there would be no controversy if everyone agreed on which removals are justified. Every censorship article that reaches the front page of HN includes litigating the specifics of the removal, including by those who are quite clear that in an ideal world all large social media websites would be prohibited from filtering out content based on the site operator's judgements about content acceptability.
> Also, keep vs. remove is a false dilemma. Facebook has many more options than keeping or removing items. Most of the time, it's about what they choose to boost or reward in other ways
A distinction without much difference for the sake of this discussion; if the system is hiding the content it's as good as deleted or perhaps even worse than deleted in the same way that shadowbans are often perceived as more hostile than explicit bans.
> But personally, I think that calls into question whether Facebook or anything like them should even exist.
I share that perspective. I believe that social media is a net negative to society despite the nice things it's given us, like a stream of friend and family photos, but the genie is out of the bottle and there is no going back. Ultimately this is a cultural problem; it costs a user nothing to have the Facebook app sitting on their phone even if they only open it once a year. Literally billions of Facebook users don't care at all about what Facebook removes from the site; most users know what they want out of the internet and know which places to go to get the things they want.
> But personally, I think that calls into question whether Facebook or anything like them should even exist.
The fundamental problem with Facebook is that it serves the people, and as a rule, we are too reactive, too judgmental, too ignorant, too emotional, too irrational, and too hateful. If you want to find something bad on Facebook, you will succeed, because everybody is there, and this is just how we are. Turn back the clock and you'll find ethnic hatred spread through private Whatsapp messages, bullying on Myspace, ignorance in chain email, disingenuous appeals to emotion in yellow journalism, and hysteria in a mob whipping itself into a lynching frenzy in a literal public square. The platforms have changed hundreds of times, the people have not.
What amazes me is that critics of social media don't see this -- despite their vaunted liberal arts degrees, they act as if Zuckerberg invented human flaws in 2004, and pretend that he has the power to get rid of them with a technological solution. Destroy Facebook tomorrow, and the people will just move somewhere else. The exact same problems will reappear, because the people are the same.
If Facebook followed some deterministic algorithm like "show all content from friends, in chronological order" then I don't think there would be such loud voices calling for it to also solve $social_problem.
But Facebook does exercise editorial control, in the service of engagement. It's fair to ask that this curation consider other objectives as well, or at least counterbalance the side-effects it's known to have (divisive content is more engaging and so is amplified; at least correct it back down to neutral).
Is that reason enough not to try? Because people will complain? You know very well that people will complain no matter what happens, yet we still must seek to work towards truth and a better way, even though "better" is an opinion word, and sometimes so is "truth."
good luck filtering human content via deterministic algorithms, and if people can reverse engineer these then they'll get gamed... it's just a stupid idea.
But that’s not how it actually works. For example, it’s agreed among journalists covering social media that Facebook’s greatest crime is that it “created a genocide in Myanmar”, as you can read in an ocean of longform articles. What they mean is that people shared hateful messages privately via Messenger and Whatsapp, two services that use a completely deterministic algorithm (“show all incoming messages by recent”).
The only practical way to stop this is to employ an army of censors to read private conversations and delete the ones deemed bad. But of course this is also horrible and dystopian. So, as always, it’s trivial to write a scathing critique of Facebook no matter what they do.
The problem is that hate doesn’t need an algorithm to amplify it; it can just amplify itself, as it has throughout all of human history. The problem isn’t the algorithm, it’s the people. And you can’t just “fix” people.
> show all content from friends, in chronological order
Then the typical user's Facebook feed will be 99% pictures of funny cats from one of their loved friends, who is too dear to unfriend (if that friend is, for example, their mother).
(Not to mention, configuring filtering is practically impossible for 99% of users.)
And then these users will just leave the service because it's boring, and that's it, end of Facebook.
We already had a lot of services with no feed ranking. All of them died or started doing the ranking.
> Facebook does exercise editorial control
As any other mainstream service with feed and user-generated content. For example, Twitter or TikTok.
> It's fair to ask that this curation consider other objectives as well
The big neural network which defines feed ranking is developed by hundreds of engineers, and already optimized for (I guess) hundreds of parameters, including time spent on the website, ads revenue, fraud protection, lower risk of suicide and so on.
I agree. The second you start censoring $VIEW_X, people will start questioning why you aren't also censoring $VIEW_Y. Are you not censoring $VIEW_Y because you endorse it, Facebook? Is Facebook $VIEW_Y-ist? Rinse and repeat for every fringe view.
The right course of action is to stop using Facebook. That's it, very simple. People here being polarized is exactly what Facebook wants. Current Facebook users are akin to opioid addicts... they probably found some relief from whatever social pain they were feeling, and now they have accepted selling their privacy for their fix.
I would say I can't wait for the day that Facebook is gone but it will be a long time considering the amount of insecure and unintelligent people that need a platform like that to avoid ever being challenged in the real world.
The other day a friend told me about a takeaway place I should try. I couldn't find the menu on their website, and my friend replied that it was on their Facebook page, and I should stop being a luddite.
No, no, dammit! We're literally giving the world wide web to a single corporation, and worse, normalising it - to a lot of people, Facebook basically is "the internet".
How can this possibility be in the best interest of users? It almost feels like the balance has swung too far, and it's perilously close to the point of no return...
> The right course of action is to stop using Facebook.
That'd give rise to something akin to a Gresham's Law problem [0]. I think we have a civic duty to engage patiently — and politely — with our friends who hold views we disagree with, because (A) they get to vote; (B) angry invective isn't persuasive; and (C) social proof is a thing, and sometimes people can be persuaded to come around to their friends' point of view, eventually. It's a long shot, but worth a shot.
So stay on Facebook. Keep in mind that while you're trying to patiently and politely engage, Facebook continues to build a profile about who you are and how to manipulate you. The more you feed it, the more it learns and knows how to use yourself against yourself.
In terms of civic duty, this definition of American civic duty made me laugh:
Citizenship connects Americans in a nation bound by shared values of liberty, equality, and freedom. Being a citizen comes with both rights and responsibilities. Civic duty embodies these responsibilities. Such civic duties help uphold the democratic values of the United States.
The problem with civic duty is that I feel mine is to tell people to stop using Facebook... but others... well, they are going to show up at a state capital carrying military grade weapons because they don't want to wear facemasks in a pandemic. Would you like to patiently and politely engage with them? I sure as hell wouldn't because they focus more on their perception of "rights" while ignoring their responsibilities.
My point is that civic duty in America is a pretty romanticized concept that is certainly not based in any reality in 2020.
When I did use Facebook briefly back before 2012, I found it disappointing that my "christian" extended family members would send me incredibly hateful memes about how Obama was a muslim terrorist or satan in a tan suit. If that is civic discourse, no thanks. The one upside is that I haven't spoken to them in almost a decade and never plan to again.
> others... well, they are going to show up at a state capital carrying military grade weapons because they don't want to wear facemasks in a pandemic. Would you like to patiently and politely engage with them?
I have a few friends like that on Facebook, and yes I do patiently and politely engage with them — but on occasion I've had to remind them that they aren't the only ones who own guns and know how to use them ....
> Which is it, guys? How can you simultaneously be outraged that Facebook is imposing any restrictions on speech at all, and horrified that it isn’t actively molding user behavior on a massive scale?
Perhaps because it's not the same people in the two different cases? HN is a pretty diverse bunch at this point; I'm sure there are a lot of people here who want FB to be a disinterested publisher, and others who want FB to curate like mad.
I liked the idea of tweaking the newsfeed based on the frequency of activity of the producers, and bumping down the influence of bots and chain accounts. It would probably make for a higher quality newsfeed, and it wouldn't play favorites. The problem is, as the article suggests, it might decrease the only metrics Facebook cares about, and there's no guarantee it would look fair to the outside world.
I am not at all outraged that Facebook would impose "restrictions" on speech, nor I think are people in my social circles. You can't have a working newsfeed without a significant amount of filtering, for example. So I suspect that's a straw man.
All that being said, I have left Facebook because I prefer to exercise my freedom _not_ to hear the kinds of speech Facebook has made their stock in trade. So, as far as I'm concerned, Facebook can do as they like.
There is no single voice here on HN. It is a divisive, complex issue. Not only are (some) individuals uncertain on where they stand, but the community itself does not entirely agree. I know personally that I draw the line somewhere, though where exactly isn't clear. I suspect that to fully articulate my opinion on this, I would have to spend about a week doing research and forming a truly educated opinion.
I don't think this would be a good idea. There's a lot of content created on Facebook at any given moment. If you show things only in chronological order, it would take an incredible amount of filtering (that the vast majority of users won't do) to get to anything reasonable.
Reddit and HN front pages are both driven by an algorithm sampling both popularity and time. I think HN also has a chronological view, but that's not considered the front page.
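The commonly cited folklore version of HN's front-page formula, from public write-ups rather than anything official, is roughly points divided by age raised to a "gravity" exponent; treat the exact constants as approximate:

```python
# Folklore approximation of an HN-style ranking score: popularity divided by a
# super-linear age penalty. Constants are the commonly quoted ones, not verified.
def rank_score(points: int, age_hours: float, gravity: float = 1.8) -> float:
    return (points - 1) / ((age_hours + 2) ** gravity)

print(rank_score(points=100, age_hours=1))   # fresh and popular: high
print(rank_score(points=100, age_hours=24))  # same points a day later: much lower
```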
That’s true, but it wasn’t when I posted my comment. It looks like comments just naturally get more reasonable over time... similarly, about half my comments get to negative score instantly, but most then slowly climb back up to positive over the next 24 hours.
All of HN other than you isn't actually one person arguing with themselves. There are a variety of people here with a range of opinions, each with different degrees of internal consistency. This smacks of the "everybody says all kinds of things, so you should just ignore everybody" or the "they're yelling at me from the left and from the right, so that's proof I'm doing everything correctly" defenses. The second is the moderation fallacy; the first might be called the "Argument From Sociopathy."
Personally I don't think Facebook should censor anything except spam and posts that break the law like direct threats of violence and child abuse. However, I do think they have a responsibility when it comes to paid content and the algorithms they use to push content on people. Its one thing to have an uncensored forum where people might be exposed to things they seek out, and entirely another thing for Facebook to choose and collate what people are seeing. Once they do that, they share responsibility for the content people see, rather than just being a platform.
I've always wondered how such discussions go in company meetings where some product/feature has a harmful effect on something/someone but is good for the business of the company.
I cannot believe that everyone is ethically challenged, only perhaps the people in control. So what goes through the minds of people who don't agree with such decisions? Do they keep quiet, just worry about the payroll, convince themselves that what the management is selling is a good argument for such a product/service...
Luckily I've never had to face such a dilemma, but I can't envy those who have faced one and come out of it by losing either their morals or their jobs.
> some product/feature has a harmful effect on something/someone but is good for the business of the company
If you start with such black-and-white assumptions, you will never be able to actually empathize with those people. Nothing is that simple when you're close enough to see the details.
Things good for the company should be and frequently are good for the people using the product. The same thing can also harm the same people, or a different set of people, or the company, in a way that's impossible to disentangle from the good.
There's a whole back and forth about Facebook and political divisions. It starts with someone assuming that tech companies put people in bubbles and echo chambers, assuming they'll only be engaged with stuff they agree with. Then you run the numbers and realize that people are far more isolated from opposing opinions in real life than they are on the internet: you interact with more people online, and they censor themselves less. But at the same time, you can change your mind about echo chambers and decide that this is a bad thing, because being exposed to different opinions makes you more entrenched in what you actually believe.
It's never as simple as "this is bad for everyone except us but at least we're getting rich". Everything has more nuance than that when you experience it up close
> Things good for the company should be and frequently are good for the people using the product. The same thing can also harm the same people, or a different set of people, or the company, in a way that's impossible to disentangle from the good.
> It's never as simple as "this is bad for everyone except us but at least we're getting rich". Everything has more nuance than that when you experience it up close
This too needs more nuance. These points even apply to outright crime. Legal prohibitions should sometimes be expanded in the public interest, because sometimes it essentially is the case that something is bad for everyone except some small group.
This is reflected in the way data-protection laws now exist in many countries, for instance.
People are more isolated in the real world? Please provide a source. Aside from the fact that this is hard to measure now that the underlying medium has itself been modified — I would hardly expect this to be the case. Online I am connected to those whom I socialize with or am otherwise professionally connected to. In the “real world” this constraint is largely absent.
This is the hardest source I can find, but it only measures what happens on Facebook. The numbers do seem higher than what I'd expect for IRL conversations, though:
> Online I am connected to those whom I socialize with or am otherwise professionally connected to. In the “real world” this constraint is largely absent.
This seems entirely backwards to me? Maybe you talk more with strangers IRL than online, but I doubt it. I only have n=1 (me), but we are talking right now. Who knows where we live in relation to each other?
So much of politics is split between urban and rural environments. Those groups are defined by where they live, so I expect very few conversations in person between the two, especially about politics.
Thanks for the link. Reading now. Regarding my reply, I was thinking more about social networking apps like Facebook, Instagram, Snapchat, WhatsApp, or linkedin and less about hackernews/reddit types. Mainly because I think the bulk of social interactions happen there.
It does seem logical: your in person interactions are mediated by your personal relationship with people. Online you can come across anything and everything. The in person equivalent would be walking by ten or twenty small protests set up with megaphones loudly arguing for various things you vehemently disagree with.
This connection doesn't mean shit compared to someone you see face to face and share experiences with.
Yet this watered down form of connection seems to have replaced the latter, which I think is the fundamental social problem of the internet.
Does it matter the quality of the connection? The argument is about being shown different viewpoints and that the internet shows you more than in person.
Is that hard to disagree with? I didn’t even know atheism was a thing until I was on the Internet. No one in my community was an atheist and the media we were provided didn’t reference it much.
I think quality is almost the only thing that matters.
Personal anecdotes aside, we're mostly terrible at dealing with new ideas when they conflict with stuff we already know or that is close to our identity. Remove the human element of the connection and we're even more likely to dismiss said conflicting ideas outright as stupid (I'll try to link to that research). It's not hard to imagine how that might lead to strong yet poorly justified social division.
> In the “real world” this constraint is largely absent.
In the real world you are connected to people living and travelling around you, and that is not necessarily an unbiased set of people. It can be quite far from the average random group. You're still in a bubble.
yes, it's never simply black-and-white, but you're overstating that case, especially with facebook. by now, nearly everyone in tech and many adjacent industries (e.g., entertainment) has heard about and probably internalized the downsides of facebook, particularly the mechanisms and tactics employed to advance facebook at the detriment of society at large. it's pretty clear many of those people at facebook are avoiding or ignoring inconvenient truths when it comes to removing those mechanisms and tactics to the benefit of society at large but at the detriment of facebook.
That's not a counterargument. Nuance doesn't contradict the black-and-whiteness of the situation. Sometimes nuance just means there are many shades of black.
> The same thing can also harm the same people, or a different set of people, or the company, in a way that's impossible to disentangle from the good.
It might be impossible to 100% disentangle. But it is nonsense to suggest it could ever be impossible to >0% disentangle. And they have a moral obligation to prioritize disentangling them, to maximize the good and minimize the harm, and to structurally incentivize themselves to succeed at that.
But your attitude creates the exact opposite incentive: the more entangled the good with the harm, the more defensible it is for them to passively enrich themselves thru their inaction.
Don't fall for it. Demand more.
Demand structural changes that incentivize real fixes, for example, pledging that ad revenue from hate content and fake news be returned to the advertiser and the same amount also donated; or pledging that feelings of community vs feelings of divisiveness affect executive or company-wide bonuses. These particular ideas might be stupid, but don't let them get away with not even trying.
> Things good for the company should be and frequently are good for the people using the product.
I think there's a misalignment here. In traditional business what you said may be generally true (with some striking counterexamples like cigarette companies). In internet advertising things good for the company should be and frequently are good for the company's customers. Facebook's users are not its customers, and Facebook is generally incentivized to keep users on the site and consuming content (and advertising) by any means necessary - regardless of the long-term harm it might cause the users.
I've been there, obviously not to the level of a facebook board member.
IMO the feeling is not really that different from making choices as a consumer ("was this shirt made by child labor?", "was the animal this meat comes from treated humanely?", etc). People tend to turn a blind eye to those questions unless something comes up that hits close to home.
To be clear, I'm not saying that's justifiable or a good mindset to have, just what I think happens.
I disagree and think it is significantly different. Facebook decision makers have way more agency in the directions their company takes than a consumer has in their choice of clothes to buy at Target (or wherever).
Shirt consumers don't have much of a choice. They can only buy what's for sale (and in their price range). And then, how can they be sure if a shirt was or wasn't made by child labor? How would an individual consumer's behavior lead to ending child labor?
According to the article, Facebook execs understood what the product was doing, and, while they have the ability to stop it, don't. Maybe I understand what you're saying if we're talking engineers/middle managers, but that's a boring conversation. The buck has to stop somewhere.
Are you seriously arguing that consumers can't spend $5 less on a shirt so that instead of having "BALR." it was made under less shitty conditions? Consumers have plenty money for t-shirts, they just choose to spend it on fashion statements instead of thinking about working conditions of people half a planet away.
There's plenty of choice. It's not about choice, it's about what's on your mind, and what you put on your mind. If you want to look cool, you put the working conditions concern off of your mind. If you want to make money, you put the division concern off of your mind.
The buck stops at every stop.
edit: did a quick google, first result on a plain white t-shirt that's fair trade is $25, first result on 'fashionable' plain white t-shirt (by balr or supreme) is $60...
Basic economic theories require that consumers have full information and make rational decisions. Neither of those are valid assumptions.
In this case, the vast majority of people don't know if a shirt was made with child labor or not. If this information was clearly communicated to every consumer I'm sure you'd see consumer behavior change to some degree.
I actually feel the opposite. Consumers have the ultimate choice -- their choice is not beholden to anyone except themselves. Then they can execute their choice unilaterally.
A VP or even the CEO is beholden to shareholders, their employees, their advertisers, their own ethics, their users, various government regulations (and government interests that are not laws but what they prefer). So almost everything they do is a tradeoff.
What a cop out. You can't just pass the buck forever. You want to bring shareholders into this? Was exploiting the human brain’s attraction to divisiveness put to a vote? What does it matter when Zuckerberg has a controlling share of the company [0]? He answers to himself.
Facebook spent almost $17MM in lobbying efforts last year [1]. I wonder why governments don't exactly have an eagle eye on this...
The rank and file employees at Facebook have no say about this. Tim Bray leaving Amazon to no ill effect shows this.
We're talking about Facebook exploiting the human brain to increase time on the platform. The users have little to say about this, and as long as the users are there, advertisers have nothing to say to Facebook.
So that leaves Facebook answering to their own ethics. Yes. that's the problem.
A corporation is a device for maximizing profit and minimizing ethics. Everyone can say they're behaving ethically. Consumers can say, "Well, all my friends are there, I can't quit," and it's true for some people. The CEO and other decision-makers can say, "Well, I have to do this otherwise the shares go down and I could get fired," and they may be right. Shareholders can say, "I'm just investing in the most profitable companies, if they were doing something bad, it should be illegal," and they have a point too.
This is where governments come in. Companies should behave ethically, but ultimately we shouldn't just leave it up to them. That's why societies have laws. What we really need to do is use regulation and penalties to force Facebook into ethical behaviour.
Of course, this isn't going to happen because there's no political will to do so, generally due to "free speech" or "free market" objections.
This is not passing the buck. It's acknowledging that there are many stakeholders involved in a company+platform, and that many decisions are about making tradeoffs rather than having a "right" answer.
If you always go with the populist vote, like when users rioted about the news feed when it was first introduced, https://techcrunch.com/2006/09/06/facebook-users-revolt-face... then you may be sacrificing the long-term viability of your company. This harms employees, investors, and eventually the public. Are you saying that's not even a consideration at all?
We're not talking about "Facebook exploiting the human brain to increase time on the platform". You brought up Target and shirts. So we're talking about who has more agency, users or executives, in a general manner. That consumers generally only need to concern themselves with their own ethics, versus the complex entanglement of ethics at a company, gives users more agency to make choices reflecting their ethics.
Why couldn't you choose where to buy your shirt? Shirts can be made anywhere; they should be one of the easiest products to find multiple vendors for.
If you are saying that at Walmart or another big store there are only 4 brands in your price range and you can't tell which ones involve child labor: you could do the research if you cared. By not buying a given brand you reduce your risk by 99%.
As a consumer, you may not be able to stop child labor but you can vote with your wallet.
Several of my friends buy clothes from a few vetted brands because of exactly this issue.
Then I have another friend who was a huge cruise-ship fan. He encouraged me to go on my first cruise too. But then there was a report about mistreatment of cruise-ship employees, and he is totally against cruise ships now. His actions alone probably won't change anything, but if enough consumers start to act like him, a change may happen.
I often wonder. Even if people stop buying, the feedback signal to a company can be very inefficient.
They might not understand where they went wrong and think they need to lower prices or something. Of course, that just leads to more pressure on working conditions.
This kind of thinking, looking behind the veil of money, has convinced me to stop using currency altogether, for now, for the most part. I still pay for web hosting and domains, I still buy bottled water for lack of better options, but for anything else like clothes, food, houseware stuff, etc., I've stopped buying altogether. Everything you buy carries a huge veiled cost of human health and lives, animal and plant health and lives, environmental damage, habitat loss, and so on. I just don't want to be complicit anymore. I wear the same clothes, and I pick up the clothes people leave in boxes on the street or go to churches. There is a glut of consumable goods and the charities are throwing tons of it away every day. Same goes for food, kitchenware, paintings, decorations. I've been told my great-grandmother used to say, "God gives you a day, and then food for that day." That is the approach I have taken. Went for a walk yesterday, found two paintings. One of them needed finishing, which I'm happy to do. For 3+ years, I have not used any "external" products like shampoo, lotion, cream, etc., not even soap, except occasionally buying a bar of dr bronners soap (paper wrap) and using that for laundry. Almost everything in that department, even the "organic" or "natural" or "eco-friendly" stuff, has a long ingredient list full of things I want to avoid both putting on myself and drinking, which is what's going to happen if I put them down the drain. Also, all of it fucks up the skin biome. I've not had any skin problems since I unsubscribed from them. And so on. I know it's not an option for everyone, but it's the only option for me, as long as I have a choice, to choose this way, and keep pondering how to do better every day.
I live in a city, so mostly from dumpsters. Tons of recoverable food is thrown out every day. Way, way more than I can figure out what to do with.
I've also gotten more into fasting and eating less, but so far, no involuntary fasting has occurred.
I've also become more social, so sometimes others share their food with me, even in these difficult times. Yes, they bought it with money, and fed the eco-shaver, but I think it's still less than if I'd done it myself.
Occasionally, I go to restaurants towards closing time, and ask if they have any leftovers they are throwing away.
A great book I read on all this is called "The Scavengers' Manifesto". I learned a lot from meeting others on the street and looking through the trash.
I've done a bit of foraging when in wilder areas, and I've seen places where people grow most of their food themselves, in small communities. I think this is the future.
I think what an FB exec is trying to decide is more analogous to "should we use child labor to make our shirts?" or "should we incur higher costs to run a humane farm?"
From my experience there are very strong currents in a group that are very hard to go against as an individual. Only very contrarian people will go against the grain in formal meetings with high-level executives or other individuals with status in the group. This is why big organizations are often able to produce decisions that the team behind them doesn't agree with and that look silly from the outside. Many people in such a team will not feel personally responsible because they feel like they didn't have any influence on the decision-making process, even if they could have said something. There are other dynamics at play, I think, but this is one of them. (The contrarians seem to not survive long in the corporate world.)
This dynamic is present in FB the website as well. You find clusters or groups of folks who re-amplify a point. It's so effective that you can find "Re-Open" rallies in your state driven by a shady "gun-rights" nonprofit. Even though polling largely supports the lock down and actions taken to curb the pandemic. You also find that outside the group people are a lot more nuanced and reasonable. It's fascinating. What is even more concerning is that a lot of bots drive this behavior.
I think the issue is that in the long term it dilutes FB. I know many people who don't post on FB, preferring Instagram etc... I know these are still FB platforms but it's a big shift. So FB will eventually become Usenet and effectively non-functional.
There's some type of social network that's between Instagram and FB that doesn't exist yet.
Also, IME, if you do say something, others jump down your throat quickly and viciously. I still remember this one former cow-orker and his words: 'they debate, they decide, we deliver': this project ended up losing the company millions and left it as a has-been in ecommerce because people chose to accept and support the utter insanity that was going on right in front of their faces.
> I cannot believe that everyone is ethicality challenged
No, but it's not always clear what the ethical choice is. In philosophy, this is known as pluralism [1] -- the fact that different people have irreconcilable ethical views, with no way to find any "truth".
That might seem like a lot of justificatory mumbo-jumbo, but there are genuine ethical arguments on all sides. For example, did you know that in the postwar 1950's, the lack of polarization and divisiveness in American society was seen by many as a major problem, because it didn't provide enough voter choice between the two parties? [2]
There are also plenty of ethical arguments that giving people what's "good for them", rather than what they want (click on) would run counter to their personal autonomy, and therefore against their freedom. This is what critics of paternalism believe. [3]
Then there's the neoliberal argument that markets always work best (absent market failure). That most of human progress over the past couple of centuries has resulted from companies doing what's most profitable, despite how non-intuitive that is. In that sense, Facebook doing what makes the most money is ethically right.
I'm not saying I agree with any of these -- in fact, I don't.
But I am saying that supposing there's some kind of obvious right ethical answer, and implying bad faith towards people at Facebook that they're somehow making decisions they genuinely believe to be wrong but making anyways, is not accurate.
> For example, did you know that in the postwar 1950's, the lack of polarization and divisiveness in American society was widely seen as a major problem, because it didn't provide enough voter choice between the two parties?
There was not a lack of polarization and divisiveness in American society.
The divides in American society and politics didn't map well to the two major political parties because there was a major political realignment in progress and the parties hadn't yet aligned with the divides in society.
The problem was the divide between the major parties not being sharp on the issues where there were, in fact, sharp, polarizing divides in society, preventing members of the public from effectuating their preferences on salient issues by voting.
In the 50s and 60s, there were really four parties, joined into two by coalitions. On the Democratic side, there was a social democratic, leftist faction, tensely allied with a Southern party (the Dixiecrats). On the Republican side, there was a pro-corporate but moderately liberal faction (the Rockefeller Republicans) allied with a harder-line conservative/liberatarian faction (the Goldwater Republicans).
Two things happened in the 60s and early 70s: the Goldwater faction largely took power in the Republican Party, and because the Democratic Party embraced civil rights, the Dixiecrats first flirted with independence (George Wallace's campaign) and then gradually switched parties, so now we have the oddity that there are people who fly Confederate flags but are registered members of the party of Lincoln. Many people who would have been Republicans in the old days are now the moderate/neoliberal faction in the Democratic Party.
So we still have four parties, they were just reshuffled. Now the tension in the Democratic Party is between the old FDR/LBJ new deal supporters, and their younger socialist allies, and the more pro-business neoliberals. On the Republican side it's between the business side (they don't care much about ideology, they just want to make money) and the hard-core conservatives.
> So are you saying polarization makes it easier for people to vote?
No, I'm saying that the description that polarization was absent is wrong.
I'm also saying alignment of the axis of differentiation between the major parties in a two-party system and the salient divides in society makes it easier for people to make meaningful choices, and feel they are doing so, by voting.
When there are sharp polarizing social/political divides, as there were over many issues in the 1950s, and they are not reflected in the divides between the parties (as they often weren't in the 1950s), then the government cannot represent the people because the people cannot express their preferences on important issues by voting.
I am sorry to say, this seems like a thoughtful answer but there is a lot of nonsense in it as well.
For example, pluralism doesn't state there is no way to "find truth", but rather that, in light of multiple views, we should have good-faith arguments, avoid extremism, and engage in dialogue to find common ground.
> but there are genuine ethical arguments on all sides.
These ethical arguments, however genuine they may be, are not equal; treating them as equal would be falling victim to the false balance fallacy, commonly observed in media outlets, or the "both sides" framing we have so unlovingly become aware of in recent times. The false balance fallacy essentially tosses out gravity, impact, and context.
> That most of human progress over the past couple of centuries has resulted from companies doing what's most profitable, despite how non-intuitive that is.
Despite the over-simplicity of framing it as companies simply doing what is most profitable, this is, in fact, extremely intuitive, and has been studied, measured, and observed. I am curious what you find unintuitive about it?
> But I am saying that supposing there's some kind of obvious right ethical answer, and implying bad faith towards people at Facebook that they're somehow making decisions they genuinely believe to be wrong but making anyways, is not accurate.
This view may be true in a vacuum, but it is irrelevant. We live in American society, and there is an American ethical framework in which Facebook's actions can be viewed as unethical. Other countries that have this similar issue have their own ethical frameworks in which to deem Facebook's actions ethical/unethical.
> pluralism doesn't state there is no way to "find truth"
To the contrary, that is literally what pluralism as a philosophical concept says. You can read up on Isaiah Berlin's "value pluralism" [1], for example.
> These ethical arguments, however genuine they may be, are not equal however
On what basis? Again, the entire premise of pluralism provides no method for comparison.
> this is, in fact, extremely intuitive
Many would disagree. You might enjoy reading [2], which explains just how hard it is for citizens to understand it, from the point of view of an economics professor.
> and there is an American ethical framework
Except there isn't, that's the point. For example, Republicans and Democrats obviously believe in deeply divergent ethical frameworks. And there's far more diversity beyond that. Plus there's no way to say that any American ethical framework would even be right -- what if it were wrong and needed correction?
> For example, pluralism doesn't state there is no way to "find truth"
Well, there are lots of different ideas lumped together as “pluralism”, but most of them not only hold that there is no way to find truth on the issues to which they apply, but that there is no “truth” to be found.
> We live in American society,
Some of us do, some of us don't.
> and there is an American ethical framework in which Facebook's actions can be viewed as unethical.
Sure, but there are many mutually contradictory, and often mutually hostile, American ethical frameworks, so that's true of virtually every actor's actions, and virtually every alternative to those actions.
> American ethical framework in which Facebook's actions can be viewed as unethical
I'm curious what you mean by this, because I'd expect the American values of independence and free expression to be counter to wanting Facebook to actively suppress divisive discourse. (Yes, I know the first amendment only applies to the government; the point is the spirit of the "American ethical framework")
The profit maximizing (shareholder value) argument is fairly recent.
At many other times, the concentration of wealth, and therefore power, was identified as a problem and actively mitigated. For example, the founding fathers of the USA were quite anti-corporate, and actions like the Boston Tea Party were explicitly so.
Nah. The founding fathers were the richest colonists, and George Washington was the richest of them all. It was some rich people opposing the richer people overseas from whom they were descended.
They didn't want concentration of political power, but they had the economic power. Interestingly, political power endangers them because it has the power to take away their economic power. That's the real battle still going on today.
Because it wasn’t concentration of power they were concerned with. They were only concerned with concentration of power against them (political power against their right to profit).
It was a selfish play, not a principled one. For example, slavery was written into the constitution. How the hell does that happen when all men (and no women) were supposedly equal? Slavery was enshrined as an economic and then a political right (the three-fifths clause).
Not all of them were for slavery but that was the end result of the document/of the competing forces at play. It institutionalized slavery in the new nation.
“According to those scholars who saw the root of Jefferson's thought in Locke's doctrine, Jefferson replaced "estate" with "the pursuit of happiness", although this does not mean that Jefferson meant the "pursuit of happiness" to refer primarily or exclusively to property.”
What has gradually happened is that personhood has been gradually extended to more and more entities (sometimes non human).
The colonists were ALL for maximizing economic power (pursuit of estate). They were ALL for limiting political power against economic power.
So this notion that colonists were against economic power is just wrong. Others may have held the notion but not as the colonists if you go by the Declaration of Independence and Constitution.
And if that is the case, then you have people taking both sides of the argument over a long period of time.... Pro economic freedom vs limits to economic power.
It isn't well recognized. It's just a debate/fight people have been having for a while.
It's this dynamic where some people want to treat each other as peers in some ways, because they are stronger as a group, i.e. united we stand, individually we fall. However, they tend to exclude others, since if you include everyone then there is no advantage (us vs. them, the other).
Also, the Boston Tea Party wasn't anti-corporate. It was against the tea tax to be paid to the government of England. "No taxation without representation". It was anti government-without-representation.
> I've always wondered how such discussions go in company meetings where some product/feature has harmful effect of something/someone but is good for the business of the company.
I mean, it's one thing if we're talking about something like an airbag, where harm can result from normal usage because of a design flaw. It's another thing to talk about the Ford Pinto -- where harm could happen due to accidents.
Does Facebook encourage division? Do ice cream ads encourage obesity? Or alcohol ads encourage drunk driving? (I get that Facebook's "engagement algorithms" are designed to maximize profit, and have the side effect of showing you things that are upsetting and frustrating... but that isn't their design. I'm no fan of "the algorithm", and don't think they should use it, but I think they should be free to.)
In this instance, I don't think it's fair to say Facebook has a "harmful effect". The abuse, misuse, and addiction to Facebook can be harmful, for sure... but that's not Facebook's fault. That's the end user's fault.
Should Facebook come with a warning label, like cigarettes? I don't think so. (I also don't think cigarettes should be mandated to come with images of people dying of lung cancer when alcohol can be sold without images of people with liver disease... but I digress.)
Everyone wants to "mitigate harm". But you need to be able to separate "harm due to malfunction", "harm due to accidents", and "harm due to abuse". This seems to be firmly in the third category, which is the least concrete and most "squishy" category.
Especially squishy, when "harm" is considered to be people saying and/or thinking the wrong things.
> In this instance, I don't think it's fair to say Facebook has a "harmful effect". The abuse, misuse, and addiction to Facebook can be harmful, for sure... but that's not Facebook's fault. That's the end user's fault.
Yeah, it wasn't me who posted this reply, it was the cells in my body. It's their fault... I think complex systems create effects that go beyond the individual parts. Facebook is running and profiting from such an 'effect' on society.
Their right to freely express their creativity by making the feed how they wish should be balanced with the large scale (negative) effects that appear in the system.
Facebook internal memo by Andrew Bosworth, VP
June 18, 2016
The Ugly
We talk about the good and the bad of our work often. I want to talk about the ugly.
We connect people.
That can be good if they make it positive. Maybe someone finds love. Maybe it even saves the life of someone on the brink of suicide.
So we connect more people
That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools.
And still we connect people.
The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.
That isn’t something we are doing for ourselves. Or for our stock price (ha!). It is literally just what we do. We connect people. Period.
That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.
The natural state of the world is not connected. It is not unified. It is fragmented by borders, languages, and increasingly by different products. The best products don’t win. The ones everyone use win.
I know a lot of people don’t want to hear this. Most of us have the luxury of working in the warm glow of building products consumers love. But make no mistake, growth tactics are how we got here. If you joined the company because it is doing great work, that’s why we get to do that great work. We do have great products but we still wouldn’t be half our size without pushing the envelope on growth. Nothing makes Facebook as valuable as having your friends on it, and no product decisions have gotten as many friends on as the ones made in growth. Not photo tagging. Not news feed. Not messenger. Nothing.
In almost all of our work, we have to answer hard questions about what we believe. We have to justify the metrics and make sure they aren’t losing out on a bigger picture. But connecting people. That’s our imperative. Because that’s what we do. We connect people.
I mean, he's not wrong. Facebook sucks because a lot of people are not-great human beings, and Facebook just allows you to see that. Oops. People might think that peer pressure would shame people into better behavior, but the concept of shame no longer exists in the post-modern world. Everyone feels justified in whatever they believe, and the Covid-19 situation on the platform couldn't be a more perfect example in illustrating the problem.
I say this from first-hand experience. I discovered that people I called friends were racist. I now consider those friends merely acquaintances, and I have since deleted my account. Better to just be ignorant of people's ignorance when I can't do anything about it.
I read that discussion as it was happening on the internal FB@work. Oh man, there were so many true believers replying about how this was so wise and inspiring. As far as I remember, no one questioned him. I wish I had posted that in a biological context: something that grows without bound or care for its environment is cancer. There is Boz, arguing that Facebook is cancer.
Cancer is just a specialized case of evolution that in many instances is turbocharged by genetic instability... essentially the biological form of 'move fast and break things'. This results in a very adaptive germline that handily outcompetes everything constrained by purpose, while also overcoming novel threats thrown at it by the greatest medical minds of our time.
If it didn't kill people that we love we'd marvel at its capability.
Is Facebook a 'cancer'? I think it's more of a cultural radiological device that exposes the cancer that's already there.
Even that is very handwave-y. It talks about "connections" and events, but not that the algorithm (in the broad, commonly-used sense) encourages and incentivizes that which builds "engagement."
>Nothing makes Facebook as valuable as having your friends on it, and no product decisions have gotten as many friends on as the ones made in growth. Not photo tagging. Not news feed. Not messenger. Nothing
Is this certain? The effects of useful features on growth are longer term and harder to measure than, for example, placing and styling friend suggestions in a way to confuse users into thinking they're friend requests.
Where does he bring up the subject of Facebook connecting people to the level of addiction? With the only goal of maximizing screen time (and dopamine) to sell more ads? It's not "connecting people", it's "addicting people".
It is as if a food bank for Africa were bragging that they feed the continent so well that 90% of Africa is now overweight, but that's good because they continue to "feed people".
> I cannot believe that everyone is ethically challenged
Right, so what assumptions are leading to the conclusion that this situation can only be caused by everyone being ethically challenged? Are ethics shared and absolute enough for the answer to this question to be easy or black & white? https://en.wikipedia.org/wiki/Moral_relativism
> Luckily I’ve never had to face such a dilemma
Are you certain about that? I realize you’re talking specifically about C-level execs debating something in a board room, but consider the ways that we all face lesser versions of the same dilemma. For example, do you ever consume and/or pay money for things that are generally harmful to society? Environmental concerns are easy to pick on since more or less everything we buy has negative environmental effects... ever bought a car? flown on an airplane? Smoked a cigarette or enjoyed a backyard fire pit? Bought anything unnecessarily wrapped in plastic? It’s really hard to make the less harmful choice, and a lot of people don’t care at all, so by and large as a society we put up with the harm in favor of convenience. As consumers, we are at least half of the equation that is leading to socially harmful products existing. If we didn’t consume it, the company meetings wouldn’t have anything to debate.
> "Boeing 737 MAX killed 346 people. So, it seems that death is not a deterrent."
I really don't understand your point, unless you're implying that there was a meeting where Boeing planned to kill those people. I am not an aviation expert, but what happened with the MAX seems to be a product of the certification process, urgent business needs, systems engineering issues, and bad internal communications at Boeing.
I haven't seen any evidence that someone specifically predicted the chain of events which would unfold on those flights, and clearly communicated the issue, then had executive(s) respond that it was 'worth the money'.
As an aside, I have seen quotes about the 787, which were similar to those in your linked article (mostly with respect to production quality issues), yet the 787 has not had similar accidents. One problem with working on such huge projects is that the line engineers do not understand that managers are constantly hearing alarmist 'warnings' which don't pan out. If 1% of Boeing staff give false alarms in a year, that means there are 1600 false alarms.
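To make that base-rate point concrete, here's a rough back-of-the-envelope sketch in Python. The 1% alarm rate and the 160,000 headcount are taken from the sentence above; the count of genuine issues per year is a made-up number for illustration, not a Boeing figure:

    # Back-of-the-envelope: how much signal is in the alarms?
    staff = 160_000
    false_alarm_rate = 0.01      # assumed: 1% of staff raise a spurious alarm per year
    genuine_issues = 10          # assumed: a handful of real, serious issues per year

    false_alarms = staff * false_alarm_rate          # 1,600
    total_alarms = false_alarms + genuine_issues
    print(f"alarms per year: {total_alarms:.0f}")
    print(f"share that are genuine: {genuine_issues / total_alarms:.1%}")
    # -> roughly 0.6%, so a manager treating every alarm as equally credible
    #    would be acting on noise far more often than on signal

Under those (assumed) numbers, even a manager who reads every warning is sifting through mostly noise, which is the point being made above.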
> I haven't seen any evidence that someone specifically predicted the chain of events which would unfold on those flights, and clearly communicated the issue, then had executive(s) respond that it was 'worth the money'.
People understand the consequences of what they say. I doubt that most people will say such statements out loud, even when they know they are true.
But, people knew and money was involved.
* February 2018
“I don’t know how to refer to the very very few of us on the program who are interested only in truth…”
“Would you put your family on a MAX simulator trained aircraft? I wouldn’t.”
“No.”
* August 2015
“I just Jedi mind tricked this fools. I should be given $1000 every time I take one of these calls. I save this company a sick amount of $$$$.”
I have read similar quotes about most modern aircraft development programs, yet aviation is quite safe. The fact you can find a few alarmists in a company of 160,000 is rather unsurprising.
Those quotes would be much more convincing if those employees put every prediction they ever made on the record, not just the ones that turned out to be sort-of right in hindsight.
From a manager's perspective, you can't listen to everyone complaining about being rushed, understaffed, and underfunded (because everyone looking to cover their butt in a bureaucracy does all three). On the other hand, you have to be on the lookout for credible issues.
If someone does not make specific and testable predictions which turn out to be right, they are useless alarmists. If you want to read about how to assess predictors (and improve predictions), I suggest you read: https://en.wikipedia.org/wiki/Superforecasting:_The_Art_and_...
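For what "assessing predictors" can look like in practice, here is a minimal Python sketch of the Brier score, the calibration measure Tetlock's superforecasting work leans on; the two example forecasters and their numbers are invented purely for illustration:

    # Brier score: mean squared error between stated probabilities and outcomes.
    # Lower is better; constantly predicting disaster scores badly if disasters are rare.
    def brier_score(forecasts):
        # forecasts: list of (predicted_probability, outcome) pairs,
        # where outcome is 1 if the event happened and 0 otherwise
        return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

    alarmist   = [(0.9, 0), (0.9, 0), (0.9, 1), (0.9, 0)]  # cries wolf every time
    calibrated = [(0.1, 0), (0.2, 0), (0.7, 1), (0.1, 0)]
    print(brier_score(alarmist), brier_score(calibrated))  # ~0.61 vs ~0.04

On a track record like that, the always-alarmed forecaster scores far worse than someone who only raises their probability when they have evidence, which is the distinction being drawn here.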
I did not present a false choice between two options, I only defined what an alarmist is. I regard alarmists as an extreme on the spectrum of forecasters.
Bifurcating would have been saying that everyone is either a superforecaster or an alarmist, and I never said that.
You may not agree with me, but that doesn't mean that I fell into a logical fallacy.
It's more that there were several meetings where issues were raised that would kill people if they occurred, and those in charge decided the risk factors were minimal enough that they could execute on the plan.
Nobody planned to kill the astronauts on the Challenger. Such a systemic failure to anticipate and manage risk correctly is a team effort and heavily incentive-driven. Putting incentives in place that reward risk-taking increases the odds someone will die.
I think I have a very different understanding of the root cause of the o-ring failure on Challenger than you do.
The common understanding seems to be that the managers decided to launch when the booster temperature was cold (though not necessarily out of limits), and some were warning that it may cause some unforeseen issues.
My read is that each limit in the operations manual should have been backed by a test to failure, or at least a simulation of what would occur if the vehicle was operated outside the limits. Such a process allows the operators to clearly understand what can go wrong, and why the limits are set where they are. This is what they did on the SSMEs, but not on the boosters (because they thought the boosters were fairly simple).[0]
Of course, no one planned it. But encouraging or demanding to take shortcuts is what caused it.
I have been in the software industry for 15 years and this happens all the time: being forced to release unfinished features, being asked to ignore security, backups, etc. I would imagine the same thing happens in other industries.
My understanding of the MAX issues is that the issues were not really shortcuts, though they might look that way in hindsight (because every mistake looks that way in hindsight).
From my non-aviation perspective, it looks like they basically pieced together a bunch of complex systems, with each team making a number of (different) assumptions about each system. The systems themselves were influenced by FAA requirements to maintain the old certificate, which meant that certain desirable changes were impossible, so workarounds were devised. The problems were due to misunderstandings about how the systems would work when assembled, and these issues were not discovered and/or communicated. It really seems like a systems engineering problem, aggravated by a number of external influences (including business reasons and certification).
There is no FAA requirement to maintain the old certificate. Boeing and its customers wanted to do that for cost savings.
It is supposedly costly in time and money to acquire a new rating, but it obviously has been done.
The airlines wanted a single pool of interchangeable pilots flying nominally interchangeable planes (their existing 737s and the 737 MAX). Supposedly one of the airlines threatened to take new business to Airbus and had penalties written into the contract to make the 737 MAX fly under the existing certificate.
So it wasn't the old certificate driving these issues, it was Boeing and its customers wanting to maintain the old certificate that drove the issues. That is a very large difference.
Perhaps my previous post was vague, but I meant 'FAA requirements [of commonality, required to] maintain the current certificate'.
The FAA may be in the right or in the wrong, but it has made certifying new designs almost prohibitively expensive and time-consuming; for evidence of this, simply look at the Cessna 172 (still in production on a 60-year old certificate), and what happened when Bombardier tried to put a new airliner into production.
You're definitely right that the airlines wanted interchangeable type ratings for crew, but the issue is slightly more complicated than you're painting it.
I never argued the old certificate forced the issues, the certification system just strongly incentivized 'upgrading' the 737. This was one of many causes.
Wrong. Boeing engineers raised concerns that were dismissed.
“Frankly right now all my internal warning bells are going off,” said the email. “And for the first time in my life, I’m sorry to say that I’m hesitant about putting my family on a Boeing airplane.” [1]
>>"I haven't seen any evidence that someone specifically predicted the chain of events which would unfold on those flights, and clearly communicated the issue, then had executive(s) respond that it was 'worth the money'."
In large projects like the MAX, there are always people raising concerns.
I think that's a really interesting question, but I think the answer is orthogonal to your dichotomy. In my experience, very successful projects depend on the great managers that know who to listen to in each different situation, and they know how people will react in each situation.
One of the best examples of this is Dave Lewis, who led the design of the F-4 Phantom II, one of the most successful fighter aircraft of all time. He directed the structural design team to design for 80% of the required ultimate load, because he knew that everyone was conservative in their numbers; then the design was tested. The structure ended up lighter than comparable aircraft, and the Phantom II had phenomenal performance.
This comparison is flawed in several respects. The most obvious is that cigarette companies spent decades intentionally misleading the public about the dangers of their product. This is not the same as just selling a potentially dangerous product, especially one where the dangers are so viscerally obvious as with a parachute.
If you use a parachute one time in case of emergency, yes, it is a life saving device that still has a high level of risk. However, I believe they were referring to the people that choose to parachute for sport/recreation rather than emergency situations.
But in the case of parachutes, it's not the device, it's the activity. I know it's splitting hairs, but it's important, especially when it comes to assigning moral responsibility to manufacturers.
It's kind of a combination of all of the above. The majority of employees are working for a paycheck and don't really care what goes on as long as they get paid. If the person is in an executive-type role, then their goal is to increase revenue, so they convince themselves that it's good for the company.
That's perhaps the greatest power of the corporation: it allows people to do shitty things without any specific person being at fault.
Executives have a "duty" to increase "shareholder value". It's not that they necessarily wanted to do X, but their hands were tied because the "data" clearly showed that X was best for shareholders. Plus, if X was so bad, it's really the government's fault for not making it explicitly illegal.
Shareholders aren't individuals either, they're mostly mutual funds, pension funds, ETFs, etc... that make algorithmic investment decisions. They didn't ask for X, but the funds they invested in will react to not getting X.
For the beta roles (because I can't help mapping wolf/pack behavior to most corp meetings anymore), about all a person can do is mount a weak defense, which gets ignored by upper mgmt as they justify ASPD with a framework that says the number one priority is the corporate profit statement.
What percentage of people in these meetings are so wealthy they can risk everything over morally gray area decisions like this? Further how many can get away with it repeatedly should they choose to fight a battle like this?
I've found that when people use "wolf pack" (or "caveman times") explanations, what they're actually doing is using social models that (surprise!) reflect the culture that created them: humans in the twentieth century.
I don't think this is a question of someone doing "sketchy" things. It's a question of someone in the room questioning a morally questionable action being implemented by a part of the organization as a whole. Quitting over it, or whatever, likely doesn't even have an effect. Someone on the team required to implement it is going to follow the boss's orders. This appears to have happened a few times with members of the US president's cabinet over the past few years.
So, it's more a "stay and fight" or "get rolled over and threaten/quit" decision. I'm betting most people just weigh the monthly mortgage payment against that; they raise the issue, but it doesn't get pushed beyond the discussion phase. If this goes on long enough, they switch jobs, or they become that person who just keeps their head down and does what they are told.
Part of this is a focus on short-term initiatives that are easy to measure and repeat. Boiling down billions of software decisions to a few KPIs seems short-sighted IMO but hey it makes money.
The Wolf of Wallstreet was a scathing critique of capitalist excess. To think otherwise is to consider a lifestyle where your wife hates you and you crash your car on quaaludes because you've got nothing better going on glamorous.
> Your film is a reckless attempt at continuing to pretend that these sorts of schemes are entertaining, even as the country is reeling from yet another round of Wall Street scandals. We want to get lost in what? These phony financiers' fun sexcapades and coke binges? Come on, we know the truth. This kind of behavior brought America to its knees.
My point is that we did find it entertaining to the tune of $0.4B, and that doesn't bode well for our general level of moral development.
Almost everyone is ethically challenged, we just need the right circumstances for particular expressions to emerge. The people who do right and wrong by you might be alternative persons under alternative scenarios.
The very poor and very rich are often placed in front of ethically interesting bargains, such as a trade of life for money, whereas Hacker News has trouble even daring to ballpark the dollar value of a life -- a middle class aesthetic where one has neither the resources nor the desperation to trade in flesh.
People tend to rationalize it as not that ethically challenging or by compensating through some other societal benefit.
I knew someone who ran a FB group that devolved into conspiracy theories and absurd levels of anger to the point that members of the group were lashing out at local politicians.
The group owner liked the power and influence so rationalized it as "increasing public engagement in politics." This person is otherwise a vegetarian who fosters animals and works in the medical field.
> I cannot believe that everyone is ethically challenged
The difficult ethical discussion probably never happens. The decisions being made in those meetings are usually seen as small/inconsequential. The problems caused by those "small" decisions are ignored. Eventually those problems become normalized allowing another "small" decision to be made. Humans seem to be very bad at recognizing how a set of "small" decisions eventually add up to major - sometimes shocking[1] - consequences that nobody would have approved if asked directly. Most of the time, nobody realizes just how deviant their situation had become.
For a good explanation of the mechanism underlying the normalization of deviance (as an abstract model), I strongly recommend this[2] short talk by Richard Cook.
I've been in that situation. I argued as much as I felt I could get away with and made the strongest arguments I could against unethical behavior. I was eventually forced out. A couple years later, the company was investigated by law enforcement and subsequently declared bankruptcy.
The people in control were the only ones pushing for the unethical actions, but most others were a lot more quiet than I was and several stuck around until the bitter end.
The discussions in this article are never shared with employees, it is just a matter raised in closed high level board meetings. Companies never discuss openly negative positions, and if they do it is only to dismiss them.
> I cannot believe that everyone is ethically challenged, only perhaps the people in control.
Seems likely that social media as an industry selects more strongly for unethical executives, presumably because online advertising is the only effective way to monetize social media and it is more or less fundamentally unethical. I imagine the same effect can be observed among tobacco and fossil energy executives--these are industries where there is no ethical monetization strategy, at least not one that is in the same competitive ballpark as the unethical strategy.
Online advertising as a concept is fundamentally unethical? I think you're speaking in hyperbole here. Stealing user data without consent (or with fake "here, read this 500-page legalese" consent) is unethical for certain.
But a bike blog putting ads for bike saddles on the bottom of their page to pay for their server costs and writing staff? Hard to see how that's unethical unless you think selling anything is unethical.
> Online advertising as a concept is fundamentally unethical?
No, I meant "online advertising as an industry". It's unethical to the extent that it depends on stealing user data, which presumably is the overwhelming majority of the industry by value (i.e., I'm assuming your privacy-respecting bike saddles ads don't account for even 1% of the industry's value).
I have an example. We built a feature that would be good for users. However, we found out that it would result in lost revenue. The decision of whether to keep the feature got bounced up the management chain. Eventually we were told to can the feature and that the decision was made at the very top. Keeping it would have affected quarterly revenues. So no go.
That showed me what kind of company it was. The decision went directly against one of the company’s supposed core values. This was not a small company. Don’t work there anymore.
Nobody thinks they are complicit but in reality we all are. Some can accept this while others let the cognitive dissonance drive their behavior in convoluted and hard to discern ways. Redemption only comes after accepting that we’re born of original sin. Anybody who supports or uses non-free software has worked to finance the amoral tech decision making that you’re decrying. Even Stallman makes compromises. Welcome to modernity.
Not at boardroom level, but I was in a couple meetings in past jobs where this happened.
In one case, people had different ideas of what's more ethical/user friendly; since we couldn't resolve those disagreements with more arguing, we went with metrics, and metrics have no morality.
In another case, everyone agreed that it was slightly shady, but it was a highly competitive market and we had to do it to stay alive.
On the bright side, if a company ventures too deep into bad practices, it will eventually lose the trust of the public. Which is why the capitalistic world hasn't descended into the complete madness portrayed in dystopian sci-fi films.
In my case, I told my manager about a system design problem that would cause a daily annoyance to 100k people, forcing them to input their passwords more often than necessary. He said, "they'll accept it." I said, "I quit."
Is it clear that echo chambers and polarized discussion are good for the bottom line? I imagine they help with user growth and user retention, but would people engaged in these polarized echo chambers actually spend more on advertised products?
I think what happened here is a little different than how you describe. For me, it seems they had a hypothesis, found support for their hypothesis, then changed its definition for speculative motivations with tangible harm.
> What kind of harm do you propose is the kind that should have pushback?
"Some 700,000 members of the Rohingya community had recently fled the country amid a military crackdown and ethnic violence. In March, a United Nations investigator said Facebook was used to incite violence and hatred against the Muslim minority group. The platform, she said, had “turned into a beast.”" https://www.reuters.com/investigates/special-report/myanmar-...
So why Facebook but not movies and TV over the air or streamed via other platforms? What, because it comes from studios and other sanctioned organs? Are they above propaganda and above having agendas?
I’m not saying FB is not culpable, but I’m saying if they are, then so are others.
Having an agenda is normal and is good. Everybody that plans for the future has an agenda. What is wrong is to have a "hidden agenda".
A "hidden agenda" is wrong because is a form of manipulation. When an organization has a "hidden agenda" means that they are lying to achieve a goal that they are hiding.
If a movie agenda is to "create awareness of human trafficking", and it shows how "human trafficking" impacts peoples lives, that is not "hidden" and it is actually an agenda that most people supports.
So, to have an agenda is intelligent, needed, common, awesome behavior. Stones have no-agenda, rocks have no agenda. To have a "hidden agenda" is what should be criticized.
Why will anyone think that to have an agenda is bad?
So, I work in healthcare - as a doc, and at various times, as an admin in healthcare centers as well as in health insurance. I don't know how much of that experience relates to FB's behavior, but I have some idea of what it's like to work in a field and be either called a hero or a devil, depending on the day. I am neither.
Deep breath.
As an industry, we are often doing things that are perceived to be evil. I've noticed the following:
1. Some of that interpretation is just wrong. People from the outside tend to have a poor understanding of what we do (providers, centers, insurers) and draw conclusions based on highly imperfect information. This is compounded by the fact that journalists have a terrible comprehension of what we do and an incentive to dramatize and oversimplify it - resulting in people reading the news and walking away misinformed and wrongly feeling like they're now educated on the topic. This happens a lot.
2. We sometimes do things, or want to do things, that have potential harms and potential benefits - e.g., in health insurance, I'd love to have had the ability to twist people's arms into coming to get a flu shot. It would have been a huge net benefit to their health. It would have been a net reduction in our costs. It would have been great! If we'd had the ability to ignore patient autonomy and force it, or carrot-and-stick it, we probably would have. We would not have conceptualized it as "ignoring patient preference," we would have conceptualized it as "preventing a bunch of preventable hospitalizations and deaths and, for the elderly, permanent consequences of hospitalizations." And that would have been true! And would have allowed us to not think about the trade-off so much. It's not lying to yourself: it's looking at the grey, round-edged parts of a cost-benefit analysis and subjectively leaning it in your direction. My motivation there isn't even about the money - the money just gets it on the radar as something my employer would be willing to prioritize.
3. Resource scarcity. I only have so many resources to allocate. One may benefit a patient X; another may benefit them 10X. If X benefits my organization and the 10x choice doesn't, I'll probably choose X. By itself I'm not choosing to do harm - I'm choosing a win/win. Enough decisions like that, in enough contexts, probably do give rise to net harm. But the choice isn't to do harm.
4. Not every battle can be a "will I burn my career over this?" battle. If I'd ever been faced with a choice that I thought was harm > benefit to patients, I would have burnt the house down over it. But I haven't. I've been faced with lots of little grey questions with uncertain costs and uncertain benefits where there was, in fact, benefit, and usually not just to us but to the patients too. I imagine that's where most organizations go awry: a thousand decisions like this, shaking out under the pervasive organizational need for profit. Like a million million particles of sand moved by the tide, settling out into an overall pattern due to gravity. I think the badness is generally an emergent pattern, not a single person choosing to do evil, or choosing themselves over causing harm to many. I've never been in that position, ever, so either my career is highly anomalous, or that's just not how those choices present themselves in real life. I suspect it's the latter. (Or, I guess, my being amoral is a valid third possibility.)
Capitalist systems sieve out people whose goals are at odds with the accumulation of capital. By the time you get to a boardroom, everyone has been tested hundreds of times for their loyalty to profit. All deviations are unstable: over a long enough period of time they will be replaced or outcompeted.
Not sure why this comment is being downvoted. The people who rise through the ranks are exactly the kind unburdened by ethical or moral issues that get in the way of the business generating revenue. In fact, such folks use the short-term gains from breaking such implicit expectations to catapult themselves ahead of their peers. As such, this kind of behavior is incentivized.
Those with such issues either quit or work in non controversial parts of the org.
Agreed. And it works between companies as well as between people within companies. The system is set up so that only those who push the boundaries and exploit externalities can compete.
I think there is a crowd that kneejerk downvotes ideas they interpret as anti-capitalist, without reading the argument.
An example: I am not a Marxist. But I think the Marxist question of "surplus value" as an ethical question is relevant and interesting. I pointed it out on HN a few times. Again, without being a Marxist, just intellectually curious. Nobody ever asks me if I am really a Marxist. I get downvoted pretty severely when I point it out. I get an impression that they smell a whiff of the opposing sports team and turn negative.
By an arbitrary definition of sociopath, invented by a researcher, that has little to do with the commonly accepted definition; he then used his broadened definition to build his career on the pillar of running around making surprising declarations about "sociopaths."
I'm really, really tired of hearing about the "sociopath CEO" numbers. They're not real.
I've been there at Google a few times and can imagine exactly how this went :/. The one time I can tell about is the blogger disaster [1]. The top leadership, spearheaded by the chief of legal, was basically ignoring everyone's logical arguments at the meetings, the town halls, etc. We kept coming to the mic and telling them that their ideas of what is and isn't sexual are arbitrary, as are anyone else's. They said "no, we have experts and we have a clear definition" (they didn't). We explained that post-facto removing content people wrote is cruel and unnecessary. They claimed "nobody would care or miss it". (of course they would). We told them that this would hurt transgender people, who used to find support in blogs of others going through the same life challenges and blogging about it. Those blogs would be banned under the policy. They said they had data that impact would be minimal. (They had no data). Normal rank-and-file people at google all knew the idea was a bad one. We fought hard. They scheduled an 8am townhall and announced it the day before at 9pm! We showed up anyways en masse! There was a line to the microphone!
They had microphones in the audience. I walked up and directly asked for the "data" they claimed to have showing no impact would be had. They claimed, and I quote, "we have no hardcore data" (the audience was laughing at the word choice given the topic). I said "well, then how can you claim to be making a data-driven decision?" Drummond answered that "we know this is right and we are sure." The town hall was a waste of time. Nothing we said was heard, and all they did was recite lines at us from the stage that made it look like either they did not understand what we had to say, or they were trying very hard to appear to not understand. Both sides were talking, but nothing we said seemed to change their minds. They came there to deliver a policy, not to collect feedback on it, despite claiming this was a meeting to discuss it. That was clear.
We did not give up. Google's TGIF was the next day. A number of people came there early and lined up at the microphones, ready to bring this up again and again, in front of the whole company and the CEO as well (Larry and Sergey were not at the town hall and claimed to have not heard of the policy until "the ruckus started").
I guess they saw the large line of people and relented. Before the scheduled TGIF began they announced they will reverse the policy.
This was a rare victory, for this sort of a situation. I am willing to bet that there are lots of good people at facebook who also fought as hard or harder against this. They just probably lost. Having seen how this plays out internally, I am not surprised, just sad.
To anyone at FB who fought against this, I send you my thanks!
They quit. The process selects for the most sociopathic because the fitness function is heavily weighted towards bringing profits in the short term. Ethics are only a consideration to the extent that they affect public perception (hence profits) or safeguard against litigation ( protecting profits).
I'm ethically challenged if I think the biggest (or at least up there) forum of public discourse shouldn't be micromanaged like a day care, with "divisive" people sent to time out? Is it unthinkable to you that some people value free expression over being protected from negativity?
I've typically found my employment via companies who deal with a variety of contracts, some of them for weapons or defense contractors.
I could go down the rabbit hole of chasing down all those contracts and would probably find that many of the products my company makes get sold to groups and causes that I don't support. But in the end; I've gotta eat.
Do I want to throw away my career, which is 99% unrelated to the SJW cause I support, just because 5% of our products eventually get used against that cause? What about the 95% of our products which go to worthy causes?
I'll say it again... I just gotta eat, man. What's good for the gander is probably good for the goose too.
Are all your employment options equally in the moral grey area? Or did you just not want to think about it?
Look, do what you want, it's your life. I spent a decade working in defense and now I don't. Some times were uncomfortable. I hope you keep your eyes open when making decisions to avoid some of the discomfort I've felt in the work I've done.
While that is true, I've worked in manufacturing environments with high tech equipment. This manufacturing equipment is so sensitive it gets covered with tarp during dog-and-pony shows. We are using equipment and techniques in the USA that other nations could only dream of implementing. Why do you think most airplane manufacturers are located in the USA? Don't you think an airline would buy aircraft engines from China if they could?
Keeping America on the forefront of technology has its benefits. If we don't invest in cornering these technologies; our adversaries will.
Unfortunately, the same technology that has kept us in the Middle East has also been a forceful deterrent that safeguards all Americans.
Products that may be sold to terrorists include canned beans and Toyota trucks. Your situation might actually be less morally compromising than the Facebook stuff being discussed, because in their case they are the "questionably motivated 'freedom fighters,'" (i.e. they're directly doing the morally questionable stuff) whereas you're just selling stuff to a broad market that may include questionably motivated "freedom fighters." It's sort of the difference between selling lockpicks that may eventually be used in a burglary or might also be used to get Grandma's safe open, versus breaking in yourself.
Forgive me for the bluntness, but nobody with any set of technical skills "gotta eat" by supporting those kinds of efforts. I've worked to practice what I preach, too; I've consistently worked in do-no-harm jobs. I make rowing machines today, and the worst you can hang on me from the past is that I had a daily-fantasy-sports site for a client for a while (which I'm not proud of, but it's a pretty venial sin)--and I have made more than enough money to do very well for myself.
Are you actively looking for employment elsewhere so that you can transition away from supporting harmful causes? Or are you using the excuse that you have to eat as a reason not to do hard things in your life?
I have used that excuse myself. I'm trying to get better at not using it.
As actors in the world, we are machines that turn sensor data into a linear stream of actions. To the extent the decision process is not completely random, there exists a metric that ends up maximized by the decision process, sometimes referred to as 'god' or even 'God'. The vast majority of economic decision processes in the modern economy are driven by one metric: money, sometimes referred to as 'Mammon'. A corporation is an aggregation of human / computerized actors that work to maximize the corporation's metric: money earned by said corporation.
The discussions are very simple: course of action A makes us X$, course of action B makes us XXX$. Therefore course of action B is taken. There is no consideration of other effects besides, perhaps, a quantification of risks. Risk of losing the 'good guys' facade, counterbalanced by PR expenses, or risk of being sued, counterbalanced by legal expenses.
> Facebook policy chief Joel Kaplan, who played a central role in vetting proposed changes, argued at the time that efforts to make conversations on the platform more civil were “paternalistic,” said people familiar with his comments.
Imagine an average level of civility in a society `C`.
Lets say users of your product, due to your product, operate at `0.5 C`.
Is changing the product so they operate at a higher `0.75 C` or back to `C` "paternalistic"?
Why?
I can see the argument for moving `C` to `1.5 C` as paternalistic. But when you're already actively affecting `C` in one way, why do we moralize about moving it the other way? What makes down OK, but up BAD?
You keep on saying "paternalistic" as if it's a bad thing. I left another comment in this subthread suggesting it is not.
Yes, some people are wrong and some are right. With government, there are basic freedoms that allow people to be wrong, and not to be incarcerated or unduly burdened by government policing thought.
But society? Facebook? Even government messaging ala "The Ad Council"? Yes, absolutely, to hell with disinformation, trolls, and toxic platforms.
Honestly, this does give me much more confidence in Facebook's internal governance, even if the platform often bows to media demands.
Much to the chagrin of many on HN, Facebook is, and has been, a fairly open platform to people of all convictions, backgrounds, and political stripes, even if it has been unsteady-handed at times. This, as well as its sorting algorithm, may well be contributing to the collapse of institutional trust and cultural balkanization of the western world.
To a progressive liberal or political moderate who directly benefitted from the economic and technological booms we've experienced over the last 30 years, this is upsetting, because the global order (and its associated stability) from which they've benefitted, and which brought us to where we are, is disintegrating around us.
To me, the hand-wringing about Facebook's relatively hands-off approach to the political dialogues on its platform is just resentment about the loss of a prescribed cultural narrative and familiar cultural coalitions, the collapse of which has given every stakeholder in their nation's future an opportunity to speak up for their own convictions and interests, in hopes that theirs will be the dominant narrative of the new political landscape.
I wish these dialogues and factional aggregations were occurring on a more federated network. But so far as centralized platforms go, I can't think of any company more fit (that's not a compliment, but a lament) than Facebook to host them.
Fake news (the actual kind), name-calling, absurd conspiracy-theorizing, memes that remove all nuance from complex issues, botnets that amplify anti-science/anti-intellectual nonsense ... aren't political dialogue, they're the breakdown of it.
> Fake news (the actual kind), name-calling, absurd conspiracy-theorizing, memes that remove all nuance from complex issues, botnets that amplify anti-science/anti-intellectual nonsense ... aren't political dialogue, they're the breakdown of it.
I understand the disdain for all of this, but this has been the state of American media for the vast majority of its history. Ben Franklin et al would find the anomalous post-Cold War period of journalistic "neutrality" to be unrecognizable. Fake news, name calling, absurd theorizing have been society's way of communicating about issues since the dawn of the printing press — or in other words, "political dialogue".
They're indicative of a collapse of consensus on the part of society. It could well be argued that prior to our current political era, especially in the US, the political domain was largely constrained to a discourse on cultural aesthetics, wherein the Democrats and Republicans argued over trivialities (in a broader national, not individual, respect) such as abortion, marriage, and immigration, while operating on an implicit consensus concerning foreign policy and a functional stalemate over the size of the state, farming out many of their policy decisions to think tanks, corporate donors, and well-established bureaucrats within our regulatory bodies.
America's role as international security guarantor, its trade policies, and its government's role in domestic affairs was never really up for debate, and it only really changed stepwise in a stochastic manner, responding to situations and incentives day-by-day with no conscious consideration to the role of America or its state on a broader scale.
What we're seeing now is large portions of the population coming to realize that the existing bipartisan components of the political consensus (which I believe to be a legacy of the Cold War) no longer serve their cultural or economic interests.
This process is naturally fractious, chaotic, sometimes violent, and full of dirty tricks, because politics isn't just about flavor of the month policies anymore. We're in the process of reinventing who we collectively are, and what we want to be. As a result, we're running across real, fundamentally irreconcilable political and moral differences that have been buried for decades, as well as confronting the failures and controversies of our past.
Many of those fundamental disagreements settle neatly along class, racial, and professional boundaries. Others, not so much.
Science denial and anti-intellectualism is the natural result, because much of science communication has become a carrier mechanism for policy prescriptions predicated upon society operating under a specific ideological consensus, when in fact someone of a different political persuasion might objectively consume the scientific data and come to a different policy conclusion based on the same data.
For the less educated, who encounter proposals from scientists they consider to be politically unworkable, and which might rightfully be considered manipulatively framed, it is easier to reject entire specialized fields of research out of hand than to investigate further and attempt to conceive of alternative proposals because they lack the tools to engage with the information effectively to begin with.
All of this is messy, but it constitutes a real political dialogue on the part of society.
Normal society encourages civility by offering inclusion in a needed, physically near social group. Digital society disincentivizes civility by offering a multitude of alternative groups.
A community based on geographic locality is the prerequisite for a functioning society and state. Anything that polarises that geographic community ultimately damages the state by attacking its precondition.
Facebook admits to doing large-scale emotional manipulation of its users. They published a 'scientific' paper where they showed that they tried and succeeded to make 1 group depressed (hundreds of thousands of people), and 1 group feel happier (also hundreds of thousands of people).
They psychologically manipulate people into depression, on purpose.
Facebook is not "just" an extension of open society. Facebook is a specific powerful corporation that makes immoral decisions to emotionally control their users.
There are a lot of sources to read, including follow-up papers by other teams that evaluate if Facebook had "informed consent" (they did not) to emotionally manipulate their users.
Yes, a company which owns large chunks of India and has a well-used private army numbering in the tens of thousands, is a great analogy for a social-media company. </s>
The East India Company was responsible, at least in part, for tens of millions of deaths in various famines, and to equate the two fails both by being ridiculous (Facebook is not a private empire with an army) and by trivializing the actual damage done by that institution.
In much of South/South-East Asia, for many people, Facebook is the internet. (And remember Facebook Zero? Facebook was aware of and tried to engender this).
A staunch defender of the EIC would claim they were "just" engaging in mercantilism and facilitating the exchange of goods, and the war and deaths were just unfortunate side effects. Facebook is "just" engaging in connecting people and facilitating the exchange of information, and stoking violence and racial conflict are just unfortunate side effects.
You're not going to convince me (or hopefully, anyone) that an institution with an army that actually goes about the business of conquering and killing people, has any moral equivalence with a misguided (and I'm not contesting, destructive) social media company.
We can say that things are bad, while at the same time admitting that in the past, people did far worse things. It's a new, different, less-bad-but-still-bad, thing. It's OK.
Defenders of the EIC at the time surely said "yeah some bad stuff happens but think about the squalor the average Indian lived in prior to the Englishman coming in and bringing great wealth to their country. Think of the untold famine and poverty we're helping ameliorate by bringing western Christian ideals and wealth to a primitive people.
How DARE you compare some unfortunate incidents of the EIC to the human misery that existed before the Brits arrived, you're being ridiculous!! "
People have always been able to use motivated reasoning to explain away the terrible externalities of their choices when there's a shitload of money on the line.
FB has been a tool to aid genocide, and they've contributed to incivility in societies throughout the world while cashing checks. But they don't want to appear "paternalistic", of course, so it's fine.
>The high number of extremist groups was concerning, the presentation says. Worse was Facebook’s realization that its algorithms were responsible for their growth. The 2016 presentation states that “64% of all extremist group joins are due to our recommendation tools” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms: “Our recommendation systems grow the problem.”
They're responsible for 64% of extremist group joins. Is trying to change that number to 0% paternalistic?
I assume I'm currently not responsible for any extremist group joins. Am I being paternalistic by not pushing people toward joining extremist groups? Is it only paternalistic if you first find yourself responsible for some extremist group joins, and then try to lower that number?
Isn't it preferable to be somewhat paternalistic when you have paternal amounts of power over your userbase? It's not like giving up the power is on the table.
There is of course the well documented problem of moderation - it inevitably turns into an issue of a subset of the users vs the moderators. Facebook gets by pretending to be neutral "platform providers", but they actively optimize for their benefit. They are about as neutral as a bathtub salesperson on water heaters.
This whole idea that they don't have control only stands because of the indifference of their users. I can only hope it eventually falls and that the next grand experiment in mass social interaction is a lot gentler on society.
That isn't a bad thing. We are constantly influenced by design and society. It's going to happen. And in Facebook's case, with respect to Rush: "If you choose not to decide, you still have made a choice". Choosing not to build a user experience that disarms unnecessary conflict, or that can limit disinformation, is a clear choice.
The idea of designing human interaction and government policy with the knowledge of how humans react is not shocking or new. Heck, the "Pandemic Playbook" from the CDC continuously references group behavior when discussing how to communicate facts to the public. For example: If you tell people to stay home on day 1, the public may doubt or tune out your advice. So what do you do on days 1-3 so that on Day 4, government advice is heeded? Get private companies on board, ramp up voluntary advice for some time, before letting the big news fall.
If you'd like to learn more, check out Nudge by Cass Sunstein [1]. And another book by the same man, specifically covering the ethics of governments using the technique. [2]
"If two members of a Facebook group devoted to parenting fought about vaccinations, the moderators could establish a temporary subgroup to host the argument or limit the frequency of posting on the topic to avoid a public flame war."
Most of the suggestions they considered were fairly modest product design choices that probably would improve user experience. To call these choices paternalistic is a stretch.
Also, the platform is already paternalistic - it polices nudity, pornography and a range of other legal content.
Unless Joel is advocating allowing nudity on the platform then he is just blowing smoke. Facebook is inherently paternalistic and Joel Kaplan is a right-wing hack.
I don't think this makes sense. It works off of assumptions that are clearly untrue.
1. Consequences of language on the internet are equal to those in person
2. Network effects
For 1. If somebody on the street comes up to you and says "hey, I'm going to come beat up your family," then at a bare minimum the cops are being called and it is taken somewhat seriously. On the internet, though, it is a reality for many people (especially women) that there are no consequences for such horrible language and communication. Also, people make different decisions about certain types of language in real life: I don't just go around swearing in real life, but people are way more offensive on the internet. There are physical realities that don't map to the internet, which causes different communication patterns online.
For 2. When it comes to spreading disinformation through idiots sharing links to each other, the effect is much more pronounced than when a conspiracy theorist goes out to a street corner and starts shouting ideas at people or has a million signs. It's clear in the latter case they might have a few screws loose; however, in the former, everybody's "opinion" seems equal, but we can't use our other senses to vet them, and b/c communication is slow/unclear on the internet, we also can't have a protracted conversation to figure out what their ideas are and where they come from (something you can easily do in person). This then causes really bad ideas to spread, because people have lots of connections on facebook and there is no good way of vetting people or ideas.
The idea to not be "paternalistic" only makes sense if you think that communication on the internet is equivalent in every way to in-person communication, which is fundamentally untrue. The only reason they don't do this is b/c they don't know how to solve this problem for N countries generically and don't want to be held liable for a policy that makes sense in country A, but not in B, and causes potential legal issues.
News organizations present a limited, curated view from fact checked, verified sources. The information flow is mostly one way, from the news organization to me.
A social media news feed might present the same underlying story to me, but via some opinion blog that has not fact checked it or verified sources. It might also come with assorted speculation by the poster, ranging from wild-ass guesses to outright insane conspiracy theories.
And social media is designed to get me to offer my opinion on it, and to see other people's opinion, and for all of us who read it to discuss it in a semi-pseudonymous free for all.
The news organization approach is much more effective if the goal is to actually inform people about the negative event.
They even have the same business model, in which users are not the customers. If you are Sylvester McMonkey McBean, you do not want to place ad impressions in groups of star- and plain-bellied sneetches who share an interest in underwater basket weaving. You will happily spend to place impressions for star-on machines among groups of plain-bellied sneetches, and star-off among star-bellied.
That sounds right. Fear and conflict drives higher engagement. Although it makes business sense to chase higher engagement, I wonder how much of people's distrust with Facebook the brand is just a reflection of how people feel when engaging the product.
I think this is perhaps an example, on a grand scale, of why you (generally) can't use technology to solve cultural problems.
Before the Battle of Trebia, Hannibal wanted to set up an ambush for the Romans. He gathered 200 of his best troops together, and told them (a) that they were squad leaders; (b) they each should pick 10 of their friends to form their individual squads; and (c) the plan of attack.
A modern person might ask: "Well, why didn't he just gather ~ 2000 soldiers together and communicate the plan of attack directly? That must be better than passing a battle plan via a game of telephone".
The simple answer is that, before electronics and without a purpose-built theater, you really could only speak, directly, with about 200 people at once, because that's as far as the unaided voice can carry.
Many inventions and societal changes over the course of the past 2,000 years have accelerated the flow of information. Printing presses, movable type, widespread literacy -- I doubt that many of Hannibal's troops could read! -- postal systems which eventually spread across the globe, telegraphs, telephones, fax machines...
Each of these increased the speed, range, and coverage of communication to a varying degree, and each had a profound impact related to the scope of that change.
Humanity is only about a decade or so into a world where J. Random Person has the potential, through viral spread, to communicate with the full extent of their social graph, and to find like-minded actors.
Compared to before, this is a massive change, and if you weren't old enough to understand The World Before The Internet, it's hard to grasp just how massive of a change this has been.
Our cultural and legal mechanisms simply haven't caught up yet, nor have they adapted to the new evolutionary tempo.
This is also why I'm never particularly concerned about somebody popular being kicked off a platform. The ability to quickly go from being unknown to having millions of views, distributed using somebody else's servers, paid for by somebody else's money, is unprecedented in human history. If Youtube and Facebook started banning people en masse, the degree of free speech in our society would at worst regress to about what it was in early 2000s, hardly a dystopia.
This is a great example! Thanks for writing it up. I appreciate especially the 200 number for unaided speech, really puts the power of technologically aided communication into perspective
People always blame Facebook when the existence of Internet forums has always led to radicalization of individuals. Facebook's crime is making forums accessible to all.
These are just your fellow people. This is how they are in the situation that they're in. So be it. Let them speak to others like them.
The cost of that is many angry people. The benefit of that is that folks like me can find my people. That benefit outweighs the cost.
> People always blame Facebook when the existence of Internet forums has always led to radicalization of individuals. Facebook's crime is making forums accessible to all.
If it were only that, I would have a hard time assigning blame to Facebook. However, it is not only that. Facebook exercises editorial control through its recommendation engine. Users don't see all posts in chronological order. They see posts ranked by Facebook based on invisible and inscrutable algorithms that are optimized for engagement.
It just so happens that making people angry is an effective way to keep them engaged in your platform. Thus it's not fair to call Facebook a neutral party if they're actively foregrounding divisive content in order to increase engagement.
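To make that concrete: here's a minimal sketch in Python of the difference between a chronological feed and an engagement-ranked one. It's purely illustrative, nothing like Facebook's actual ranking code, and the predicted_engagement field just stands in for whatever score their engagement models produce.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Post:
        author: str
        posted_at: datetime
        predicted_engagement: float   # stand-in for whatever the ranking model outputs

    def chronological_feed(posts):
        # A "neutral pipe": newest first, nothing hidden or promoted.
        return sorted(posts, key=lambda p: p.posted_at, reverse=True)

    def engagement_ranked_feed(posts):
        # An engagement-optimized ranker: whatever is predicted to provoke
        # the most reactions floats to the top, regardless of when it was posted.
        return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

    posts = [
        Post("aunt",   datetime(2020, 5, 26, 9, 0),  predicted_engagement=2.0),
        Post("friend", datetime(2020, 5, 26, 12, 0), predicted_engagement=1.0),
        Post("page",   datetime(2020, 5, 25, 8, 0),  predicted_engagement=9.5),  # outrage bait
    ]
    print([p.author for p in chronological_feed(posts)])      # ['friend', 'aunt', 'page']
    print([p.author for p in engagement_ranked_feed(posts)])  # ['page', 'aunt', 'friend']

The point is only that the second ordering is an editorial choice, made for the platform's benefit, not the user's.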
I'm sympathetic to this position. I've heard people say the same about YouTube and I don't have a concrete position on this.
On one hand, if someone were to tell me "The Mexicans are ruining America" and I were to say "Damned right! Who else do you know who says these great and grand truths about America?" I would expect that person to introduce me to more people like them and my radicalization and engagement would increase out of my own desire to have more of this thing. That aspect of Facebook's recommendation engine just seems like a simulation of a request for more like what I want in a very obedient manner. That is, the tool is actually fulfilling what I am expressing I desire.
On the other hand, the inputs are inscrutable and not clearly editable. For instance, suppose I look at myself and say "God damn it, some of these things I'm saying are really bigoted. I don't want to be like this", I cannot actually self-modify because there is no mechanism on Facebook to modify the inputs. It'll select for me the content I have these auto-preferences for but not the ones I have higher order preferences for.
Essentially it's a fridge that always has cake even though I want to lose weight.
So, yeah, I'm sympathetic that I cannot alter the weights on my recommendations and say "I want you to understand the person I want to be. Stop reinforcing the one I am now."
Certainly the recommendation engine is a flaw. I do like recommendations though and that's my favourite way of browsing YouTube in the background. It's pretty good at music discovery. So, perhaps it needs to be only opt-in. Imposed by choice rather than by default. It still has to be possible to turn it off.
Even then, I'm not sure. This is an ethical question I've been thinking about for ages: Is it ethical to allow someone to make a choice that could be detrimental and that they cannot recover from? What are the parameters around when it is ethical? Opting in to recommendations could be a one way trap.
The difference is that facebook is unlike a forum. It's not actively moderated, and content is bumped according to engagement/marketing potential rather than chronologically by genuine user interest alone.
I don't think an open society can be built on top of an advertising platform. Facebook is not a neutral party here - they control who sees what content at what time with little accountability or transparency.
Everybody who uses Facebook should spend only about ten minutes on it. Catch up with the important things friends are doing and leave.
Unfortunately, this behavior is not in Facebook's best interest. For them, it's Facebook now, Facebook later, Facebook as far as the eye can see. Everything is Facebook.
There is a premise to this article that needs to be called out and expunged. I have come to the sad conclusion that Facebook is a company that should not exist. It's laying waste to huge sections of the economy that used to provide valuable, informative content, it's in a battle to suck your entire day away from you with streaming and other services, and its premise is in direct contradiction to how we know societies evolve. You can't start with "how do we fix it" and end up anywhere good.
They're not dummies. There might be a lot of happy-talk, echo chamber discussions happening inside the company, but they know the score. That's why they're picking political winners and losers. I imagine there's a ton of money heading out to both parties to provide cover over the next few election cycles.
I think looking back, if we manage to navigate our way through this period, it's going to be viewed as a very sad and dark time, much like the dark ages. I sincerely hope I am completely wrong about all of this.
This entire thread, discussion and the article in focus make me so relieved. I'm so proud of my decision to quit facebook, twitter and reddit altogether. There is soooo much less noise in my life. I'm finally reading books and enjoying my hobbies while still getting what 'I' like from the internet - RSS feeds to give me the latest and most popular developments in news without any user generated comments, 1-on-1 messaging services to help me stay connected with my loved and dear ones, and an occasional tour of websites like HN and my favorite blogs from the bookmark folder. I do not want the reader to assume my model is perfect; it's subjective. But that's the point - it is what I make out to be the perfect browsing model and intended use-case of the internet for me. Another minor point: ever since I moved away from reading what 'people' have to say in comments, it decluttered my mind.
The internet is what you make of it. I let it direct how I used it, and getting myself away from that grip and 'sucked into' environment is a blessing.
I'm no fan of Facebook. But for what it's worth, back when I was still using it in ~2010, it helped me learn a lot about the worldviews of people on the opposite end of the political spectrum who I rarely if ever interacted with in person. The mechanism for this was Facebook Groups - I'd hang out in climate change denial groups talking to denialists and asking them questions. And although it didn't change my mind and I didn't change theirs, I (and my, err, opponents) both actually learned a lot and came to see the other side as more honest and less irrational/evil than we once thought.
I don't know if Facebook still serves this purpose today.
It's less like that anymore. Group raids involving post reporting became a huge issue a while back, so most political pages use membership application questions requiring you to positively affirm or signal in-group association before joining. Nothing prevents you from lying to get into a group, but it's oddly effective as a mechanism for preventing partisan opponents from engaging in any dialogue.
I think a big part of the shift of interactions over the last 5 - 10 years is the communication platform (Facebook, in this case) bringing in new users who had zero experience debating in a text-only format. It’s probably inevitable, unless the platform tries to educate and heavily police new users on what proper behavior is.
Facebook was incentivized to grow as fast as possible. Comments and discussion were one of many vectors for growth; photos, news, and silly images were just as important. The quality of all that wasn’t as important as the content coming from people you know and trust.
Contrast that with a community like HN, where quality of comments and content is much more important, since you have little to no trust for almost all people submitting content.
Facebook and other similar systems reward engagement. Engagement happens when people are surprised. Surprise happens when people come across new apparent "information". New information is most easily propagated through the use of lies.
It follows pretty clearly. If they don't want divisiveness, they have to either step away from rewarding engagement, or they have to stop people from lying. They're in a bind, except it's society that is bearing the cost.
It just feels like weaponized Usenet from the mid-'90s, or almost every popular online forum since then. Multiplayer game communities even. They're like tinderboxes for negativity. Very small numbers of bad faith actors (griefers, trolls, scammers, spammers, or just plain assholes) can trivially derail entire communities. Even without people trying to screw everything up, plain old human nature, and the nature of electronic communications, can make it happen as well. It just takes a little longer.
Put another way, each flame begets one or more flames, whereas each good comment might get responses but maybe it stands on its own. Over time the signal to noise ratio of any forum tends to degrade to nothing as the forum becomes more popular because of this. Moderation, scoring systems, etc. can ameliorate this but in general the less specialized the forum, the worse it is. It's like entropy in that it only goes in one direction, it's just a matter of time and how much you can push back on it. Bad comments beget more bad comments, but good comments don't necessarily beget more good comments. And at some point, the ratio of bad comments to good comments drives away any potential good commenters and the event horizon is crossed and the forum dies. Or it lives on as a cesspool for whatever.
The difference between Facebook and Twitter in 2020 vs comp.os.linux (or whatever) in 1995 is that it's not specialists screaming at each other about which distro or programming language or OSS license is best (or worst). It's a much wider net of far less informed or rational people, encouraged to argue about infinitely dumber and less knowable or debatable stuff. It's like scammy clickbait, but for arguments rather than clicks. The other difference between Facebook and Twitter in 2020 vs online communities of the past is that Facebook and Twitter make money off of it. All this BS fuels "engagement" and keeps larger volumes of people posting and therefore revealing themselves to trackers and creating a stream of ad views for the platform owners. At some point I do think the toxicity of the platforms will start costing them users, but that doesn't seem to be happening anytime soon.
Why does facebook need to do anything about this? People have been disagreeing with each other violently or otherwise for as long as humans have existed. Do they think they can do anything about this?
There has never been a mechanism whereby everyone can be against everyone else about everything.
When my high school english teacher and my aunt are arguing about politics and they've never met each other, it's clear this is a new development in human conflict.
Why should we care about having true beliefs? And why do demonstrably false beliefs persist and spread despite bad, even fatal, consequences for the people who hold them?
The outrage towards Facebook causing divisiveness is a red herring. You want to see divisive content, go to foxnews vs cnn. Pretty much the entire media is partisan and biased towards their constituents' points of view. For Facebook, it would be nice if they stuck to showing whatever is posted by a user's friends or organizations they like/follow without much curation, but my view is that their impact on divisiveness overall is minuscule.
Are we are going to have to wait for a generation to die and for millions of lives to be lost (indirectly, say, through a demagogue's botched response to a pandemic needlessly leading to the infection of millions) before the average person is comfortable using a protocol (say, ActivityPub and RSS) instead of these parasitic for-profit platforms?
As long as the search for truth is burdened with advertising on platforms democracy and freedom are doomed.
If you don't see these things are linked, then you're part of the problem.
This is why twitter and facebook don't have dislike buttons. By removing a quick and easy way of voicing dissent to a point, people take to the comments to verbally punish others. For a site that is dependent on user engagement, anger/outrage/frustration/negativity in general is a gold mine. I remember when reddit tried removing the downvote button, the comments got NASTY. They back-pedaled very quickly from that decision.
"Worse was Facebook’s realization that its algorithms were responsible for their growth. The 2016 presentation states that “64% of all extremist group joins are due to our recommendation tools” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms: “Our recommendation systems grow the problem.”"
Then:
"In keeping with Facebook’s commitment to neutrality, the teams decided Facebook shouldn’t police people’s opinions, stop conflict on the platform, or prevent people from forming communities."
Does not compute.
How can they claim to be neutral about the very problem they themselves created?
There's a lot of daylight between proactively accelerating extremism and censorship. This is not a binary choice.
I'm right alongside Kara Swisher on this topic: Facebook's leadership team is apparently incapable of nuance, self awareness, or acknowledging culpability.
I think NYT has set a decent example with how to deal with internet comments sections. I like the idea of a US House of Representatives type approach to comments where every person in the house is given an equal amount of time to address the house so you can hear all perspectives.
The way NYT has done this is by introducing "Featured Comments". A team at NYT, presumably ideologically diverse, picks insightful comments to feature out of all the comments. You can still view comments sorted by number of recommendations, but they default to the Featured Comments.
The web forum that I think needs this more than any other is the r/politics subreddit on Reddit. Someone please let me know their experience, but I don't think the comments on highly upvoted content are insightful at all. A lot seek to exacerbate and misrepresent, which IMO adds fuel to the flame wars.
It’s hard to tell why automated feeds are treated differently to manual feeds. If a blogger repeated a libellous claim, they could be sued, even if their primary activity was curating other people’s opinions. Why companies that run algorithms that do the same thing at scale get a free pass is hard to fathom. These aren’t dumb pipes, but carefully programmed algorithms, tweaked for generating maximum engagement.
I would suggest that any service that provided a curated feed of content pushed to users, should be treated as a publisher and held liable for the content it promotes. Importantly, “curation” would include spam filtering.
Google News, Facebook and Twitter would all be deemed publishers under this rule. Search engines wouldn’t.
It would probably kill their business models, but that could well be a net gain for humanity.
Politicians, news companies, and yes, Facebook, all promote divisiveness IMO. It seems to trigger a primal instinct in (some?) humans to belong to "this" tribe or "that" tribe, and they will go to great lengths to preserve and promote "their" tribe while at the same time trying to squash & demoralize the "other" tribe. I've known people like this, and for them, it seems to be almost like a sport: they enjoy arguing about why they are right and you are wrong.
A person like me who doesn't share this black/white tribal thinking will eventually (usually quickly) walk away from someone super aggressive about their opinions (which they usually believe are facts). It's boring and becomes quickly obvious that there is no point in trying to have any kind of intelligent discussion, because thinking is not part of the process.
Now if you can get two of these aggressive types going at it against each other, now you have a real spectator sport. It will never end, because neither side is thinking about what the other is saying; they are both just defending their entrenched positions, and they both enjoy the "battle".
The arguing becomes like entertainment for these folks, and the more they argue, the more engaged they become in the argument. IMO, that's why politicians, news, and companies like Facebook, _want_ divisiveness. They want their audience to feel compelled to interact and engage.
The thing I don't understand is that, while I don't know it for a fact, it seems that the "middle", those who are not fanatics, is a much larger audience. They are turned off by all the aggressive black/white arguing among politicians, news, and internet sites like Facebook. I've never been on Facebook or looked at a Facebook page. I stopped watching TV news after it turned into shouting matches over opinions instead of delivering facts. Same for politicians.
It seems like courting the middle, moderate audience would lead to a larger customer base. But that must be wrong, because surely these gigantic media companies would have tried it by now.
A rather persistent recruiter from FB contacted me recently, and given the new WFH scenario there I was almost considering looking into it further, despite it probably being a frying-pan-fire thing (coming from Google)
IF facebook offered me the option of paying $5/mo to just get API access to the things my friends posted, and I could display them however I want (LIKE FOR INSTANCE IN CHRONOLOGICAL ORDER!) I would happily pay it.
It's 20 years in the future where facebook and similar services are much, much worse. Wealthy people pay for editors to remove misinformation from their feeds. And the country gets bifurcated with coastal elites having access to editors and flyover country ("Ameristan") has turned into a conspiracy plagued wasteland.
I remember content from friends, then no related content when Facebook was testing feed changes and now, it's mostly based on meme pictures and group posts. I forgot that there are any "friends".
I simply cannot understand the motivation of people who seemingly want to be made angry.
I’ve had friends tell me I’m just burying my head in the sand, but I don’t think I am. I’m trying my best not to be manipulated into a worse emotional state. I don’t go on Facebook anymore because I realized that, objectively, time spent on Facebook made me less happy.
I'm building a social media site that will bring people together rather than drive people apart. It's called Belief Challenge. It's social media for open-minded people. Try it! http://beliefchallenge.com
They decided against paternalistic meddling and let discourse happen naturally? That sounds best to me. I don't want Facebook to be a school teacher hovering over a lunch table to make sure nobody swears. People posting "divisive" content is far preferable to the alternative.
It's not people posting divisive content that is the big problem; the big problem is divisive content getting all the eyeballs, causing people (due to completely normal human psychology) to believe that everyone is either completely against them or completely with them, with nothing in between.
Even disregarding anything but mental and physical health, the consequences are significant and quite real.
No, they don't need to become the gatekeeper of all "bad things"(tm) the same way they protect us from accidentally gazing at a terrifying nipple, that would be preposterous, but they could probably try a little harder not to act completely opposite to their users' best interests as often as they do.
Especially when that happens to be a significant fraction of all the people on earth, that's probably not too big of an ask?
If FB wanted to "let discourse happen naturally" and not be paternalistic, they wouldn't use an opaque, non-chronological algorithm to control who gets to see what in such a way that primarily benefits FB's bottom line.
Optimizing for engagement does not favor any particular viewpoint. The authors of this article are incensed that Facebook doesn't engage in more viewpoint-based adjustment of the conversation. Favoring or disfavoring a post based on the viewpoint it expresses is very different from optimizing an algorithm to give a user more of what he wants, whatever that is.
Optimizing for engagement tends to favour extreme, simplistic, and highly emotional viewpoints. In other words, it caters to human nature. This tendency is harmful to rational discourse, regardless of whether or not you happen to agree with any given viewpoint.
Phone companies don't set up incentive structures that encourage a certain kind of content. Facebook has an "algorithmic" feed, likes, and "engagement" metrics that rewards certain behaviours and punish others. They are rightly being pilloried when these incentives encourage and promote constant outrage, conspiracies, and completely fact-free fear mongering.
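To put the "neutral optimization" claim and the rebuttal side by side: a ranking score can be completely blind to viewpoint and still mechanically favour whatever provokes the strongest reactions. A toy example in Python; the reaction weights and numbers are invented for illustration, not anything Facebook has published:

    # A viewpoint-agnostic score: it never looks at what a post says,
    # only at how strongly people react, so high-arousal content wins.
    def score(reactions: dict) -> float:
        weights = {"like": 1, "comment": 3, "share": 5, "angry": 5}
        return sum(weights.get(kind, 0) * count for kind, count in reactions.items())

    calm_post = {"like": 200, "comment": 5, "share": 2}
    outrage_post = {"like": 50, "comment": 120, "share": 40, "angry": 90}

    print(score(calm_post))      # 225
    print(score(outrage_post))   # 1060 - the outrage post ranks far higher

No viewpoint is favoured anywhere in that function, yet the outcome still rewards whatever makes people react hardest.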
It would be, yes, and if Trump acts on his threats to investigate censorship on social media then this may be a good position to take.
The problem is that being a publisher brings greater legal liability for the content that they publish; whereas as carrier/platform can wash their hands of the data that they transmit and claim that they have no part of it.
> Phone companies don't set up incentive structures that encourage a certain kind of content.
I'm not convinced of that. Through technical and billing means, phones encourage one-on-one conversations while discouraging conversations with multiple participants. By disincentivizing certain kinds of conversations, they disincentivize certain kinds of content. It's hard to say exactly what sort of impact this may have on society, but I doubt it doesn't have any.
This may be a far cry from Facebook's deliberate algorithmic tweaking to manipulate the emotions of their users, but I think it's interesting to consider in its own right.
If you mean that Facebook should be regulated as a utility, by all means make that argument - I think you’ll find broad support.
As it is, Facebook is constantly making editorial decisions in terms of what content is shown (which posts, in what order, with what presentation). Their own research had found that some of those editorial decisions have externalities in the form of increasing social conflict. Rather than take steps to address it, or even research this question more, they washed their hands of it.
Note the voting on your questions as opposed to the engagement with the discourse you've started. There is a percentage of users who don't like you asking these questions, and a percentage who want to understand what these questions mean.
Phone and cable companies do not create polarization because they carry ALL data (usually). Services like Facebook, Twitter and HN all provide the ability to modify the content, in place. This is done with automation (code) and we can expect that automation to become more aware moving forward (AI).
This ability to modify content in place by the companies produces revenue at the same time it creates the ability for some types of divisiveness to form. Humans are divisive, under certain conditions, and there isn't much that can be done about it other than education about how to stop being divisive.
Education becomes impossible when the entities controlling the channels do so in a way that prevents users from changing what type of content they see (such as education about how to avoid divisiveness), maybe due to the fact that it kills revenue.
Worse, the more choice you give users (free, decentralized internet anyone?), the more some users will choose to introduce behaviors that give way to divisiveness in a given group. Trolls using imagery to build propaganda filled stories.
Trolls have taken over the Republican party, if nobody has figured this out by now. Note how they use strong imagery to glue their never-ending stories together.
It's a no-win situation. The best thing to do is simply walk away from it or maybe build a personal search engine AI crawler thing that works for just you and only you.
It's clearly not "end of story". What does it mean to be controlled? Regulations or nationalization? What kind of regulations? Do the regulations vary across countries? What kind of social media - just Facebook, or all social media platforms?
Saying "end of story" is the sort of needlessly dismissive and self-righteous rhetoric that always makes me sad to see on HN.
A decent percentage of people get on facebook primarily to argue, and that increases session time for those users. It probably wouldn't be beneficial to FB, but it probably would be to people.
It's a little different. Reddit doesn't choose the content presented to users, they allow the community to self-sort into community-managed subreddits with their own cultures and preferences and voting behavior. In fact reddit only barely exerts any control over the selection of subreddit moderators (mostly stepping in only to resolve things in extremis).
Facebook's algorithms decide on everything in your feed. If you aren't interested in politics on reddit you might never see it at all. If Facebook thinks you might be a republican (and often that's just a demographic thing coupled with a few past clicks on political stories), they will literally fill your screen with paid advertising designed to drive your political preferences.
The point is that division is visible on Reddit (and everywhere), but driven and encouraged by Facebook. And that these are different phenomena. I'm not completely sure I agree, but the point isn't as simple as "division exists".
What I can't get into my head at this moment is why Facebook does this. When it was still very young, there were a lot of people who loved their product, and they said so.
It's a new societal urge: the addiction to feeling righteous indignation. An endorphin rush, available to anybody with a keyboard. Gonna be hard to put that genie back into the bottle.
You can more strongly control - and capitalise on - people when they're divided and isolated, triggered and engaged, in their own little world where they think they're engaging with the whole world, when in reality they have no idea that they're only seeing their own tiny little slice of it.
This is control. Not uniting humanity. It's 'divide and conquer', through business. The users are the conquered ones.
It is hurting civilisation greatly. Facebook is the archetype of capitalism needing to be reined in by government due to its bad effects on society. It's like pollution, but sociocultural pollution.
I don't care that a company made money while producing the pollution. I don't care that people voluntarily chose to buy their products whose production produced the pollution. That doesn't justify their business activities.
Zuckerberg’s invincibility as CEO is nothing short of one of the greatest failures of modern capitalism. It’s simply astounding that such a terrible leader has retained control of what is clearly a company out of control. And the market accepts all of it while individuals constantly criticize his and Facebook’s actions.
People always throw around “well stop using Facebook” but that clearly isn’t a reasonable solution from a scalability standpoint. What percentage of those people also hold Facebook stock, either directly or through a hedge fund, ETF, etc.? It could be more than we think.
At the end of the day, profits don’t care about people, and this is the consequence we all have to live with.
In essence, Facebook is under fire for making the world more divided. Many of its own experts appeared to agree—and to believe Facebook could mitigate many of the problems.
The company chose not to.
Unless you are actively pushing to change it from the inside, you should leave now. Take a reasonable amount of time to find a new job and leave.
A few years back, there was a documentary called "The Brainwashing of My Dad" about how Fox News and conservative radio turned a relatively non-political Democrat into an angry, active Republican.
In the past couple of years, I witnessed the same thing happen to my mother, except driven almost entirely by Facebook and its non-stop parade of right-wing pro-Trump racist memes.
Perhaps important journalism, but it is behind a paywall, so apparently WSJ is satisfied that only their subscribers know this information about Facebook.
Meanwhile, Facebook is not behind a paywall, so they can monitor the conversations of billions of people despite monthly stories that circulate illustrating gross misconduct.
As a total outsider following this from a distance, I sort of feel for Facebook and other social media platforms facing this problem -- they've run up against a fundamental issue for which there doesn't seem to be any satisfying solution. Misinformation and propaganda are rampant on their platforms definitely, and echo chambers that reinforce divisive worldviews have probably deepened real societal divisions, but how do you actually implement a policy to stop this? What is "propaganda"? What is "misinformation"? The entire core of Facebook's existence is advertising, which means user engagement and reach is the only thing that drives your bottom line; they want to drive users to Facebook and keep them there, and keep them engaged. They've just happened to discover a universal human truth along the way, which is that people like feeling validated, and people like being a member of a tribe. Facebook is the way it is because thats what users want, whether they will admit to it or not.
Anything that Facebook does will be perceived as making a political and/or moral statement, which they obviously are trying very hard not to do, because as soon as you take a position you alienate half of the population (at least in the US). They've apparently decided to go the route of burying their heads in the sand instead of trying to make things less tribal and divisive, which in all honesty is a pretty understandable position to take, and yet even while actively trying not to piss off conservatives they have still landed in hot water over perceived favoritism towards the left. They are damned if they do and damned if they don't.
So honestly, what is the proposed solution here? What would you do if you were in Zuckerberg's shoes? Do you campaign for regulations that take this issue off of your hands but that let the government call the shots somehow? Do you look at your board members with a straight face and tell them you're going to tank user engagement for some higher, squishy moral purpose for which there is no clear payoff?
I've posted a lot over the years about FB being leveraged by genocidal regimes and bad actors. While I don't think they necessarily pursue such ends, the fact is that social media is a battlespace from where real-world aggression can be launched, and that renting out platform space to this end has been extremely profitable.
Perhaps it has already been posted elsewhere in this very long thread, but if not I heartily encourage more ethically minded FB employees to leak the presentation in question and indeed anything else they consider relevant. At some point it will be too late to feel bad about not having done so when it could make a difference.
More seriously: Arms dealers are not exactly benefitting from facilitating peace making efforts either, so economically this makes all the sense in the world to me.
>Another concern, they and others said, was that some proposed changes would have disproportionately affected conservative users and publishers, at a time when the company faced accusations from the right of political bias.
This is the same thing they were worried about in the lead up to the 2016 election when they fired their newsroom for not promoting pizzagate and other conspiracies that would be deemed as "biased" against conservatives. And they clearly still haven't learned anything about why letting engagement algorithms run wild is bad for society.
Because I feel that the freely accessible HN should not be considered a glorified comment section for another pay-only news site, here is the article archived in full text:
Full page screen capture plugin on Chrome plus a community that posts to a IPFS node and updates some decentralized search thing to be able to find it?
Can't read the article, but I've seen a lot of my friends unfriend other people that have political opinions that differ from theirs. And the ever so popular post: "If you disagree with thing xyz, let me know so I can unfriend you!"
This isn't Facebook's doing. People self-select monocultures.
That's hardly surprising given facebook's track record of censoring conservative users. Liberals can deny that all they want. It doesn't make it untrue.
NSA's a black box whose sole purpose is the aggregation and analysis of any information with potential relevance to US national security. It has the capacity to compel or infiltrate companies like Facebook to make them cooperate with its goals, and data sharing agreements with multiple nations. Privacy violation isn't a side-effect of its business model, it's its raison d'être.
I wouldn't dismiss NSA so offhandedly along this metric, even if it's ostensibly more constrained along legal boundaries.
I downvoted parent, not because I don't think nationalizing FB is a worthwhile conversation - it is - but because his or her comment was completely lacking in substance.
---
The biggest issue that I see with nationalizing Facebook is: what does it mean for the US government, bound by the First Amendment, to manage a social media platform? Can there be literally any moderation at all without infringing on the First Amendment? Honest question. Clearly, fake news and the like cannot be removed. What about spam? Personal attacks? What about when those attacks get racist and vile (the US does not have hate speech laws)?
The two party system does not affect this discussion. Facebook's algos will show you more and more $x content if you've liked $x or subscribed to it, and never show you $y content since you'd probably not like and engage with $y. Doesn't matter how many parties/topics/underlyingIssues there are.
If FB were neutral they would show you every FB post, millions per second whizzing past your screen, but they can't do this; they have to curate a wall for you to slowly scroll through and, for maximum revenue, like, share, or comment on.
Therefore, to show you the most content that you will like, share, or comment on, they repeat the type ($x) you've already liked, creating the echo.
So no, it is not mostly a problem of the underlying issue of the two parties, this is entirely about how FB curates your wall and simply doesn't show you "the other party"/$y or anything deviant/$y of your likes.
Edit: changed political parties to variables to illustrate point.
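To make the $x/$y loop concrete, here's a toy simulation in Python. It has nothing to do with FB's real internals; the categories, the weights, and the assumption that this user only engages with $x are all made up for illustration:

    import random

    def pick_feed(weights, n=10):
        # Sample n posts in proportion to the ranker's current interest weights.
        cats = list(weights)
        return random.choices(cats, weights=[weights[c] for c in cats], k=n)

    def simulate(rounds=5):
        weights = {"x": 1.0, "y": 1.0}        # the ranker starts out neutral
        for r in range(rounds):
            for cat in pick_feed(weights):
                if cat == "x":                # this user only engages with $x
                    weights["x"] += 1.0       # so the ranker learns to show more of it
            share_x = weights["x"] / sum(weights.values())
            print(f"round {r + 1}: {share_x:.0%} of the model's weight is on $x")

    simulate()

After a handful of rounds, $y has essentially vanished from the feed, even though nobody ever asked to stop seeing it. That's the echo.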
It is a feedback loop. Politics has become more polarized, I believe, because of the need to be "pure" to avoid the wrath of the party's highly polarized base.
30 years ago an R and a D could cut a deal to get things done and few people would notice that they compromised by giving a little to get a little.
Now when such deals happen the deal makers are branded as traitors and RINOs (do people use DINOs too?) and must be primaried.
FB encourages polarization because it increases engagement with their advertisers, which is useful to FB. The polarized base is useful to parties because it motivates them to donate, proselytize, and vote. That base polarization leads to polarization in candidates, and the division grows.
“Our algorithms exploit the human brain’s attraction to divisiveness,” read a slide from a 2018 presentation. “If left unchecked,” it warned, Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.”
According to the article, FB is not taking a passive role in this; they're actively trying to exploit people.
There are more than 1.5 billion users on Facebook. If they are not worried, and want to be misused, why the hell are others so hell bent on bringing down Facebook lol.
If the users really cared, we wouldn't be having this talk.
Also this is the media wanting to bring down the enemy.