>Frances Haugen: So, you know, you have your phone. You might see only 100 pieces of content if you sit and scroll on for, you know, five minutes. But Facebook has thousands of options it could show you.
>The algorithm picks from those options based on the kind of content you've engaged with the most in the past.
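The engagement-based ranking Haugen describes can be sketched roughly as follows. This is a hypothetical toy model, not Facebook's actual code; the function names, topic labels, and scoring rule are all made up for illustration:

```python
# Hypothetical sketch of engagement-weighted feed ranking: thousands of
# candidate posts, scored by how much the user engaged with similar
# content in the past. Not Facebook's real algorithm.

def rank_feed(candidates, engagement_history, top_n=100):
    """Return the top_n candidate posts by predicted engagement.

    candidates: list of dicts like {"id": ..., "topic": ...}
    engagement_history: dict mapping topic -> past engagement count
    """
    def score(post):
        # Posts on topics the user engaged with before score higher,
        # regardless of what the user says they want to see.
        return engagement_history.get(post["topic"], 0)

    return sorted(candidates, key=score, reverse=True)[:top_n]

candidates = [
    {"id": 1, "topic": "politics"},
    {"id": 2, "topic": "friends"},
    {"id": 3, "topic": "politics"},
]
history = {"politics": 50, "friends": 3}  # past clicks/comments per topic
feed = rank_feed(candidates, history, top_n=2)
# Both slots go to politics, even though a friend's post was available.
```

Note how explicit preferences (unfollows, blocks) never enter the score; only past engagement does, which is the dynamic the replies below complain about.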
This is the primary reason I stopped using Facebook. I did everything I could to signal I did not wish to see political content - unfollow people, hide posts, even unfriend and block people. Next thing I'd see on the News Feed? The exact same irritating political posts, now from some person I barely even knew. Meanwhile, I'd manually check the profiles of people I actually knew in real life and cared about, only to find that major events I would have loved to know about had happened in their lives weeks earlier.
Hit the nail on the head there. It's impossible to actually get what Facebook promised to begin with, and instead everyone gets practically drugged with crap because it makes Mark more money.
Facebook was actually pretty good when it just showed me my friends' posts and nothing else. But when they moved to their 'timeline' and started fiddling with algorithms that try to decide what I want to see, they got it all wrong and it became a chore to wade through it all.
I don't understand how they think an algorithm can decide better what I want to see than I can myself (by adding friends and blocking the ones who irritate me). In the end they lost me as a customer (or... product :) ) altogether, and I assume that's not their end goal.
> they think an algorithm can decide better what I want to see than me myself
I agree with your points, except I believe there is a limited role for algorithms that help users discover new things they might like but not know about. I prefer Spotify's approach, where there's a specific area of the app (a "Discover Weekly" playlist) that gets refreshed but not so often that it becomes a dopamine lever. Although that kind of recommendation algorithm can't seem to handle multifaceted "loves rap but also renaissance music" users' tastes very well yet—in that regard Pandora's separate stations were better.
I've never seen an algorithm better than Pandora's recommendations (back around 2011 when I used to listen daily). I found so much music that I still love to this day because of how "smart" that thing was/is.
Agreed. That's probably because of just how much manual work went into it:
"Each song [representative of other songs in its group] is analyzed by a musician in a process that takes 20 to 30 minutes per song. Ten percent of songs are analyzed by more than one musician to ensure conformity with the in-house standards and statistical reliability." https://en.wikipedia.org/wiki/Music_Genome_Project
YouTube’s used to be really excellent. I discovered so much good music when it seemed to work by automatically queueing songs or videos that other people who enjoyed the current video had also enjoyed. They seemed to make a change a couple of years ago to queue items I had previously watched instead. So now, instead of fresh and interesting items relevant to my tastes, it just plays things I’m already familiar with.
Pandora, if I'm remembering correctly, also has a feature that Spotify lacks: the ability to say "I don't like this" about a piece that it chooses. Although I couldn't say how much that influenced its later choices, it feels better as a user to be able to convey it.
I know for several of my "stations", they kept trying to suggest Coldplay. I'd constantly give them the thumbs down as I didn't feel Coldplay matched what I was going for. It didn't seem to affect their later suggestions.
Yeah Facebook was great back in 2006 when you only really had your friends on there.
I think the real downfall was not that they added the timeline; it's that it's impolite not to accept friend requests from e.g. coworkers or relatives whom you know but don't really care about.
And then you end up with a news feed full of people you don't care about. I think by the time they added "unfollow" options everybody had moved to WhatsApp where conversations are between 15 people you know quite well rather than 300 people you met once.
I have no idea how they can fix that, or even how they could have prevented it. Google clearly foresaw that problem with their "circles" feature.
Google+ was started in 2010, so Google didn’t “fore”see the problem as much as “see”. I think I banged into the 1500 (1600?) FB friend cap in 2008 or 2009 from the rampant friending, and already started reducing the number and openness of my posts once older and younger relatives started joining in 2006/7ish. The timeline redesign did pour gas on the fire, but the issue of unbridled friending was noticeable far sooner to users and observers.
> I don't understand how they think an algorithm can decide better what I want to see than me myself
Presumably, they don’t think they can decide what you want better than you can. But they think they can decide what will make you engage more, and they’re probably right. The damage is not accidental.
Twitter is just a hopeless platform, really. The combination of commenting out of context (everything is out of context), the low character count, the lack of scoping, and zero moderation (besides the most illegal stuff) is just a recipe for disaster. You even need external websites to be able to read threads... Unlike Facebook, I'm not even sure Twitter is fixable.
I don’t know if I’d be cynical in that particular way. Maybe the real cynicism is more banal. FB was and always will be “viral.” When Zuck was in college, this pretty much amounted to stalking all your classmates (for those who remember when it was university students only). Zuck is getting towards middle age now. “News” is the new viral.
We want to believe that FB is a corporation and therefore must act with a certain economic logic, but within the bounds of what can be optimized for the bottom line, there can often be a huge amount of room for founder neuroses, fantasies, etc. (I mean cmon, Jeff Bezos literally just went to space. And if you like history, remember William Randolph Hearst?)
Drugging people with viral was always the main gig unto itself. It was the core neurosis. Money just happens to be the thing people “universally” care about in middle life, as opposed to university life, but even that is an outgrowth of the neurosis.
> Drugging people with viral was always the main gig unto itself
I don't think this is a useful way to summarise it. You could say any business' main gig is not making money if you want to (this bank isn't about making money; we just intrinsically love signing people up to financial services!) I don't know if it adds anything, and I think it does subtract.
There are some companies that are set up to make money. Most banks exist to make money, they won't hesitate to say so, and they will say that they want to make you money too.
Facebook was not created to make money. It came about in the era when people thought: just grow really big and figure out how to monetize later. It was not yet established that you could monetize something after making it big (a question left open by the dot-com bust of 2001). Targeted advertising seems like a given today, but back in 2011 (a decade ago, and not long after DoubleClick was bought by Google), it was still being established. It took a long, long time for FB to monetize.
To that end, you need to have been there in the thick of it. Back then it may have seemed like it was all fun and games. Playing silly “viral” “games” was a big chunk of how people were spending time, back in 2008.
But more than FB’s specific case, founders’ personas can be a bit underappreciated as the guiding hand in what happens with a company. We know that founder disagreements are a large percentage of startup failures. It really comes down to the people, and their base impulses. Which is so trite and petty, but sometimes life just is.
Even if we grant that the business's focus is completely on making the next dollar, someone, or some cluster of someones, is deciding how best to go about that. Their interests, knowledge, and blind spots will absolutely affect what they choose as the steps to take to get the money.
Aside from the mountains of odorless excrement I started seeing on facebook, this was the other reason I quit. It became unusable because once I scrolled past something it was next to impossible to find it again by either scrolling up or searching. Instead of seeing what people were up to I started seeing things they thought would keep me on the platform for as long as possible.
Looking at it now, at least to me, it seems obvious that the purpose of facebook has changed from "make money by showing people what their friends are up to" to "keep those dumb fucks on the platform as long as possible, by whatever means possible, to make as much fucking money as possible, everything else be damned". And this is being generous about their beginnings.
I'm curious whether, when most people say they "quit" Facebook, they removed their account or just stopped using it. It feels like most people I know have stopped using it, except for Messenger, but almost no one has actually removed or closed down their account.
I did a full closure of my account. I have not received any emails since I closed it, about 18 months ago. I'm scared to even attempt to log in to verify that it is actually closed, for fear that FB will somehow interpret that as a desire to reopen the account, and do so, despite that I closed it fully (and did not merely "deactivate" it, or whatever they call the other option that does not really close it).
You're lucky. I deactivated back in January and I think I did a "log in with facebook" button once and now facebook seems to have re-activated my account implicitly.
Also they send me click-baity emails like "John Doe has posted a status update!"
Heck, they've even started TEXTING me to lure me back.
I'm dreading having to log back in to try to figure out how to get them to stop trying to contact me.
You say “deactivate” but Facebook does differentiate between deactivation and actually closing an account. So if you merely deactivated it, that would explain why your account was not actually deleted and why it was reactivated for you.
> The biggest difference between deactivating and deleting a Facebook account is that deactivating your Facebook account gives you the flexibility to return whenever you wish, while deleting your account is a permanent action.
From what I remember, reactivation of a deactivated account merely requires logging back in. Which sounds like you did exactly that.
They do allow full deletion everywhere. But you must explicitly select it. If you choose to “deactivate” that is not actually closing the account. It is a dark pattern.
I went through the process to delete my account but I still get emails from facebook occasionally about the account that they "deleted". I don't think it is possible to delete a facebook account.
I didn't close the account. Once in a blue moon (once a quarter) I'll need to look up some shitty business's only online presence on facebook or contact someone on messenger.
I did close mine completely, early this January, and have since only occasionally been affected when encountering businesses like the ones you describe. However, I view those occasions as a chance to push back against that bad trend, reminding them that there are people willing to give them money whom they simply won't reach. While the push is small and they have probably already accounted for this loss in their strategies, I think it is better than nothing.
I hear you, but I'm past that phase. "They" don't listen. It's not worth my time and energy being angry and going down that rabbit hole to "remind them" (my base anger level is high enough :P). Why? "They" don't have the capacity to listen. It's not in their interest to make their product better for you. You think you can't scroll back and see what you saw just a second ago because it's a really challenging technical problem? You think you can't set your home and work locations in Google Maps when you have location tracking disabled because it's a really complex problem?
No. They'll listen when they get their teeth kicked in by a regulator. I'm just glad that they consistently keep fucking up, even in the face of mounting negative sentiment.
I, for one, naturally lost interest and stopped using it many years ago, and deleted it completely last year, along with Instagram and many other social platforms.
deactivating and/or closing your account just gives them more datapoints imo, and the number of people who do it is so small that it makes a great metric for fingerprinting.
I'd guess that it's better for engagement for you to keep looking for it for minutes on end. Who knows, maybe you'll happen upon just the right, rage-inducing, useless post that might keep you on for longer.
Have always been curious why people use Facebook. Personally, I don't believe most of my life should ever concern most of my connections (to be clear, the people I know and who know me), let alone people I have no idea about. I know quite a few people who will take pictures of whatever they are doing and share them. They might think they are doing something fancy; I can get that. But how that concerns other people, I don't know.
As someone who has moved to the middle east (from the bay area), a large portion of this society heavily relies on Facebook, particularly for FB groups and also FB marketplace. Aside from that, this entire society basically functions on Instagram for any news / personal connections, but as it relates to Facebook, having groups is a great way to connect with people and ask questions and since we don't really have a good alternative to Craigslist here, FB marketplace fills that void.
That said, I'm sure people here use Facebook for other reasons too, but anecdotally, this is what I'm seeing. Instagram and Facebook aside, the most used app I've found here is WhatsApp, so society here is deeply integrated into the Facebook web of services.
Everything else is pretty much noise to me - so I rarely bother to scroll the main feed.
It's been said for a while that facebook is pretty much dead for younger people - which I guess is true. But older people (my parents' and grandparents' generation) seem to be extremely invested in and connected to FB. It seems like many of them use FB for pretty much everything.
And, as some have said, if you live in a smaller town or similar, chances are that the community is also very invested into facebook. In my small hometown (population 2500), facebook is pretty much used for everything.
edit: FWIW - what people use, seems to be very dependent on location. For example, I live in Norway, and I personally don't know a single person that uses WhatsApp - even though it's wildly popular in other countries.
True. Most people don't care about the lives of people other than those in their immediate or important circle.
FB also realized this. Because, initially, FB was all about getting posts from friends. I suspect the data revealed that this sort of algorithm created pockets of well-connected people. Hardly any use to advertisers. So you have the FB that is what it is: a massive cesspool of shit.
> Have always been curious why people use Facebook.
It used to be a great communication tool via "Facebook Groups"; then people moved to other platforms, our groups lost their critical mass, and all went silent. These days I'm just occasionally sharing photos I'm taking with an international group of friends, for showing each other nice things.
It also reminds me of birthdays, so that's nice.
Weird, I watch YT quite a bit but it only suggests things that are related to stuff I've watched before, and I don't recall ever seeing a political video on there show up.
That's totally weird, because I have never watched a political video on YouTube, yet my whole suggested feed is filled with them. Even the first 3 suggested videos. All of it is very far-right conservative. I never touch it. I log in to see if anyone I'm interested in has posted anything new, then get off the platform. I have no clue how to get out of it.
I find the opposite; YouTube is on a constant mission to pull me into the world of right-wing videos with angry titles and antivax propaganda.
If you browse YouTube with cookies disabled or within a cookie fence, you can see that such manipulation is the default behavior. For a significant percentage of videos, the very first set of suggestions is manipulative. For the others, perhaps the first set of suggestions is not problematic but the suggestions for those videos are where the manipulation starts.
It's nice that you found a way out of it but I'm guessing that's rare.
That being said, I live in the UK.
We know it's going to base its default recommendations on your perceived location.
I personally found that I needed to make an account to get away from the horrendous default recommendations. And allow it to use some data and not clear my cookies etc.
You're mostly correct, but I'm guessing you don't use the feature of marking videos as not interesting. It really helps, to the degree that I almost never get suggested right-wing videos of the disgusting kind, even when I listen to music popular in those circles, as evidenced in the comments sections (I predominantly use YT for music). Now, if only there were an option to filter out the nationalistic comments (insubstantial comments calling for "unity among brethren" and similar), too.
One has to log in to benefit from this, though.
However, for the last few months most YT advertisements served to me have been scams.
> How can the public form an opinion if the information they get about it is skewed by the affected companies?
When Trump was in power, you had the right wanting to regulate Reddit/Twitter, and the left wanting to regulate Facebook.
I don't think the companies are successfully skewing people in their favor of not being regulated.
Big tech are a necessary evil for most people. Philip Morris' customers don't talk about it being a great company, they just need its product.
People blame big tech a lot though, but what if they did give people visibility and control of their algorithms in terms of what content they want...and if we had more competition...would there be less extremism, or would people just up their own dosage?
In the end it will come down to personal responsibility and education.
YouTube always lures you back to mainstream content. Pick an unusual genre of music and let it autoplay. Come back after a few hours and you will hear pop music. This has a self-reinforcing character, as my views make those videos even more popular. YouTube has now also tagged me as liking pop music.
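The self-reinforcing drift described above is essentially a popularity feedback loop. A hypothetical toy model (purely illustrative; nothing here is YouTube's actual algorithm) shows how a small initial popularity gap widens on its own:

```python
# Toy model of a popularity feedback loop: autoplay greedily favours
# the already-most-played video, and every play adds to its count,
# so the leader pulls further ahead. Illustrative only.

def autoplay(plays, steps):
    """Simulate `steps` autoplay picks over a dict of play counts."""
    for _ in range(steps):
        # Pick the currently most-played video (popularity bias)...
        winner = max(plays, key=plays.get)
        # ...and playing it makes it even more popular.
        plays[winner] += 1
    return plays

plays = {"pop_hit": 100, "niche_genre": 90}
result = autoplay(plays, steps=50)
# The pop hit gains every single play; the niche genre gains none.
```

The same dynamic operates at platform scale: views gathered by autoplay feed back into the popularity signal that drives the next autoplay choice.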
Pretty sure you could even call it distortion of competition. I doubt legislation can improve the net much, but there is an economic significance for lesser-known artists.
The problem is that if you watch a news clip for informational purposes, say you want to know information about lockdown restrictions easing, you will get sucked into the most addictive news content. It takes a lot of self-restraint not to click on some of the accompanying recommended videos, they are all carefully designed to make you click.
And people's ability to self-restrain varies with mood. They are ready to catch you when your mood dips, or you are in an extremely boring situation.
That's also the reason why Facebook's moderation should be considered political manipulation: Facebook's raw material, user posts, covers all sides of the political spectrum, giving Facebook the ability to shape the discussion in whichever direction it likes.
I recently had an issue with Facebook feed similar to yours.
I've suddenly had my feed full of... Beatles fanpages. All kinds of 'old music' fanpages: Beatles, Queen, Nirvana, etc. I don't even listen to them, I have never liked any of them, hell, I didn't even google anything like that. For the last 2 months I've specifically marked every such suggestion as Not Relevant. And it still shows me that stuff. Every, single day... Every 2nd or 3rd post is a suggestion of some page that praises Freddie Mercury or some other 'legendary' band, singer, performer. I just don't get it...
In my 40+ years I never cared for any type of football, but my feed is full of news regarding different European clubs, Munich, Paris, you name it. Disclaimer: I am from a home town where football is super popular.
C'mon... admit it... these bands are starting to grow on you... slip out that credit card, my friend... that's right... just leave it sitting there beside you on the desk so it's handy... now a little more Queen... Mamaaaaa... Bohemian Rhapsody, now on Netflix... Oooooo... 20% off this commemorative mug...
I find this to be a problem with a lot of (all?) social media products. Lack of user control over the experience makes them less of a "bicycle for the mind" and more of a "chain gang for the advertisers".
The reason the algorithm does this is simple: they want to trigger an obsessive compulsive behaviour. The irritating content provokes you to engage, and that's how a lot of their profit is made.
You are right that Facebook gets engagement by irritating people, but this isn't an example of obsessive compulsive behavior on the part of the user; it's just negativity bias, maybe. OCD is a specific and potentially life-ruining condition if it goes untreated. It takes people years and years to get diagnosed, because almost nobody knows how to recognize OCD or they mistake it for something it isn't, and the longer OCD goes untreated, the harder it is to fix. I don't think most people mean anything malicious by calling any old high standard or repetitive action OCD, but it's really worth getting a clear understanding of what OCD is, because maybe you'll help save another person, or even yourself, from pure living hell. https://www.madeofmillions.com/conditions/obsessive-compulsi...
Mindfulness/awareness practice sometimes is used as part of OCD treatment as a resource alongside ERP, and mindfulness is broadly useful to people, and might be good for getting away from a negative feed. https://pubmed.ncbi.nlm.nih.gov/31505967/
You don't want to "signal" - you want to "filter". So install FBP and filter out all the stuff you don't want. For me, it made FB like it was a decade ago, and thus usable for its original intent.
People in your social bubble clearly don't care too much about major personal events, but do care about politics. It's what your friends have in common, and what Facebook will deem important. You are just a statistical anomaly.
That's the point of this discussion, and the significance of dumping these papers in the open: Facebook's concept of what your friends deem important is decided by Facebook, not by your friends. For example, if Facebook takes a global "politics sells impressions" signal and applies it to your social network, the result would be over-sampling political content relative to how much your friends actually share it (with possible positive-feedback-loop consequences).
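The over-sampling effect described above can be made concrete with a small sketch. Everything here is hypothetical: the blend weight, the topic shares, and the idea that the ranker mixes a global prior into friend shares in exactly this way are assumptions for illustration, not leaked internals:

```python
# Hypothetical illustration: a ranker that blends what friends actually
# share with a global "this topic sells impressions" prior will show
# political content more often than friends actually post it.
# All numbers and the blending rule itself are made up.

def feed_share(friend_share, global_prior, blend=0.5):
    """Blend per-topic shares with a global prior and renormalise."""
    raw = {t: (1 - blend) * friend_share[t] + blend * global_prior[t]
           for t in friend_share}
    total = sum(raw.values())
    return {t: raw[t] / total for t in raw}

friend_share = {"politics": 0.10, "life_events": 0.90}  # what friends post
global_prior = {"politics": 0.70, "life_events": 0.30}  # what "sells"

shown = feed_share(friend_share, global_prior)
# Politics ends up shown far more often than friends actually share it.
```

With these made-up numbers, politics jumps from 10% of what friends post to 40% of what gets shown, which is the kind of distortion (and feedback-loop seed) the comment describes.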
I think misinformation and conspiracy theories on these platforms are a huge problem. A few years ago, some bad actors were successful in convincing Congress of a crazy conspiracy theory that Saddam Hussein had weapons of mass destruction. The results were far-reaching. The US violated the UN charter and launched a disastrous war in Iraq where millions of Iraqis were killed, injured and displaced. They used a platform called The New York Times to spread their lies.
All of the problems of Facebook and social media are just problems of the media in general. Misinformation? Conspiracies? Social comparison? Outrage-driven content? Body image issues? All rampant in the mainstream press, but no journalists are calling for regulation of their industry.
If you were to propose federal regulation of the New York Times, journalists would object and talk about the first amendment, the 4th estate and the need for a free and independent press. It's an elitist double standard with content controls for the masses and first amendment rights for the graduates of Princeton and Columbia with jobs at prestigious publications.
> A few years ago, some bad actors were successful in convincing Congress of a crazy conspiracy theory that Saddam Hussein had weapons of mass destruction ... They used a platform called The New York Times to spread their lies.
Citation needed that this was a knowing conspiracy as opposed to bad intelligence.
Setting that aside, there's a big difference between an institution like the NY Times getting it wrong on occasion, despite all their research and fact checking, vs an institution like Facebook building a platform that systematically rewards disinformation at massive scale.
> Citation needed that this was a knowing conspiracy as opposed to bad intelligence
Within hours of the planes hitting the towers on 9/11, before any information was in, Rumsfeld's aides were drawing up plans for striking Iraq, despite zero evidence linking Saddam Hussein to the attacks. The number one priority for the White House, above finding out who the real culprits were, was to figure out how to use it as a rationale to invade Iraq. This is not really a controversial viewpoint, it is documented fact [1].
When the 9/11 rationale for invading Iraq became untenable, it was replaced by the idea that Saddam had Weapons of Mass Destruction, that he'd definitely been procuring uranium from Niger. Except these were lies too (and the WH knew it, because they'd sent Joe Wilson to investigate and he'd reported back as such). When Wilson heard them saying it despite knowing it was false, he contradicted them in public [2]. In an act of retribution, the WH leaked his wife's position as a CIA agent, burning valuable contacts and networks, and endangering friendly lives.
Then they moved onto the argument that regardless of WMDs, the Iraqi people actually wanted the invasion anyway, US forces would be welcomed as liberators by gift-bearing citizens, etc. When that didn't pan out either, a final argument became that Saddam was a tyrant and that fact alone provided sufficient moral and legal justification for preemptive war (i.e. The Bush Doctrine) [3].
There is no evidence that the media was part of the conspiracy; they just parroted the talking points, and a few media outlets did heavily question the rationale. The fact is, both wars were hugely popular when they started, so few listened.
This is not even close to a comparable situation to the social media platforms; these platforms have democratized and weaponized misinformation. But it's not just them: the ad-funded internet creates perverse incentives for views, not truth. FB et al don't care about truth, they care about engagement, and they know emotional touch points, particularly anger, drive engagement, so their feeds optimize for that.
Our fractured society is just an emergent behavior from several complex systems with poor incentives; as such, we need to find a way to realign the incentives.
There were protests, and quite large ones, but they were almost exclusively from the left and were generally categorized as just people who didn't like Bush.
Gallup polling shows 50-60% of Americans supported going in just to remove Hussein; in the first half of 2003, only 23-27% of Americans thought it was a mistake to go into Iraq. See here for tons of polls: https://news.gallup.com/poll/1633/iraq.aspx
Despite seemingly everyone's revisionist histories, the country was scared and generally whipped into a frenzy, the right wanted it, the centrists wanted it, some of the left wanted it.
Now for more context on why I did this: back when Bush started pushing for the war, I immediately considered him a liar. I am not from the US, and from my perspective, and that of many people where I live, it was blatantly obvious he was lying. For example, some of the "evidence" he used was obviously bullshit, like his description of satellite photos at the UN, or saying that WMDs were found in certain places right after UN inspectors visited those places and said they found nothing there.
Sometimes I wonder how people from the USA don't see how evil their country is. For example, when Brazil said they would vote against the war in the Security Council, John Bolton just went and threatened his family (his exact words were: "I know where your kids live."). While the rest of the world was horrified by the lies spouted by US media, the US public was cheerily supporting the warmongers, in some cases even more than needed: some public polls done at the time found a majority of the population in favor of the wars, and in Afghanistan's case some people even agreed with the assertion that the USA should "glass" Afghanistan.
> Citation needed that this was a knowing conspiracy as opposed to bad intelligence.
There were literally UN weapons inspectors in Iraq, and the "intelligence" couldn't guide them to a single weapon [1].
It was widely reported, even before the invasion, that the intelligence had been "sexed up" [2] - such as the inclusion of claims that WMDs could hit the UK within 45 minutes. Of course the politicians in charge knew the intelligence had been sexed up - they ordered it.
There are also memos agreeing to invade regardless of whether WMDs could be found [3].
What is the difference between bad intelligence and the average conspiracy theorist? It is all just bad intelligence: someone telling someone else "data" that they then "trust" to reflect reality, when in reality it doesn't.
> Citation needed that this was a knowing conspiracy as opposed to bad intelligence.
I watched the presentation of "evidence" and thought "Who is going to believe THAT?!". "We did not really see-see anything, but here is some CGI of truck-based weapons labs that we somehow believe might exist". It was rather ridiculous, really.
It was so unconvincing, I can only assume that it was intentional sabotage by some actors with remains of integrity.
>It's an elitist double standard with content controls for the masses and first amendment rights for the graduates of Princeton and Columbia with jobs at prestigious publications.
It's called power. Journalists are people, and desire power just as much as the next politician, CEO or movie star. Unlike those, journalists get that power by controlling broad narratives and having disproportionate control over the free flow of information.
It's an immense power that has evolved alongside modern democratic society, and we have developed strong accountability mechanisms: peer pressure and integrity requirements keep that journalistic power in check.
Social media and the internet are recent inventions that lack that evolved institutional and social framework. They are stripping power from the elites specialized in controlling that flow, and putting control in the hands of tech firms and anonymous "content optimization engineers" who make the algorithms work. They are accountable only to the bottom line.
So yes, it's a power struggle and huge double standards at work, only we shouldn't assume every revolution and upheaval of the traditional social order works for the benefit of the people.
The fact that peer pressure is the "accountability mechanism" means that pretty much the only thing journalists are ever held accountable for is stuff that goes against the beliefs of the class of people who graduate from Princeton and Columbia and get jobs at prestigious publications. Stuff like spreading misinformation, inciting political violence and falsely undermining trust in democratic elections is fine so long as it's done in the name of the correct elite-supported political causes.
Hell, I've just watched the media over here in the UK create a fuel supply crisis through irresponsible reporting - they posted breathless headlines about petrol stations running out of fuel and having to close which actually created the crisis by scaring everyone into going out and filling up their tanks at once. In reality, about five stations in the country were shut prior to the panic buying - so few even the people who'd tried to go to one probably wouldn't realize there was meant to be a crisis. But our fuel supply chains just don't have the capacity to cope with everyone buying several times as much as usual once the press started a panic. Is there any accountability for this? Of course not. They just kept on running endless sob stories about the people affected by the crisis and how terrible it is for them, making money off suffering they created.
The big question, of course, is not whether media--mainstream or social--have ever "gotten it wrong", but what steps we can take to make them better.
The mainstream media very much got it wrong on Iraq. I agree with you. Yet the reality is more complex than your anti-elitist tirade suggests.
It's inconceivable to me that a well-functioning democracy can exist without trustworthy--and trusted--institutions who (yes) have an outsized influence on public opinion. The policy issues faced by citizens in a modern democracy are too complex to expect everyone to "do your own research."
So the question is, how do we foster those trusted institutions? How do we hold them to account when they go wrong? And how do we make them more deserving of our trust?
If the answer is, "We won't, but at least we'll have an algorithm that rewards CTR", well...I respectfully disagree.
They also basically told you fantasies about the Russians, and plenty of other bullshit and convenient stories that put the superhero du jour in a good light.
Word clouds are silly and a bit out of fashion, but very useful for finding common rhetoric. If there is a dissonance between reporting and common users, you might have found content that is more suggestive in nature.
There was and is a lot of that. Maybe popular writers just have common groups for information interchange. That on its own can be detrimental because the number of perspectives shrinks.
Complexity isn't an excuse. Legislative frameworks tend to get more complicated, since they often lack a mechanism to deprecate unfitting laws, but overall complexity hasn't really increased. It is an excuse and a rationalization for these kinds of lies. Projecting hegemonic power and securing strategic resources isn't complicated.
Undermining trust has to be an existential threat to newspapers. Many people are slow learners, but they learn.
The common journalist is a pretty poor slob, and the economic situation of newspapers is pretty grim. It was also a "safe" option to lie about Iraq. But what can be just as large a problem is journalists who are easily impressionable.
It's easy. They have to hold themselves accountable for their own (weekly?) mistakes. People would trust them again (imo) if they called themselves out and proactively fixed their mistakes.
The glib answer is, "They publish corrections." Which many do.
But I don't think that addresses the (valid) criticism about the Iraq invasion: while there were some errors of fact, there were primarily errors of omission, where the media were too willing to act as uncritical stenographers for politicians' claims.
We see, in a way, the same phenomenon in the Trump era: whatever lunatic allegation Trump or his acolytes would make, the media felt the need to cover it; after all, whatever the President says is inherently news, no matter how crazy.
Yet in both of these examples, the problem isn't that the media represent some sort of powerful cabal, which I think was what the OP implied: it's that the media are too spineless to truly hold to account those in power (in both cases, the President of the US). Viewed that way, the premise of the original question is incongruous: it's deeply strange to criticize only the "Princeton and Columbia"-educated "media elites" who too-uncritically reported on the claims of the President, and not the Yale- and Harvard-educated President (and son of a President)--or the billionaire heir to a fortune President who came later--who uttered those false claims.
The media absolutely deserve to be criticized for their uncritical reporting before the invasion of Iraq, as I said before, but the original criticism sounded like we want a weaker media who will be less able to question false narratives espoused by powerful players. And I think if you look at these stories more closely, you'll realize we want the opposite.
Nielsen TV and radio ratings are the functional equivalent of "an algorithm that rewards CTR", and that system rewards polarization and outrage in exactly the same way. "If it bleeds, it leads" is the longstanding mantra of TV news.
IMO the problem of public trust in institutions has one answer: democratic accountability. In the US, the news media claims to be the watchdog of our society, and to act in the public interest, but there are no mechanisms for public accountability. Who decides what topics should be covered? Who decides what angle to take? Who decides what makes the front page? Who decides whether something is newsworthy or not?
None of these decisions are made with any connection to the public that the media purports to serve. In fact, the lack of connection to the public is considered a virtue. Journalists legitimize themselves by claiming to be neutral, objective, rational, principled, virtuous, beholden only to the truth and above the fray of partisanship and self-interest that warps the opinions of the public. So the media claims to be legitimate and trustworthy, not because it has democratic accountability—but because it does not.
The fundamental premise of the institution of the media is that democracy is fatally flawed and the public can't be trusted. That's why what's happening on social media is characterized as a problem of misinformation and public susceptibility to propaganda: the cure is greater control of information by a credentialed elite. But the real problem of social media is that a small percentage of the population is able to gain influence because the US electoral system grants them disproportionate power. Social media is a way of coordinating a political movement, not fundamentally an issue of truth and lies.
The problems in our society and on social media come from too little democracy. The media represents them as problems of too much democracy, and that tells you everything you need to know about their basic political commitments.
Frances Haugen's twitter bio says "We can have social media that brings out the best in humanity." Who decides what is the best of humanity? It won't be decided by a democratic process, but by yet another elite group of purportedly neutral and objective "experts and academics" from expensive universities. I can barely imagine a darker authoritarian project than trying to get private social media companies to manage the thoughts and feelings of their users so that they meet with elite approval.
So what, in your mind, is a more “democratic” news media?
I don’t think most journalists would accept the view you attribute to them—that they are opponents of democracy—but the logic seems like it applies to any expert specialization. Is that your argument—that all experts are inherently opponents of democracy because they “tell us what to think?”
> So what, in your mind, is a more “democratic” news media?
Assuming the problems of voter participation are solved, I would like to see publicly funded journalism and the position of editor-in-chief as an elected position.
My criticism of US journalism is that they lay claim to legitimacy by being above the rabble. That is why they are able to decide what's in the best interests of the country. That's a claim to 1) a political role based on 2) an anti-democratic premise. I don't know of any expert specialization which makes a similar claim.
Or should there be more direct political influence, as you suggest—NPR, but the editor in chief is elected every four years? That seems deeply problematic, for somewhat obvious reasons.
I’d love to know what those obvious reasons are, and whether they don’t ultimately boil down to paternalistic conservative bromides that people can’t be trusted to govern themselves.
>A few years ago, some bad actors were successful in convincing Congress of a crazy conspiracy theory that Saddam Hussein had weapons of mass destruction.
I mean ... there was a joke at the time: how does the USA know Saddam has WMDs? They kept the invoices. Saddam was a product of the US, and everyone in the know knew exactly what he had. I am not sure Congress needed much convincing, since everyone in the know was just playing theater. The neocons successfully pitched the invasion, but the US had a strong bipartisan desire for a power base in the Middle East. The rest was a show for the public.
Is Facebook (the company and the social media platform) the masses or the Princeton graduates? You could say that it shares content generated by the masses, but arguably so does the New York Times every time it quotes someone to support a story. And these papers suggest what Facebook does to a person's wall looks a lot more like editorializing than laissez-faire regurgitation of community-generated content.
> crazy conspiracy theory that Saddam Hussein had weapons of mass destruction
Don't blame the NYT; they are just the proverbial broken clock that displays the correct time twice per day. WMDs from Hussein's stockpile were still being used in the Syrian War until recently by Baath Socialist militants; the pathway for the sarin is pretty clear.
Facebook is going to become a hollowed out shell of itself if they do not change. They’ll get hollowed out by litigation, legislation, investors, or all of the above.
They need new leadership. The tactics that got them here will not carry them further. There are tons of examples of the positive impact of new leadership, on companies like Apple, Microsoft, Uber, Disney, Ford, etc.
And there are certainly plenty of examples of very successful companies who thought they were not touchable, but received tremendous financial harm from the rest of society.
There is a time for a company to fight, and there is a time to make peace. Facebook is geared up to fight, basically, everyone. No company can win against everyone. They need a leader who can let go of what they wanted Facebook to be, and see clearly what it is. Only then can a path to the future be plotted.
Yes I know Zuckerberg cannot be ousted by typical corporate maneuvering. Unfortunately I think that probably just increases the chance he will take it down with him.
> The simplest solution … : Eliminate all news, politics, and other public policy groups and apps.
I assume users want this content. I never see news, politics, etc because I’m not interested and FB has learnt I won’t engage in it.
I’m not trying to say this makes me good or bad or special. I’m saying that they use feedback from each user individually, and it seems a lot of people want that stuff.
Honestly people like to say "x social media sucks" but in actuality it's far simpler than that : people suck in general. Only now you get to see it for yourself at a more general level as opposed to anecdotal experience.
You will have to define “news” first, or “politics”. As a former strategist for a big news room, I can tell you it’s not as easy as you make it out to be.
Would love to hear how you (and your cohort) would tackle these issues.
I just finished Jill Lepore's These Truths. Terrific. Tracks how our nation transformed from seekers of truth (for all our faults) to anti-truth.
The transition period from newspapers to mobile mediums as people's primary source of news is very interesting. Sure, everyone has been lamenting this ad nauseum. But somehow Lepore's explanation is just so tight and focused.
And yet. I haven't seen or heard any ideas or strategies for what's next.
How is censorship the answer? People want to consume all types of content on social as much as possible. The answer is making all the data public and open sourcing the feed algo + allowing users to pick their preferred algo. That way, if your feed is not to your liking, pick a better one.
If you're the ones complaining about "censorship" or how "people want it": sit right down.
No one is saying that other Web actors can't do all these things, or that people who want it can't find it; just Facebook. Because they're a monopoly and the usual rules don't apply to them. When you make the market, you can't defend yourself with free market principles.
Eliminate all news, politics, and other public policy groups and apps. Return to being just a way to connect with friends and family.
Maybe this is just me being a late adopter of facebook (2012 or 2013 here), but... when was this magical time without politics and news?
Was this in the time of game requests? Was there really no one sharing news stories back then? No groups discussing religion and philosophy and current events? No one organizing protests? No one commenting about how they disapproved of a lifestyle or couldn't ever harm a little, innocent unborn baby? Really??
I'm gonna guess you simply didn't see it from friends and family back then, or didn't consider some of it to be political. It could be that people share it more often now, but if so, that's simply because the world has changed. But don't get me wrong, this isn't something facebook has done; some folks have always been political, and some loud about it.
> when was this magical time without politics and news?
Before occupy Wall Street, maybe? Or before the recession? When it was just a college thing, it was great. I distinctly remember facebook during college being the same content makeup I get today when actually talking to friends. Photos from the weekend, conversation, invitations to the beach, jokes, etc. And you got these big albums, so it wasn’t IG’s carefully curated perfect-life-looking shininess.
Yeah, it always seemed like it wasn't worth it since I really didn't have many friends to speak of. I mostly joined to keep in touch with family when moving overseas. I'm not sure about 'after the good years' since I've honestly met some wonderful folks, and I'd always seen people act out the worst parts of facebook in real life: racism, anti-gay, and anti-poor sentiments haven't been strangers to Indiana's small-to-medium towns. You know, making it hard to believe it wasn't on facebook too.
Presumably, you’re trying to imagine facebook 2021 without the shittiness, or facebook 2012 without the shittiness. Facebook 2005, however, was an entirely different product. Sure, people were shitty then. But the product didn’t bring it out. Maybe it’s a way of saying the virtual vs IRL behavior gap was really small back then, and each “product improvement” widens it.
No, sorry, you're right. Some of my "friends & family" did insist on posting politics all the time. I muted or unfriended all of them. You'd be surprised how many people never do that stuff.
My first post was in 2004, and shortly thereafter my second was something railing on Bush. You're right, facebook has always been inseparable from politics and current events.
Which is why FB will never accept it. There's room in FB for one Instagram. Not two of them.
Besides, I don't know about everyone else, but a fair chunk of my FB feed was pretty much people getting outraged by (or trying to discredit) fake news, hyperbolic news, or disinformation. Because that's the stuff that goes viral.
You know, I would like to believe this is the case, that some combination of litigation, legislation, etc. will bring down FB. But outside of the HN bubble I'm unfortunately doubtful it'll actually happen. My friends and family outside of tech have no issues with what FB is doing. Wall Street also seems to have no issues with what FB is doing. It really just seems to be the techies on HN who are yelling and screaming.
I don’t think it will kill Facebook. I do think there is a big risk that FB becomes a zombie company, with a leadership more concerned with fighting their critics, the government, and competitors than innovating for customers. Arguably they already had an innovation problem, which is why they bought Instagram and WhatsApp.
Microsoft lost at least a decade to this sort of thing. They did not really come out of their funk until they got wholly new leadership in Nadella, and let go of who they thought they were in the 90s.
> Yes I know Zuckerberg cannot be ousted by typical corporate maneuvering
Why is that? I am not aware of Facebook's corporate structure so I am curious.
- Yes, this is a LMGTFY question, but I find that people can often provide a better answer, one that's easier to understand, especially when the question is more of a curiosity rather than something I need answered rapidly.
To expand on the sibling comment, Facebook has two classes of shares. Class A shares have one vote and are the shares that are traded on stock exchanges. Class B shares get 10 votes each and are held by Zuckerberg and friends. Zuckerberg has enough Class B shares that he really can't be outvoted by other investors.
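To make the arithmetic concrete, here's a rough sketch of how a 10-votes-per-share class translates into control. Only the 1-vote/10-vote split reflects Facebook's actual A/B structure; the share counts below are invented for illustration, not Facebook's real figures:

```python
# Dual-class voting math: Class A = 1 vote/share, Class B = 10 votes/share.
# The share counts are hypothetical, chosen only to show the effect.
class_a = 2_400_000_000   # hypothetical Class A shares (publicly traded)
class_b = 440_000_000     # hypothetical Class B shares (founder-aligned)

total_votes = class_a * 1 + class_b * 10
b_voting_power = (class_b * 10) / total_votes
b_economic_stake = class_b / (class_a + class_b)

print(f"Class B economic ownership: {b_economic_stake:.1%}")  # ~15.5%
print(f"Class B voting power:       {b_voting_power:.1%}")    # ~64.7%
```

With these made-up numbers, a roughly 15% economic stake carries nearly two-thirds of the votes, which is why other investors "really can't" outvote the Class B holders.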
I had no idea companies could legally create vastly unequal schemes like that. How do people even find out about things like this? Is this information available somewhere? Impossible to make good investment decisions without this sort of knowledge.
Weeeeeell... it isn't a secret, that is true. And technically speaking there is no problem here.
But it is easy to imagine that this happened to be able to hoodwink someone with the "Zuckerberg only controls 10% of the shares!" style soundbites without being forced to point out that that means control of the company.
Realistically this practice should probably be banned on the basis that Facebook could achieve a similar outcome with something like one class of shares and selling bonds to the public, or something. Allowing a "share" to break the one-share-one-vote principle is unnecessarily confusing and misleading.
Lawyers shouldn't be redefining words; these words are meant to mean something. This is unlikely to be a new practice, though; I think Berkshire has different classes of shares.
> But it is easy to imagine that this happened to be able to hoodwink someone with the "Zuckerberg only controls 10% of the shares!" style soundbites without being forced to point out that that means control of the company.
Does that hoodwinking matter, though? If you're unsophisticated enough to be tricked by this, you have 0% voting power. And at best you expected "someone other than zuckerburg, I guess, but I didn't look into who" to have voting power, and that seems so vague that it being false doesn't hurt you.
In practice, it leaves open the question of why anyone felt the need for 2 classes of shares. Why have 1 class A and 9 class B, instead of 100 shares for the company owner and 9 on the market? One of those is much simpler to work with. Maybe there is just some tax loophole that is being exploited here, but it looks like someone is getting scammed.
Not the biggest crisis I've seen today, but something that should get tidied up.
> Why have 1 class A and 9 class B, instead of 100 shares for the company owner and 9 on the market?
Because you want to sell more of the company without losing control. Changing that would be a lot more complicated than tidying up.
But consider a scenario where all shares have the same voting power, and class B shares have 100x the dividends and own 100x as many assets. With 100 class A and 9 class B.
In this scenario, the class A shares are much worse than class B shares. But the end result is the same: the founder can own 10% of the company value while having 90% of the votes.
Surely the founder isn't getting scammed here, despite their shares being so much weaker, right? And it would be strange to say the people with the superior class B shares are being scammed. But this is basically equivalent to having good class A and bad class B. So now what?
Having shares with different rights is not really a new practice. For instance, it's what allows VCs to invest with more likelihood of recouping their investment, while still allowing employees to get shares with a different payout structure. As long as these things are disclosed, investors can weigh them. In Facebook's case, it is specifically called out in the risks section of the annual report, under the heading "The dual class structure of our common stock and a voting agreement between certain stockholders have the effect of concentrating voting control with our CEO and certain other holders of our Class B common stock; this will limit or preclude your ability to influence corporate matters."
It's assumed that investors have a responsibility to read the information a company publishes for investors, so if someone were to invest blindly they'd only be hoodwinking themselves.
Ford famously pioneered it over a hundred years ago to keep control in the Ford family, which is still true today, though the family is now spread out enough that they don’t necessarily vote as a block on all matters.
Andreessen participated in that "hoodwink", convincing investors to cede control.
> Realistically this practice should probably be banned...
Post Jobs, Larry and Sergey were determined not to be ousted from their own creation. Other founders followed suit. And since returns were so good, investors played along.
Methinks that after Zuck and Neumann and others, investors have lost their appetite for being sidelined.
Someone asked the other day about who owns the Facebook "super-voting" shares, besides Zuckerberg. These shares are not traded on any public exchange so the ability to track them is limited.
In the Facebook registration statement from 2012, one can get an idea of who owns these class B shares.
The London Stock Exchange doesn't allow such dual class share structures for exactly this reason. That's partly why it has become so unpopular with tech companies.
They are considering changing the rule but many of London's brokers and institutional investors strongly oppose it.
The NYSE used to restrict it. They relaxed the restrictions in the 1980s. Google was the company that started the trend in 2005. (Note they were sued by their shareholders and settled.) The percentage of companies using this type of structure was around 1% in 2005; by 2017 it was 22%. It is probably even higher today. It is used outside of Silicon Valley, too, but those non-tech companies have real products and services that produce revenue. They don't bait people with "free", spy on them and sell ads.
If I'm not mistaken there are some index funds that will exclude companies with dual class shares.
This is why "tech" companies can pursue outlandish projects that produce no revenue. Shareholders effectively have insufficient rights to question such decisions and hold the founders to account.
> You must be making investment decisions without looking at any corporate documents.
Well, I've never actually invested in any company before. Mostly because of stuff like this. I always assumed there's some insider information I'm missing out on.
As a start, every publicly traded company in the US publishes an annual and quarterly report that is available to anyone for free. Those generally have quite a bit of information about what's going on with the company, its structure, financial situation, management, etc.
Searching for `$COMPANY_NAME annual report` is a good start
This information is required to be in the articles of incorporation which are filed with the secretary of state. For public companies you can usually just google e.g. "Apple articles of incorporation" and the sec's copy will show up.
It adds a level of stability. Some institutional investors don't want shareholders to have power over a company because that opens the doors to things like hostile takeovers. Having all the voting power vested in the core "founders" means the company won't ever be subject to change from outside. So, if you like things as they are, you can rest assured that things likely will never change. Only a long term investor, someone willing to retain shares during a corporate crisis, really cares about actually voting out one board for another. A short term investor wants to ride on the coattails but will jump rather than attempt to change where those coattails are headed.
That power is still there. By selling their non-voting shares investors lower their price, which has knock-on impacts on the corporation. What is lost is the power to directly make corporate decisions.
Definitely not for individual-saving-for-retirement investors, IMO (although there might be some who think it's important, but they can't really affect anything).
What matters most to these (me included) is the growth of the nest egg, by dividends or capital gains (which FB has delivered handsomely on the last decade).
For large institutional investors, it may be important. For activist investors, it certainly is.
"In addition, our current restated certificate of incorporation and bylaws contain provisions that may make the acquisition of our company more difficult, including the following:
- until the first date on which the outstanding shares of our Class B common stock represent less than 35% of the combined voting power of our common stock, any transaction that would result in a change in control of our company requires the approval of a majority of our outstanding Class B common stock voting as a separate class;
- we currently have a dual class common stock structure, which provides Mr. Zuckerberg with the ability to control the outcome of matters requiring stockholder approval, even if he owns significantly less than a majority of the shares of our outstanding Class A and Class B common stock;
- when the outstanding shares of our Class B common stock represent less than a majority of the combined voting power of common stock, certain amendments to our restated certificate of incorporation or bylaws will require the approval of two-thirds of the combined vote of our then-outstanding shares of Class A and Class B common stock;
- when the outstanding shares of our Class B common stock represent less than a majority of the combined voting power of our common stock, vacancies on our board of directors will be able to be filled only by our board of directors and not by stockholders;
- when the outstanding shares of our Class B common stock represent less than a majority of the combined voting power of our common stock, our board of directors will be classified into three classes of directors with staggered three-year terms and directors will only be able to be removed from office for cause;
- when the outstanding shares of our Class B common stock represent less than a majority of the combined voting power of our common stock, our stockholders will only be able to take action at a meeting of stockholders and not by written consent;
- only our chairman, our chief executive officer, our president, or a majority of our board of directors are authorized to call a special meeting of stockholders;
- advance notice procedures apply for stockholders to nominate candidates for election as directors or to bring matters before an annual meeting of stockholders;
- our restated certificate of incorporation authorizes undesignated preferred stock, the terms of which may be established, and shares of which may be issued, without stockholder approval;"
If the total worldwide market cap of a company exceeds 1.0% of the total US GDP for a given year, then I think they should be required to liquidate all preferred stocks, preferably trading them 1-for-1 with the regular stocks, although I would also concede that it might be fair to trade them for an equal value.
At US GDP for 2020 being just under $21T, this would mean that any company with a global market cap of slightly more than $200B would be affected — $209.63B, to be more exact.
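The threshold in this proposal is just a percentage of GDP; a minimal sketch of the arithmetic, using the GDP figure from the comment above (roughly $20.963T, which is the commenter's number, not an official statistic):

```python
# 1.0% of US GDP as a market-cap threshold for the proposed rule.
# The GDP figure is taken from the comment above and is an assumption.
us_gdp_2020 = 20.963e12          # dollars, "just under $21T"
threshold = 0.01 * us_gdp_2020   # the proposed 1% cutoff

print(f"Threshold: ${threshold / 1e9:.2f}B")  # → Threshold: $209.63B
```

Tying the cutoff to GDP rather than a fixed dollar amount is what lets it drift upward with inflation and economic growth, as the next paragraph suggests.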
If you want to shift the point at which this rule is applied, then adjust the percentage, but I think you want it to be tied to something that will vary and adjust in the future, to account for inflation, etc….
And I suggest global market cap as the comparison base, because some companies have a habit of using multiple shell companies in a wide variety of foreign countries, in order to try and hide their revenue and protect it from governmental taxes. I think we want to account for those kinds of shenanigans from the start.
Anyway, it’s just an idea. But I hope this general concept does take off.
Does anyone track the premium Facebook has to pay to get people to close their eyes and work for them? Subjectively, it feels like it’s growing, but that’s from sampling a limited friend group.
As a SWE at a different FAANG, my understanding is I'd probably get a 20-30% comp increase if I went to FB, though I'm also told people work longer hours there.
Past results are not an indicator of future performance, of course. :) But yes, you're right that heavily RSU-based comp means you have to try to take that into account. That said, I usually just assume it will trade sideways and so my comp is based on a four year moving average of my grant value. If it goes up, great.
Unless moderate, establishment politicians get replaced, this is unlikely to happen. Why? Because Facebook is very useful as an intelligence tool. It was funded by In-Q-Tel after all. Otherwise, there would need to be a lot more public uproar and I'm not seeing that from the general public.
But they have been getting replaced. On both the left and the right, but especially the right. And if Sanders had somehow been president in 2016, you bet that it would’ve been establishment Dems who’d have been retiring.
I have my doubts about progressives taking most of the power base from establishment liberals. Establishment candidates tend to have better and more mainstream media coverage.
> Yes I know Zuckerberg cannot be ousted by typical corporate maneuvering. Unfortunately I think that probably just increases the chance he will take it down with him.
What litigation or legislation would hollow out Facebook? I would like to see stronger legislation around consumer data privacy, but that would only marginally reduce Facebook advertising revenue.
Lest we forget the Microsoft antitrust trial of the late 1990s, it’s not a foregone conclusion that deposing a powerful CEO leads to their downfall. “I don’t remember” is a frequent answer that deponents give that is both frustrating and incontrovertible.
Gates was deposed in summer 1998 and did a famously horrible job. He stepped down as CEO in January 2000. Microsoft struggled for relevance beyond Windows/Office for the next decade or so.
Not necessarily a straightforward connection, as none of the items being discussed have a direct revenue impact, only an indirect impact on brand value etc.
Knowing it is unhealthy for teenagers does not have a direct negative impact on advertising revenues per se. It is perhaps feasible to argue that keeping this information under wraps, absent a legal requirement to disclose, is the optimal fiduciary move for management to maximize shareholder value by avoiding bad press.
Of course there will be lawsuits around this, as around other things, but it is not all that clear.
Yes, but that is inverted here, after all: only if they disclose the report does the share price go down.
Without a leak we would never have known about this report, so logically keeping it hidden/buried was the best course of action with regard to shareholder value?
A valid suit may be about the poor security/control processes for their Workplace tools that led to this leak.
Shareholders could perhaps sue, saying they should not have funded such research, or should have buried it better, etc., but I'm not sure a court/judge would look kindly on such a suit.
It's an unfortunate but inevitable outcome of the attention economy's race to the bottom. This area needs better regulation. The best approach is to stay out of technical implementation and focus on what principles companies should adhere to. As James Williams argues, it should be a civil right to have "freedom of chance". We already know all too well what it does to our democracy that a large part of the public gets their news only from Facebook, prefiltered by the monetizing interests that drive it. In such an environment there is no fair chance of deciding for oneself: at one end is one individual on a smartphone, at the other is the entire R&D of Facebook with the single goal of keeping your focus and making you engage.
It's not an easy answer, but an important discussion to have. In any case, what this information makes abundantly clear is that Facebook and others like them neither will nor can protect the public interest.
I like the argument that we have freedom of choice but we by no means have freedom of chance. To me that would be a place to start in terms of finding regulation that can protect the public.
I’m starting to think section 230 protections (as I understand them) are a bad idea. The whole issue is these negative externalities. Facebook has to choose between money for them versus good things for others. It has misaligned incentives because it isn’t liable for the harm it imposes. So make it liable.
Put people in an fMRI machine while they read and interact with the Facebook feed of a celebrity or a person they don't have a personal connection with.
Then show people the feed of their friends or family.
Then force Facebook to change its algorithms until the two fMRI readings show similar levels of activation in key areas of the limbic system.
The real power of Facebook is that it exploits the stronger emotional relationship people have with close friends/family, as opposed to the shallow relationship they have with public figures, which is what TV/radio/news exploited in the past.
Use a most recent first timeline like the old Facebook used to do. Stop manipulating people’s emotions to get them to click on ads by controlling the newsfeed.
Real deep regulation would mean breaking up the network effect, and forcing a federated system so either people have a choice or people have tools to manage what’s on their social streams.
This is not palatable, but it is trivial. Tie your login to your actual ID. Sites that won't participate can't take US funds or do business with entities that also want to do business with the US.
Call for violence or threaten people? Go to jail. Spread malicious lies you knew were false for political gain? Go to jail.
Unwittingly spread others' harmful crap that would tend to lead to deaths? Get a ticket for actual money and a strike. Three strikes and you can't play on social media this year. Get banned three years in a row and you get a ten-year vacation from social media.
The internet and society would be better off within a year. Within three, most of the real pieces of work would be conspiring with each other via Tor, unable to reach 90% of the population.
The problem here is enforcement, as we have seen. The legal representatives are also elected, meaning the legal system is also political. Everyone has something to gain out of it.
"Fire in a crowded theater" is an assertion that there are limits to protected speech. It says nothing about what those limits are. Harmful misinformation is well inside those limits. (And in any case it's from a decision that was overturned 50 years ago.)
You said these things "have never been protected", and that "there have always been limits". Those are descriptive (and incorrect) statements about the actual law.
If you think the law is bad and should be different, say that. Don't phrase your policy goals as if they're the way the law currently works.
I think we can maintain individual rights trivially while still sending people to jail who threaten to murder people. Other lines can be carefully drawn. The idea that if we regulate harmful behavior AT ALL we will suddenly enact 1984 seems thoughtless.
> I think we can maintain individual rights trivially while still sending people to jail who threaten to murder people.
Yes, we do that now.
> Other lines can be carefully drawn.
I challenge you to draw a careful line for what is considered “harmful crap”. If you think you can do this, you are most likely a person who sees everything in black and white, and we don’t want people like that enforcing laws or policies.
I called out a local individual for trying to scam people on the facebook market and got death threats from someone who lives an hour from me who wasn't banned from facebook.
But death threats are already illegal. What does your proposal bring? Making it extra illegal? A failure of the police and other authorities to act does not mean that the law is at fault.
I'm talking about enforcement and making social media platforms liable if they don't deal with such situations expeditiously.
We don't need such standards per se because neither of us is a lunatic, but if I were, in this very comment, to make threats of horrific violence against you or your family, you can be assured that Hacker News's tiny moderation team would waste little time in removing my ability to post, because they give a fuck about the health of this community.
Facebook by contrast deals with a fraction of 1% of such issues.
Harmful crap, defined: statements that are either knowingly false or evidence a reckless disregard for the truth, herein meaning misinformation that a reasonable person would know is false and that could reasonably be said to lead to serious harm or death of actual persons.
For example: telling people to bleach their autistic kids' buttholes, that the vaccine contains a chip to track you or will kill you as part of a secret government plot, that covid isn't real, or that you should prefer to treat covid with horse paste or take it as a preventative measure.
I don't think it's hard to distinguish sarcasm, parody, art, or other worthwhile things from telling people they shouldn't get vaccinated against a deadly disease because the vaccine is part of an imaginary plot to kill them, leading thousands of people to demonstrably die.
Not only are some people unknowingly spreading such info but others are knowingly communicating such lies for the lols and for ad revenue.
When we enforce existing laws we take into account context and the reasonable judgments of both judges and citizens to interpret situations and the law. The law books and case law are full of fuzzy situations we successfully litigate every day.
Everyone is entirely in control of their own thoughts and opinions- these being about the only things in life that are fully under our own control. Facebook, by itself, is zero threat to democracy, it neither votes nor stops people from voting for what they believe.
We are going the totalitarian route. The Ministry of Truth will tell us which numbers are legal and which ones aren't. Publishing data that goes against the narrative is a high crime punishable by gulags. I know it might be news for the kids, but this is common practice in socialist countries.
The inflation rate is a hate number and banned from all platforms. Enforced by the Soviet ministry of truth called Pravda, the Russian word for "truth".
I am puzzled how quickly something Americans were once completely scared of has become so beloved. Do we really see ourselves as so despicable that we want to replace individual speech with government speech?
Nonsense. We're currently doing nothing, the equivalent of no route at all. We have regulations for many other areas, and for good reasons too.
Saying there is a playing field is not at all totalitarian, and shouldn't really be seen as that extreme an idea.
I also don't see why a notion of "freedom of chance" could not go both ways: "the totalitarian route" would offer the individual equally little chance.
Facebook was beyond fixing the day it was founded. Zuckerberg is one of the least ethical operators of a large company that I'm aware of and the company reflects that in every way possible. Fixing Facebook would mean removing Zuckerberg and that's not going to happen, so I applaud the whistleblower stepping forward but I think they're hopelessly naive if they believe that this will really lead to meaningful change.
I believe it was Ben Thompson who recently suggested that a platform like Facebook may be fundamentally incompatible with human nature, or at least incompatible with society as it exists today. I've been thinking a lot about that.
Edit: Not a defense of Facebook. Regardless of what I said, I still don't think Facebook is trying to be a good company (morally and ethically) and I don't like them
Totally agree, though with a tad of nuance. I believe a Facebook could be at least marginally compatible with human nature if the feed were just your friends' new posts in chronological order.
But the economic incentives to make Facebook addictive, to traffic in outrage, and that demand that Facebook track your every move on and off the platform are just immensely strong. I mean, I think Facebook would easily be a far better platform for users if they could only see my actions on Facebook itself, instead of the god awful behavior of getting flooded with "retargeting" ads because I happened across some website somewhere, but look how bitterly Facebook fought against the iOS changes.
Facebook are spending a lot of money on engineering around this (they call it Community Integrity I think) to try and figure some of these problems out. Not sure how well it's going, but they're certainly spending lots of money.
The media theorist Marshall McLuhan accurately laid out the dangers of "peer-to-peer electronic media" in the 1970s as a thought experiment. His theories on media were pretty good.
That may very well be true, but FB is clearly not built with the best interests of society at heart, and until a platform comes along that is, we are pretty much in the same situation as with communism: it may well be a great system to base a society on, but all known implementations were made by people who were in it mostly for themselves.
Whether or not what Facebook did was illegal and she technically qualifies as a "whistleblower", I salute her for what she has done and is doing by telling this story. People need to know about this, because it is harmful.
This doesn't mean it should be illegal. But it is something people should consider if they go to work for Facebook (or other companies), and hopefully it will inspire people to do something about the toxicity and divisiveness that social media is causing or at least feeding into.
I don't think we should want good people to stop working for Facebook. That never made sense to me. Facebook isn't going anywhere, and as such it should be improved, and the only way that will happen is if good people keep going to work there and trying to make it better.
Yeah... I get where you're coming from but imo if you're a storm trooper working on the death star and you keep going into work everyday knowing who and what you serve, then you're not a "good" person anymore. More like "ok" or "meh" at best when it comes to your personal morals/ethics.
> I don't think we should want good people to stop working for Facebook. That never made sense to me. Facebook isn't going anywhere, and as such it should be improved, and the only way that will happen is if good people keep going to work there and trying to make it better.
I wish I remembered the article that explained this to me, but "participate, but try to change it from the inside" pretty much simplifies into just "participate." So all those good people will end up doing (from the inside) is helping Facebook achieve Facebook's existing bad goals.
The best thing a "good person" can do to Facebook is leak and hope an external entity takes it down, but most of those good people won't have access to anything that's worth leaking.
> and the only way that will happen is if good people keep going to work there and trying to make it better.
I can buy this reasoning for people in leadership positions, but it seems like rationalization for peons working on logging systems, performance, etc.
Note that there is an alternative - making bad things illegal or enforcing existing laws. That is hard to do - but still easier than somehow convincing FB to act against its own interests.
No, but if fewer people want to work for Facebook, so that Facebook has to pay them more to get them onboard, then Facebook has a financial incentive to do better.
I’m not really well versed in the history of consumerism, so I ask: Is there a common pattern to what happens with companies which produce harmful products for their customers? Facebook is by no means the first company to do something like this.
E.g., which cigarette company went furthest in marketing to children, and are they still in business? Did any car companies refuse to include seat belts after their safety record was proven, and did they lose good workers as a result of that stubbornness?
"Facebook isn't going anywhere." If this were the '90s/early 2000s, you would have said that about Yahoo. MySpace was a thing for a long time and operated in a way that never made us worry about it causing the end of democracy, a civil war, or helping dictators. I know part of it was the technology at the time, but MySpace also never used the kind of rage-inciting algorithms Facebook does. I only hope PyTorch and Oculus get spun off before Facebook gets shut down.
Yahoo is the absolute opposite of Facebook in every conceivable way. Jerry Yang (or someone there) famously said, roughly, that they had no idea how they were making all that money from banner ads. They were famously stupid in passing on every major tech company that willingly came to them to be bought out.
Facebook famously and studiously (and in hindsight extremely accurately) bought out every conceivable future competitor, ensuring they truly don't have competition in their real market. And their top brass know exactly what company and market they are in, contrary to popular belief. Barring legislation to split or curtail this org, it will continue to metastasize and thrive in the world with reckless abandon.
You can say that of Fox News, or of some company that makes Predator drones. But many would clearly not want to work for such companies. Yet with Facebook, a lot of tech folks decide "meh, it's not exactly a war crime" or are blithely/willfully ignorant of its consequences. After this irrefutable proof, anyone who joins Facebook had better learn to live with their decision, if that's the kind of dilemma they care about, that is.
The term whistleblower here is just a means to raise awareness for this information, which is good (not that it does much for the FB-reading population, besides those infatuated with following conspiracy theories).
"Whistleblower" is a term of art: someone who brings a "private" company matter to the attention of the outside world. Usually this would get you fired, so there are legal protections for whistleblowers.
From where do you get the notion that something illegal needs to take place for someone to qualify as "whistleblower"? First time I encounter this idea.
A "whistleblower" in the USA is a legal term of art used to describe someone who reports wrong-doing to the government. The conduct being disclosed doesn't necessarily have to be explicitly illegal, but in order to be considered a whistleblower under the law, the information has to be disclosed to the appropriate government agency (i.e., reporting wrong-doing to a private organization or going public in the media wouldn't legally be considered protected whistleblowing).
The OP's article says that Frances Haugen reported Facebook's actions to the SEC, so that may make her a whistleblower legally, but it's not clear if the SEC is the appropriate agency to handle social media regulation. That seems more like something the FTC or the FCC would be responsible for.
Honestly, I got it from the at-the-time topmost comment on this discussion, which said she's not a whistleblower because FB didn't do anything illegal. https://news.ycombinator.com/item?id=28742045 (I also replied to that comment directly)
I did look it up though, and the dictionary seems to suggest it needs to be illegal or against rules or something, as opposed to simply being harmful.
I think there's a legal definition of a whistleblower that would protect them from civil litigation. That usually involves criminal conduct on the part of the company.
Is any of the research released here actually news to anyone here? I've seen leaked internal Facebook research showing the negative impact of Facebook since back in 2010, and they made the choice to ignore it then as well. Eleven years later there's been no change to their priorities, and no surprise with Zuckerberg in charge. Those of us who care don't use FB and refuse to work for that company or any of its subsidiaries. She absolutely can and will be sued by FB for breaking her NDA. "Illegal" is an extreme term for this, though; she won't be arrested for it, just forced to pay $$$.
"Those of us who care don't use fb and refuse to work for that company or any of its subsidiaries."
Yes but that doesn't mean you aren't being harmed. If your mom uses facebook and dies because she believed a conspiracy theory about vaccines, for instance, you are harmed.
"She absolutely can and will be sued by fb for breaking her NDA."
That may not be smart on Facebook's part. She probably doesn't have a ton of money. All that will do is bring more attention to this issue. If her testimony before Congress is compelling (and I expect it will be), suing her is just going to piss off Congress. Not sure that's where Facebook wants to be.
I see this issue as an issue far greater than Facebook.
With that I mean the mass production of content that is illegal, harmful, misinformation, divisive, stolen or down-right low quality.
The internet is a crowded place now, people have a lot of screen time yet a low attention span. Hence, content creators have perfected the art of short form content that achieves maximum engagement. This won't change, not even in a world without Facebook. Even journalists work this way now.
Even though it harms us, it's what we want. We pay attention to bad news, outrage, the unusual. We want black and white and good and bad narratives, not complicated and reasonable takes.
Imagine a world without Facebook. It only consists of blogs, and there's no timeline at all. One day one of the blogs publishes a controversial article. All other blogs will link to the outrage article. Everybody reads the controversial article, nobody reads the moderate ones. The unreasonable ones will always win.
> With that I mean the mass production of content that is illegal, harmful, misinformation, divisive, stolen or down-right low quality.
100%. Look at the front page of most popular websites. It's mostly outrage in some form or another. Reddit r/popular is so bad now. It's mostly videos of public freakouts, idiots in cars, or someone yelling about something on tik tok with the occasional uplifting post. Outrage content makes websites more money.
Very much so. I'd even extend the outrage phenomenon to real life and non-tech.
"I just had a perfectly reasonable conversation with my neighbor on a matter of no consequence"
Said nobody ever.
"Omg did you hear what that idiot down the block said this time?"
Said everybody, all the time. The very act of gossip is to dramatize something mildly unusual and escalate it into an ever juicer story as it passes around. We are Facebook.
And this is just one half. The other half is Sophie Zhang's revelations about how facebook is used by corrupt governments as a weapon against their own citizens. At the very least the public should be warned about such weapons of mass manipulation, and the newspapers that used to laud them as bastions of democracy should publicly apologize
The fact that congress has not done anything about FB (other than fruitless hearings) shows how badly in shape we are to confront the risks of social networks.
I think this is where tech workers are failing society. Other industries actively write legislation because they understand that politicians are not technocrats. The best thing we've come up with is the EFF, but we need to do more given how consequential our industry has become. So if we understand how engagement optimization works, what governing rules could we come up with to minimize the negative impact?
I think part of it is that there aren't enough former software devs, engineers, or data scientists in Congress or the Senate. The only programmer/politician I can name is Mayor Pete. Andrew Yang doesn't count since he was more business than software dev. Dean Cain also had a CS degree.
> She also sifted through the company’s internal social network, called Facebook Workplace,...for years the company has been tightening access to sensitive material.
Of course FB's highest priority order of business: lock down Facebook Workplace and put much tighter controls in place for disseminating internal documents.
Yes, this seems like a prudent choice for companies to use Facebook, excuse me, "Workplace", to manage internal data. Heck, why does Facebook need to sell ads, they could just sell "Workplace". The service seems to be working well for Facebook, except for the leaks.
Workplace (the product) has nothing to do with this but it's the policy of Facebook (the company) to trust their employees to use any information responsibly to do their jobs (with the exception of user or HR data).
It’s sad to me that so many people just can’t see this as common sense. Why do you need an insider to tell you FB is bad? Look at it. Look all around. It reminds me of when I was a kid and every smoking adult around me was hacking their lungs up. I didn’t need the surgeon general or some fancy research published in a medical journal to conclude that smoking was bad.
I suppose many may be too young to remember a world without FB, making it difficult to distinguish between society today and in the past, but things are not well.
This seems purposely ignorant. The election part is just one thing that makes it bad. I didn’t even mention politics, but I’ll bite, from what I believe is a rather objective view.
1) I never heard any talk of the Obama campaigns being “evil”. My understanding is they were positive, spreading a message of hope. It was more that at the time FB skewed demographically young, and he did well with young voters. I never saw any accusations of external election tampering, fear mongering, international influence, hacking, etc. It was not conducted in a way one would call “evil” unless you just don’t like Obama and feel he is personally evil, and that’s your baggage, not Facebook’s. I can’t recall exactly, but I don’t think FB even really had an advertising mechanism back then. The algorithm was not what it is today, although it was still unhealthy by my standards, given the addictive qualities I witnessed it having.
2) Plenty of evidence Trump behaved evil. Probably even more so those who sought to have him elected. Fake news, propaganda, fear mongering, election tampering, hacking, meddling, name calling, etc. Even the most objective person on earth has to admit it was just a dirty election of epic proportions. All enabled by Facebook.
This is unhealthy. I don’t use FB and this affected me. It’s secondhand smoke and I’m stuck on this flight. They don’t have to solve a single thing regarding society, but that doesn’t mean they should be able to harm society. That’s why there are calls for regulation. Their being profitable is not in question; their ability to make decisions that balance profits with the health of their user base is what’s in question.
There are plenty of people, myself included, who have voiced concerns about the negative consequences of this social network. But it’s a bit like the people warning of a housing bubble leading up to 2008.
One difficulty of claiming a double standard is that often there isn't a perfect parallel.
The Obama and Trump campaigns were different, but I'm sorry, I don't believe the backlash is because of their campaign strategy but rather their politics.
You mention Obama being 'positive', but that's very subjective. To other people, Trump is positive and hopeful.
And I don't agree that Trump's antics really had much at all to do with Facebook, never mind social media or the internet at large. When Trump was testing his rhetoric at his gatherings, there wasn't any Facebook around.
And the meddling was greatly exaggerated. It's the type of freedom internet paragons all espoused. The internet is international. I didn't know we weren't allowed to comment on foreign politics (I'm not an American citizen... am I interfering right now?).
People talked about the internet creating a more democratic society, one where anyone can challenge the elite. Well, guess what: that's exactly what happened. We just didn't like what came out when most people started opening their mouths.
The model itself is broken; the abuses of Section 230 turned platforms into publishers. It's now a chess game where external governments use this to control public perception. The best example is China and how coupled it is with Big Tech.
This can only be temporary, and it's inevitable that this whole thing breaks down once there are disagreements. All we need is one issue and the whole thing collapses catastrophically.
We have seen this story before. Person joins Facebook thinking they can "help". Then quits.
Facebook is a public company whose purpose is to serve advertisers. Online ads are the only way it can generate revenue. It is not some woo-woo about connecting the world, giving people a voice, or whatever. That's what the internet enables, not a single website controlled by someone else. Facebook tries to take credit for what is good about the internet.
Facebook is a barrel of fish, the website is the barrel, users are the fish (ad targets) and advertisers are the shooters. For advertisers and FB investors,^1 things are going great. Except for the bad publicity.
There is nothing to fix. As other commenters point out, the "solution", for users, is to stop using Facebook and use other means of communicating over the internet that do not require middlemen, third-party intermediaries, and interlopers that seek to sell online ad services and are determined to conduct ever-increasing levels of surveillance and personal data mining.
[Sophie Zhang] joined Facebook in 2018 after the financial strain of living on part-time contract work in the Bay Area had worn her down. When she received Facebook’s offer, she was upfront with her recruiter: she didn’t think the company was making the world better, but she would join to help fix it.
“They told me, ‘You’d be surprised how many people at Facebook say that,’” she remembers.
It has to do with how the university-educated are socialized. They believe it's gauche to admit to pecuniary interest when looking for a job. They have to make it a quasi-religious mission for which they are incidentally paid. It's intensely stupid. Obviously Facebook invites it and has consciously adapted its recruiting message.
Back in the 2000s, before the financial crisis, it was popular to portray the young as nauseating money-grubbers who just wanted a job in banking. That wasn't great either, but I would rather have honest greedheads than greedheads who lie to themselves and others about what they're doing with their lives.
You are exaggerating. Facebook IS providing real value to some users. For example, when I lived abroad for a few years and started missing people from my home country, a Facebook group was there to talk and meet. It solved actual loneliness. Instead of individuals trying to create standalone websites and invest time, money, and effort, Facebook made it super easy to connect. My wife used a pregnant-lady/new-moms Facebook group in the city we were living in, etc.; it brought a lot of good info and comfort to her.
Another example is their marketplace; it became super easy to buy used stuff in your city (I don't know if it caught on in the U.S., but in Europe this is becoming popular). People buy less new stuff, save money, and help the environment. That's also good, right?
I can go on and on (for example people who put their business on Facebook and get actual clients). Not all interactions on Facebook are bad.
Facebook's purpose isn't to serve advertisers. Most for-profit companies' purpose is to make money. Facebook's primary way of doing that is selling ads. But they can't sell ads without plenty of eyeballs, so FB needs to also pay attention to the users.
Users are a supplier, an input, a resource used to make the actual product. Facebook cares as much about that supplier as a company cares about any other supplier. They'll push for the lowest possible prices and push the supplier as hard as they can without breaking the relationship.
Facebook will do as little as possible to benefit users, min/maxing as close to the breaking point as they can without going over the line.
> Facebook is a public company whose purpose is to serve advertisers. Online ads are the only way it can generate revenue. It is not some woo-woo about connecting the world, giving people a voice, or whatever.
I truly think it could move away from that model by innovating?
Disclaimer: I don't like FB, don't work there, and don't use it personally; I do use Instagram though.
Sure there is: advertising. It's the root cause of nearly all evils on the internet today. Facebook itself is merely a symptom of the advertiser problem. The web and the internet will never be fixed until we get rid of the advertisers. Either we ban advertising or we spread ad-blocking technology far and wide.
More ad blocking won't solve this in the long term; it will become an arms race.
If ad blocking becomes pervasive, there will be more efforts to evade blocks, starting with technical (if operationally difficult) solutions like hosting ads first-party, etc.
There is simply too much money in it today for the industry to go away quietly.
Ad blocking as a tool can help address the security issues of a trusted service hosting all sorts of unknown ad scripts that can potentially carry malware and/or scrape your data, but the issue of ads itself sadly cannot be controlled with ad blocking alone.
> Why?
Because it is an asymmetric battle. Content platforms (not only digital ones) already insert native advertising or do product placement; these are harder or impossible to block technically, and quite hard to differentiate cognitively as well.
Also, Google is removing web APIs, actively making its browser poorer, and controlling the web standards, which is making it increasingly difficult to block ads technically. It is much harder to block ads on mobile platforms, where most users are; only Firefox supports an ad blocker at all there, and Mozilla generates 95% of its revenue from Google.
We need better regulations, not better ad blocking, to protect vulnerable users. The more common ad blocking becomes, the more aggressive the countermeasures will be; given Google's control of Android and Chrome (which also gives them influence over Edge, based on Chrome, and to some extent Safari, via the WebKit heritage), they can easily effect changes that make it impossible or extremely difficult to block ads.
Adblocking also works pretty well on Safari, even better since iOS 15 added extensions.
It’s pretty easy to install apps like 1Blocker and NextDNS on an iPhone and eliminate nearly all ads. More importantly, since April the platform automatically prevents tracking [0] “across other companies’ apps and websites” without explicit opt-in, and Safari has already been blocking trackers for a couple of years (the above link to the Seattle Times blocked googletagmanager.com, google.com, chartbeat.com, and amazon-adsystem.com with default settings and no additional purchases).
On top of this the many iOS users who already pay for iCloud+ to get access to enough storage now have an onion router service available in beta that hides IP addresses from websites. I’ve been using it for a couple of weeks and had no issues. [1]
I agree that it’s an arms race, and as Facebook and Google see ad revenue drop from users who have ad blocking enabled, they’ll grind the remaining ones even harder.
Good to know that Apple has been investing in this recently. They also have the market share to counter Google's control of the web through Chrome and its derivatives, and to keep web developers in check.
Sadly, while Apple is a reasonable choice in the U.S. market, it is not an affordable choice in most of the world. This effectively means only the rich get protection from ads.
I'm sure people will figure out a solution. Maybe some machine learning thing that identifies brands and just deletes them.
We already have extensions such as SponsorBlock.
> Google is removing web APIs and actively making its browser poorer
Yeah, that sucks. I've already switched to Firefox and I also keep Brave around. Google made it easy for me when it killed sync for custom Chromium builds.
> controlling the web standards that is making it increasingly technically difficult to block ads
What, really? Do you have examples of this? What web standards have they proposed that would make blocking ads harder?
> only Firefox supports an ad blocker at all and Mozilla generates 95% of its revenue from Google
I agree, that's a massive conflict of interest. I think Mozilla should work to solve that.
> We need better regulations
Agreed, but I still believe that technology is a potent and subversive weapon. The mere existence of ad blockers has forced these tech giants to change the way they operate. If we keep up the pressure, either we'll win and drive them out of the internet or they'll become ever more tyrannical in their attempts to maintain control, drawing more attention to themselves and hopefully inviting public condemnation and regulation.
Standards control has never been direct, going back to the JScript days: the dominant players do what they please, web developers have to follow their tools by the nature of that dominance, and standards get established de facto.
Specifically, moves like the Manifest V3 changes and the removal of blocking chrome.webRequest are aimed squarely at ad blocking. Forcing DRM into HTML5 is a step in this direction as well, and can become dangerous in conjunction with AV1 improvements.
For example, if AV1 encoding becomes cheaper than H.264 encoding is today, it may become feasible to dynamically mux content and ad streams together without a separate network request. Coupled with DRM in HTML5, blocking would be impossible for any plugin, as the streams would never be exposed, for "piracy reasons". Even HDMI supports DRM; the only analog/open/accessible format left will be the light stream to your eyes.
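Server-side ad insertion in streaming already works roughly this way: ad segments are stitched into the same HLS playlist and served from the same origin as the content, leaving no separate request to block. A toy sketch of what a stitched playlist looks like (hypothetical URLs and durations, not any real service's stitcher):

```python
def stitch_playlist(content_segments, ad_segments, ad_after=2):
    """Build an HLS media playlist with ad segments spliced into the
    content, all served from one origin (sketch of server-side ad insertion)."""
    lines = ["#EXTM3U", "#EXT-X-VERSION:3", "#EXT-X-TARGETDURATION:6"]
    for i, seg in enumerate(content_segments):
        if i == ad_after:
            # A discontinuity tag marks the splice point; the ad URLs look
            # just like content URLs, so URL-based blocking has nothing to match.
            lines.append("#EXT-X-DISCONTINUITY")
            for ad in ad_segments:
                lines += ["#EXTINF:6.0,", ad]
            lines.append("#EXT-X-DISCONTINUITY")
        lines += ["#EXTINF:6.0,", seg]
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)

playlist = stitch_playlist(
    ["/media/seg1.ts", "/media/seg2.ts", "/media/seg3.ts"],
    ["/media/seg2a.ts"],  # the ad: same origin, same naming scheme as content
)
```

Because the ad URL is indistinguishable from a content URL, neither DNS-level nor request-level blocking has anything to key on; only analysis of the decoded stream could catch it, which DRM would then foreclose.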
This would make ad blockers straight-up illegal under the DMCA, since they could not work without circumventing these protections. I already oppose copyright and DRM for many reasons; I never expected advertising to become one of them.
He has majority control of the company. There is no balancing input. As long as Facebook is around with Zuckerberg at the helm, he is by definition the root of the problem.
Discord doesn't appear to be profitable yet. Or, at least they don't appear to have ever said they were, and have taken about $500m in funding w/ about $220m in revenue since 2016.
They seem to be on a solid path, but I think it's too soon to hold them up as an archetype of a social networking service that can be massively successful on a freemium/subscription revenue model.
I also can't think of any company close to the size of facebook that is free for "users" rather than charging for their software/services. That doesn't mean it's impossible, but we're looking for a black swan event to change that, and it's unclear if Discord is going to have the hockey stick growth to make that happen, or if its recent increase in revenue is even sustainable as Covid becomes less disruptive.
If Discord ever did need to become ad-supported (not that I ever want it to!!), it could actually strike an interesting middle ground.
Facebook's problem is not that it controls which ads a person sees based on their content and communities, but rather quite the reverse: it controls what content and communities a person sees in order to increase their engagement with ads!
Discord, if it were to maintain a strong commitment to a "one way flow" of information towards ad targeting (without leaking that information to advertisers, of course, which is easier said than done) and not towards content promotion, could find a sweet spot where it is not possible for them to ever enter the business of being incentivized to promote misinformation - except insofar as paid ads are the misinformation, which is still a problem but at least a more traceable and manageable one. People would still discover and choose their communities organically, and the mods and participants of communities would control their outbound edges of the discovery graph.
Of course, not being able to optimize the bulk of a user experience for engagement is somewhat anathema to both the data scientist and product management instincts in me... but I honestly would much rather work on any B2B recommender system, or even an ad system like the aforementioned one, than blindly write code that has now been proven to do harm.
Then any communications medium is a middleman — postal mail, telephone, even telegraph. Maybe the only exception is speaking face-to-face. There’s no middle-man there.
Sorry, but nothing she shared is incriminating. It’s just the internal challenges Facebook has to deal with given their scale and how much content is shared every second. She presents a false choice: that Facebook could somehow make it 100% safe but simply chooses not to. It doesn’t matter who regulates it or who is in charge; the technical problem of figuring out what is “dangerous” content and what isn’t remains impossible. Facebook has far more internal discussion and open debate about how to solve this problem than any other company. They are the best at this, and they are still failing in many ways. But that doesn’t mean they aren’t trying or that they haven’t made progress.
Facebook doesn't have rights. People have rights. Like the right to free association, such as to associate together and form a corporation, like Facebook.
The best way to fix Facebook is to shut it down. I can't see how this company benefits anyone in any way except their owners and chums.
The internet was a much better place before Facebook.
Facebook are playing politics, the fact checkers are a facile attempt to hide their true agenda.
My question is, who actually uses facebook?
Not me or any of my family or friends, I deleted my account years ago and just have a fake profile for a reason I cannot remember.
My teenage children are completely uninterested in Facebook.
For me facebook had a bad energy and now there is no reason for me to go back there
What I'm thankful for out of this is a little more public awareness of the engagement engines and incentives that motivate ad revenue based content platforms, driving people towards more extreme content as they'll stay longer on it.
Facebook's policy communications director responded to the piece thus:
> "Every day our teams have to balance protecting the rights of billions of people to express themselves openly with the need to keep our platform a safe and positive space. We continue to make significant improvements to tackle the spread of misinformation and harmful content. To suggest we encourage bad content and do nothing is just not true.".
Have they visited their own platform? The callout here is that it is in fact not a positive place, and that they're directly incentivized to put forward high-engagement content, much of which has negative consequences. The move to reframe the criticism as being about "bad" content, and the claim that they "do nothing," is reflective of their culture: any argument that works gets used, regardless of moral reflection, just like any content that earns ad revenue gets promoted, regardless of moral reflection.
I think it would help the discourse and thinking if we sidestep the question of "who decides what is negative/bad content?". I also think the idea of finding "better incentives" will immediately fail, because what sort of incentives could possibly be more alluring to a publicly traded company than massive profit margins? For that reason, I think we need some simple regulations, not around content (for which regulations already exist, and there are already incentives to keep violent content off), but around the amplification of content by engagement metrics.
At a very minimum, disallowing the amplification of content simply because it's being engaged with, and requiring actual content neutrality, would be a nice step. Twitter's "Top Tweets" setting and "Explore" tab are good examples of this kind of amplification, as are basically the entire feeds of LinkedIn and Facebook, Instagram's "Suggested posts", etc.
For Facebook - I would ask them to make tools that let users take control of their own feeds. They absolutely have the ability to allow people to mute political posts if they wished. They could have settings that filter out image posts, links, or reposts, or even hide the number of likes, reactions, and comments on posts. If Facebook started acting as if engagement isn’t the be-all and end-all, then maybe I’d start to believe what they say about making their platform a positive space.
For me the promise of Facebook was being able to keep up with what friends and family are doing. The platform is so far away from that I have long since left.
Yeah, I mean, just taking a look at the product I realize it was designed for this kind of behavior. The feed looks like a spittoon that everybody disrespects. The comments are continuously recycled in a memory drain, in order to provoke new outraged reactions. The page spams you to connect, react, and count the reactions. The focus is not on reading, but on acting like an emotional toddler.
The real takeaway: Parents protect your children from social media. Talk to them. Give them rules to keep them safe. Say "No." Don't wait for Zuckerberg to do it. Don't wait for Congress to do it. Get yourself away from it, and get your children off it. Find some other way to share pictures with grandma and grandpa.
We need regulation, yes, but this is the same as people smoking themselves into an early grave, but on a societal level.
> Parents protect your children from social media. Talk to them. Give them rules to keep them safe.
The current generation of parents is itself addicted to and influenced by social media (not just Facebook), and struggles to maintain its own online privacy and safety. So they need to do it for themselves first, and then for their kids.
Perhaps the educational system should address this as well: computer classes shouldn't be limited to basic machine-handling skills, software, and coding (at least that's what I had during my school years), but should also cover the internet and social media, along with their pros and cons. It's not enough to do one random lesson on "there's porn and pedos on the internet, be careful" and call the job done, relying on self-taught experience at home during free time.
>The real takeaway: Parents protect your children from social media.
This also flows the other direction too. Talk to your parents about how the algorithms on these sites are designed to manipulate and how that shapes everything you see on the platform.
My bet would be that Zuckerberg would be more than happy to endorse your "real takeaway", because unlike some of his detractors he can actually see beyond the end of his nose.
Yes, in the short term you can and should take steps to curtail (possibly to zero) your children's use of social media.
As a longer term strategy this is bound to fail, for exactly the same reason that "if you don't like being surveilled around the clock, just don't buy a smartphone" is by now completely impractical advice.
Optional technologies become creepingly obligatory.
With Facebook, the cost of opting out for most Western adults is likely already much greater than what non-smokers ever faced (being seen as less cool?). Chances are already pretty good that something actually important in your life (like your work) will require the use of WhatsApp, for example.
100% this. From what I've heard, Steve Jobs and Bill Gates completely disallowed their kids from using smartphones until they were much older and more independent.
When there is no skin in the game, one must ask: why do they do this?
Now that you say it, it seems obvious. Just compare to anything else with kids: "it's all good, nothing to worry about"? Hardly. There is no free lunch.
I think the point is that even if it isn't facebook, talk to your kids. Just like you'd talk to them about other kids at school and the interactions between them: Social media is a part of their social world, so talk to them no matter if it is facebook or not. Every one of them has the potential to do something akin to what facebook has done, albeit with their own unique method of delivery.
This is the way to do it. When they're very young, it's by saying "No." When they're a little older, it's by explaining why. By the time they're a teenager, you have to trust them and make sure they know you're there for them.
You also have to model the behavior for them in the first place.
I'm also a parent. You begin as a shield and end as a foundation in your child's life. It is a process.
There are a lot of contradictory ideas out there. Sometimes it is important to let children make mistakes, and it is not easy as a parent to sometimes do nothing.
Mad respect to the whistleblower for intending to fix the company instead of destroying it. Facebook should step up and fix its flaws.
The more likely scenario rather than this faux-altruism is that she just wants to get whistleblower status to collect 30% of the settlement Facebook will make with the government.
FB is what it is because 45,000 employees work hard daily to make it that way.
if they weren't satisfied with how it works or what it does, they'd fix it or they'd leave. every day they continue to work there without changing direction, they vote with their time that FB is doing what it ought to be doing.
all of the responsibility for FB's effects on the world belongs to them.
1. Break up Facebook into separate companies to reduce its power. Antitrust laws may have to change.
2. Ban communication technologies whose behavior is a black box to the user. It should always be apparent to the user why they are seeing what they see, and in what order.
These restrictions should weaken this cancer of a business known as Facebook.
> 2. Ban communication technologies whose underlying behavior is a black box to the user. It should always be apparent to the user why they are seeing what they see, and in what order.
I don't understand elliptic curve cryptography, but I still use ECC TLS.
I think it's not too hard to define. On this forum, I know when I post a comment, it will appear where I expect it. You will see it if you look. There's no AI in the middle of us deciding whether my reply is worthy, or prioritizing extremism to optimize for engagement.
When you use email, you know what's going to happen. You can predict the behavior (yes, there are spam filters).
Facebook (and other social media) behave such that even if we knew the entire history of user behavior, we still wouldn't be able to predict what happens next. It's this sort of black-box behavior that isn't good.
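The contrast the parent draws can be made concrete: a chronological feed is a pure function of visible data (timestamps), while an engagement-ranked feed depends on an opaque learned score. A toy sketch, with field names invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int       # seconds since epoch, visible to everyone
    engagement: float    # opaque model score in a ranked feed

def chronological(posts):
    """Transparent: anyone can predict the order from the timestamps alone."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_ranked(posts):
    """Opaque: the order depends on a score users can't inspect or predict."""
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

posts = [Post("alice", 100, 0.2), Post("bob", 200, 0.9), Post("carol", 300, 0.1)]
```

With the first ordering, any user can verify why a post appears where it does; with the second, the `engagement` score is the black box.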
If you have an algorithm that is strictly optimizing to reduce inflammatory text, then I suppose that's ok. But still, it might be better to have user-moderation instead. I'd rather people just be banned by moderators.
> Facebook has realized that if they change the algorithm to be safer, people will spend less time on the site, they'll click on less ads, they'll make less money.
This will have huge ramifications across all of big tech. They will get regulated now based on this. This is the excuse that government needed and they won’t lose this opportunity.
To be honest, I think it's great that light is being shone on these internal documents, but I have yet to hear any big revelations in these leaks, so I don't really understand why she would be eligible for a payout. I mean, they mostly say that Facebook is bad for societal discourse and bad for personal well-being. Color me shocked.
I haven't seen any evidence, for example, that Facebook tried to cover anything up. They just paid some people to do a bunch of research, which they then did nothing about because it would have had a negative impact on their bottom line. Literally every large company does this.
Don't get me wrong, I think Facebook is a horrible company and could very well be the downfall of society, but I just don't think that was a secret, and nothing in these reports makes me think any differently.
Under what whistleblower statute? There are various statutes that provide different percentages, but there are strict rules for qualification.
I haven't seen any indication that she is a qualifying whistleblower under any statute (though admittedly I am not deeply following these revelations).
update: sounds like these disclosures were made to the SEC under the Dodd-Frank whistleblower protections, which could yield 10-30 percent of the amount collected by the SEC.
The theory of the case is that FB is misleading investors by not disclosing this material information.
Well that certainly changes our perception of her motivations. She claims to care about making social media better but probably just wants an easy payout.
It's not just politics. Like someone said on some podcast recently, you Americans get top tier content moderation, while the rest of the world is a "free speech" heaven.
A few years ago a young man was brutally beaten to death on a FB livestream in my country. I reported the thing on the first day, but it took at least a week for FB to take it down. You don't need ML for that.
I'm deeply convinced that FB is directly responsible for the massive number of anti-vaxxers in my country.
I'm glad she shared what she shared. Facebook is a cancer on society IMO.
In many cases, although perhaps not this one, whistleblowers have had an extremely active and political Twitter account when hired, or are literally being hired _because_ of their wokeness.
Do silicon valley companies not screen for intensity of political views during job interviews in order to prevent issues like this? They certainly don't seem to. Political views and ideological values are subjective, so in a large enough organization you can't avoid some people getting very upset. It would seem like it would be better for business to just not let people join the organization if they're relatively likely to one day go on a moral crusade.
I think it's only a matter of time before people who are militantly political or ideological -- on either side of the spectrum -- are going to be seen as toxic by any company big enough to care.
It might be a good idea for companies to be up front about their significant moral choices.
Then regardless of disgruntled employees, they can defend and discuss their behavior and any mistakes with a majority of consistent internal and external communications.
And maybe avoid doing most really bad things.
This consistency of purpose and avoidance of causing surreptitious harm is called “integrity”.
As Upton Sinclair said, "It is difficult to get a man to understand something when his salary depends upon his not understanding it." When over a trillion dollars of value is on the line, history tells us that people will generally act poorly and come up with any plausible justification for why they're doing good.
This isn't to excuse Mark Zuckerberg, Sheryl Sandberg, or anybody else in a decision-making position. The first time your business model is widely seen to have played a determining role in a genocide [1], the ethical decision is unambiguous--it's a good time to step down and take your billions to lobby against that business model. There's no moral excuse for doing otherwise.
Just because you don’t see it, doesn’t mean it doesn’t happen.
I have a relative who processes digital intakes for a CA ACLU chapter, and they get a huge number of interview-related complaints, which get referred directly to legal assistance who are interested in such cases.
My opinion is we likely don’t hear about it because each case is small, and companies have no incentive to publicize settlements.
I’m quite sure that a lot of interview complaints get adjudicated. My point is that few, if any, of those are complaints related to political discrimination.
Nobody is asking you to support the ACLU there, the poster just mentioned that a relative working for the ACLU deals with such complaints on a regular basis. Your opinion of the ACLU doesn't make that untrue.
Is it me or have things shifted from “prevent what you say online from being traced back to you as much as possible as it could be used against you” to “what you say online is now part of your resume and ‘leaving it blank’ is a ‘bad look’”?
I’m … not young and follow the “blank slate” approach. Was just wondering if the world had changed while I wasn’t paying attention.
I don’t think anything really changed there — those of us who think their opinions are worth other people hearing about (and I’m generally in that camp) have been around since time immemorial, and I doubt we’re growing as a percentage.
It’s just that social media has provided us a big new platform, much more visible than Usenet or whatever. I think generally companies don’t care what you write online as long as it won’t come back to bite the company.
There is a difference between being heard and having your real name and a photo of your face attached to what you said like in some cases on Twitter.
I was wondering if social media has become so ingrained into the lives of the younger generations (and maybe even the elderly) that it’s effectively as much a part of their public identity as their choice of clothes and academic/career accomplishments.
No — I know lots of young people who hate all public forms of social media, and worry constantly about having any sort of visible opinions — and these are people with socially-liberal political views. It really is just a personal preference, and outside of social media manager positions, employers don’t care.
>Do silicon valley companies not screen for intensity of political views during job interviews in order to prevent issues like this? [...] It would seem like it would be better for business to just not let people join the organization if they're relatively likely to one day go on a moral crusade.
That sounds likely to cause a lot of employee blowback, and negative news articles somewhat similar to this one.
I can see the headline now: "[Tech company] wants employees to be sheep" "Afraid of fallout from whistleblowers who report all the bad things it does, [tech company] has put measures in place to prevent job candidates with any morals from being hired."
Who would be the ones doing this? Currently regular employees are the ones conducting interviews. Would there be required interview questions along these lines?
In fact, there was just recently a thread about hiring (a YC company filed for IPO) where the comments were filled with less-aware interviewers arguing about a je ne sais quoi in candidates. This is WHY there's a rubric, and why the questions are silly whiteboard algorithm questions. The alternative is shit like "so how pregnant are you," which is illegal, and judging candidates on "gut," which it turns out hinders your company.
Most companies that work at population scale end up being toxic, because there is a power and information asymmetry that your average corporate robot with factory defaults will find hard to resist exploiting.
It's what they are programmed to do.
When the work environment is exploitative in nature, triggering people within the org happens all the time.
The difference today is that people who get triggered, woke or not, can take advantage of network effects like never before to do serious damage. Companies can't do shit about it.
Look at the people she connected with Andrew Bakaj, Jeff Horwitz, 60 minutes etc.
It's really that simple. I also think Facebook is a net negative for humanity, but if I was willing to subject myself to leetcoding interviews and passed theirs, I'd work there in a heartbeat because they pay senior engineers with ~6+ years of experience about $400k as a baseline. Quite a bit more if you're particularly good, have competing offers and negotiate well. I have principles, but frankly I care about securing my and my family's future more than standing up for them. The same is probably true for most.
It means either (1) they don't give a shit about the harm their employer's business practices cause to society, or (2) they do care, but the large amount of money they get paid outweighs their moral compass.
Anyone with the skills and expertise to get a job at Facebook has many, many other choices of employers, many of which are at best morally neutral or less damaging than facebook.
There's no amount of money I'd accept to work at FB.
> It would seem like it would be better for business to just not let people join the organization if they're relatively likely to one day go on a moral crusade
Oh man, do I hope my entrenched competitors do this. Give me your crazy ones.
Facebook is actually quite open to internal debate on these issues all the time. They want open dialogue from many diverse sets of backgrounds and opinions. Which is why just screen capturing people’s dissent internally is not some gotcha moment where FB is somehow squashing the existence of problems or people disagreeing with the current direction. It’s something that happens literally every day and they work out the best way forward or whether someone independent needs to make a decision for them.
Well, FB hired Nick Clegg, whose name is a shorthand in British politics for "I will betray all my promises and principles for a paycheck". So maybe they just assume that anyone who goes to work for them is in it for the money and the "wokeness" is merely performative.
> Do silicon valley companies not screen for intensity of political views during job interviews in order to prevent issues like this?
It would seem that they actively select for it.
> It would seem like it would be better for business to just not let people join the organization if they're relatively likely to one day go on a moral crusade.
To be fair: Blue Team straight up hit the panic button when Trump was elected in 2016.
From a non-US perspective, Trump was... grating? Talked a big game, but clearly didn't have the leadership chops. His "Law and Order" tweets during all the rioting were absolutely pathetic. Blue Team, however, has clearly gone quite insane.
At this point, I'm sort of politically homeless, which puts me in good company. Republicans talk hard about winding the clock back but don't actually seem to do much of anything, Libertarians pretend there is no clock, and Democrats evangelize that we'll finally escape the oppression of time itself once we've burned the clock to cinders.
During the riots, Trump kept tweeting out "LAW AND ORDER!", like a crippled old man furious at the neighborhood kids for trashing his roses. He's angry, but doesn't have the means to do anything about it.
This made Trump look weak and impotent. All he did was hide in his bunker and shake his tiny fist on Twitter while chaos raged.
The rioters placed Trump in a dilemma action. Smart. If Trump does nothing, he looks weak. If he responds with sufficient force to quell the riots, he looks like a tyrant.
Off the top of my head, a better play would have been something like: "As President, I am bound by the Constitution to respect the sovereignty of each State. To any state Governor that requests assistance to restore peace, I will put all necessary Federal resources under your direct command."
Repeat variants of that, over and over. An offer of assistance demonstrating clear respect for the Constitution.
This very diplomatically throws the ball to mostly blue-state governors. Now they either have to be seen as doing nothing, or as collaborating with Trump. Moreover, if they do accept assistance and pull a Kent State, it's on their heads.
And I'm some clown on the internet. I imagine somebody that really does statecraft could do better.
Trump is a showman. Good at reality TV. Clearly didn't understand how to lead a country.
LOL, the chaos was the plan. It’s why Junior and the others were celebrating when shit was hitting the fan. The whole thing was an attempt to cause enough disorder that there would be cover for throwing the votes back to R-lead state legislatures - look at the Eastman documents.
The whole thing failed cause Pence wouldn’t play ball, the Trump rabble started hunting for him (and Congresspeople on both sides) in the Capitol building, and there was no backup plan.
The statement about how Trump was “like a crippled old man furious at his neighborhood kids” is hilariously off the mark. He was furious that Pence didn’t back him up.
> The statement about how Trump was “like a crippled old man furious at his neighborhood kids” is hilariously off the mark. He was furious that Pence didn’t back him up.
Maybe? I didn't get the sense that Trump had a game plan at all. Looked like pure reaction the entire time.
The optics from the outside were "angry old man clutching his cane, shouting at the neighborhood troublemakers".
The biggest conspiracy theory and insurrection attempt was the fake Russia conspiracy, and the conspiracy to affect the election outcome conveniently described by Time magazine.
I'm completely uninterested in a ragtag group of curly haired shirtless weirdos who stormed congress. I am concerned about large corporations, rich interest groups and large multi media conglomerates that have actual power to affect political dialogue without any consequence.
Why does every single article about Facebook manipulating data talk about the Jan 6 riot instead of the years long onslaught of fake news?
Probably because the Jan 6th riot is one of the most recent and tangible outcomes of that years-long onslaught that set the stage for this sort of thing to happen.
Do you not care about people attacking your biggest symbol of power, within reach of many of your representatives, driven by years of hate and misinformation? You should care.
Russian interference is well documented by American intelligence agencies and private individuals.
No I think people should in fact focus their rioting towards politicians. That's much better than harassing or committing violence against your fellow citizens because of something your government did.
For example, if your government is oppressing you and your response is to burn your fellow citizens' car dealership or vandalize their shop, that is less noble than going after your government officials.
I disagree with the Jan 6 rioters, but they correctly targeted their efforts. On the other hand, BLM Chicago called for looting private businesses, many of which had even come out in support of their cause.
The BLM riots were driven by just as much misinformation. Studies show that progressives are far more likely to overestimate the number of minorities killed by police; a good portion of them believed tens of thousands were being killed per year, which is way off. Conservatives were the most likely to estimate accurately, although even they skewed towards overestimating.
And the Russia-collusion narrative is a hoax. While Russia certainly has an interest in interfering with us, the claim was that Trump's campaign directly aided them. That is false, provably false. Stop moving the goalposts. You yourself are spreading misinformation.
Edit: and for the record, the federal government of the USA derives legitimacy from the people not buildings
Perhaps I am of the minority opinion, but I damn sure was never worried about a bunch of selfie-taking unarmed rednecks destroying the United States of America. My faith in this country goes a little deeper than that.
People are much less concerned about the unarmed rednecks taking selfies than they are about the armed folks searching the grounds for certain representatives, the people planting pipe bombs around DC, and anti-government militias made up heavily of former military and law enforcement. Oh yeah, said militia members stashed guns at a nearby hotel (according to their own testimony and video footage).
This could’ve gotten way uglier and who knows what the ramifications would be. Let’s not fixate on the selfies.
Well, I am going to fixate on the selfies because there were a bunch of dimwits taking selfies.
“Armed” folks armed with what? Yeah, I’m pretty sure the most powerful nation ever to occupy the earth wasn’t going to fall to a bunch of pitchfork wielding social media posting hayseeds demanding to have Frankenstein’s monster handed over to them.
Hundreds of dipshits taking goofy selfies, and you post a fact check that talks about 3 people charged with weapons outside the Capitol grounds and a single person on the Capitol grounds who had a holstered weapon. Also someone using a flagpole to gain entry.
Let’s agree that at least a thousand people breached the Capitol. So at least 99.6% of the crowd were angry pitchfork-waving villagers and 0.4% were some dorky Van Helsing wannabes.
In what world does that constitute a threat to topple the most powerful country ever to exist on this earth? It doesn’t.
A world that’s governed by complex legal codes, procedures, and norms, with a variety of loopholes and undefined states. One that’s not safeguarded by rah-rah "we are the most powerful country ever, therefore our bureaucracy is immune to bureaucratic failure!"
We simply don’t really know what would’ve happened if e.g. the ballots were destroyed. They would’ve had to get duplicate ballots from the states. Would all the state election boards have stuck with their previous results now seeing an apparent opportunity to install their leader and feeling galvanized by the “fraud” of vote-by-mail?
It’s interesting how many people will complain about clear bureaucratic failure in basic day-to-day tasks like the DMV, and then imagine that the most complex bureaucracy in history is held together by patriot power and not, you know, bureaucracy, with all of its strengths and vulnerabilities.
The point is that this event is characterized as an insurrection and a threat to the United States. It wasn’t.
If you want to shift the conversation to election integrity, that’s fine…but that was actually the whole point of the demonstrations. You have one group that felt that they had legitimate concerns about the election’s integrity. The other side downplayed those concerns, does not want to improve election integrity, and actually wants to erode it further.
I am pretty sure that if you want the bulk of America to feel that elections have integrity, you should make it a priority and not downplay concerns just because your candidate won. I apply that standard to both parties, since both are guilty over the last few election cycles.
To clarify, when I refer to ballots being destroyed I’m referring to electoral college ballots being destroyed by rioters during the Jan 6 insurrection. It wasn’t a coincidence that they were called to DC on that particular day when a normally extremely boring and mundane procedure is carried out. It has no symbolic value, it has no cultural value, it has no value except ensuring the legal, procedural basis for transition of power. That’s why it was attacked.
Yes, if that succeeded, it would’ve harmed the US.
In any case, your suggestion that the democrats “actually want to erode [election integrity] further” reveals a lot.
Does it? What exactly do you think it reveals? I’m not a Republican, and haven’t voted for a Republican for any office for decades. So what does that tell you?
It might reveal that I am not happy with the political party that once upon a time was liberal and good but is now just filled with power-hungry, intolerant authoritarians. At least the Republicans don’t pretend that they are good people.
You're talking about a small group of Oath Keepers in an otherwise peaceful protest. But I'm curious, why do you think Stewart Rhodes was never arrested? Would it make more sense given the context that the boogaloos behind the Whitmer kidnapping plot turned out to be FBI agents, as did the leaders of other right-wing groups including the Proud Boys and Atomwaffen Division? Who does the right-wing terrorism scare serve?
Right-wing terrorism scare? We all watched angry mobs attempt to disrupt the peaceful transition of power on live TV. These are the same people who proudly fly the flag of literal traitors to our country and call the press the “enemy of the free state.”
If all it takes for your movement to attempt to kidnap elected officials or disrupt transfer of power is a few undercover agents nudging along, your movement is a huge problem.
> If all it takes for your movement to attempt to kidnap elected officials or disrupt transfer of power is a few undercover agents nudging along, your movement is a huge problem.
If it takes more agents than perpetrators to pull off your sting, and if the agents conceptualized, planned, financed, managed the operation, and participated in it…
Well, let’s just say I am going to be the wrong person to sit on your jury if you want a conviction.
They aren't strictly synonymous but, as you should know, in this context an "informant" is a kind of agent, both on paper and in practice. They aren't informants in a conventional understanding of the term, just feeding information back to the government. They receive training and are employed to proactively infiltrate and participate in criminal organizations much as any investigator would (or, in these cases, steer non-criminal organizations into criminality).
We all watched protesters demonstrate in a public building, and yes, disrupt the transition of power. Peaceful disruption of the government is the quintessential act of protest.
>call the press the “enemy of the free state.”
Criticism of media is terrorism?
>kidnap elected officials
That group was comprised almost entirely of undercover agents along with a couple of mentally ill people. The FBI orchestrated the entire thing.
In a democracy, the results of a free and fair election are actually not a valid target of protest if your form of protest entails an attempt to overturn those results.
Feel free to disrupt traffic and make a scene elsewhere.
In a democracy, people should be allowed to protest if they believe an election is not free and fair, no? If not, why? This is a strange line in the sand that to my knowledge has never existed before in American history. People have protested and contested elections numerous times.
> Feel free to disrupt traffic and make a scene elsewhere.
Yes, disrupt the worker bees, not the queen.
Edit: it does seem strange that you wouldn't include some sort of disclaimer in this convo about your employer's stake in "counter-terrorism".
I wonder if your opinion would be different if those unarmed rednecks were able to get to Speaker Pelosi, Leader Schumer, Minority Leader McConnell or VP Pence?
Those "unarmed rednecks" were able to do something the various armies of the Confederacy never could: get inside the Capitol.
People are not terrified enough, because this group of people saw that they could do it, and I have a strong feeling they will try again.
I watched the live stream. The capitol police opened the barriers in front of the building to let them pass. They seemed friendly towards the protestors.
The Capitol is (normally) a public building. There is a sitting senator who has been arrested at a protest in the building. Likewise, it was not long ago that the Capitol was full of protesters accosting senators in the lead-up to the Supreme Court pick.
It's not meant to be a secure building that takes an army to get into.
Susan Rosenberg was part of a group of former Weather Underground members that bombed the U.S. Capitol in 1983. She was pardoned by Clinton and became involved with the company bankrolling the largest BLM organization in the world.
In what way is a group of proles walking around, taking selfies, and breaking some glass comparable?
What's Undetermined
In the absence of a single, universally-agreed definition of "terrorism," it is a matter of subjective determination as to whether the actions for which Rosenberg was convicted and imprisoned — possession of weapons and hundreds of pounds of explosives — should be described as acts of "domestic terrorism."
https://www.snopes.com/fact-check/blm-terrorist-rosenberg/
Interesting fact checking.
They weren't. You, I, and everyone else knows that hyperbolic, sometimes violent language is par for the course on both sides of the aisle. It's actually protected speech.
Whataboutism increasingly seems to be used as a thought-terminating buzzword by people who want double standards.
Comparing this to the Civil War is ridiculous. They were literally invited in by the Capitol police because it is a public building. The South is not at war with the North. The only shots that were fired were from the Capitol police. No one brandished a firearm in that Capitol except the police.
“But what if…” is a pointless exercise, because the past has already proven that those things didn’t happen. It is just a mechanism for a bunch of people who desire authoritarianism in this country to justify making a free people less free.
Marie Antoinette was similarly not worried about the visceral expressions of very real problems across the nation of France. It's not a problem until it's a problem.
No, being overthrown by the populace was a mistake from the perspective of the French monarchy. You might want to stop looking at the world as black and white at some point.
Some people forcibly entered the Capitol to protest during Justice Kavanaugh’s nomination. A few people got arrested. Life went on.
Personally, I think the mortal fear politically active leftists have of any sign of opposition, no matter how trivial, speaks volumes about how they feel about their own position.
I think Jan 6 was a problem in its origin, not in its means or ends. As an American, I truly do not care about people attacking Congress, or the White House, or any politician, because they are not symbols of my power any more than my local carwash or plumber is a symbol of my power. They are public servants, worthy of no more respect than the pizza delivery person (a lot less, actually, since the pizza delivery person is working to make a living).
If they were attempting to overthrow actual despots, about whom everyone agreed something Must Be Done, the popular reaction would be much different. I think the current reaction is on the money: they are violent, idiotic hooligans.
But the ability to storm the capitol is a necessity.
I thought that foreign manipulation of the 2016 election was in fact well documented. If you don’t believe that the Russian government interfered with the election, you must at least believe that a private company located in the UK did so. https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Ana...
Unfortunately private companies and foreign governments from Tel Aviv to Dubai spent countless millions to influence our elections and buy our lawmakers.
Moving the goalposts. The claim was that the Trump campaign directly participated in Russian efforts to interfere with the election, including direct collusion. This is a false claim, despite the media spending years on it and Pulitzer Prizes being awarded.
The claims were made without evidence. The origin of the claim, the Steele dossier, is itself fake.
> Now the glow has faded — from both the dossier and its promoters. Russia, as Mr. Steele asserted, did try to influence the 2016 election. But many of the dossier’s most explosive claims — like a salacious “pee” tape featuring Mr. Trump or a supposed meeting in Prague between Michael Cohen, Mr. Trump’s former attorney, and Russian operatives — have never materialized or have been proved false. The founders of Alfa Bank, a major Russian financial institution, are suing Fusion GPS, claiming the firm libeled them. (Fusion has denied the claims.) Plans for a film based on Mr. Steele’s adventures appear dead.
>
> Beneath the dossier’s journey from media obsession to slush pile lies a broader and more troubling story. Today, private spying has boomed into a renegade, billion-dollar industry, one that is increasingly invading our privacy, profiting from deception and manipulating the news.
It's a perfectly legitimate concern and it's a shame you're being downvoted. Anyone who is concerned about fake news and misinformation should be appalled by the media's treatment of the bogus Russian interference story.
The Committee did indeed release a bipartisan report making those assertions. Their assertions were bogus because no evidence is cited anywhere in the report. Claims of secret evidence are not evidence.
Considering the polarizing nature of the topic, it seems unlikely there would be a bipartisan agreement on the issue without classified intel to get people on both sides of the aisle behind it.
That, of course, is not proof. But we have also had reports of activity traced back to Russia from other organizations.
They could all be lying, but then that claim takes us strongly into the conspiracy theory territory at the same time that it runs afoul of the "lack of evidence" problem quite a bit more than alternative explanation.
>Considering the polarizing nature of the topic, it seems unlikely there would be a bipartisan agreement on the issue without classified intel to get people on both sides of the aisle behind it.
Congressional support for US global military empire has, unfortunately, always enjoyed complete bipartisan support.
>But we have also had reports of activity traced back to Russia from other organizations.
All sorts of organizations released conclusions, but none of these conclusions came with any evidence that could be scrutinized. Further, on the rare occasions when arms were twisted under oath, all of these conclusions turned out to be completely baseless. One of the most prominent debunked claims was CrowdStrike's claim that the DNC servers were hacked by Russians. In fact, when testifying before the House Intelligence Committee in December 2017, CrowdStrike President Shawn Henry admitted that not only was there no evidence that Russians hacked the DNC, they had no evidence that it was hacked at all, rather than leaked or exfiltrated in some other manner.

I was never a Trump supporter, didn't vote for him, don't like him at all, but I hate false propaganda even more. Whether it was the CIA and the DOJ doctoring documents in order to spy on US citizens, or DNC lawyers feeding false information to the FBI (as detailed in the recent Durham indictment), all of the publicly available information (which after 5 years is voluminous) indicating a "Russian attack on our election" has been based on lies and baseless assertions. Given this, and the documented record of lies and obfuscations peddled by our "intelligence agencies" for decades, it's clear that the conspiracy theory here is the one being pushed by the Washington blob, not by the people who demand evidence for these wild and oft-debunked assertions.
I would be interested in the portion of Shawn Henry's testimony that you are referring to, because CrowdStrike still maintains that their response to the DNC hack observed a Russian presence in examining the servers, and Henry's response to Congress on the topic was:
"We said that we had a high degree of confidence it was the Russian Government. And our analysts that looked at it and that had looked at these types of attacks before, many different types of attacks similar to this in different environments, certain tools that were used, certain methods by which they were moving in the environment, and looking at the types of data that was being targeted, that it was consistent with a nation-state adversary and associated with Russian intelligence."
Attacks that coincide with patterns seen from Russian intelligence are evidence. Evidence isn't proof. If your standard of evidence is "a Russian defector with server logs and video recordings of FSB officials confirming the acts"… well, then you're using the word "evidence" wrong.
It doesn't have to be illegal or fraudulent to reveal bad things about the company that the company doesn't allow its employees to reveal. Looking up various definitions of the term, the activity reported on doesn't necessarily have to be illegal. Though I acknowledge that in many cases the activity involved is illegal.
> Scott Pelley: What is the legal theory behind going to the SEC? What laws are you alleging have been broken?
> John Tye: As a publicly-traded company, Facebook is required to not lie to its investors or even withhold material information. So, the SEC regularly brings enforcement actions, alleging that companies like Facebook and others are making material misstatements and omissions that affect investors adversely.
How would any of this information affect investors adversely? What she revealed is that they actually are doing what is in the best interest of the shareholders. The users are not the investors. She just revealed what we all already know and what FB has known all along: that high engagement doesn't mean the content is worth sharing at scale in all cases, and deciding where that line is, is often very difficult. Technology and rules need to catch up to solve that problem.
No, but what Facebook is doing is harmful, and what she is doing is embarrassing to the company and to those who work for it.
For all I know, what she did is illegal. But I'm glad she did it, it was very brave and she puts herself at a lot of risk by doing it. If Facebook goes after her, they will undoubtedly get bad press and encourage government regulation and possibly even antitrust action.
Facebook and other social media aren't inherently bad, but the incentives are certainly there for them to amplify the worst of human nature. This is most likely a problem that needs people outside of Facebook (and all these other companies) to solve. I am 100% in agreement with her that it is a solvable problem.
I did read it. The law says she is allowed to share documents with the SEC. I don't know if she is allowed to go beyond that, such as doing this interview. And I certainly don't know all the requirements for it falling under that law.
Regardless, my whole point is that what she is doing is a positive and I salute her for it.
To quote Matt Levine, “everything is securities fraud”. If FB is representing to the public that they are doing better than they actually are on fighting these issues, and the truth damages their share price, then they have committed securities fraud and will be sued by shareholders.
Are there no consumer protection laws which forbid a company from pushing a product to market that they know to be harmful to their customers? IANAL, but it seems pretty illegal to my eyes.
I think we should agree that women and girls physically harming themselves is "harmful".
> Frances Haugen: And what's super tragic is Facebook's own research says, as these young women begin to consume this-- this eating disorder content, they get more and more depressed. And it actually makes them use the app more. And so, they end up in this feedback cycle where they hate their bodies more and more. Facebook's own research says it's not just that Instagram is dangerous for teenagers, that it harms teenagers, it's that it is distinctly worse than other forms of social media.
The "who defines" question isn't really much of a gotcha for laws. Either the legislature specifies when they pass the law, or courts get to decide when a case comes before them. Sometimes a jury is also involved.
It is a huge question, because leeway in the definition, or political disagreement about the definition, makes for a law that is impossible to know you are breaking.
Vague terms should be avoided in laws. Vague terms with obvious political wiggle room should be avoided at all cost.
have they defined what "hate speech" and "misinformation" mean these days?
More often than not these terms are used by one party to silence & suppress opposing views of another.
Thankfully Zuckerberg doesn't give a damn; one would be hard-pressed to find a better CEO for a social media company who can stand up to all this whining.
These are not mutually exclusive. One of the primary reasons for laws is to stop people from infringing on the rights of others. These laws are therefore state controls that can simultaneously yield freedom and liberation.
Defamation and libel laws are a classic example. They are an infringement on an individual's right to free speech. However we as society have decided that is justified when that speech is being used to infringe on the rights of another person. Hate speech laws are basically the same thing as defamation and libel laws except they are scaled up to protect groups of people instead of just individuals.
Whether these tradeoffs of one group's rights for another group's rights are worth making is certainly up for debate. However it seems pretty obvious that hate speech laws do serve to provide liberation and freedom for people who would commonly be the recipients of hate speech.
> “Freedom Is Slavery” because, according to the Party, the man who is independent is doomed to fail. By the same token, “Slavery Is Freedom,” because the man subjected to the collective will is free from danger and want.
I can refer you to your own past discussions on this topic. After all, you frequently post these same three links whenever freedom of speech comes up on HN:
Generally speaking, you typically post some variation of "free speech is oppression"/"outlaw harmful speech", to which other posters generally reply with sources debunking the core premise of your argument, ultimately followed by you either ignoring them and doubling down and/or breaking site guidelines by throwing ad hominems. This is a very low quality of discourse for HN.
The whole concept of "oppression" in the modern context holds absolutely no merit; it's a divisive, racist view of American culture, and it's been used for political gain again and again.
Since it's published by supposedly reputable academics I think it's a perfect example of misinformation.
It’s simple really. “All X are Y” where X is an immutable characteristic and Y is something negative = hate speech. “Covid vaccines contain 5G microchips” is misinformation. Do you have any examples of what you’re alluding to?
Naw, just burn it down and start anew, that's fine too. It's really not all that special, and people should become comfortable with rebuilding things at the moment.
> Even cigarette companies eventually stopped targeting children for harmful products.
No, they didn't, really. They just have made it less overt when direct, and do the rest of it through the non-cigarette tobacco product firms they invest in. The whack-a-mole on that industry and its marketing of nicotine addiction to children is ongoing.
Not commenting on the whole FB thing, but your comparison is very very poor.
Altria (one of the largest tobacco product sellers) is the owner of the Juul company. They were literally told to stop marketing to children and then turned the ship around to push cotton-candy-flavored nicotine to minors.
Cigarette companies didn't change their morals. The laws changed.
In fact, they bought into the vape industry (at a time it could be marketed in a way that appeals to kids) after research showed that vaping was a gateway to cigarettes.
By taking a cut of the sale price of the banana as margin to cover its costs and profits, and then passing the rest down the supply chain to Some Banana Company?
Flipping your question around, why does the banana cost 20 cents and not 5 cents? Where do you think the upward pressure on the price comes from?
Patagonia was founded by Yvon Chouinard, one of the original Yosemite “Masters of Stone.” Nobody is above reproach, but he is generally regarded as a principled environmentalist.
Facebook might be a factor in the division we see today in America but don't kid yourself: we were pretty f*in' divided during the Bush years—the Supreme Court had to decide that election, and the 9/11 unity lasted about a day and a half—and the GOP stopped at nothing to go after Clinton.
This has been going on since the 1980s. Facebook may be an accelerant, but it didn't start this fire—not by a long shot.
> we were pretty f*in' divided during the Bush years, and the GOP stopped at nothing to go after Clinton.
It's quite weird, because the exact same narrative of unprecedented polarization was used under Clinton (during the bipartisan neoliberal policy consensus), Bush, Obama, and Trump (with Biden, it is at least usually acknowledged as a continuation of what happened under Trump, not something new), with most major voices seemingly forgetting the unbroken succession of preceding instances of the same narrative, pretending that whatever is currently happening arose recently, out of the blue, without warning.
I think you can justify the claim that there was a real increase in political tribalism resulting from the resolution of the long post-WWII set of political realignments that was particularly evident from about the 1994 midterms and the Contract With America through today. (With some signs as far back as the 1980s as the lines of the new alignment started to become clear and the parties really accelerated movement toward their new polarity, where previously major ideological divides weren't aligned with the major partisan divide, so that ideological factions, to succeed, had to work across party lines.) But that's not as sexy a narrative as “suddenly we woke up in this election cycle with divisiveness that just came out of nowhere”.
I think it's pretty obvious to anyone familiar with US history that the seeds of these "culture wars" started in the '60s with the hippie movement. While I wasn't old enough, I know several older liberals who talked about Reagan the way many younger liberals talk about Trump (and probably worse!), and older conservatives who talked about Carter similarly to how younger conservatives talk about Obama. Reagan was known as the Teflon President because none of the gaffes he made could ever stick to him and he remained quite popular throughout his rule.
The problem is, the internet, the web, and now social media has accelerated this divisiveness to a whole new point. As the GP says, it's an accelerant. Like a small cooking fire can turn nasty with the addition of gas, the acceleration of the culture wars thanks to the winner-take-all outcomes of today mean that everything is getting more heated _faster_.
EDIT: While I know it's faux pas, I am curious where/why the downvotes are coming (from) simply because this seems fairly non-controversial, and also because it started out with a few upvotes first.
> The US can barely afford to school its children, purchase its homes, or enact reasonable public health measures without bankrupting people.
I think it's pretty obvious to anyone whose knowledge of US history extends earlier than the 1960s that it did not start there (and, also, that the existence of the culture war and partisan polarization, while not unrelated, are quite distinct).
> The problem is, the internet, the web, and now social media has accelerated this divisiveness to a whole new point
No, it's not, as looking at US history anytime outside of a partisan realignment will show. Partisan polarization is a feature of the divide between the major parties in the two-party system aligning with the major ideological divides in the country, which they tend to do in a normal stable political alignment, but not during a period of partisan realignment.
The relatively low level of partisan divisiveness (despite the often quite high level of ideological divisiveness, seen in overlapping things like the Civil Rights Movement, the Anti-War Movement, the youth counterculture/“hippie” movement, etc.) in the (roughly) WWII to 1990s period isn't due to some kind of weird temporary absence of technology needed for partisan division, it is due to the long and overlapping partisan realignments spurred by (mainly) the New Deal Coalition and parties settling out their positions on civil rights. These things meant that on the politically salient ideological issues, the divide between the parties was not aligned with the major cleavages; “liberal Republicans” were farther left on the key ideological issues of the day than “conservative Democrats”, and the opposing ideological factions were present in both parties.
The political realignments had largely settled out by the mid-1990s, with Clinton and the remaining “conservative” Democrats who hadn't defected to the Republicans solidly to the left of what was acceptable for Republicans, and vice versa. This enabled ideological division to translate more directly into partisan division.
> Partisan polarization is a feature of the divide between the major parties in the two-party system aligning with the major ideological divides in the country, which they tend to do in a normal stable political alignment, but not during a period of partisan realignment.
A feature? I'm not aware of any document written by the American founders that calls this a "feature" of the system. An emergent property of FPTP perhaps but not a feature. I'd call it a mistake to attribute more to FPTP than its intuitive appeal; the American founders invented FPTP off-the-cuff.
> The relatively low level of partisan divisiveness (despite the often quite high level of ideological divisiveness, seen in overlapping things like the Civil Rights Movement, the Anti-War Movement, the youth counterculture/“hippie” movement, etc.) in the (roughly) WWII to 1990s period isn't due to some kind of weird temporary absence of technology needed for partisan division, it is due to the long and overlapping partisan realignments spurred by (mainly) the New Deal Coalition and parties settling out their positions on civil rights. These things meant that on the politically salient ideological issues, the divide between the parties was not aligned with the major cleavages; “liberal Republicans” were farther left on the key ideological issues of the day than “conservative Democrats”, and the opposing ideological factions were present in both parties.
Right but what _caused_ this political realignment? I mean sure, you can say that the ideologies were extant and during the Civil Rights Movement (and that this divide itself traces itself back to the Reconstruction period and before that into the founding of the US itself) the New Deal Coalition began unraveling and formed the seeds of today's divide, but I submit a large part of what exacerbated that divide was technology. FDR was a popular president and helped cement the New Deal coalition largely because of his use of radio. Likewise Goldwater and early televangelists received a lot of their support from the rise of TV. Social media is just the continuation of this ongoing technological trend.
Facebook is more than a factor. As the largest social media corporation, it is central to the problem. Its amplification of psychopathic content to maximize the sickest forms of engagement for profit has people effectively living in a destructive cult. Remember how it went: first the Tea Party with its "death panels" talk, then birtherism, then Trump, QAnon, and coup attempts?
They do not, however, predate opportunistic actors begging for shares of their content on the platform, or the platform's users happily obliging. I know; my dad was one of them.
There's a difference in kind between mass broadcast on a platform one idly scrolls (and thus receives incidental exposure from) and, say, chain emails. The latter have far lower impact and require far more activation energy to pass along than a one-click Share.
All that garbage was circulating and being amplified on early Facebook and for many years afterward, even today, generating engagement and profit regardless of algorithms. The algorithms just made it worse over time. Why do you also dismiss the rest of the list?
Please stop assuming everyone shares your values and beliefs. Personally, I think that while there is some mismanagement at Facebook (which can be fixed), most of the issues brought up can probably be expected from anyone in this problem space at this scale. Facebook has allowed me to discover a lot of opportunities and connect with some long-lost friends, so I would say the cost is worth it.
I do agree with this part of the statement from the company:
"If any research had identified an exact solution to these complex challenges, the tech industry, governments, and society would have solved them a long time ago." The social costs of social media are complex, and solutions are not obvious. Unfortunately, if the companies can't meaningfully address these problems on their own, Congress will likely attempt to, and the companies may not find the results to their liking.
If you're trying to optimize for engagement AND avoid rewarding divisive content or misinformation, then yeah, that's complex and maybe impossible.
But as Haugen points out in her 60 Minutes interview, they don't _have_ to optimize for engagement above all else. They have tools to drive down divisiveness and disinformation. They just choose to leave them turned off most of the time.
There is no need for any unified point of view. Starting with UX changes (stop showing like/view counts), there are a ton of things a platform can do if it doesn't want to optimize for engagement. Doing so would be financially damaging; revenue and stock price would drop, so no company will do it, but it's not a hard problem technically.
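The "not hard technically" point can be sketched with a toy feed ranker (all names here are hypothetical illustrations, not any platform's real code): given the same candidate posts, ordering by a predicted-engagement score versus plain recency is a one-line difference. The engineering is trivial; the business incentive is the obstacle.

```python
# Toy sketch: the same candidates, ranked two ways.
# "predicted_engagement" stands in for whatever model score
# a real platform might use; this is purely illustrative.
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    timestamp: int               # seconds since epoch
    predicted_engagement: float  # hypothetical model output in [0, 1]


def rank_feed(posts, optimize_for_engagement=True):
    """Return posts in display order."""
    if optimize_for_engagement:
        key = lambda p: p.predicted_engagement  # engagement-first feed
    else:
        key = lambda p: p.timestamp             # plain chronological feed
    return sorted(posts, key=key, reverse=True)


posts = [
    Post("friend", timestamp=100, predicted_engagement=0.1),
    Post("outrage_page", timestamp=50, predicted_engagement=0.9),
]
print([p.author for p in rank_feed(posts, True)])   # engagement wins
print([p.author for p in rank_feed(posts, False)])  # recency wins
```

Flipping one flag turns an engagement-maximizing feed into a chronological one; everything contentious lives in which key the company chooses.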
Optimizing for engagement inevitably leads to a toxic and unhealthy ecosystem sooner or later.
Issues with Facebook and the right wing becoming more extreme predate Trump going mainstream (perhaps they were always there). There was a lot of concerned noise during the early Tea Party movement in the 2010s, and a lot of toxic content when Obama became president.
Facebook started as a tool to rate the looks of female students at Harvard; I don't think there was ever a time when Facebook wasn't toxic.
The ecosystem seems pretty healthy... are users leaving?
It seems that there is a subset of the populace whose opinions you disagree with and want FB to stop serving those users. If female users didn't want to be rated, why would they post photos and encourage people to 'like' them? It seems FB/IG has found a set of users who want to rate photos and a set of users who want to upload photos to be rated. How is this 'toxic'?
Healthy in the sense of emotionally/mentally healthy for the user.
It is toxic when said users are teenagers and pre-teens; that is the whole point of the recent public discourse around Instagram and Facebook.
At that age, we as a society have decided they cannot make certain decisions on their own, which is why sex, alcohol, marriage, PG/R-rated content, admission to certain venues, etc. are all regulated.
idk. If you found out that your product is harmful, I would expect you to stop producing it. This is the case for every other consumer product except maybe cigarettes and alcohol. For example, if a manufacturer finds out a car is unsafe, they pull it off the market at the very least; sometimes they even issue a recall. Why do we not hold social media products to the same standard?
Cars and social media are pretty hard to compare, but I'll give it a shot:
The equivalent here is not that a company produces cars and that cars are dangerous. The equivalence is more along the lines of a specific car manufacturer making a car it knows is more dangerous to a subset of its customers, yet continuing to push it to them.
Nissan knows that the Versa is more dangerous to their customers than, let's say, a Mercedes S500. Should Nissan then stop pushing the Versa in order to protect customers?
I'm sure this argument is along the lines of what Facebook employees must tell each other so they can sleep at night.
But let's be clear, Nissan is making a car that is as safe as they can make it for consumers at a certain price point. Safer than riding a small motorbike. Safer than crossing a road. Or many other alternatives. They're enabling and empowering consumers that can't afford better cars to have some measure of mobility and safety.
Facebook, on the other hand, designs features to manipulate emotions and harm consumers. Make them feel bad, stay engaged, worry more, buy more. They ran the numbers. As long as the increase in suicide count is under a certain threshold, it's not a big deal for them.
>allowed me to discover a lot of opportunities and connect with some long lost friends, so I would say the cost is worth it.
This value that they are providing should not necessitate the wholesale pillaging of user data. My data is worth way more to me than a simple weblog service. These thieves are trading trinkets for land and the natives just don't realize how valuable their land actually is. (hint - it is worth way more than FB charges their advertisers)
Facebook was awesome in the first few years, when the feed was merely a chronological timeline of your friends’ status updates and photos. Then, they started putting crap in the News Feed that users didn’t explicitly subscribe to, down-ranked updates by actual friends, and made it far too easy for people to amplify garbage via the Share button.
The problem is not that Facebook exists; it’s that they wrested control from their users and assigned it to themselves and their advertisers.
> The next best step is to pledge to not hire ex-Facebookers.
I actually agree with that, and not because of any personal agenda against FB, but because I don’t want the corrupted, unethical mindset of FB to permeate my organization. I just cannot trust ex-FB employees to make ethical decisions.
Don't advertisers assess their return on investment? If there are more clicks coming from Facebook but fewer purchases, surely that should set off some alarm bells and lose Facebook customers?
It's telling about HN that this isn't the top comment, and that I've seen this post a million times on here.
I quit Facebook 6 years ago, and Twitter a few months ago, didn't have any other "social media" websites. Not missing a thing. Nobody cared. Still talk to all the same people. So there's that little nugget of knowledge if you're reading this and wondering if it was hard to quit.
"Social media" never:
Got me a job
Got me a raise
Made me a friend
Helped me find X
Helped me
Changed my mind
Made me happy
Nope. It's corporate-government merger, such as the now well-known fact that the White House and the California state government are directly involved in Facebook's editorial decisions. That's not capitalism. That's just a corporate state.
Facebook needs to be shut down. Dissolve the corporation, confiscate its assets under civil forfeiture laws, and put Zuckerberg and his enablers in jail.
> For better or worse, in this country you don't get to confiscate assets, dissolve corporations or put people in jail just because you don't like them.
This is the country where people have been summarily executed for driving while black. I'm sure the DOJ can build a criminal case against Facebook under the RICO statutes or some anti-terrorism statute that will let them put criminal forfeiture on the table.
Besides, Facebook is incorporated in Delaware. They didn't get their corporate charter out of a Cracker Jack box.
That state's attorney general has the authority to petition the state's Chancery Court to have Facebook's charter revoked. The General Assembly has authority under the Delaware state constitution to ensure that corporations that engage in malfeasance can be forcibly shut down.
This is why it needs to be evaluated on its own merits. Who knows, maybe you're an FB astroturfer looking to discredit a whistleblower (also in sci-fi novels).
Meanwhile on Twitter, the "fact-checkers" are now taking it upon themselves to try to discredit tweets that make the leading personalities of the political left look bad:
>>The entire "anti-disinformation"/"anti-extremism" industry is a gigantic fraud, funded by the same tiny handful of billionaires, governments and NGOs: a scam to control discourse while feigning neutral, non-ideological goals to battle "disinformation"
Sorry to say, this whistleblower is either not honest or not capable of seeing it.
People say it's just the news, but no: I remember from a while ago being shown multiple suicide announcements, then a baby that accidentally strangled itself at a zoo, etc., and I don't even have an FB account!
These were shown to me by Facebook users, as in "hey, look at this."
Are you kidding me, Zuckerberg and "whistleblower"? Basically snuff-type videos and suicide threats are being shared, and the posters aren't banned for it, yet you say you're trying to get to the root of the problem. Meanwhile, let's start the metaverse project and the Ray-Ban glasses so everyone can do a little espionage.
I thought that was Gab’s job? Looks like Facebook is in its own league of ‘enabling hate’ and is profiting off of it.
On top of knowing that Instagram is bad for teens, there is no limit to the scandals, leaks, whistleblowers, lawsuits or penalties that will bring the Facebook mafia down.
Remember when Facebook used to be pretty cool? I mean sure, it fell out of favor with younger people including myself, but overall it had generally favorable and positive views in the press and media - especially compared to today. Yeah, people knew it wasn't the greatest, but it wasn't the devil, either.
But then, I guess it was about 5 years ago now, 2016-ish or thereabouts, it all seemed to change overnight, and suddenly it became demonized by the media, to such an incredible degree.
I think it coincided with the exodus of a lot of people from Facebook: people leaving because their parents were on there, because better options existed, and maybe because of reporting on its advertising practices.
Maybe that drove their engagement machine to start. Or maybe this was the long term effect of their engagement machine.