I had a somewhat similar experience at a much smaller scale a long time ago. I used to buy and sell domains. Bought one that was very close to being on the first page of Google results for a keyword related to personal finance.
Did a bit of sprucing up the page, dropped a few links, and it did get on the front page for a few months, with AdSense paying me several thousand dollars a month while it lasted. The page itself wasn't a scam per se, but it was roughly 1000 medium-effort words on the topic, written in a day, along with a few original photos that took another day, and a couple of inline AdSense ads.
It didn't deserve to be on the first page of results. I guess the "kingmaker" thing still exists where big tech decides who today's winner is, and doesn't always get it right.
On the bright side, the algorithms are way more fair and even-handed than the human editors of old. I don't think the majority of YouTubers, Twitch streamers and bloggers would ever have made it past the editorial desk at traditional media companies.
Saying this in jest, but there was a time when this was highly respected, and I feel that the number of places that still have them must be diminishing quickly - or be a shadow of their former selves.
Because traditional media companies (I assume TV, radio stations) have a limited outgoing bandwidth in terms of outputting content, so they have to prune a lot more? The argument you are making is not in favour of content-ranking algorithms vs humans, but in favour of Internet-based media platforms (YouTube, Twitch, blogs).
Even if you put human editors in charge of a video website, there's very little chance they would approve someone like Mr Beast, for example (before he became famous). They would likely judge his looks (innate human bias), plus his high-budget format (self-preservation with the bosses), and say his show is unfeasible. Only an algorithm could have green-lit Mr Beast's show.
"Facebook flipped a switch, favoring comments and reactions over shares, and suddenly a food blogger from Utah became the largest publisher in the country, if not the world."
While the firehose volume is disproportionate, the direction wasn't random. Whoever is copywriting these messages is an artist, drawing an outstanding number of responses. I felt the urge to react when I read them and bravely restrained myself. There's a lot of power in asking someone the right personal question.
I agree - the author hits at topics that are somewhat controversial but that everyone has a story about. I suspect, and I could be wrong, that the majority of the comments on that DUI post are from people over the age of 40 or 50. They all want to share their story of how drinking and driving wasn't a big issue when they grew up, that their dad always had a drink in hand. People love to talk about themselves, and this topic, combined with that audience, creates a lot of "nostalgia" for lack of a better term.
I mean, you must see the primal appeal - essentially the posts are "haha, who else does {{COMMON_THING}}?!". Millions of people want other people to know they too are just like them.
Facebook's algorithm is optimized for engagement - but people aren't desperate to engage - they are desperate to be engaged with.
> I mean, you must see the primal appeal - essentially the posts are "haha, who else does {{COMMON_THING}}?!"
No, and I feel like I'm explaining to the AI what human behavior looks like here. Yes, the sort of meme discussion you quote is fairly common, but it's not just "common thing", it's some sort of shared experience that's probably private; you're not sure if everyone else experiences it, but saying you experience it might sound weird to others, and the (usually unneeded) thoughts of potential embarrassment maybe prevents you from doing so. Until someone does, in such a post, and yeah, those take off as the day's lucky 10000 are all sighing a collective "I'm not weird!" sigh of relief. Eye floaters come to mind as an example.
But … that's not the case here. "Who the heck sleeps with a Fan (sic) and AC on?". Heck, that odd capitalization just makes it even more uncanny valley. Like Zuckerberg really is a robot, and he's just tried a new social subroutine out on me.
Have you ever considered that because clearly much of the world's population _does_ in fact comment on these posts, it's _they_ who exhibit normal "human behavior", and not you or I? We're the outliers on Facebook.
While several million is a lot, yes, it is a far cry from the 7B people in the world, or the just shy of 3B users FB claims to have. The question is how many ignored that post; it could very well be that they outnumber the ones who commented, and those who comment are the outliers.
It could also be sockpuppets are commenting to drum up the numbers to make the post look hotter than it actually is.
I don't get why you're being downvoted. It's a fair argument. We really don't know how many ignored the post.
Yes, since Facebook chose to promote it, probably a low percentage ignored it. But what is a "low percentage" to Facebook's algo?
Maybe usual average is 90% ignores without "engaging". With this one, "only" 60% ignored.
Which means the majority are not triggered by this sort of thing.
This is only speculation, but it illustrates that the argument may indeed be valid: networks like Facebook end up giving the impression that something is "normal" across humans when it actually isn't.
You're on the right path, but you're missing the milquetoast sting at the end; something that's controversial, or slightly embarrassing that makes the activity seem niche, but really isn't. "Who the heck eats Food with the Fridge door open?" - or "Who the heck eats Food slowly, so the crinkly wrapper doesn't wake the kids?".
You can find the same sort of viral drivel under "relatable tweets" on Twitter.
you have what I call a "Kardashian problem" - I don't understand why the Kardashians are popular. Who the hell would be interested in these people? Why would anyone waste any time watching them or paying them any attention? And yet people clearly do, as the Kardashians are (or were) making millions from other people's interest in them. Clearly it's not their problem, it's that I don't understand their market.
If you're struggling to understand why this FB page has such an enormous following, the answer isn't that "they're doing it wrong, people aren't like that" - because they're clearly not wrong, and enough people are like that to give them that huge following. The answer is that you don't understand that market.
I know the name "Kim", and I know they are in plural, but wouldn't be able to tell them apart - or give any reason why they should be interesting either. The word "implants" springs to mind, before my brain shuts down to what I perceive as infinite stupidity overload.
I understand there's a significant number of people who quite clearly excel at getting through life worrying about rather different things than myself.
Now you're getting it, although you need to think of something that most people do, and then mock them. The key is to trigger confused/shock engagement. Plenty of people are obviously falling for this.
I mean, the "who here has never had a DUI?" question is tailored for engagement. The actual % of people who have had a DUI is really low, and so the number of people who can engage with that honestly is really high. Then there's the fact that DUIs tend to evoke a strong moral response, so people will dive into the comments to wag the finger at all the DUI havers.
We're all here commenting on posts about tech and tech-adjacent concerns, so the community is pretty niche and interesting (to us). This is what you get when someone crafts content for literally everyone, and you'd be right to find it simultaneously bland and bizarre. But people, it turns out, do engage with it.
Reminds me of those posts from a few years ago that were like:
"[Simple math problem, like what's 5 x 4?]: 95% of people can't solve this! I bet you can't either!"
With hundreds of thousands of commenters "engaging" with it. I don't understand anything about this behavior. 1. What does the poster have to gain from a post like that, 2. What do all the commenters have to gain, and 3. What does SocialMediaCompany gain? The promise of social media is long dead. It seems to now just be bots (or people who behave as bots) responding to bots (or people who behave as bots)... everywhere.
Well, for (1) the poster gets engagement from low-effort posts. The end game is usually monetizing their social media presence, and though there's multiple ways to do that, they all involve getting eyes on your page.
Your (2) should be evident from the phrasing of your example: if 95% of people can't do it, people want to prove they're in the 5%. They get a few minutes' distraction, some self-satisfaction, and maybe some conversation. I'm not going to play psychologist, but that's enough for some folks. And it really only has to be some: if your post is seen by 32 million people and 30 million dismiss it, you still got 2 million people to engage.
(3) is, I guess, money again? Active users means SocialMediaCompany can tout the value of their platform to advertising partners. The bot problem is a good observation, though, because platforms with a reputation for bots are less valuable to advertisers who want human attention. People who act like bots are probably great for advertisers, though, so idk; maybe that's a wash.
Those posts really are a curse. And they are usually not quite that simple; there is always some PEMDAS quirk to it, so ~70% of the people get it wrong, causing yet more comments pointing out how they are wrong and further inflating engagement. Those are worse than the stupid Farmville/Mafia Wars posts that used to horrendously clutter up the platform; at least those were easy to block. These math things come from all different sources, so blocking them isn't effective.
For folks that are out of the loop, the post looks something like “24/4*3: 80% of people get this wrong!” and the arguments in comments are between people who do this the way most computers do (multiplication and division have the same priority and the leftmost one goes first, so you first calculate 24/4 to get 6 then multiply by 3 to get 18) versus the way that the acronym literally says to do it, (multiplication comes before division in the acronym so you first multiply 3x4 to get 12, then divide 24/12=2).
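The two readings described above can be checked directly. This is just an illustration of the dispute - Python (like most languages) implements the first convention:

```python
# How most computers evaluate it: multiplication and division share
# the same precedence, and the leftmost operation goes first.
standard = 24 / 4 * 3      # evaluated as (24 / 4) * 3

# Reading the acronym literally: multiplication strictly before
# division, so the 4 * 3 happens first.
literal = 24 / (4 * 3)     # evaluated as 24 / 12

print(standard)  # 18.0
print(literal)   # 2.0
```

Both camps are applying consistent rules; they just disagree on which rules the notation implies, which is exactly why the comment sections never converge.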
Part of the reason that it generates so much animosity is that we condition people to think of math as always having a right answer, it is the Fount of Objective Truth. The idea that math is sublimely subjective, is indeed an artistic medium, does not seem to be well-appreciated and is often not even acknowledged. This is usually chalked up to the fact that math has “rules,” which is a very strange sentiment because the points in a sports game also follow “rules” of a similar sort but we usually don't regard those as either objective or subjective, they exist in a murky third world where we do not ask those questions...
The poster gains by getting a large amount of interaction with their page, which is certainly a ranking factor for future posts. e.g. SocialMediaCompany will say "This new post by Poster will get higher placement because their previous posts were enjoyed (interacted with) by many people, and therefore Poster must be producing good content."
I totally agree with you. These posts all seem like a way to scrape data about me. Why would anyone want to know if someone else had a DUI? If I sold insurance, I would like to know that about you so I can increase your cost and lower my risk. All these posts seem to ask questions you are better off not answering, because they build a profile about you. I just don't get it, but I definitely see aunts and coworkers who are totally into posts like these. Bizarre.
Have you ever seen the posts? Some relatives comment on them so they show up on my feed, but the post will be something like "do you like dogs or cats better?" And my aunt and 10 million other people will make a post just saying "cat" or "dog"
It's sort of weird to have watched the target demographic switch from college kids to everyone's older aunts and uncles. We went from "you should know better than to post that picture" to "you should know better than to engage with that order of operations question".
To propagate (show up on more feeds), they need people to interact with it. So successful content will be whatever best drives interaction, and I guess comments still weigh more heavily than likes?
Consequently, Facebook itself is a brute force algorithm for discovering the more virulently memetic content and phrasing.
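As a purely hypothetical sketch of what "comments weigh more than likes" could look like - the weights and function below are invented for illustration, not Facebook's actual formula:

```python
# Invented weights, for illustration only - not Facebook's real numbers.
WEIGHTS = {"comment": 5.0, "share": 3.0, "reaction": 1.0}

def engagement_score(counts):
    """Weighted sum of a post's interactions."""
    return sum(WEIGHTS.get(kind, 0.0) * n for kind, n in counts.items())

# Under weights like these, a post that provokes comments outranks one
# that merely collects reactions, despite far fewer total interactions.
chatty = engagement_score({"comment": 1000, "reaction": 2000})  # 7000.0
liked = engagement_score({"comment": 50, "reaction": 5000})     # 5250.0
print(chatty > liked)  # True
```

Content that baits low-effort replies ("cat" or "dog", "Who the heck sleeps with the fan on?") is exactly what any scorer shaped like this would select for.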
Facebook really ruined the opinion I had of my relatives because of that kind of stuff.
Man, you taught me how to fish but look at you now, jumping on every bait thrown at you...
A while back I was trying to get twitter followers. I experimented with a few different strategies - quick reactions to big accounts I agreed with or disagreed with, saying what I thought people wanted to hear, saying what I thought would be controversial hot takes, tweeting about everything trending, etc. I basically got no traction. I was getting low tens of thousands of impressions a month according to the twitter analytics tool and maybe 1 or 2 new followers a month - and, to be honest, those new followers were probably bots.
Eventually, I hit on a new strategy. Instead of trying to generate good tweets I just followed people at random. I forget the exact number, but somewhere between 15-30% of the people I followed at random would follow me back. I realized I could probably get an arbitrary number of followers this way but it would be a pretty stupid and meaningless achievement so I quit doing it.
The "viral post" strategy of the article, asking things like "Who the heck sleeps with the fan on?", may be similar - a kind of brainless strategy to get followers. I guess some people like responding to these prompts. I suspect it has the same weakness as the "follow back" strategy, though. Nobody really cares about the account or connects with it or even understands what you're trying to do. These people aren't your followers so much as they are people who like small talk and coalesced around you for a time.
>Eventually, I hit on a new strategy. Instead of trying to generate good tweets I just followed people at random. I forget the exact number, but somewhere between 15-30% of the people I followed at random would follow me back.
This is exactly what my teenage cousins used to do (maybe not Twitter, maybe Snapchat?) but with one tweak: they'd go through following people, and then once someone followed back, they'd unfollow them again - so now that person was following them, but they weren't following back.
> I suspect it has the same weakness as the "follow back" strategy though. Nobody really cares about the account or connects with it or even understands what you're trying to do. These people aren't your followers so much as they are people who like small talk and coalesced around you for a time.
But if the entire point is to get impressions so you can sell a product, or whatever, do you care if people don't particularly like you?
Twitter users with a 1:1 follower-to-following ratio are known to be trying to get followers and aren't actually reading posts from those users. It's all circular, a bunch of people juicing each other's follower stats.
They use lists or separate accounts to read the accounts they actually care about, and they have bad engagement on their own posts.
OK, I can give you a real world example of why parent commenter's strategy is useless.
I had two pages on Facebook based on a very popular movie from the 1980s.
One was the actual studio's page for the movie, which I was given access to (as the studio admitted they don't give a fuck about any IP that doesn't have an immediate release - e.g. Blu-ray etc - planned for it), and which had over 2 million likes. When you opened a Facebook account, this page was on a list often presented to new users as something they should "like".
The other was a fan page I'd made which was niche, had to be actively sought out, and had 30,000 likes.
I experimented with selling branded products through the accounts. The small fan page sold 10X as many products as the official page that had almost 100X more followers.
tl;dr: if you want to sell shit to your followers they need to be engaged with you - just having people follow you on autopilot is going to create nothing of value.
And they can tell? I'm not on Twitter, but I can see myself just following everyone that passes in front of my mouse, and so be it. If that's banned, I'd egg people on to do it more and more and help tear down the idea that follower counts mean f*** all.
There used to be services that let you sign up to follow and be followed back. I don't think anyone serious about quickly gaining followers will sit and click manually. They can definitely tell when people are doing this automatically (at least the naive ones).
"The Algorithm" or "It's Zuckerberg's fault" etc. are all simplistic, comforting narratives to distract from the horrifying truth. The clickfarm bots and fake-account NPCs have collectively achieved an emergent intelligence. We humans aren't the majority of our social networks anymore, and we don't have any idea what it is the bots are pushing for.
The people we want to hold accountable for this mess are being run over by this steamroller just as we are.
> The one mystery we weren’t able to figure out is why any sane person working at Facebook would feel comfortable publishing a content report that admitted that the most viral publisher on its platform this year was a barely active drop-shipping scam page full of stolen video content run by an LLC that doesn’t even exist anymore.
Every reply so far has missed the point of this sentence, so here it is:
Facebook reporting accurately is sane, but Facebook is in charge of what is being reported, which is not sane.
Like, imagine if your friend was slowly poisoning himself by flavoring his food with lead for a month. Even if he is honest with you about what he’s been doing, it’s still not a sane situation. If anything it’s less sane to be seemingly lucid and forthright about a month of self-destructive behavior, but not actually do anything to change it.
The question here is: why did Facebook allow the situation to get so bad that this page was the top for a whole month? Why didn't someone at Facebook notice this stupid pointless scam page was surging and do even a little research and tweaking to change that?
They have seemingly abandoned any responsibility for the behavior of the machine they built and operate. Personally, I don't give them many points for just reporting it.
Ex-FB here. My guess is that it's primarily a combination of two factors:
1) Someone / some org was slow to notice that this was happening.
2) Bureaucratic muck slowed resolution down. Even once escalated, there would be a lot of red tape to go through to make the necessary changes to how things are ranked. The people noticing the problem most likely aren't the people able to make changes, the changes would need to be tested at scale and their effects evaluated, and the people making the changes would have a lot of deliberating, politicking, explaining, and appeasing to do. Add onto that that everyone is stressed out of their minds, worried about performance reviews, and having to balance changes they want to make against how they and everyone else are evaluated at perf time...
Ok, but all that must also be true for the change that originally created this situation. There's all this red tape and large-scale testing and then... nobody actually looks at what hits the #1 spot?
Presumably there are people who are in charge of designing the ranking algorithm, so they would look at the #1 spot, but they would also have to look at numbers 2 through 1 billion, evaluate over long timescales, go through a lot of reports, and compare how one algorithm performs vs. another.
Maybe they've already evaluated and decided that they're OK with #1 being off, if the majority of everything else performs better. It doesn't really make any difference per se what is #1. And even if you want to stop X becoming #1, you'll never truly understand the myriad of butterfly chain events that got us here - how do you know that, even if you've solved this one issue, you haven't just ruined everything else?
Remember, spam is an adversarial relationship. Almost certainly, when whatever ranking change was exploited here was deployed, the top-ranked items were great; then some actor realized what was being selected for and exploited it.
> Even once escalated, there would be a lot of red tape to go through to make the necessary changes to how things are ranked.
When I write a program that accidentally gradually fills a disk with garbage, I don't wait until I can redesign the program before I delete the garbage.
It's more complicated than that. Facebook is a company. People in different parts of the company do different things, have different motivations, etc. One cannot look at a large organization as an individual. I tried to come up with a more apt analogy... but analogies are awful here.
So, probably lots of people actually did notice it, and either (a) shrugged and thought "not my job", or (b) notified someone else about it, who then did (a)?
There's also likely a (c), which is people who don't think that Facebook should do anything about it. I guess that could be considered part of a set with (a), but I thought it should be mentioned separately as it's a different motivation.
Might I suggest (d): analysts, disgusted by the fact this crap is on top, tweaked The Feed in an attempt to disfavor that type of content, then ran a simulation, saw their simulated ad revenue drop below the allowed minimum, did (a), and went out to lunch.
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
- Upton Sinclair
In particular: staring too hard at that viral page, realizing how scummy it is, and then spending the rest of your career at Facebook trying to tear down the systems that made it the most viral page is likely to earn you enemies and cause the company to lose money (in the short term, at least).
Which is totally fair in my opinion. It is no single individual's job to fix an issue this big and I don't think it's even feasible for a single person to do at a company like this. And if management hasn't set up incentives for people to proactively fix issues like this then they probably won't get fixed. Upper management can and should fix these issues but they probably don't care because they don't think it's an issue.
If it were a tiny thing that slipped under the radar, I'd buy it, but if my company were being dinged in the media every day for spreading misinformation, I would make sure the whole company is hyperaware of how misinformation spreads... which is often through viral posts and pages. I don't think this sort of oversight is something to blame on corporate bureaucracy.
It doesn't seem like Facebook pumped the whole page; all the other posts have extremely low engagement, literally < 10 likes. What's wrong with pumping individual "good" posts? If anything, it can be seen as honorable/egalitarian, giving folks a chance to make it even if they aren't some media behemoth.
They set out to build a platform where anyone, no matter how small, could compete with big media on a level playing field based on a straightforward metric: whether a user cares enough about a post that they're willing to engage with it. Publishers get rewarded for producing compelling content, users get to see more content they care about, and Facebook makes money since people spend more time on the site. Everybody wins.
And they succeeded wildly at this. But as it turns out, the things people respond to most are spam and hate and conspiracy theories.
The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips. If such a machine were not programmed to value human life, then given enough power over its environment, it would try to turn all matter in the universe, including human beings, into either paperclips or machines which manufacture paperclips.[0]
Replace "paperclips" with "clickbait" and "human life" with "the right to privacy". "Artificial intelligence" then becomes Mark Zuckerborg.
I enjoy this author's podcasts and newsletters, and an interesting angle they've covered recently is that Facebook, ostensibly trying to "clean up their act" with regard to being seen as a platform for disinformation and radicalization, has been tweaking their algorithm to not promote so much alt-right and similar content.
But the problem is, once they exclude all that stuff, there's just not a lot left. So you end up with inane garbage like this account being the most popular content on the platform. Like the whole platform is just Ben Shapiro and drop-shipping scammers. Really paints an "emperor has no clothes" picture of what's going on on Facebook these days.
When corporations hide facts we call them immoral, and when they publish the facts we are scratching our heads?
Facebook does a lot of internal research which to me seems to be coming from a good place. Doing that research carries a lot of risk for them; e.g. the recent 'whistleblowers' mainly leaked that exact research reports. But instead of giving Facebook credit for even doing that, we always take a negative approach.
Note: I don't like Facebook, I don't even have a Facebook account, but unless we encourage good behavior (even at places we don't like), how do we expect things to get better?
Yeah I also find it surprising that people are shocked that Facebook have internal studies into e.g. whether instagram causes depression in teenage girls (answer: inconclusive), like they’re a bad company for investigating whether their products have negative effects on people…
Thanks for the article, it was an interesting read, since I was mostly going off the WSJ stuff before.
I nonetheless feel that for social research in the private sector, absolute certainty - or even studies that would pass peer review - is a bit too high of a bar to meet before we can say the organisation knew something. But I guess that's more of a semantic argument.
After reading the NYT article it still seems to me that a more ethical company would have done more to address the probable issues. Meta did very little and seemed to cherrypick the most favourable public studies.
Anyway, all this doesn't seem surprising or even unusual to me. I generally don't think we should expect companies to act very ethically. Rather, incentive structures should be created/amended so that purely self-interested companies act as if they were ethical.
We are told again and again that correlation is not causation, but we readily ignore this maxim when we are looking for an account that we hope is true. At a time when Facebook is regularly vilified (sometimes deservedly), wanting to believe that its practices have caused teenagers’ mental health to suffer is understandable. But wanting doesn’t make it so.
> We are told again and again that correlation is not causation
I'm beginning to suspect that maybe we're told this too often, so we're starting to take it for granted that correlation can't be causation. But the saying actually only means that correlation is not necessarily causation. Quite often, correlation is actually there because of causation.
The shock comes because it doesn’t fit the narrative that Facebook is all evil, all the time. Nuance, complexity, and evenness are lost on some. No one and nothing is all bad all the time.
The reporting on this study has been terrible. It was a really small study of female teenage Instagram users who reported having various problems. Of them, 13% blamed Instagram for making things worse, which was actually less than the number that said Instagram made things better. And this is all based on self-reported data anyway, so none of us should be taking it very seriously.
Just because you were honest with your partner about cheating on them doesn't mean they have to stay with you. That isn't a mixed message about whether your partner wants to hear the truth or not. You're not being punished for telling the truth about what you did, you're being punished for what you did.
I think you are just missing the backstory. Of course, if you approach what Facebook does with naivety, you end up sounding... naive.
The whole reason this report was put out by Facebook in the first place is because people used their own tool Crowdtangle to point out how the most popular pages of all were pushing political and medical misinformation to billions and were run by foreign spam mills. So they gutted Crowdtangle and published this report cleansed of any reference to "questionable" pages. But as a previous submission [1] showed, the numbers they state don't add up. And as this submission shows, if you dig into what Facebook wants to tell you is popular, it's all still the most garbage, devoid of value scam content - and it is their algorithm that is promoting it above all.
In hindsight, I can see this quote is generating a lot of discussion about the content reporting, but I intended to focus on the latter part of the sentence, which I thought really highlights the ridiculousness of social media.
Well, perhaps whoever generated this report was simply reporting the truth and not trying to fabricate or distort the truth in order to make Facebook look better?
The fact that the author of the linked article apparently just presumes that they would do so and "[can't] figure out" why they wouldn't says a lot about their own integrity.
I don't think it says anything about their own integrity. I think it's just a tongue-in-cheek remark about how this information makes Facebook look ridiculous.
When conducting scientific research, I'd think "not caring" is a desirable trait. You don't want anyone's personal feelings to get in the way of reporting accurate information.
I think this is just being pedantic. You have to care about something to formulate and test a hypothesis. You have to care about what the data say to bother gathering them and collating them into some report in the first place. "To care" can be "to be curious about something" or "to have an interest in something", not just "to be concerned about something."
No, you have to be interested in something. Caring implies that you have an emotional desire for the hypothesis to be either true or false and that leads to bias. Scientific interest drives motivation.
I can't help but think it's not a coincidence that Facebook decides to produce a content report that lists the top 20 most viewed items and it just so happens that the top 20 items are inexplicably spam. Either Facebook is entirely dominated by spam (possible), or Facebook figured out they could just boost a few items to make sure anything embarrassing doesn't make the short list.
The entire existence of the Facebook Transparency report is as a counter to the Facebook Top 10 Twitter account (https://twitter.com/FacebooksTop10) showing that Facebook tends to surface right-wing content.
Facebook prefers to be known for facilitating spam than right-wing content.
Well, what I'm saying is... if it were, would Facebook tell you? Or would they find a way to produce a report that tells you something else? As another commenter points out, the independent version of this consistently ends up listing right-wing content.
I developed software for lottery and casino services, and when I read stories like this the first thing that comes to my mind is this "golden ticket winner" effect that ranking algorithms are designed to generate.
The chances of winning the lottery are so low, but whoever wins will get so much media coverage that everybody believes they can win.
I don't think it's a coincidence that "poor" content makers are on top of the transparency list, it's just the algorithm exploiting this "you can be a winner too" effect.
When I see these posts I always figured it was a kind of bot-net roll call. Imagine you pay a contractor for one million sock puppets and you want to check how many of them aren't shadowbanned - the bots already follow each other and are programmed to "engage" with posts their bot-friends engage with, so it becomes a follow-the-leader effect, and allows the purchaser of a botnet to inspect their army, kind of like Obi-Wan getting a tour of the Kaminoans' clone army after finding the official records were deleted from the archive.
no official list of which accounts are bots == check the size of the network myself, and if a few million retirees get caught up in the net, all the better for camouflage
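The "roll call" audit imagined above can be sketched as a simple sampling estimate. This is purely a toy illustration of the commenter's hypothesis - all names and the idea that an operator can observe which accounts' engagements surface publicly are assumptions, not anything Facebook or any real botnet is known to do:

```python
import random

def roll_call(bot_ids, visible_engagers, sample_size=1000):
    """Estimate what fraction of a bot network is still surfacing
    publicly by sampling bots and checking whether their engagement
    appeared on the probe post (hypothetical scenario)."""
    sample = random.sample(bot_ids, min(sample_size, len(bot_ids)))
    surfaced = sum(1 for b in sample if b in visible_engagers)
    return surfaced / len(sample)

# Hypothetical network: 1,000,000 bots, ~70% not shadowbanned.
bots = [f"bot_{i}" for i in range(1_000_000)]
visible = {b for b in bots if random.random() < 0.7}
print(round(roll_call(bots, visible), 2))  # close to 0.70
```

A sample of a thousand accounts already pins the survival rate down to within a couple of percentage points, which is why a viral probe post would be an efficient census.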
You see this on Youtube as well. Comments acknowledge "The Algorithm", mentioning that the algorithm suddenly chose this video from 2013 to be popular, or that people who were picked by the algorithm to like this video must be cool. This has happened with music too, Plastic Love and Ryo Fukui's Scenery being some examples I noticed. Maybe it really is "the algorithm" right now but it seems like this could be easily manipulated.
Plastic Love doesn't quite serve your point; they just released the MV after literally decades, and nostalgia is handy for engagement.
Granted, I do see that YT has the same issues. I don't think it's easy to manipulate, but all recommendation systems are going to have people trying to manipulate them.
Plastic Love is pretty much the face of city pop/future funk and it blew up on youtube a few years ago. That crappy video they just released is a money grab.
Many channels I saw were inspired by SpiffingBrit's claims.
He claimed that creators using the Community tab were disproportionately rewarded by the recommendation algorithm. (Since the Community tab was a new feature, YouTube wanted it to show up in people's feeds; but since few creators were using it, those that did got a big boost.) -- IIRC, this evened out once everyone started using the feature.
SpiffingBrit also suggested that copy-pasting many of the trending keywords in comments would help the recommendation algorithm show the video. -- This turned out to be incorrect; but for about a week, most of the comments I saw in a bunch of videos had the same 300 words copied and pasted.
I clicked through to the Q3 transparency report, and when I got to the list with Thinkarette at the top, I looked at the next handful of slots and audibly exclaimed “THESE GUYS” (substitute an expletive for “guys”).
- 96.9 The Eagle KKGL
- 101.9 KISS FM
- That ain't right
These pages have been all over my timeline the past year. I blocked “That ain’t right” a few months ago because I found them particularly obnoxious, but I had assumed that the radio station ones were just local Baltimore area radio stations that people followed (I don’t listen to the radio). But no, it turns out these are actually EVERYONE’S local radio station! We’re all calling in to tell the same person whether we flip over our pillow to get to the cold side!
Really though, I find this absolutely fascinating. For about a year, big corporations with hundreds of social media marketers who get paid bundles to spend their day figuring out how to win social media were undercut by a couple radio DJs and food bloggers who had the guts to ask the big questions everyone was really thinking about, like “What happens if you spell your name but use predictive text after each letter?”
What I hate is that Facebook's intentionally nerfed reporting mechanisms keep its users from flagging this stuff.
For a period of time I'd get posts 'From %Person%, With some other %entity%', and they're less than interesting... and you CAN'T block them. One of your friends comments on a stupid post and the only signal you can send is 'mute my friend for 30 days'... because the 'with' account always changes.
This actively makes Facebook unfun and had the great side effect of sending me on a two-week hiatus.
If I could block posts 'commented on by a friend, but with engagement above 100,000 people' I'd do it in a heartbeat... because the content has NEVER been entertaining, and has often been half a step away from harvesting people's security questions... at scale.
> If that bothers you, now imagine if, instead of being a random woman, you were a company with employees, and the same thing happened. Imagine if Facebook allowed you to reach inconceivable levels of scale and then, one day, they flipped a switch and made it all disappear. Pretty messed up, right?
This was buried right at the end. I'd say this is the real point of the article, but it went virtually unnoticed, only mentioned in passing.
A pretty wild idea, but what if this is just a hash collision or id encoding bug going unnoticed?
Some recommendation or news feed pipeline accidentally keeps boosting “Thinkarete” posts. Would be funny.
It’s very unlikely that this is the case. Whatever the cause, it’s extremely surprising such a low quality page and low quality content have become so popular.
People here are saying that the questions are broad and well formed enough to encourage “engagement.” The thing is, Facebook has millions of pages trying to do this, entire firms with big money and research on their side. I’d be shocked if an attempt at engagement this poor won out organically, Facebook is after all an advertising platform.
It's been a few years since I've been on Facebook, but doesn't facebook show your comments to your friends by default? It's not like Reddit or Hacker News where everyone sees the same comments, is it?
Yes, you will see comments from friends on posts like these. Even if a post has a million comments, it will come into your feed with a friend's replies visible.
That or the post is some variation of "tag someone that does X", or the one I've recently noticed "tag your favourite celebrity and see if they'll respond". Random pages I've no connection to are now popping into my feed because some page I follow responded to someone that tagged them under one of those. Completely pointless.
There are millions of fake social profiles on Facebook, LinkedIn, Twitter, etc that use fake generated profile photos, friend networks among both themselves and gullible real users, and automated markov chain style comments on popular content. It's a self-perpetuating system and an effective way to see inside the walled gardens.
Facebook's commenting interface, which shows only the few latest comments and discourages threading, seems designed to create the largest possible number of duplicate comments with immediate (and thus outrageous) reactions.
> We had a breakthrough about what exactly is going on here after we put “thinkarete.com” into WhoIs Domain Tools, which listed “thinkarete.com” as “ThinkaRete.com”. It’s a small difference, but after we googled “Thinka Rete,”
Another poor soul misled to believe that DomainFools isn't just totally guessing at word boundaries!
I'd posit that this random blog is harmless. It's better that some random dropship site is connected to this traffic than a more sophisticated organization IMO.
Pessimistic conclusion when this easily fits into a common ethos: success is laying down all the infrastructure to get lucky!
Facebook is selling a dream. A lottery. The idea that your form of engagement will become the favored form of engagement, just like this page creator got into dropshipping after falling for that dream.