Facebook shouldn’t be in charge of how you use Facebook (doctorow.medium.com)
115 points by RageoftheRobots on Oct 9, 2021 | 62 comments



> Regulating tech is a great idea (assuming the regulations are thoughtful and productive, of course),

I’m not nearly as enthusiastic for these calls to bring heavy regulation on to the tech industry.

Articles like this have a hypothetical ideal mental image of regulations that will punish only the bad guys while protecting themselves and the content they like. There’s also an implicit assumption that regulation will only apply to Facebook, without bringing negative effects on to the sites and services the tech community prefers.

This article may be well-meaning in intentions, but how would such regulation work in practice? Writing regulation that outlaws companies from forbidding customer API automation sounds good if you’re a tech user who wants to automate some clicks, but would that also mean that spammers must have free reign on the API as well? How would such a law be structured to force sites to allow consumers to use “good” tools while also allowing them to catch and stop the “bad” tools like auto-friend bots?

Some of these arguments are starting to feel like utopian fallacies where people want impossibly perfect regulations applied with the assumption that only their enemies will be affected. I’m also surprised to see authors like Cory Doctorow advocating for more government control and regulation of popular internet communication sites. Once we open that Pandora’s box of government control, it’s hard to imagine future politicians won’t be reaching for it as a way to push their agenda.


Totally agreed on the utopian fallacy angle, it's really hard to come up with effective regulation ideas that won't create unintended distortions at best, and new innovation-killing moats at worst.

But we have to try.

As imperfect as government is, it is the only power that we have over giant corporations. Existing regulation and agency structure is completely unsuitable for the world we live in today. So far tech has relied on creative destruction as compute power grew and form factors shrank rapidly, leading to ever-increasing waves of adoption and "software eating the world". However, at this point most people have a smartphone in their pocket and the attention economy is maxed out. A lot of future innovation will be capital (and data) intensive, and with adoption already saturated, the entrenched power of Google, Facebook, and Amazon will not be naturally disrupted the way PCs disrupted mainframes or the web disrupted Windows hegemony.

Of course there's nothing worse than bad regulation, and historically tech regulation has been a minefield due to the general cluelessness of Congress. The good news is that this will improve with turnover and younger staffers helping build a broader understanding of the issues. Hopefully we can start to pivot away from distractions like Section 230 and asking corporations to be arbiters of truth, and instead focus on antitrust and data privacy angles, with a specific focus on size and power.


> As imperfect as government is, it is the only power that we have over giant corporations.

I don't think this is exactly true. There have been many words written about the effectiveness of technological countermeasures vs. regulation, but I think the proof is in the pudding: Facebook wouldn't go nuclear against Unfollow Everything if it didn't work (i.e. tilt the balance of power back towards the end-user). Google wouldn't be trying the whole manifest v3 shenanigans if they weren't concerned about content blockers.
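
For concreteness, here's a minimal sketch (not Unfollow Everything's actual code) of the kind of countermeasure in question: a tiny content script that just hides the feed. The FEED_SELECTOR value is a placeholder assumption on my part, not Facebook's real markup, which changes constantly.

    // Illustrative sketch only: hide the news feed client-side.
    // FEED_SELECTOR is a hypothetical placeholder, not Facebook's actual markup.
    const FEED_SELECTOR = '[role="feed"]';

    function hideFeed(): void {
      document.querySelectorAll<HTMLElement>(FEED_SELECTOR).forEach((el) => {
        el.style.display = "none"; // hide rather than delete, so nothing is lost
      });
    }

    // Re-apply whenever the page re-renders, since the feed loads dynamically.
    new MutationObserver(hideFeed).observe(document.body, { childList: true, subtree: true });
    hideFeed();

Something on roughly this scale, running in the user's own browser, is what the "going nuclear" response was aimed at.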

But here, as Doctorow points out, _regulation_ stands in the way of these effective technological countermeasures. I think that the apparent slowdown of technological disruption and creative destruction you reference has at least some grounding in the increasing use and chilling effect of applying IP law against adversarial interop, rather than some inherent limit of attention or hardware.

So, I agree with the parent in being surprised by the combo of Doctorow's insightful analysis around the Unfollow Everything debacle and a call for additional regulation via existing agencies. I'm not as hopeful as you that younger staffers will bring some kind of renaissance of enlightened regulation - consider how popular the "let's make Facebook ban the nazis" take appears to be across generational divides.

To sum up, I'm afraid that anything we pass now is going to be a lot closer to the DMCA, in terms of how it protects corporate interests over individual rights, than the sort of utopian wishlist that is usually quoted as an ideal.

On a positive note: I've recently been interested by some of the electoral successes of the Czech Pirate Party (https://en.m.wikipedia.org/wiki/Czech_Pirate_Party) and wonder if the way forward for internet rights is to tie the idea to a broader campaign for transparent, effective, and highly democratic governance.


Why do we have to regulate? It seems an alternate yet much more difficult path would be to grow a societal backbone and make certain destructive practices illegal, instead of trying to figure out a central list of allowed practices and licenses and policies and permits and bureaucracy etc. For example, it’s pretty clear that internet-scale data markets and user attention platforms harm the individual. Why is step one always to reach for a regulatory solution instead of a slightly more radical “outlaw such machines and then figure things out from there” approach? A naive example might be a law stating: “It is illegal to profit from the sale of advertisements to your own users.” In the same vein, why can’t we outlaw DRM because it’s bad for users and tell movie studios we care more about users’ health and well-being than we do about their “piracy” problem?


I'm not following your argument, what do you see as the difference between regulation and outlawing something?


The argument against regulation that people often reach for is that it actually benefits incumbents and stifles innovation, because it involves permits and checks and licenses, etc.: generally just roadblocks to doing something in a domain. And if you open the door to it in one spot, then you can definitely expect it to crop up everywhere. I generally agree with this take and understand why people are not excited about working in, using products from, or living with heavily regulated industries.

What I’m saying is that that form of regulating FB seems silly to me too, and I think we could avoid the “fear of central government control raining on our disruptive tech utopia” problem and the “any regulation here would be a hand-wavy slippery slope” problem by simply taking a more aggressive, principled stance as a society and declaring bad things illegal, instead of trying to compromise and regulate “tech” into submission. Punishing the bad behavior seems like such a simple solution.


Outlaw what, specifically, though? You're falling into the same trap the OP is calling out: that your extremely vague, hand-wavy proposed laws would only ban "the bad guys".


I list two examples (as does Doctorow) of things that are pretty clearly bad for users: 1. attention marketplaces, 2. DRM. I am falling for no such trap; I really don’t understand your angle there. My suggested solution to 1 is, rather than trying to regulate Facebook, which I agree is hand-wavy, to just ban the business model altogether, because it demonstrably breeds machines and software that harm users. The “good guy” user attention businesses are collateral, yes, but this is a move to protect us, not our ad marketplaces, and the hypothesis is that the incentives just aren't aligned for there to exist a “good guy” ad platform. (If one exists, please show me so I can understand the extent of any collateral harm I might be advocating.) I like this solution because it’s principled and specifically not hand-wavy. The collateral seems worth it, IMO. Same with DRM.


What's an attention marketplace? Facebook, Twitter, Reddit, OK, all gone. But then advertising is attention and a marketplace, so does that mean banning all online or app-based advertising? Amazon has attention and it's a marketplace; so does Etsy.


No you don’t ban by name like that.

A start might be to say: it is illegal to both run a content platform and profit from the sale of targeted ads on that platform.

Or simply but more radically: you cannot use user data collected in the operation of a software platform in order to target advertisements to your users.

The goal would be to ban the bad incentives by eliminating the ability to profit. Yes that kills the profit. That’s the point. I don't think anybody is at war with e.g. a news site that runs an integrated ad on their headline or landing page or even a search engine that inserts its own contextual ads. We’re talking about dismantling the user content advertising platforms where the incentive becomes maximizing the time users spend on the platform.


I'm not suggesting banning by name; I just had no idea which platforms you think your law would target. Banning the use of user data to target ads is a lot more specific and actionable.

That would hurt Facebook's and Twitter's revenue quite badly, but it would do nothing at all to stop the toxic effects of Facebook in particular. My main problem with Facebook isn't targeted ads; yes, I'm against their toxic attitude to privacy, but it's not their most toxic behaviour. The main problem for me is the way they turn people's feeds into firehoses of radicalising, enraging, adrenaline-juicing hostility.


I guess the hypothesis is that they engage in that behavior exactly because their business depends on driving up engagement, so they’ll go as far as to, as you say, show people content that pisses them off so they’ll spend an extra hour as a keyboard warrior and hopefully see an ad in there. I don’t think those incentives exist for, e.g., a paid phone book.


That box is already open. Respectfully your comment kinda reads like you assume tech is some new untouched frontier unexploited and ripe for the picking. We’re way past that and we must decide as a society how we want to navigate a world of global software services and immense data collection.

Doctorow has always been against centralized government control of information. I suspect he’s against any overly large centralized information control regime because it has the same practical impact on people’s everyday lives. It’s not weird to see him (or anyone) call for “regulation” when the regulation being asked for pertains to enforcing that we as a society retain certain rights with respect to how we interact with our world. That’s the whole point… don’t get distracted on some “govt control vs no control” axis. Nobody is asking the govt to tell you how to run your business; they’re just asking that we make it illegal to abuse users in the way Facebook has by permabanning the developer of a genuinely socially good tool that is obviously not spam for the insane “it automates user interactions” blanket reason. My password manager automates user interactions… GTFO FB.

Also laws aren’t regulation. You can make something illegal without imposing back-pressure on every new venture in the space it pertains to.


> Nobody is asking the govt to tell you how to run your business, they’re just asking that we make it illegal to abuse users in the way Facebook has by permabanning the developer of a genuinely socially good tool that is obviously not spam for the insane “it automates user interactions” blanket reason.

I don't know if it's cognitive dissonance, or some other phenomenon at play, but it seems to me like people in tech are unwilling to accept that the problems are systemic. Having a roster of mustache-twirling villains is a convenient fiction, and having those companies punished/broken apart and calling it a day is doing something, but that doesn't prevent a different company from doing the same thing in a year.

There is a tension between wanting the government to do something and not wanting it to do too much (lest it affect our stock options, or make our jobs harder? I don't know).


I mean, I think the “I’m a private company, I am not bound by the constitution” argument is a little dated. Companies effectively run the world and practically have a lot of power over individuals. There is essentially zero difference between a large company impinging on shared liberties and the government doing so. Maybe we should start by applying a broader set of protections to citizens in order to start addressing the systemic issues?


We're in agreement - and the protections have to be broad indeed to bar specific activities, regardless of which company partakes in them (current or future). Targeting individual companies will lead to an endless game of snail-paced whack-a-mole.


Lately, it seems like our only remaining choice as a civilization is to decide if we want to be exploited or controlled.


All of us are controlled, all the time, by how reality happens to unfold.

With some practice, it’s possible to accept that this is just how the world is, and to no longer be bothered by it.


It's either/or?


Just imagine the politician of your nightmares being in control of the regulations.

I think the real issue is that people live in black holes of misinformation on the internet because that is where they want to be. When I look at the many Herman Cain Award threads on Reddit, it seems the predominant original source of most posts is groups that the user signed up for.

This isn't new; before Facebook, there were Usenet and forums, AM radio, and even mailing lists and magazines.


> Just imagine the politician of your nightmares being in control of the regulations.

Exactly this! Every time I see someone proposing some new expansion of government reach and power, it seems to be considered only from the angle of “just think of what the (people I think of as the) good guys could do with this increased authority!”

“Yeah, but what could the bad guys do with this increased authority? Because as sure as the sun rises in the East, whoever you think of as the bad guys will eventually come to power and have this authority…”


Why does it have to mean giving anyone power? We could for example enact privacy laws that make it illegal to share personal data. That gives no one power, and takes away power from corporations. We could enact laws that prevent targeted advertising, which would make it pointless to gather a lot of the data they gather in the first place. That doesn’t make the bad politician any more powerful. So on.


That is saying the misinformation comes from companies gaming the news feeds, and not from people choosing friends, influencers, or groups that reinforce their feelings of victimhood?

I mean are we upset at targeted ads, or the sea of misinformation and biases that are fostered by social media? The seeds of which are often political AM radio and TV news networks.


I estimate that laws to prevent all forms of targeted advertising serve to protect incumbents (both corporate and political incumbents). That makes incumbent politicians more powerful (in relative terms).


Specifically, they serve to protect consumers. They might also protect incumbents, at least ones that don't have targeted advertising, but they're still in service of the consumer.


You can 1) make something illegal without 2) requiring any sort of regulation.


Yes, there’s a technical legal distinction between a law (legislative) and a regulation (executive), but nothing in my comment above was intended to signify that I want one or the other to have broad authority to protect their incumbency or exercise power over the People to a greater extent than today.


I don't think the bad guys would respect the law's limit anyways.

The not-very-bad bad guy seems like a weird group to optimize for.


> Just imagine the politician of your nightmares being in control of the regulations.

Even the worst politician is (ostensibly) accountable to the people. Corporations are not.


Corporations don't have customers?


I'd add to this sentiment Google AdWords. Google increasingly pushes people to "put your money here and trust our systems." Except their systems and account managers push people towards what is good for Google, not for the business.

It amazes me what they get away with. I suspect it happens out of a lack of awareness, but if any switched-on regulatory body looked into it, it's ripe ground for greater regulation.


The author seems to be filled with a sense of “knight-in-shining-armor-ism.” If Facebook’s actions really are a problem to most people, then most people would stop using it.

He calls Zuckerberg a "pope-emperor" of Facebook's users, but that's because the users choose to give him power. If the users stop choosing to give him power, he loses it.


> If Facebook’s actions really are a problem to most people, then most people would stop using it

what do you call this argument, best of all worlds-ism? So a priori any institution that persists is beneficial to its constituents merely because it continues to exist? Do concepts like power and dependence exist in the kind of worldview that produces these arguments?


There's an actual name for it -- Panglossianism -- named for Dr. Pangloss, the tutor in Voltaire's Candide who keeps insisting, despite catastrophically mounting evidence, that "all is for the best in this best of all possible worlds." (This was a deliberate parody of moral philosophy written by some of Voltaire's 18th-century contemporaries, most notably Leibniz.)



My argument is based on free-market economics. The users/customers seem to not care enough to put their time and money elsewhere.


No it’s not, and a cursory glance at studies of human behaviour (what economics is really about, in a way) would dispel this notion.


Could you please elaborate on what you mean? Facebook is a company and therefore its success and failure is dependent on its customers/users, right?


It seems you don’t understand the economics of this situation.

Facebook makes money from ad partners not users. Users are not paying Facebook for the software. People who want to run ads are. There is no market for newsfeed software here. Only a market for user attention.

Because FB made a creepy tool 15 years ago, they won some users and have an established base they can market to. People using it these days do so because of the network effect, not because it’s the best tool to keep in touch with people. Facebook has to maximize the time users spend with their eyeballs glued to the newsfeed, because that’s where ads appear. The friend-network portions of the software only exist vestigially at this point, to keep people locked into a newsfeed platform.

Facebook is dependent on their customers but their customers aren't the people glued to the newsfeed, those are the product literally for customers paying for an advertising platform. If it was legal, Facebook would literally chain you to your computer and force you to interact with the world through their platform because that maximizes their profits.


The gp does say users-slash-customers, usually meaning "or".

There were social media platforms before them that lost despite the network effects, and there are platforms that have appeared after them that gained a ton of users despite the lock-in (some of which they have bought, which is in my view the only thing about them that should be amenable to regulation). EDIT: based on TFA and only tangentially relevant to this thread/the moral crusade, making sure online tools can be modified/scraped/etc. in any normal way is something else I would regulate. EDIT2: based on the rabbit hole of anti-FB articles, what is needed is more of a DE-regulation, removing or significantly narrowing the scope of the laws FB uses to threaten developers. It's not that modifying a webpage is not protected, it's that it can explicitly be construed as illegal, in part because of the previous moral crusades.

The users derive some value from Facebook. I'd rather we didn't have holier-than-thou people regulate every pastime they don't like... reminds me of moral panics over everything from video games to weed.


It is illegal in most of the rest of the world to advertise/market prescription drugs, specifically because it causes undesirable outcomes where patients are telling their doctors what to prescribe, which is backwards. And drugs still exist in those countries. I don’t think anybody is saying “make Facebook illegal”. The ask is to deeply explore our understanding of ad-based social media and consider whether such machines have any place in a healthy society. Facebook or some equivalent would still exist if you curtailed ad profits; I don’t really understand the scare/worry that all our good internet things would just vanish if we clamped down on the harmful business model… Every law is a moral value judgement on the type of society we want to participate in (unless you’re an absolutist). I think your moral panic examples are generally unfounded concerns. I would have agreed with you 10 years ago that Facebook panic is also FUD. However, I think we have a clear history of examples of harm to go off of now with FB. We clamped down on the Tobacco industry, and it wasn't just holier-than-thou zealots having a field day as fun ruiners. Real harm led to real legislation. We just need to see things as they are and stop pretending that FB newsfeed-style social machines are benign, especially in the face of hard evidence to the contrary. I totally agree with your “I would rather” in the general sense. I simply think there’s a specific case here that warrants scrutiny.


I think I understand what you are trying to say, however...

> I don’t think anybody is saying “make Facebook illegal” ... consider whether such machines have any place in a healthy society.

That sounds exactly like a moral crusade. You don't want to make it illegal, but would like to consider if it has "any place", and also someone will have to define "healthy society". If you asked people in many places and times (if not most), they'd tell you homosexuality is an abomination that has no place in a "healthy society". Even at present, some people literally treat it as a disease in need of a cure.

And yes, I think prescription drugs should not be a thing (well, prescriptions are fine, but if I want to buy metformin because I think I'm smarter than everyone else, I should be able to do it and prove myself right (or wrong)).

Tobacco is a perfect example for me personally, because I dislike it, so I feel good about forcing everyone to not smoke. On the other hand, it hurts almost entirely the user, so the only justifiable restrictions would be the ones where the user is imposing costs on others, e.g. charging them more for healthcare... I like the advertising bans, but I would have also liked e.g. never having children around, via e.g. banning them from all restaurants and gyms. Doesn't make it justified.


> We clamped down on the Tobacco industry and it wasn't just holier than thou zealots having a field day as fun ruiners.

Well-said! There's a big difference between enacting laws that protect vulnerable groups (eg.: smokers / the public) from powerful influences (eg.: Tobacco companies / propaganda), and enacting laws meant to impose standards of "moral purity". A very big difference.


Where do you draw a boundary? What you're saying is "you are doing a thing that is bad for you, /I/ know better what /you/ should be doing". You can talk about externalities, but unless it's something that is very narrow and direct, e.g. "you ruin your health via smoking so you get charged extra for certain healthcare services", it becomes a slippery slope. "Healthy society" type stuff is especially suspect, stuff like "undermining society/government/cohesion" has been used by authoritarians for the vaguest of reasons.


The users aren't the customers, they are the product.


Like cigarettes? I have no sense of whether FB is physically addictive like smoking.

But if a company directly contributes to the decline of civil discourse and harms democracy, and thereby harms the users, then isn't it a government role to step in? And isn't this the government's role even if users can't see the harm at the micro-level of their daily interactions with the company? I guess that is the argument, at least. Really, it's a balance between values: individual freedom and the collective good.


> if a company directly contributes to decline in civil discourse and harms democracy, and thereby harms the users, then isn't it a government role to step in?

The chief problem here is “who decides?” If a government in power is being undermined, they have incredibly strong incentives to determine that those undermining actions are “harmful to democracy” (rather than merely harmful to their party). (I think we could point to many examples in US politics in the last 5 years where “this is bad for my party” is cast as “this is bad for democracy!”)

Which is why I think that, of all the speech that must be protected, political speech is of the highest criticality to protect. (And that claims by the government or strongly politically aligned citizens that ‘X is bad for democracy’ should also be viewed with a healthy amount of skepticism.)


Facebook is addictive tbh. I heard they hired some prominent psychologist to make it more addictive.

My mother is surely addicted to Facebook. She knows all the data implications, but she enjoys getting likes/shares etc. And fake news etc., which is intentionally specious, makes the platform even more enjoyable.

I can't tell her to stop using Facebook, and I am sure it is harming her, right?

Yes, this is a government role, but as with every government in a democracy, most of them are invested in short-term gains, and a company like Facebook lobbies a lot. We know Facebook pushes millions of dollars into lobbying, so why would the government solve it? Facebook also provides sweet tax revenue to the government.


>I heard they hired some prominent psychologist to make it more addictive.

This is basically what the argument to ban it amounts to whenever this discussion comes up.

Facebook is a simple site, there's nothing to it really. It's just an endless mediocre content website that happens to have your friends on it.

Good user experience = "designed to be addictive by army of ill-wishing psychologists."

The only reason I'd like to see it banned is so something better can finally take its place.


Humans enjoy being liked, what specific harm does your mother experience from using facebook?

Better to be addicted to typing words than to drugs; it's free and you don't die.


Since when is “most people don’t speak up so it’s obviously okay” a healthy way to determine whether anything should be legal or not?

Where is the knight-in-shining-armor-ism? What even is that? Doctorow is asking for us to build a society where we don't take a company at their word that they will do right by users, because he cites multiple examples where a company leader charged as the steward of software integral to people’s lives has said they will and then about-faced to chase profits. There’s clearly something worth discussing here… Remember how nice and interoperable chat was before Google killed XMPP?


> If Facebook’s actions really are a problem to most people, then most people would stop using it.

Just like sugary soda, loot crates, cigarettes, and opium then? Companies design products that are bad for people all the time.

> Zuck: they “trust me”

> Zuck: dumb fucks

This was said after his AOL hacking days [1], so he already had a certain predisposition towards "ordinary people".

Given that his history and his present behavior aren't much different, it seems he hasn't changed much. He designs products to extract from people.

[1] https://qf0.github.io/blog/2020/01/28/Mark-Zuckerberg-was-a-...


Have you thought about how old he was when he said this? I am sure you never did anything stupid as a teenager.


I don’t see your counter argument. People eat sugar because they want to. If they didn’t like it, they wouldn’t eat sugar and no one is forcing them to.


By that measure, most of those who live under oppressive dictators think their leaders are okay. If not, "they would just rise up and replace them".

Doctorow's piece here reads to me as a call for viable alternatives. The prose attempts to advance a number of his favored alternatives, which makes it a little confusing.

To me, I believe we are headed toward a world where social media operates via an open API, like email. The road between here and there will be rocky.
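
A rough sketch of what that already looks like in the fediverse: reading someone's public posts over ActivityPub is just an HTTP request any client can make, no gatekeeper involved. The actor URL below is a made-up placeholder, not a real account.

    // Rough sketch: fetch a public ActivityPub outbox with a plain HTTP client.
    // The URL is a hypothetical placeholder; any ActivityPub-speaking server works.
    const actorOutbox = "https://example.social/users/alice/outbox";

    async function readOutbox(url: string): Promise<void> {
      const res = await fetch(url, { headers: { Accept: "application/activity+json" } });
      const collection = await res.json();
      // Outboxes are ActivityStreams OrderedCollections, possibly paged.
      console.log(collection.type, collection.totalItems);
    }

    readOutbox(actorOutbox);

No API key, no developer agreement, no permaban risk. That is the sense in which "open API, like email" is less hypothetical than it sounds.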


"If Facebook’s actions really are a problem to most people, then most people would stop using it."

No. Because all your friends are on Facebook. And by buying out all the serious competition (WhatsApp, Instagram), they have guaranteed a monopolistic position. Even if you move to another place, if it grows too big, odds are it will end up being acquired by FB.


Arguably Facebook provides useful, valuable services and also causes some harm. Let's suppose the good it does outweighs the harm. That's not a good argument to simply accept the harm it does. The benefits and harm are not an inextricably linked package; it should be possible to mitigate the harm without eliminating the benefits Facebook provides to people.


The nature of addiction (etymologically, “without-say-ism”) is that the problem is most felt by those who can’t stop using it.

We can turn this into a giant regulate-tech topic, but the source complaint, simple enough, was a desire to have the feed consume less of life’s attention and be more worthy of what attention he was giving it.


I find it ironic that reading this article requires logging in or using the app, presumably so you can be more accurately tracked.


This article is not an exclusive (but some on medium are), so here's the canonical non-paywalled version:

https://pluralistic.net/2021/10/08/unfollow-everything/#shut...

Cory seems to be up front about the conditions of his various forms of publishing:

How to get Pluralistic:[0]

Blog (no ads, tracking, or data-collection): Pluralistic.net
Newsletter (no ads, tracking, or data-collection): https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection): https://mamot.fr/web/accounts/303320
Medium (no ads, paywalled): https://doctorow.medium.com/
Twitter (mass-scale, unrestricted, third-party surveillance and advertising): https://twitter.com/doctorow
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

[0] https://pluralistic.net/2021/05/05/clarkes-third-law/


> That’s how you distinguish between good mods and bad mods — by passing democratically accountable laws that tell us what we are and are not entitled to.

Government involvement would worsen the problem by a million. I'm also tired of tip-toeing around what is probably a real conspiracy. That whistleblower could easily be a plant by the government. We already know there isn't balanced political moderation with big tech (they censored the former president), so there's nothing stopping them from trying to expand that control and do so in a transparent, legal way now. They just need an excuse as always.

I'm all for regulating big tech, but if anything it should be other tech companies. Why not have private businesses specializing in content moderation do the regulating? I think there are many solutions that don't involve government, but I suspect they are salivating at the prospect of having full control without the need to pretend they don't. Then these laws will become permanent and it's only two steps away from requiring this as some national profile, but that's another thread.


We need a market for content moderation and content curation services.



