Treat addictive social media companies like addictive cigarette companies. Let's see some huge warning labels about how mentally harmful it is to continue scrolling on Facebook, right on the first result where it's unavoidable. Let's tax the hell out of social media companies to generate local revenue, just like sin taxes. It won't be a huge change, but it's a great starting point, and it comes with revenue that can potentially fund mental healthcare programs for people damaged by these companies.
> Let's tax the hell out of social media companies to generate local revenue, just like sin taxes.
Very interesting idea, actually. There is evidence that social media harms some individuals' mental health, widely enough to cause measurable societal harm, so a proposed tax on all social media companies with revenue going towards mental health programs seems worth exploring.
Generally I'm not much in favor of implementing new taxes (I'd rather close existing loopholes), but if implemented reasonably and backed by scientific evidence, this seems valid.
That's because, so far, they've managed to deflect, deny, and discredit the research and critics pointing out exactly how social media uses things like variable rewards the same way slot machines use them to keep gamblers pulling the lever. They do this using tactics developed by the tobacco companies to fight findings that smoking causes cancer and other harms, and refined by the fossil fuel industry to prevent action on global warming.
I agree with you but a lot of the analogies and metaphors here are insufficiently subtle.
FB is, in some sense but not entirely, a form of speech, no better or worse than Grand Theft Auto or the National Enquirer. That's how I thought of it ten years ago.
Now that it is in our pockets nearly cradle to grave; a monopoly; and dependent on minutes of engagement rather than subscriptions -- it is a different animal altogether.
Yet for all those things we have laws, regulations, and even explicit restrictions for young people. FB, on the other hand, is the Wild West, and it constantly lobbies to keep it that way in terms of how regulators see it.
Yes, that's the buried lede. Those are all things you need to be old or mature enough to use responsibly - they demand the experience and impulse control you develop as an adult.
Meaning that blocking social media for kids and teens is likely on the anvil at some point.
> Uhm a cigarette you cannot change the ingredients of, it's tobacco.
You can soak the tobacco in a solution containing additives, such as extra nicotine, which is exactly what cigarette companies have done in the past (and not just to the tobacco; to the filters and the paper as well).
The parallel here is filling people's feeds with divisive political news and posts, even when they have tried to opt out.
The point is that tobacco itself is a carcinogen; you cannot make a cigarette that doesn't cause cancer, because at a minimum it has to burn tobacco.
A social media website does not need doom scrolling or private feed algorithms; you can change how it works instead of adding a useless banner.
1) What would you expect to be implemented to reduce/eradicate doom scrolling?
2) What would making the algorithm public do for us? I'm not an ML engineer, but presumably their algorithm isn't just an algebraic equation where x is how toxic the post is, y is how inflammatory it is, and z is the number of kids who will think harder about suicide because of the post.
Maybe I'm just super naive and that _is_ how Facebook made their algorithm, but my understanding is that the algorithm is a little more of a black-box and is a little abstract. How is a lay-person supposed to evaluate something like that?
The inputs to these algorithms are usually human-understandable, quantifiable signals like likes, text sentiment, maybe engagement history -- and the output is probably a score that can be ranked. Ultimately, though, even if the algorithm is a black box (it's entirely possible it's not ML-based!), we can still evaluate it in a lab environment.
Some of the signals might be generated by ML also, like photo labels, but ultimately these things are very understandable if you have the model and data.
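To make that concrete, here is a toy sketch of the shape I'd expect such a pipeline to have. Every field name and weight below is invented for illustration; a real system would learn its weights rather than hard-code them.

    from dataclasses import dataclass

    @dataclass
    class PostSignals:
        likes: int
        shares: int
        text_sentiment: float   # e.g. -1.0 (negative) to 1.0 (positive)
        author_affinity: float  # how often this viewer engages with this author

    def engagement_score(s: PostSignals) -> float:
        # Hypothetical hand-picked weights; production systems learn these.
        return 1.0 * s.likes + 2.0 * s.shares + 0.5 * s.text_sentiment + 5.0 * s.author_affinity

    def rank_feed(posts: list[PostSignals]) -> list[PostSignals]:
        # Higher score = shown earlier in the feed.
        return sorted(posts, key=engagement_score, reverse=True)

Even treating engagement_score as a black box, you can probe it in a lab: feed it controlled inputs and watch how the ranking changes.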
I don't want them to do anything regarding doom scrolling; it was just an example, and it came from another user.
I do want them to publish their algorithms and moderation logs so we have insight on how they are serving and moderating content.
I don't care about organic user content; I do care if FB is pulling the strings to make it either more salacious or biased in one way or another.
I also care if they are banning certain users or content but not others.
Sorry for being pedantic, but you can absolutely change the ingredients of a cigarette. There's a ton besides the tobacco. And you can breed different strains of tobacco to have more or less of some chemical.
Smoking was happening in the 1800s, but lung cancer was rather rare; rates didn't shoot up until the 1900s. This is around the same time that tobacco companies figured out they could soak tobacco in ammonia. This allowed for inhalation into the lungs (by contrast, it sucks to inhale cigar smoke deep into your lungs). It also made cigarettes much more addictive, so people smoked way more and inhaled into the lungs. That's about when lung cancer stopped being so rare.
Yes, cigarettes cause cancer, but to say it's because they burn tobacco is missing a big part of the story.
That was probably because smoking was not common outside of wealthy men during the 1800s. It was not widespread among the public until after the world wars, when mass-produced cigarettes (which weren't around until the late 1800s) were added to rations. The smoking rate increased 350% after WWI and stayed high ever since; the US government didn't stop issuing cigarette rations to soldiers until 1975. Lung cancer rates have followed smoking rates in lockstep. It's not that smoking suddenly became harmful - it always was. It just wasn't common to smoke, and even those who did back then didn't smoke very much, and certainly not around the clock (kind of like hookah users today).
My Nintendo DS from 15 years ago gave me an eye strain warning every time I started it up, and it doesn't always cause eye strain - only misuse does, and I know that thanks to the informative banner.
I love that Nintendo is very aware of the potential negative effects of its products and games and tries to inform users and mitigate those effects.
Even when it comes to encouraging positive play between users: in the new Pokemon MOBA (a genre known for its toxicity) there's no text chat, only communication through a few emotes you can show. Some of their decisions arguably make for worse games for "hardcore" gamers (like the way they rank users in Smash, or how they focus on casual-style in-game tournaments, or make matchmaking harder), but they sacrifice that in favor of a more positive general experience - especially important since children play their games.
I thought PictoChat was great too, and a lot of fun. They could have opened it up and made it into a global network, but the beauty is that it operated on local networks, so it was more of an in-person social network - plus there was no way for advertisers and commercial companies to break in.
I remember PictoChat - so many dicks and graphic drawings sent to each other in junior high. The sensitive world today would have had a field day with that.
This part of the thread went pretty off topic, but I like it! PictoChat was certainly ahead of its time; I wish we had stuck with things like that.
Moreover, warnings are useless if people can't vote with their feet. So if you want to actually effect change in the dynamics of the market, you need to make services compete on quality and value to the customer, rather than engaging in a scramble to accrue insurmountable network effects and lock-in.
That means mandates for data interoperability. Sadly, I have no idea how to implement that in a way that doesn't utterly stifle innovation by ossifying what sorts of data models social media is allowed to have. But at the very least we could create a sort of interoperability minimum that prevents you from locking up things like photo albums or people's "social graphs."
Over the longer term I'd like to see some kind of disentanglement of the protocols, standards, and data models from the front-end clients. It's obviously a lot more complicated now, but in the same way that you could access AIM, ICQ, GChat, and a bunch of other stuff from a variety of chat clients it would be good to be able to do this with everything social. Hell, ActivityPub basically tries to do this now so it's not impossible.
What is the appropriate middle though? Think about alcohol culture. Should we ban beer commercials on TV? Only allow beer commercials with talking frogs rather than attractive young people having fun?
I'm honestly baffled as to why beer commercials are considered socially acceptable - but then again, I think that advertising (in our modern interconnected world) only ever serves to drive overconsumption. If you want a beer, you go to the bevy and pick out a beer you'd enjoy... if I'm watching TV and the TV tries to make me want a beer, that's not a good thing.
Good advertising[1] is limited to making sure your product is visible in comparison to competitors - having shiny cereal boxes is something I find pretty meh, but in the cereal aisle you're dealing with someone who wants to buy some kind of cereal, and you're trying to convince them to buy yours. TV advertising drives up demand for products, which, by definition, means we're consuming more of that product than we otherwise would... that's great for business... and it's also great for the obesity epidemic.
1. What I'd consider to be ethical advertising, but that's like my opinion man.
I agree, but I'd imagine the beer companies would argue, under your substitution standard for advertising, that any drink - or even any consumable product you can put in your mouth - is a competing product. So then no drink commercials at all, and you've reduced the capital available to fund TV, and you have a domino elimination of economic activity.
I mean, ideally we could allocate all the capital we put into manufacturing, selling, and consuming beer into fitness or math education or something more harmonious with wellness and human achievement; but hey, plenty of great scientists and inventors love beer.
What is the specific harm involved here that is deserving to be taxed?
How would we measure this harm in order to know how much to tax a given company?
Should other causes of this harm be taxed/penalized as well? If not, why?
For instance, if the harm in question is that some people feel worse, to varying degrees, after using a given product, is there any limit we as a society should set on penalizing the cause of that harm?
Should people or entities who say things that make people feel worse be fined or prosecuted under the law? If I feel worse (let's call this 'trauma' or 'anxiety' or 'depression' or 'literally shaking' or 'panic attack') after reading a book or a news site, should I have standing to sue the creators and the medium that presented said content?
You tax Facebook but allow it to operate however it wants. Facebook is then incentivized to double down on its algorithms---like tobacco companies using chemical and biological techniques to make cigarettes more addictive---in order to regain the lost profits.
Then you can double down on the taxes you levy against them if they begin harming more people, no? The idea is the cost of doing bad business will eventually be too much to make it worth doing that sort of bad business. Same idea with carbon taxes where the costs scale to damage and incentivize shifting to good behavior rather than doubling down on bad behavior. And even with cigarette companies doubling down, far fewer people smoke today and die of lung cancer than 50 years ago, so this stuff works on the whole.
That definitely isn't what happened with alcohol or tobacco! Instead you end up with a significant enough amount of money going to the government that the government now protects those industries to an extent - ensuring lower-priced competition (e-cigs, moonshine) gets stomped on and the market gets protected rather than eliminated or reduced too much.
Warning labels won't be of much use; most individuals will ignore them, believing in their own prowess to discern truth.
Taxing all social media, or all media, may have interesting implications, as this again will reduce profit for all and give some revenue to governments without making any actual change. Also, people making cooking or educational videos on YouTube may resent having to pay a sin tax.
HN doesn't notify you when someone responds to one of your posts. It doesn't send you any nagging emails (indeed, it doesn't even require an email to sign up). It has the noprocrast setting to let you set limits on your own usage. It doesn't (afaik) try to optimize for engagement - dang tries to maintain civil discourse as much as possible.
You are able to consume both tobacco and alcohol (let's not tangent into a drug legalization discussion). Tobacco and alcohol cause measurable societal harm and measurable costs to the state - are you implying it's unreasonable for states to tax these goods for those reasons?
Generally speaking I'd rather reduce taxes but I fail to see what's wrong with e.g. an alcohol excise tax going towards rehabilitation and/or highway safety programs. "Sin tax" is just a colloquial name for an excise tax, which a state has every right to enact.
> Tobacco and alcohol cause measurable societal harm
And if I choose to smoke in the privacy of my own home (or yard)? What societal harm am I causing?
As for alcohol, the societal harm caused is a laundry list of behaviors that are already illegal regardless of alcohol's involvement, with the exception of sin tax avoidance.
Why not outlaw the societal harm instead?
> e.g. an alcohol excise tax going towards rehabilitation and/or highway safety programs
Both of those seem like good things regardless, don't they? Why do we need a special tax on alcohol for things that are generally good? It's not as if people who consume alcohol are the only ones who need rehab, or the only problem with highway safety.
Does the tobacco tax go toward lung cancer patients? It actually goes towards funding campaigns that overstate (i.e., lie about) the dangers of smoking, to the point that people vastly overestimate the dangers of smoking [1].
> "Sin tax" is just a colloquial name for an excise tax, which a state has every right to enact.
Of course it's legal; it's just garbage policy. Sin taxes come from the pairing of politicians wanting more money with pearl-clutching interest groups pleading to think of the children.
Unfortunately it's not so simple. An individual's smoking and alcohol use can and does harm others, and the state levies excise taxes for that reason.
Another example is driving a car, which results in thousands of fatalities and many more injuries daily. Not to mention environmental impacts which affect others. The state chooses to require drivers to have insurance and their cars to pass smog tests, rather than outlawing driving.
> An individual's smoking and alcohol use can and does harm others, and the state levies excise taxes for that reason.
Smoking and alcohol use can also not harm others. Should those who smoke and drink responsibly be held responsible for those who don't? How does the tax ameliorate those harms?
For everyone responding that smokers cost the government money: it is actually the opposite, in that they save the government money because on average they die sooner. From the Manning study: "In this analysis, the federal government saves about $29 billion per year in net health and retirement costs (accounting for effects on tax payments). These include a saving in retirement (largely social security benefits) of about $40 billion and in nursing home costs (largely medicaid) of about $8 billion. Costs include about $7 billion for medical care under 65 and about $2 billion for medical care over 65; the remaining $10 billion cost is the loss in contributions to social security and general revenues that fund medicaid."
Presumably COVID also saves the government money, then? It mostly kills the old, who have already paid into the tax system their whole working lives and are now drawing from it. And it mostly kills the chronically ill, who need more tax support than they contribute. It seems terribly cold and callous to look at it this way, though - e.g. going up to a son holding the hand of his mom, who is dying of lung cancer in the hospital, tapping his shoulder, and whispering, "Hey kid, cheer up, Uncle Sam saved $8 bil on medicaid nursing home costs 'cus mommy here couldn't stop sucking nicotine sticks."
> Presumably COVID also saves the government money, then?
It most certainly does. The retort your parent made is for those who make the argument that a tax is necessary because X (smoking, in this case) costs the country economically.
If you want to make the purely _economic_ argument, it's a benefit to the bottom line.
That's not what a sin tax does. You are still free to smoke cigarettes or drink alcohol, and were we to tax social media usage, you would still be free to use or not use that.
But a sin tax ostensibly accounts for the economic externality*. We know that cigarettes impose a cost on society beyond the individual smoker. I'm all in favor of making people pay for things that we know cause damage to society more broadly. And I hardly think it's controversial that social media is in many aspects harmful to society.
*Sin taxes are technically different from Pigovian taxes, but I - and I think most people - tend to use the terms interchangeably.
> We know that cigarettes impose a cost on society beyond the individual smoker
What's that cost?
From what I've read, all the economic costs smokers impose on society are more than made up for by their dying early; they actually cost less [1]. I guess everyone should smoke to save the state money!
Neither you nor the government/state decides what you get to do with your mind. An advertising company decides what to do with it, and can manipulate it however it decides best benefits itself. Not you, not society: Facebook, and whatever makes Facebook the most money.
It's not the state "deciding"; it's the state requiring compensation for the negative externalities created by the product. You're more than welcome to smoke cigarettes if you so choose. But that decision isn't made in a vacuum, and it impacts the rest of us in the form of increased public health burden, insurance costs, secondhand smoke, etc. A "sin tax" serves not only to discourage the asocial behavior (we'd have a big problem if everyone made the same choice) but also to make you pay your fair share of the costs of your decision.
Curious: do you not wear seatbelts too? Opt for asbestos insulation since it's better than anything on the market today? Plumb your home with lead since it's more durable and flexible? Use leaded gas because it's better for your older engine?
The state acts on the collective when the public is not making good decisions for themselves and is causing net harm to themselves, usually with the public paying the price. Sometimes that's overt, as with death rates from accidents without seatbelts, or cancer from asbestos exposure. Sometimes it's less overt, like the behavioral issues, increased incidence of mental illness, and crime rate increases from leaded pipes and gasoline.
I'm willing to bet social media causes net harm. It hasn't enabled communication that wasn't possible before; if you can get access to a Facebook account, you already have email and access to IRC. But it has cost probably trillions in productivity from people staring at it during all their idle time, plus the cost of treating mental health issues that wouldn't have cropped up without toxic social media culture.
I say we make these companies pay for these externalities, since we are otherwise forced to pay for them ourselves. By not taxing externalities like this, the state is deciding that I need to pay for Facebook's ills on society whether I use the service or not - which should anger you as a libertarian as much as it angers me as someone on the left.
Not only the age restriction, but there are restrictions meant to curb at least some abuse. Being drunk in public is a crime, establishments technically aren't allowed to overserve patrons who are very drunk, you can be tried for manslaughter in the worst case if you force someone to overconsume and they die, etc.
I wear my seatbelt, I don't smoke, I don't drink and I'm vaccinated.
Everyone keeps talking about these "negative externalities" without being specific. Why not just make the societal harm illegal and let people hurt themselves without buying permission from the government?
We require driving licenses, age restrict operation of vehicles, require vehicles to operate within parameters (speed limits, gross vehicle weights) and according to standards (traffic signals and markings), and prohibit operation while under the influence of decision or reaction-impairing substances.
Because these are all statistical precursors to accidentally killing someone with a car.
Texting while driving, while illegal (everywhere by now, I assume), causes more accidents than driving under the influence (both in total numbers and, apparently, per capita). Should we tax text messages?
So you get to ignore all the responsibilities that come with your rights and clog up our hospitals with your bad decisions?
How about you take full responsibility: you get to put whatever you want in your body, and you agree never to take an ambulance ride or be treated by a hospital.
Don't want the vaccine? That's fine, it's your right. But now when you can't breathe, nobody coming to help.
> So you get to ignore all the responsibilities that come with your rights
Absolutely not. I know lots of people who manage to drink alcohol responsibly. They never drink and drive, they don't regularly overindulge, and it makes their lives and their peers' lives _better_.
What negative externality are they paying for with alcohol taxes?
> How about you take full responsibility: you get to put whatever you want in your body, and you agree never to take an ambulance ride or be treated by a hospital.
> Don't want the vaccine? That's fine, it's your right. But now when you can't breathe, nobody coming to help.
If I pay for health insurance, I'm already taxing myself in this instance. It would be perfectly reasonable for a health insurance company to offer incentives for people to be vaccinated just like they offer incentives to non-smokers.
If the government wants to start providing that healthcare, then they can have a say in the cost of poor health decisions.
I don't think number of users is the metric that really matters. I care a lot more about market cap because money is what gives corporations greater leverage to do bad stuff.
Public algorithms, or at least some 3rd party review. Ban infinite-scroll on social media platforms. Require feeds to be configurable (users can set to "newest first" or "top picks" or whatever else). I'm sure I could come up with more, that's just off the top of my head.
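As a sketch of what "require feeds to be configurable" could mean in practice (the field names and modes here are invented, not any platform's real API):

    # The user picks the mode; the platform must honor it.
    def sort_feed(posts: list[dict], mode: str) -> list[dict]:
        if mode == "newest_first":
            return sorted(posts, key=lambda p: p["posted_at"], reverse=True)
        if mode == "top_picks":
            return sorted(posts, key=lambda p: p["score"], reverse=True)
        raise ValueError(f"unknown feed mode: {mode}")

The point isn't this exact code; it's that the sort key becomes a user setting rather than an engagement-maximizing secret.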
These seem like awkward things to encode into law in a durable way. Laws are long-term blunt instruments, banning something like infinite scroll will have all kinds of unintended consequences.
That's true - these things might be better implemented as regulations out of the Executive branch - but that would still require legislation authorizing somebody to implement the regulations.
State governments are much more interested in participating in the cable news culture wars to pump up next election's numbers than they are in actually governing. I doubt it.
Have you ever watched a teenager (or an addicted adult) scroll through their IG feed? It's disturbing. They just scroll and scroll and scroll, waiting for the tiny little dopamine hits. I don't know if a "Next" button fixes it completely, but it almost has to be better, even if only marginally so.
> I would rather take the stance of feed algorithms and moderation logs be PUBLICLY available. Transparency instead of censorship.
I think that removing CDA Sec 230 protections for algorithmically-curated feeds is the answer. It's one thing to have a basic FIFO feed, it's another to hide or reveal content based on your own internal engagement/revenue targets. At the point where you're crafting bespoke engagement-maximizing feeds, you're creating a gestalt creative work that has a life of its own and that is no longer merely a passthrough for the works of users of the platform.
In other words, consider Facebook's curated feeds as Facebook's speech, not only the speech of their users, so Facebook faces liability for that speech.
I agree. Especially since the political ambition won't be to curb misinformation; it will be to get control over which misinformation gets spread. Harm to users will be an excuse, and like misinformation it will be quite difficult to quantify.
I considered the widely-used 'hot' algorithm and I don't think that really qualifies as a 'creative work' on the part of the site. The 'top from last hour/day/month/year' and 'hot' algorithms are really elementary sorting methods that are in the same league as FIFO.
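For reference, Reddit open-sourced its 'hot' ranking years ago, and it really is roughly this simple - a logarithm of net votes plus a linear time bonus, with nothing personalized and nothing learned from surveillance (sketch below, adapted from memory):

    from math import log10

    REDDIT_EPOCH = 1134028003  # fixed reference timestamp from Reddit's code

    def hot(ups: int, downs: int, posted_epoch: float) -> float:
        s = ups - downs
        order = log10(max(abs(s), 1))  # dampen runaway vote counts
        sign = 1 if s > 0 else -1 if s < 0 else 0
        return sign * order + (posted_epoch - REDDIT_EPOCH) / 45000  # newer = hotter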
> Who gets to decide what algorithms are simple enough to be legal? You? Why?
I'm not talking about the legality, I'm talking about the liability. If one user posts defamatory/libelous speech and Facebook decides to blast it to 1 billion users because their secret sauce algorithm found that it really cranks the revenue dial, Facebook should share some culpability in that defamation lawsuit. A basic 'hot' algorithm that's not designed by squadrons of PhDs running machine learning on user behavior analysis isn't really the same thing, culpability-wise, imho.
> basic 'hot' algorithm that's not designed by squadrons of PhDs running machine learning on user behavior analysis isn't really the same thing, culpability-wise, imho
A "hot" algorithm can blast high engagement libel in front of people as well as a fancy algorithm.
If the "hot" algorithm and the fancier algorithm are both content neutral, on what basis can you distinguish the two as a matter of law?
Does the hot algorithm become illegal if a PhD implements it? I'm at a loss about what distinction you're actually trying to draw.
Your post, like many others in this thread, doesn't articulate exactly what about FB's conduct you find objectionable.
Illegal, liable, doesn't matter --- you want to use the state to drive certain kinds of ranking off the internet.
Fine. What, precisely, is the line between algorithms acceptable to you and algorithms not?
What is the precise conduct that would make liability attach to a ranking algorithm? You can emote, but you can't describe what exactly it is you would turn into a law.
> You can emote, but you can't describe what exactly it is you would turn into a law.
I thought I made it pretty clear from the outset I was talking about removing CDA Sec 230 protections for sites using bespoke (i.e. proprietary) curation algorithms for their feeds.
No, actually, it's not any of these things. It's on HN that dang applies special rules to your comments if he doesn't like you. FB's algorithm is impersonal.
Your list of criteria is not legally actionable. Neither you nor anyone else can write down precisely what it is that FB should be prohibited from doing. All I see is a bunch of unproductive rhetoric about how this is bad or that is bad. Blah blah, filter bubbles, moderation, whatever.
What is the exact definition of the criteria you would use to prohibit ranking algorithms?
> Why the hell shouldn't I be able show ads to people who want to see them?
Nobody's stopping you. If you want to show ads to people interested in football, post ads on a football site. If you want to show ads to people interested in horses, show them on a horses site. If this type of non-invasive targeting to display ads doesn't make you as much profit, well that's a you problem.
> HN is an algorithmic feed.
You seem to be struggling with the idea of a 'bespoke' algorithm where everyone's algorithm is different, informed by an internet-wide surveillance system. This is the difference between HN and FB.
Targeted ads. Targeted. As in specifically and exquisitely targeted to individual people based upon data gathered from surveilling everyone.
Still legitimate, and maybe even encouraged, are contextual ads, banners, sponsorships, branding, swag, paid placement, influencers, misc calls to action, etc.
Please rest your concerns. Late stage capitalism will continue unimpeded after Facebook's wings are clipped. Social media will revert to less noxious enterprises, like metafilter and craigslist.
> HN is an algorithmic feed.
Opt-in. Transparent.
What is FB's equivalent to HN's /newest?
I'm not crazy about HN's ranking. It's pretty good, non-toxic, seems fair. But someone other than me is making judgement calls.
I much prefer client side (user controlled) filtering and sorting, like with any generic RSS reader.
Repeating myself: All HN visitors see the same feed. A crucial qualitative point you pointedly do not acknowledge.
Ad targeting is just a bugbear for a section of the tech activist crowd. There is nothing fundamentally wrong with a site like Twitter noticing that I tend to post a lot about cats and showing me cat ads as a result. No, it's not self-evident that targeting is bad, and no, behavioral targeting doesn't require "surveilling" everyone internet-wide.
> Late stage capitalism
I don't subscribe to the neo-Marxist worldview that uses this "late stage capitalism" frame all the time. Capitalism is the natural order of the universe, not some temporary aberration on the way to your Utopia.
> Opt-in. Transparent.
Not transparent, and hardly opt-in: HN feed ranking is the default.
> I much prefer client side (user controlled) filtering and sorting, like with any generic RSS reader.
Okay, so people seeing different feeds is good...
> Repeating myself: All HN visitors see the same feed. A crucial qualitative point you pointedly do not acknowledge.
... and now people seeing different feeds is bad.
Make up your mind.
Different people should see different feeds because they're interested in different things. There's nothing sinister or nefarious about showing people stories about topics that interest them and not showing them stories about topics that don't.
I think even you'd agree that someone should be able to opt into seeing stories about "cooking" and opt out of stories about "motorcycles". Okay, that's good, right? Now what's wrong with using ML to infer, based on what someone reads, that he's interested in "cooking" and not "motorcycles"?
This interest inference is what FB is doing. There's nothing bad about it.
I maintain that the criticism you and others are lobbing at algorithmic feeds is logically incoherent, emotionally rooted, and unworkable as public policy.
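In its simplest form, the interest inference I'm describing is nothing more exotic than counting (topic names are illustrative; real systems use ML over richer signals):

    from collections import Counter

    def infer_interests(read_topics: list[str], top_n: int = 3) -> list[str]:
        # Count what the user actually reads; rank topics by frequency.
        return [t for t, _ in Counter(read_topics).most_common(top_n)]

    # infer_interests(["cooking", "cooking", "motorcycles", "cooking"])
    # -> ["cooking", "motorcycles"]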
> This interest inference is what FB is doing. There's nothing bad about it.
And circling back to the original article, there is in fact something bad about it. One guy started out with a site to rate the attractiveness of women at Harvard, and now my mother is probably going to die because his customized algorithm found that showing her lots of anti-vaccine misinformation would maximize his profits.
I think that's the right approach. Legally you could require every social media company that collects and sells data on its users to advertisers to allow the users to access their internal algorithmic interface (for their own account).
Now, what controls are on the internal algorithmic dial? Apparently that's top secret, but a legal requirement to expose the interface to the users seems reasonable.
Note that this might not affect what ads you get served (that seems more on the private business side, although banning prescription pharma ads makes sense), but it would affect what shows up in your feed, what content you get served, etc. You could write your own exclude lists, for example (i.e. if you never want to see content from MSNBC, FOX, or CNN, that would be your decision, not the algorithm's).
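That exclude-list mechanism could be as small as this (field names are hypothetical; the platform would apply the filter before ranking anything):

    def apply_exclude_list(feed_items: list[dict], excluded: set[str]) -> list[dict]:
        # Drop anything whose source the user has opted out of.
        return [item for item in feed_items if item["source"] not in excluded]

    # e.g. apply_exclude_list(feed, {"MSNBC", "FOX", "CNN"})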
If you get too big, you can't buy your competition (e.g. FB buying IG). Or if you get too big, you have to open your stuff up like email does. Or if you lie to congress, you get penalized. Or if you get too big, you have to make your algorithms publicly available.
I think GP is referring to the fact that email overall is a system based on public standards and open to new entrants. You can start Hmail.com if you want and plug into the existing email ecosystem as a new competitor very easily.
The social media ecosystems aren't like that. You can't be a chat provider and plug into FB Messenger; you can't plug into Twitter, etc.
There is an open social media eco-system called the fediverse (for its federated nature), in which Mastodon is the best-known player. But it's gotten very limited traction, because of the network effect that keeps people on FB and Twitter. No such effect keeps people on Gmail.
Email, not Gmail. I can email people from my provider even if they use other providers, including people who self host. And I can get email from them too.
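That openness is mechanical, not just cultural: any mail server can discover any provider's mail host through public DNS. A sketch using the third-party dnspython package (pip install dnspython):

    import dns.resolver

    def mail_hosts(domain: str) -> list[str]:
        # MX records are public; no platform gatekeeper is involved.
        return [str(r.exchange) for r in dns.resolver.resolve(domain, "MX")]

    # mail_hosts("gmail.com") and a self-hosted domain's MX records are
    # served by the same open infrastructure.

There is no equivalent public record you can query to federate with FB Messenger or Twitter.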
I would ban algorithmically targeted media - i.e. no personalized feed based on an "engagement" algorithm; for social media you'd just see a chronological feed of posts from the people you follow. The personalized feed is the most addictive and radicalizing part of social media - and the most lucrative. Much like the nicotine in Big Tobacco's case.
Could I write my own algo for personal use? Could I hire someone else to write that algo? Could I share it with others? Could I start a company to sell it? If it gets too popular, will you try to ban it too?
>> To clarify, can you specify exactly what law you would like made? What do you want to be done exactly?
Honestly, social media issues are for the most part a parenting issue. If you don't have access to your kid's phone, or don't know what platforms they are on, who they are talking to, and what they're sharing, I'm not sure legislating social media is going to do much of anything. New platforms will pop up, more private networks will be started, and suddenly everything becomes too fragmented to really oversee.
I would create laws that have teeth and address issues like bullying, doxxing, SWATting, and other ways people weaponize social media against other people. You start to put some teeth into laws, where people face serious consequences for bullying and pushing people to suicide, and then you might see some changes.
> You start to put some teeth into laws, where people face serious consequences for bullying and pushing people to suicide
Counterpoint: kids aren't all neurologically and socially developed enough to understand life-altering consequences for certain actions, and that's not their fault. Legal codes and law enforcement are too crude in most child-related cases, unless you're okay with incarcerating misbehaving children.
It's on adults to make sure things kids can reach are reasonably safe for—as well as from—them.
Counterpoint to your counterpoint: Almost all kids are neurologically and socially developed enough to understand they'll be picking up cigarette butts and cleaning graffiti on the weekend if they start bullying someone. Yet it's quite common for American schools to enact "zero tolerance" policies where any physical altercation, barring criminal charges, gets the same punishment regardless of severity and irrespective of who the aggressor was. The policy is literally to give the victim the same punishment in the name of "fairness". Now there's no reason not to escalate, and it incentivizes the victim not to report it. Even with the same punishment, it's still skewed towards the bully: odds are they're not going to care as much about a couple days of suspension, and most bullies aren't at any serious risk of expulsion.
School administrators don't care about bullying or teens being driven to suicide; they care more about liability than fairness. Taking the King Solomon approach in public schools is abhorrent and unjust. Kids do dumb things, but it's so much worse when incompetence and callous disregard create perverse incentives.
In my opinion, if social media is as important to society as it seems, there should be a government-funded social network for users and businesses - in the USA, something like NPR and PBS.
The problem is companies selling people's data and optimizing the algorithms for profitability. Instead, what if everyone paid some taxes to have a social network that helps people interact and businesses promote themselves? You could get rid of the ads AND the algorithms. Let users customize the settings that dictate the algorithm.
You can be sure the incumbents would fight tooth and nail to make this look like a very unattractive idea, and lobby heavily to make sure it never happens.
> I would rather take the stance of feed algorithms and moderation logs be PUBLICLY available. Transparency instead of censorship.
Ok, let's say that's now the case. The FB source is now open. What changed? Any negative consequences are still occurring, and if anything, we've just handed bad actors better visibility into what to exploit.
A lot of these issues seem difficult to regulate but one that seems more realistic is usage by minors.
What if social media platforms required all minors to have their account associated with a parent account? The parent could monitor activity, institute time limits, etc.
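One possible shape for that link, purely illustrative (none of these names come from any real platform):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Account:
        username: str
        is_minor: bool = False
        parent: Optional["Account"] = None         # required when is_minor is True
        daily_limit_minutes: Optional[int] = None  # set by the parent account

    def may_use(acct: Account, minutes_used_today: int) -> bool:
        # Enforce the parent's time limit, if one has been set.
        if acct.daily_limit_minutes is None:
            return True
        return minutes_used_today < acct.daily_limit_minutes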
Minors don't use FB much anyway; it's more TikTok now. And of course no minor would use an app monitored by her parents - she would immediately switch to another app.
Sorry, should have clarified: I was suggesting that if the government decides to regulate it should apply to all social media platforms, not just FB. Updated the original comment.
People seem to dislike this suggestion, but while I believe it is unrealistic to check for age on the internet, this idea has some merit.
Platforms that harbor minors and adults together will have to have different rules than platforms just for adults. But you also cannot make everything fit for kids - and government would actually try to do just that, since ensuring safety here is its mandate. So a solution must be found. Normally minors should be supervised, but that is not trivial, and you don't want constant surveillance.
Make spying on people illegal, even when a computer does it to billions of people rather than one creep doing it to one person. If you have to collect info about people to provide a product or service, make it strictly illegal to transfer or sell that info or anything derived from it. Don't like it, get into another business. No one's making you collect people's info. Yes, this should apply to e.g. credit card companies, not just big tech. This'd need some fine points hammered out (don't laws always?) but it's not that crazy.
Do something to make platforms responsible when their "algorithms" promote something. Not just hosting it, but when they promote it. Don't like it? Don't curate, then, or have a human do it so you're sure nothing you're deliberately promoting is shitty enough to land you on the wrong end of a lawsuit. "But how will tech companies show every visitor a totally different home page of content they're promoting (but in no way responsible for), and how will Youtube find a way to recommend Jordan Peterson and Joe Rogan videos next to every damn thing? How will tech companies make every part of their 'experience' algorithmically-selected, personalized recommendations of content they farmed from randos?" They won't, they won't, and... they won't. You're welcome.
Make data leaks so cripplingly expensive that no company would dare hoard personal data it didn't absolutely need to get by.
Force the quasi-official credit reporting agencies not to be so shitty. In particular, "freezes" should be free and should be the default, alerts for activity should be free, and access to one's own info should be on demand at any time, not once per year per agency. Or just outlaw the bastards completely, IDGAF.
I dunno, lots of things we could do to make the current personal data free-for-all less hellish.
> This'd need some fine points hammered out (don't laws always?) but it's not that crazy.
It sounds like you're suggesting GDPR style regulation. They're still figuring out how to enforce that but generally I support it. Too much money is against it to get anything passed in the US, though.
Another problem is that the US government seems to like when the tech sector gobbles up data on people. It gives them new powers for social control.
Freedom of speech also includes freedom from being compelled to say things you don't want to, so forcing companies to make their recommendation and moderation systems publicly visible would be even more of a free speech issue than expecting companies to moderate violent, hateful, or deliberately misleading content.
I absolutely disagree but I'm upvoting anyway because it's an argument I haven't seen before with regards to making algorithms public and god knows the discourse could use some variety.
That being said. No. This is no more a free speech issue than forcing food manufacturers to make their ingredients public.
Can you cite some case law that bears out this argument? While I agree that your point is true in the most general sense, we compel companies to make their internal information public fairly regularly via various mechanisms (admittedly, none of which are 100% analogous to the FB/social media situation).
I would rather take the stance of feed algorithms and moderation logs be PUBLICLY available. Transparency instead of censorship.