> basic 'hot' algorithm that's not designed by squadrons of PhDs running machine learning on user behavior analysis isn't really the same thing, culpability-wise, imho
A "hot" algorithm can blast high-engagement libel in front of people just as well as a fancy algorithm can.
If the "hot" algorithm and the fancier algorithm are both content neutral, on what basis can you distinguish the two as a matter of law?
Does the hot algorithm become illegal if a PhD implements it? I'm at a loss as to what distinction you're actually trying to draw.
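For context, here's a minimal sketch of the kind of "hot" ranking being discussed, modeled loosely on Reddit's well-known open-sourced hot formula (the function name and constants here are illustrative, not anyone's production code). Note that it looks only at votes and timestamps, never at what the post says — which is what makes it "content neutral":

```python
from datetime import datetime, timezone
from math import log10

# Arbitrary reference epoch; only differences between timestamps matter.
EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)

def hot(ups: int, downs: int, posted: datetime) -> float:
    """Hot score: order of magnitude of net votes, plus a recency
    bonus so newer posts outrank older ones with similar votes."""
    score = ups - downs
    order = log10(max(abs(score), 1))          # 10 votes ~ 1, 100 ~ 2, ...
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = (posted - EPOCH).total_seconds() # newer => larger bonus
    return round(sign * order + seconds / 45000, 7)
```

Nothing in that function inspects the content, yet it will happily surface a heavily-upvoted libelous post — which is the point being argued here.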
Your post, like many others in this thread, doesn't articulate exactly what about FB's conduct you find objectionable.
Illegal, liable, doesn't matter --- you want to use the state to drive certain kinds of ranking off the internet.
Fine. What, precisely, is the line between algorithms acceptable to you and algorithms that are not?
What is the precise conduct that would make liability attach to a ranking algorithm? You can emote, but you can't describe what exactly it is you would turn into a law.
> You can emote, but you can't describe what exactly it is you would turn into a law.
I thought I made it pretty clear from the outset I was talking about removing CDA Sec 230 protections for sites using bespoke (i.e. proprietary) curation algorithms for their feeds.