That's a big part of what makes HN successful: a blanket ban on emotional content.
Edit. I've been thinking that the way we share knowledge resembles neuron connections, based on my limited knowledge of that topic. Each of us watches a few channels of information and chooses whether to forward/retweet a piece of information so our subscribers see it, ignore it, or block a channel that keeps sending us bogus data. Every time we do this, we adjust the rating of the source. So I've been wondering: if we could set up such a network in a more formal way, and if it grew to at least a few thousand "neurons" (that is, people), would it exhibit some unexpected higher-level effects?
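The forward/ignore/block loop with a per-source rating can be sketched in a few lines. This is a toy model of the mechanism described above, not a spec: the class name, the starting rating of 0.5, the moving-average weights, and the block threshold are all illustrative assumptions.

```python
# Toy "neuron" from the model above: each person tracks a trust rating per
# source, nudges it up on forward and down on ignore, and severs the
# connection (blocks) once it falls below a threshold. All constants are
# illustrative assumptions.

BLOCK_THRESHOLD = 0.1  # assumed cutoff below which a source gets blocked

class Neuron:
    def __init__(self):
        self.ratings = {}    # source -> trust rating in [0, 1]
        self.blocked = set()

    def receive(self, source, interesting):
        """React to one item from a source and update that source's rating."""
        if source in self.blocked:
            return "blocked"
        r = self.ratings.get(source, 0.5)  # unknown sources start neutral
        # Exponential moving average toward 1 (forward) or 0 (ignore).
        r = 0.9 * r + 0.1 * (1.0 if interesting else 0.0)
        if r < BLOCK_THRESHOLD:
            self.blocked.add(source)       # block: cut the connection
            self.ratings.pop(source, None)
            return "block"
        self.ratings[source] = r
        return "forward" if interesting else "ignore"
```

With an update rule like this, a source that sends only junk gets blocked after a bounded number of items, which is the self-pruning behavior the network would rely on.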
HN has a ton of emotional content, but it really depends on the topic/thread. For example, I clicked into a thread recently and found I had already blocked at least 80% of the users commenting for posting that type of content, and this is what I literally see:
(I use a script so users can be blocked with one click)
Whereas in this thread, it's only about 1% (1 blocked in 73 comments).
Despite having a huge block list, I'm consistently amazed by how often I click into a huge thread with hundreds of comments and find that I haven't blocked even a single user.
I think this site does get emotional content. It might not be political flame-war-bait topics, but there's plenty of "Rust is great", "microservices suck", "npm continues to be trash", and "hot new thing considered harmful". It reinforces the "my team wins" mindset.
Like a web of trust, but for link recommendations?
Maybe it could work by changing the incentives. If you had a magical perfect AI that was loyal to you and ran on your PC, it would always act in your interest, spidering the web and returning interesting things based on your feedback. Whereas in the current model, none of this stuff is self-hosted; it has to run on Somebody Else's Computer. SEC costs money to run, so they have to make that money back by tweaking the recommendations, and none of the big ones interoperate. YouTube will never recommend videos from the Fediverse.
So anything that's not self-hosted (which is just a specific form of a subscription payment, anyway) will never have your best interests at heart; it can only have some of your interests at heart. This same problem comes up in politics: companies have no interest in creating public goods, and we shouldn't expect them to.
But anyway, I think this is where some of the good stuff will fall out of AI in the next 10 or 20 years: intelligent agents that try to cut through the BS of the web on behalf of their owners.
My idea is a lot simpler and doesn't need any ML. Think of it like the LinkedIn feed: if you upvote a post, it'll be shown to your contacts, but if you upvote too many junk posts, your contacts will unsubscribe.
This idea is relatively trivial to implement as an alternative UI for HN. That site would show the very same HTML but repurpose the upvote and downvote buttons: you upvote to subscribe to someone and downvote to unsubscribe. If someone on your watchlist upvotes something, it shows up in your feed. You'd still use the main HN feed for discovery. Eventually this would form a network that propagates information, and that network could even be visualised to see how certain posts travel through it. The topology of the network, its dynamic properties, and whether it stays connected are interesting open questions. The entire thing could be written as a one-file Python script with a SQLite DB running on a $5 VPS. The biggest challenge is getting enough folks on HN to use it.
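The core of that one-file version is small enough to sketch: a SQLite table of who follows whom, a table of upvotes, and a feed query that surfaces items upvoted by anyone you follow. Table names, column names, and the in-memory DB here are illustrative assumptions, not a finished design.

```python
# Minimal sketch of the follow/upvote/feed core described above.
# Schema and names are hypothetical; a real version would persist to a file
# and wire these functions to the repurposed HN vote buttons.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE follows (follower TEXT, followee TEXT,
                      UNIQUE(follower, followee));
CREATE TABLE upvotes (user TEXT, item_id TEXT, title TEXT,
                      UNIQUE(user, item_id));
""")

def follow(follower, followee):
    # The repurposed "upvote on a user" action would call this.
    db.execute("INSERT OR IGNORE INTO follows VALUES (?, ?)",
               (follower, followee))

def unfollow(follower, followee):
    # The repurposed "downvote on a user" action would call this.
    db.execute("DELETE FROM follows WHERE follower=? AND followee=?",
               (follower, followee))

def upvote(user, item_id, title):
    db.execute("INSERT OR IGNORE INTO upvotes VALUES (?, ?, ?)",
               (user, item_id, title))

def feed(user):
    """All items upvoted by anyone this user follows."""
    rows = db.execute("""
        SELECT DISTINCT u.item_id, u.title
        FROM upvotes u JOIN follows f ON u.user = f.followee
        WHERE f.follower = ?""", (user,))
    return rows.fetchall()
```

Propagation falls out of the join: an item travels one hop every time a subscriber sees it in their feed and upvotes it themselves, which is the network dynamic whose topology the comment above calls an open question.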
I feel that formalizing such a system would make it more vulnerable to market capture - there are powerful entities that spend a lot of money to control influence, and it's always risky to paint yourself as a target.
Any system you create would have to be distributed enough to be resilient to those forces. I'm not naysaying, but this is something I think about a lot and I haven't come up with (or found) a good answer to those problems.