
How about censoring Dislikes? This is in progress now https://www.youtube.com/watch?v=kxOuG8jMIgI&t=1s


I really hope this comes back to bite them. I already find myself not watching as many videos simply because I can't gauge the like/dislike ratio on them. In the short term I expect an explosion in clickbait and low quality videos because people won't be able to immediately see that the video is bogus from dislike counts. I suspect people will tend to retreat into watching the channels they already like and cut down on consuming recommendations.

I'm already trying to gauge likes vs watch count as a sloppy proxy for video quality but it's just not the same.


I find what you said really interesting, mainly because I don't think I've _ever_ used likes or dislikes (or their ratio) as a metric for choosing what to watch. The most I've interacted with dislikes is when seeing some helpful low-production-value video, or some artist's music stream, I've wondered why 1-3% of the viewers disliked it. I mean, even when I've found _better_ or _more informative_ videos, I've never been tempted to dislike the previous ones I'd watched that weren't as good.


I have a great example that illustrates my use of likes/dislikes as a filter.

https://www.youtube.com/watch?v=NRCSX-u01eM

The video is titled: "Boeing C-17 Globemaster Jet Crash All Hell breaks loose". It has 2,302,640 views and I took a screenshot of the like/dislike metrics before the change took place. 1.6K Upvoted, 21K Downvoted.

The spoiler is that the plane taking off never actually crashes. It just looks like it will because of the camera angle. The uploader wrote "I made this video to start a conversation and it has certainly started a conversation ..." but has disabled comments. The video is a complete and total lie and the ratio reflected that. Without comments, you have nothing else to warn you about the video. From now on, you will have to rely on the fact that a video viewed two million times only has one and a half thousand likes as a proxy.

Granted, I know some videos are prone to have bad-looking ratios because they are discussing contentious topics. I give those a wide berth and don't immediately dismiss them because they have a 60-40 like dislike ratio.


I still see the up/down counts.


This. I find that the vast majority of cases where I'd pay any attention to the dislike bar are just cases where people are getting dogpiled for whatever reason (whether they deserve it or not, or whether anyone deserves to be treated like that on the internet, is another matter entirely). But I've heard reasonable arguments from people talking about tutorials and other informative videos that the like:dislike ratio is a convenient sniff test for whether the video is worth watching.


Clickbait is bad enough without removing one of the last few tools left in the arsenal to fight it.


I feel like we have different definitions of what clickbait is. When I see a clickbait video, I can identify it by its title and thumbnail alone; I've never needed to look at the like:dislike ratio to confirm that it's clickbait. What kinds of videos do you consider clickbait?


An interview where they spend 30 minutes off-subject and 1 minute on-subject. A tutorial with 10 minutes of detailed explanation and a "now draw the owl" step buried 3/4 of the way through. A thumbnail that promises a level of complexity, sophistication, or accomplishment that winds up never actually happening in the video.

Content farms aren't the only ones in the clickbait game these days.


For entertainment it's not that big of a deal, though still annoying not to see the ratio or total counts, but for educational content or how-to videos it is absolutely critical. Have you ever tried to search how to fix a washing machine, or basically anything? Even with likes and dislikes you have to go through tons of videos that are just plain bad advice before you can figure it out. 90% of the time the dislikes help you weed out the straight-up clickbait material, and the other 10% of the time they tell you when what the person did will actually make the problem worse.


Dislikes indicate politics, clickbait, or unhelpfulness. Politics is obvious, but clickbait and unhelpfulness can waste a lot of your time before becoming obvious. Dislikes help combat this.


And let’s be honest, most political videos don’t have enough dislikes from both sides.


Yeah it occurred to me that I glance down to see the likes and dislikes to see how credible some of the how-to videos I watch are. A lot of them are garbage or dangerous and that's reflected in the dislike count. Fortunately you can scrub through maybe the top 30 comments to see if there's anything off.


YouTube is such an awesome thing. The executives want to destroy it. The content creators and users are pissed off.

This whole dislike thing fundamentally breaks YT for me. Completely. I will not waste time watching videos if they’re not worth watching.

There is a reason why IMDB is a thing. If they remove ratings, it becomes useless. Ratings are the foundational aspect of IMDB.

Google needs a lesson or two and I hope the community responds in the strongest form of protest.


> There is a reason why IMDB is a thing. If they remove ratings, it becomes useless. Ratings are the foundational aspect of IMDB.

Are the ratings why people go to IMDB? I use IMDB as a database on the internet for movies and TV cast/crew credits. I almost never look at the ratings for the things I look up.


IMDB ratings are solid and often the main reason why people go to IMDB at all.

8: a masterpiece

7: enjoyable

6: watchable if you like that specific thing

5 and lower: oddball

There are some exceptions for niche movies (which may have very low or very high ratings) and for recently aired titles; otherwise it's pretty reliable.


I always thought crowd-sourced ratings were one of the killer apps of the internet. That's what makes Airbnb possible, what made darknet markets successful, what makes Amazon powerful...

Throwing away ratings is like going into the woods to live and throwing away your book on native plant life. I guess just nibble at whatever looks good, even if it might be void of nutritional value or poisonous at worst.


It's amazing that this needs to be spelled out for the HN community. I thought, how could anyone oppose this? I am trying hard to listen to the counterarguments, but they just don't hold up. My cynical guess: something sinister is going on, and we are secretly wishing for an authoritarian world.


That's their data that they're choosing not to disclose. That is not censorship. Analogously, if somebody asks for your name and you don't mention your middle name, that is also not censorship.

HN only shows max(upvotes-downvotes, -4) to the original commenter. Censorship?


imo it's censorship to the degree that seeing a ratio explicitly or implicitly sets a specific context for a video. For example, say we have a government-published video for an initiative that is incredibly unpopular. By masking the actual dissent, all you see is one side of the equation, not allowing you as a citizen to easily see how contentious / controversial something that could directly affect your life really is. It's all about controlling dissent imo; there's no real reason to hide this otherwise.

As such, yeah, HN does only show up/downvote totals like you're claiming. However, the scale is completely different between here and YT, which is a primary source of information for many people nowadays.

Edit - To further elaborate, with the same example, imagine that not only is the masked dislike ratio hiding the actual dissent, but other companies and platforms are collaborating on a "truth," and anything to the contrary cannot be discussed on their platforms. This is what is literally happening; you have to be blind not to see it at this point. Government-published videos are having their ratios hidden, and anything counter to the decided narrative is being automatically flagged by AIs on FB/Twitter to throw "warnings" up. This is the nature of the current web right now, and you really should acknowledge the grip these companies have been tightening over the years.


I mean, that's definitely your opinion, but this doesn't match any definition of censorship I'm aware of -- it sounds like you want compelled disclosure. It's a single statistic that YT collects per video. YT also has location data for the dissenters. Is it censorship that they aren't showing a country-by-country breakdown of where the dislikes are coming from? Is it censorship that they aren't showing a town-by-town breakdown of where the dislikes are coming from? Is it censorship that they aren't showing the IP address of each dissenter?

Clearly, that got ridiculous. But what I'm curious about is whether there's an underlying principle in your mind here. Because what you appear to be suggesting is a regulation compelling not only disclosure of internal statistics, but specifically how fine-grained those internal statistics are allowed to be. And, for example, what about Twitter? They don't have a dislike button -- do you think they should be compelled to implement one? Since your focus seems to be on where people are getting their news, do you think that news sites (above a certain popularity?) should be compelled to implement dislikes on their own content, or only on user-submitted comments?


I guess the underlying principle in my mind is the intent behind removing such a feature, when it has very valid uses even as a consumer of content. I understand some of the issues with the upvote/downvote concept in terms of targeted ratio campaigns; however, I think it is censorship if they are removing this information with the intent of social engineering, which I think they are. I know I'm mostly speaking on gut here, and I could be wrong as to the motivators behind this change.

It's just that it is a very unique situation. We're at a stage where YT is one of the most important platforms on the current web, it's incredibly centralized, and at the end of the day it is up to the whims of Alphabet execs on how they want information published on their platform.

So maybe it's not exactly "censorship" in the standard definition. However, there is functionality that exists and has existed in the product since inception (when it was a rating system instead of voting). You have always been able to see how unpopular a video really is. Taking this away is an alarm to me, especially in today's environment.

I apologize if it's a bit hard for me to explain my reasoning here, but it just truly unsettles me.


> I apologize if it's a bit hard for me to explain my reasoning here, but it just truly unsettles me.

I feel you here, but you're not making a principled argument. You're appealing to a fundamental right to free speech, but that just doesn't fit the situation.

If we're going by intent, YT's statement is that people seem to be biased by the rankings. Taking it at face value, they want people to think for themselves, and not pre-judge the content based on what previous viewers thought. That sounds like a good thing to me. That said, I don't consume youtube except when my kid watches minecraft stuff there, so I couldn't care less what happens to/on the platform. What I care about is the legal mechanisms in play -- and your suggestion is quite alarming from that perspective.


Sure, the legal mechanisms around such a thing aren't much on my mind when I'm making this argument, as ultimately I believe the government is incapable of introducing any legal mechanisms, outside of the ongoing antitrust case, that have the capacity to lessen the influence Alphabet has on the general public. I think I agree with you in that I certainly wouldn't want someone to attempt to legislate that YT cannot unpublish this information, or to regulate how transparent a company has to be with it.

So yeah, from a legal perspective it probably doesn't fall within the domain of censorship, especially since, as you say, YT can fully make the case that they are attempting to remove initial bias from videos, and probably win any legal bout on that standpoint.

I would personally argue that it's not removing bias, because you can still see one side: the upvote side. Hence it feels as though the intent is not to remove bias but to remove dissent. You see a video with 100,000 likes? That must be pretty popular! Yet the same video could have 4-5x that many dislikes, and you would never know. By publishing any of the like/dislike information before the user watches the video, you are giving them an initial bias toward it. If they truly wanted to remove initial bias, as an engineer I'd never implement it this way; I'd only show that information after you watch the video. Even something as simple as showing it at the end of the video would accomplish the goal of removing bias.
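The "only show it after you watch" design described above could be sketched roughly like this. To be clear, this is a hypothetical illustration: the `Video` type, `visible_ratings` function, and the 90% watch threshold are all invented here, not anything YT actually implements.

```python
# Hypothetical sketch of "reveal ratings only after watching".
# All names and the 0.9 threshold are invented for illustration.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class Video:
    likes: int
    dislikes: int


def visible_ratings(video: Video, watched_fraction: float) -> Optional[Tuple[int, int]]:
    """Return (likes, dislikes) only once the viewer has watched most
    of the video. Before that, show nothing at all: disclosing only
    one side of the vote (as hiding dislikes does) would itself bias
    the viewer, so it's all or nothing."""
    if watched_fraction >= 0.9:  # arbitrary "finished watching" threshold
        return video.likes, video.dislikes
    return None
```

On this design, a viewer partway through the video sees no counts at all, and a viewer who finishes sees both sides, which matches the commenter's point that publishing likes while hiding dislikes removes dissent rather than bias.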

Of course I would be unable to argue this point really beyond what I'm doing / attempting to do here, as I have no legal experience or even really experience in debate around such subjects, I'm just an engineer that feels the need to vent a bit of dissatisfaction with the direction the web is turning. Anyways, thanks for listening a bit even if it was a bit ranty.


It could be argued as censorship only if dislikes stayed enabled across the whole platform but were selectively turned off, for whatever reason, on the hypothetical video you mentioned. Censorship is directed against an individual or a target group.

Since they analyzed the platform as a whole and decided there is more harm than good in dislikes, and they are applying the change globally (allowing for a transition period), it is just how they decided the feature set should behave. You can't call it censorship if it applies to 100% of people and content without exception.


> That's their data that they're choosing not to disclose.

And newspapers own the copyright on everything they print, even falsehoods.

What's important is the reason why they are doing it, not whether they have the right, because we all agree that they have the right to do it, but we are not all happy.


Censoring seems like a strong word. It's likely some automated system that, based on some frequency analysis, detects whether the dislikes were authentic or not (by some definition of "authentic").

Without such systems it's trivial to just "clickfraud" relevancy systems out of whack. Of course, Google has a long record of playing cat-and-mouse games with this type of behavior, thanks to the entire SEO industry that would manipulate rankings and other signals.


What makes you think censorship ceases to be censorship when it is automated?

