Hacker News

The spam/troll filter on YouTube really is egregious. "Nigger "*85 was just one such example. Obvious spam such as "my stay-at-home mother earned $X last month from Y job. Visit Z to find out more" is way too common. I have only seen two filters on YouTube comments: 1. block URLs, and 2. an upper limit on message length. I'll state it plainly: in this instance, Google does shitty software.

As a counterexample, most comments on Slashdot are far from ideal, but Slashdot has long had filters in place to prevent obvious trolling like this. Given /.'s OSS-friendliness in general, I'm sure they would have given Google these filters, if Google had only asked.




> in this instance, Google does shitty software.

Worse. It's doing shitty text mining, which is something they usually do pretty well at (seeing as it's their core competency).

No one complains when Apple messes up their cloud data storage (well, they do, but most people just say "OK, local storage + Dropbox still works"), but if the next iOS looked like a late-90s Java UI, it wouldn't be a good sign.


Slashdot providing Google with algos to detect spam? That's adorable. I respect Slashdot, but claiming that they may have better NLP algorithms than Google is absolutely unbelievable.


> claiming that they may have better NLP algorithms than Google is absolutely unbelievable.

That's why I did not make that claim. The NLP talent at Google is likely better than that at any other company or university. My claim is that their expertise is not being used for filtering YouTube comments.


Care to provide ANY facts beyond anecdotes that their expertise is not used for filtering YouTube comments?

BTW, you did make a claim that Slashdot would give filters to Google, so there is that.


Sure, I found some examples in this very comment thread! These show that filtering is effectively not being done. And they're not anecdotes based on single or rare comments.

https://news.ycombinator.com/item?id=6748803


All these examples are anecdotal evidence provided by a biased party.


I've seen that same spam, and I rarely look at YouTube comments. Unless I've been extraordinarily unlucky and stumbled on the exact same comments by coincidence, this is a widespread problem and proves that even trivial filtering is not being used to block rampant spam.


So, more anecdotal evidence. If you're really trying to say that your personal experience and the experience of a few other people proves that even trivial filtering is not being used, my arguments won't change anything; you already know everything, it seems.


The claim that there is a filter set up to block spam can be disproved by a single comment that such a filter would have blocked. I'm not sure what kind of evidence would actually sway you, but the argument seems fine to me: 1. widespread spam exists, 2. a week later it's still happening, 3. therefore any filter is not set up in a way that blocks spam.


Apparently said "expertise" failed to tag 80 or so repetitions of the word "Nigger" as spam. Color me not impressed -- an undergrad could do better.
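To be fair to the "undergrad could do better" line: a trivial repetition check really does fit in a few lines. A rough sketch (the function name and thresholds are made up for illustration; a real filter would tune them against labeled comment data):

```python
from collections import Counter

def looks_like_repetition_spam(comment: str,
                               max_ratio: float = 0.5,
                               min_tokens: int = 10) -> bool:
    """Flag comments dominated by a single repeated token.

    Thresholds are invented for illustration, not tuned values.
    """
    tokens = comment.lower().split()
    if len(tokens) < min_tokens:
        return False  # too short to call "dominated by repetition"
    _, top_count = Counter(tokens).most_common(1)[0]
    return top_count / len(tokens) > max_ratio

# One word repeated 85 times is caught immediately:
looks_like_repetition_spam("spam " * 85)  # True
```

This obviously misses anything more subtle than copy-paste repetition, which is the point: even the crudest check would have caught the example above.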


Reducing spam isn't just about having a better algorithm (short of strong AI, and even then two people can reasonably disagree about whether something is spam or not). It helps a lot to have the cooperation of the users, something YouTube used to have but doesn't anymore.

You have been arguing that pissing off the user base doesn't matter, but there is a real cost. Fighting your users means things like people not reporting spam anymore, or deliberately misreporting things that aren't spam.


You have zero proof that any of this matters to any significant portion of the user base. And I was arguing that pissing off the small part of the user base that wasn't happy with Google in the first place doesn't matter. And I stand by my argument. The majority won't care and will enjoy seeing relevant comments from their G+ friends under YT videos.



