
> For the web it requires that you run a snippet of javascript code (the challenge) in the browser to prove that you are not a bot.

How does this prove you are not a bot? How does this code not work in headless Chromium if it's just client-side JS?





Good question! Indeed you can run the challenge code using headless Chromium and it will function [1]. However, they are constantly updating the challenge and may add additional checks in the future. I suppose Google wants to make it more expensive overall to scrape YouTube, to deter the most egregious bots.

[1] https://github.com/LuanRT/BgUtils
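
For anyone curious what "running it in headless Chromium" looks like at a basic level, here is a minimal Puppeteer sketch. It is not BgUtils' actual flow, and window.someChallenge is a made-up name purely for illustration; the point is only that client-side JS executes just as well without a visible browser.

    // Sketch only: the challenge function name below is hypothetical.
    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch({ headless: true });
      const page = await browser.newPage();
      await page.goto('https://www.youtube.com/', { waitUntil: 'networkidle2' });

      // Evaluate JS in the page context, e.g. a challenge function the page exposes.
      const token = await page.evaluate(() =>
        typeof window.someChallenge === 'function' ? window.someChallenge() : null
      );

      console.log('challenge result:', token);
      await browser.close();
    })();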


LLMs solve challenges. Can we not solve these challenges with sufficiently advanced LLMs? Gemini even, if you're feeling lulz-y.

Yes, by spending money.

I agree; in some cases, and depending on the LLM endpoint, some money may need to be spent to enable ripping. But is it cheaper than paying YouTube/Google? That is the question.

Sometimes it's not about the cost; it's about who/where the money is being spent.

Once JavaScript is running, it can perform complex fingerprinting operations that are difficult to circumvent effectively.

I have a little experience with Selenium headless on Facebook. Facebook tests fonts, SVG rendering, CSS support, screen resolution, clock and geographical settings, and hundreds of other things that give it a very good idea of whether it's a normal client or Selenium headless. Since it picks a certain number of checks more or less at random and they can modify the JS each time it loads, it is very, very complicated to simulate.

Facebook and Instagram know this and allow it below a certain limit because it is more about bot protection than content protection.
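
To make that concrete, here is a minimal sketch (not Facebook's actual code) of the kind of environment probes a page script can run; every value comes from standard browser APIs:

    // Rough sketch of environment probing from page JavaScript.
    function collectEnvironmentSignals() {
      return {
        // Automation giveaway exposed by WebDriver-controlled browsers
        webdriver: navigator.webdriver === true,
        // Screen/window geometry: headless setups often report unusual values
        screen: [screen.width, screen.height, screen.colorDepth],
        window: [window.innerWidth, window.innerHeight, window.devicePixelRatio],
        // Clock and locale settings
        timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
        languages: navigator.languages,
        // Coarse font check: an installed font changes the measured text width
        hasArial: (() => {
          const span = document.createElement('span');
          span.textContent = 'mmmmmmmmmmlli';
          span.style.cssText =
            'position:absolute;left:-9999px;font-size:72px;font-family:monospace';
          document.body.appendChild(span);
          const baseWidth = span.offsetWidth;
          span.style.fontFamily = 'Arial, monospace';
          const arialWidth = span.offsetWidth;
          span.remove();
          return arialWidth !== baseWidth;
        })(),
      };
    }

A detector can then compare dozens of signals like these against what a normal browser/OS combination would be expected to report.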

This is the case when you have a real web browser running in the background. Here we are talking about standalone software written in Python.


How does testing rendering work? Can JavaScript get pixel data from the DOM?


So the way this works is to draw fonts/SVGs inside a canvas and check the pixels; that makes sense.
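
Right. A minimal canvas-fingerprint sketch (illustrative, not any site's real code) looks something like this: draw text and shapes into an off-screen canvas, then serialise the pixels. Differences in fonts, GPU and anti-aliasing change the result between environments.

    function canvasFingerprint() {
      const canvas = document.createElement('canvas');
      canvas.width = 240;
      canvas.height = 60;
      const ctx = canvas.getContext('2d');

      ctx.textBaseline = 'top';
      ctx.font = '16px Arial';
      ctx.fillStyle = '#f60';
      ctx.fillRect(100, 5, 80, 30);
      ctx.fillStyle = '#069';
      ctx.fillText('fingerprint \u{1F600}', 4, 20);

      // toDataURL() serialises the rendered pixels; getImageData() works too.
      return canvas.toDataURL();
    }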

This is just one element among many others. They probably have many available and others in reserve in case one becomes obsolete.

I recently discovered that audio codecs, frequencies, resolution, mix volume, etc. are accessible via JS in the browser and that this allows fingerprinting. Since we are talking about YouTube, the same type of technique should be possible with video codecs.
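
Both are easy to probe from page JS with standard APIs. A hedged sketch, not anything YouTube specifically does:

    // Codec support differs between browser builds (some headless/stripped
    // builds lack proprietary codecs), and the audio pipeline's floating-point
    // output differs subtly between hardware/OS/driver combinations.
    function codecSignals() {
      const video = document.createElement('video');
      const audio = document.createElement('audio');
      return {
        h264: video.canPlayType('video/mp4; codecs="avc1.42E01E"'),
        vp9:  video.canPlayType('video/webm; codecs="vp9"'),
        aac:  audio.canPlayType('audio/mp4; codecs="mp4a.40.2"'),
        opus: audio.canPlayType('audio/webm; codecs="opus"'),
      };
    }

    // Classic OfflineAudioContext fingerprint: render a fixed oscillator
    // through a compressor and sum the samples; the sum varies across machines.
    function audioFingerprint() {
      const ctx = new OfflineAudioContext(1, 44100, 44100);
      const osc = ctx.createOscillator();
      osc.type = 'triangle';
      osc.frequency.value = 10000;
      const compressor = ctx.createDynamicsCompressor();
      osc.connect(compressor);
      compressor.connect(ctx.destination);
      osc.start(0);
      return ctx.startRendering().then(buffer =>
        buffer.getChannelData(0).reduce((sum, v) => sum + Math.abs(v), 0)
      );
    }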


Why can a bot dev not just read all of these values from the laptop's settings and hardwire the headless version to report the same values?

Because the expected values are not fixed. It is also possible to measure response times and error behaviour to check whether something is in the cache or not, etc.
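
One illustrative version of the cache check uses the Resource Timing API: a returning "real" browser should already have common static assets cached, while a fresh headless profile fetches them over the wire. The URL below is a placeholder, not a real endpoint.

    // Cache probe sketch. transferSize === 0 generally indicates a cache hit
    // (cross-origin resources without Timing-Allow-Origin also report 0,
    // so a real check would use same-origin assets).
    async function probeCache(url = '/static/common.css') {
      const t0 = performance.now();
      await fetch(url, { cache: 'default' });
      const elapsedMs = performance.now() - t0;
      const absolute = new URL(url, location.href).href;
      const entry = performance.getEntriesByName(absolute).pop();
      return {
        elapsedMs,
        fromCache: entry ? entry.transferSize === 0 : undefined,
      };
    }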

There is a whole host of tricks involving rendering and positioning elements at the edge of the viewport, or on a canvas rather than in the window, which make it possible to detect execution without real rendering.

To simulate all this correctly, you end up needing a standard browser with standard execution times, full rendering in the background, etc. No one wants to download their YouTube video at 1x speed and wait for the adverts to finish.
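
As a small illustration of the "execution without rendering" point (a generic technique, not anything specific to Facebook or Google): in a browser that is really painting frames, requestAnimationFrame fires roughly every 16 ms at 60 Hz, while a non-rendering or heavily throttled environment shows a very different cadence.

    // Frame-timing sketch: collect deltas between animation frames.
    function measureFrameTiming(frames = 30) {
      return new Promise(resolve => {
        const deltas = [];
        let last = performance.now();
        function tick(now) {
          deltas.push(now - last);
          last = now;
          if (deltas.length < frames) {
            requestAnimationFrame(tick);
          } else {
            const avg = deltas.reduce((a, b) => a + b, 0) / deltas.length;
            resolve({ avg, deltas });
          }
        }
        requestAnimationFrame(tick);
      });
    }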



