The takedown didn't backfire just because some people decided to publish a tarball on Twitter; that tarball happens to be hosted in many other places anyway, so there was never any risk of it getting lost.
The problem is that the project may not survive this.
Major distributions, for example, will no longer carry the project and will likely refuse to touch it with a ten-foot pole (think of the media codecs situation). It will be relegated to third-party repos. They will lose users, they will lose contributors. And how long until YT's (and/or the other supported websites') HTML changes faster than the remaining manpower can keep up with?
In the worst case, they need to adjust their tests. This is what the issuer of the complaint brought up: the project's tests downloaded a couple of seconds of what the RIAA claims is copyrighted material. If I'm not mistaken, this is what they are basing their case on.
If youtube-dl removes those explicit URLs from their test cases, I don't think they have a case. You could argue fair use, but remember it's only a test case in that program. The authors could just as easily reprogram the test cases to accept any URL a user wants to test.
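For illustration, a minimal sketch of what such a user-supplied test could look like (the test class, environment variable, and options here are hypothetical, not youtube-dl's actual test suite):

```python
import os
import unittest

import youtube_dl  # assumes youtube-dl is installed


class TestUserSuppliedUrl(unittest.TestCase):
    def test_extraction(self):
        # Hypothetical: read the URL from the environment instead of
        # hardcoding a specific (possibly copyrighted) video in the repo.
        url = os.environ.get("YDL_TEST_URL")
        if not url:
            self.skipTest("set YDL_TEST_URL to run this test")
        ydl = youtube_dl.YoutubeDL({"quiet": True})
        # Only exercise the metadata extraction; nothing is downloaded.
        info = ydl.extract_info(url, download=False)
        self.assertIn("title", info)


if __name__ == "__main__":
    unittest.main()
```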
Anyway: consult lawyers, perhaps change the test code, issue a counter-notice, and move on.
It's not a traditional DMCA copyright takedown, but a takedown enforced under a different section of the DMCA. I believe that in this case there is no avenue for issuing a counter-notice, but I'm not a lawyer and I'd be happy to be proven wrong.
GitHub files this notice under their DMCA notices[0]. Maybe it's not a legal DMCA notice, but I would assume that if GitHub files it as such, then GitHub's internal rules would apply. That includes a counter-notice.
If it's not a DMCA takedown and those rules don't apply, then I would assume GitHub would need to explain why they filed it as such. (It's probably the RIAA's "fault", but "we" only have direct contact with GitHub, so we need to go through them.)
The allegedly relevant sections of the DMCA do not contain any forced takedown measures except under court order. So I agree it doesn't seem to be a traditional takedown order; but then there doesn't seem to be any other reason they took it down.
They should remove the tests and the circumvention features from youtube-dl; then someone else should create a fork that is kept in sync with youtube-dl, with those tests and circumvention features as the only difference.
That way only the fork gets periodically DMCA'd, which will be way easier to recover from: just create another fork with the tests.
I wonder if it could just be done as a plugin system. Developers should start taking note of this kind of stuff: anything that might ruffle the feathers of big media should go into plugins.
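For what it's worth, a minimal sketch of what such a plugin loader could look like (the directory layout and the `EXTRACTORS` convention are made up for illustration, not anything youtube-dl actually ships):

```python
import importlib.util
from pathlib import Path

# Hypothetical plugin directory; each .py file inside is expected to
# export an EXTRACTORS list containing the risky site-specific code.
PLUGIN_DIR = Path.home() / ".config" / "ydl-plugins"


def load_plugin_extractors():
    """Import every plugin module and collect the extractors it exports."""
    extractors = []
    if not PLUGIN_DIR.is_dir():
        return extractors
    for path in sorted(PLUGIN_DIR.glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        extractors.extend(getattr(module, "EXTRACTORS", []))
    return extractors
```

The core tool stays clean, while anything likely to attract a takedown lives in a separately hosted, trivially re-forkable repo.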
Do the digital equivalent of the way rich people use holding companies and shell companies. Put all the high risk stuff into the smallest independent repo possible and abandon it at the first sign of trouble. Use an MIT license so anyone can fork it and continue development if you have to abandon it.
It probably wouldn't even take a ton of money to set up a matching corporate structure so you could control everything. Put all the risk into a subsidiary that doesn't make any money (or makes very little). Drag out DMCA complaints until they sue you and then bankrupt the asset-less company. Rinse and repeat.
If you want to get super ballsy, sell the plugins that infringe, but siphon all the money out of the subsidiary with trademark licensing. That way you can license the same trademark to the next company to maintain the brand. Bonus points if you put the holding company in a tax haven.
IANAL. That's probably a really bad idea. Don't do it.
Part of the problem is that even if the code gets reinstated, the RIAA may just pressure Google into changing its method of protection. Then perhaps a new tool could be created, but not very easily if the RIAA succeeds in establishing a precedent for taking down tools that have the potential to infringe.
YouTube is increasingly only a few pain points away from a mass exodus of both creators and audience. Advertisers and copyright maximalists, less so.
Bandwidth and distribution are no longer the limitations on independent hosting they once were. Independent discovery methods (e.g., DDG video search) are improving. For noncommercial use, youtube-dl and similar mechanisms are a net positive --- they simplify web-based delivery, increase viewer flexibility, and provide greater tolerance for network or system variability.
My own interest in YouTube has very little to do with commercial mass media, and far less to do with ad-seeking chum-spewing bottom-feeders.
I love that YouTube, for me, works everywhere: on my iPad, my Samsung phone, my smart TV, and my laptop; and the offline viewing mode is better than in any other application. None of these features are hard to replicate, but it's still a lot of work (and some of the devices I mentioned are walled gardens).
YouTube works poorly on my Android tablet and my aging Linux desktop, and even on a recent iMac the flexibility of mpv, mps-youtube, or VLC is often preferable.
More options are more options, surprisingly enough.
I don't think distributions dropping it will have that big of an impact, considering you only ever used those versions to bootstrap: you needed to `-U` update it regularly to keep it working (except maybe on Arch and similar bleeding-edge, rolling distros).
I kind of hope they adopt a Flatpak- or AppImage-based distribution for it, giving it a more standard distribution mechanism and promoting one of those platforms.
Either way, this sort of advertisement is hard to beat. I'd expect the project to come out of this with more manpower, not less.
Not at all. I've been installing the latest releases of youtube-dl from PyPI via pip for years. In fact, it's still there. The RIAA takedown has had no effect (so far) on my usage of youtube-dl. The RIAA clearly doesn't understand how open-source Python programs are commonly distributed.
That's not wrong, but it is missing the point: ongoing development is impeded by the takedown. That means the tool will stop working in the near future, rendering it useless.
(Not here: my distro has a backports repo, and I'd suspect that a significant share of users, if not the majority, get it from a repo of some sort, distro or not.)
And even then, where are users going to find the most recent version? You will definitely not keep the same number of users if yt-dl goes from being "one apt-get away/one F-Droid install away/one pip install away" to "one BitTorrent search/wget from a shady .ru website" away. And let's not even talk about how people will contribute. The project could be as good as dead.
I'm on a rolling distro and I still maintain my own copy of youtube-dl (with a cronjob to run `youtube-dl -U`) instead of using my distro's package. If I used my distro's package, every time it broke I would have to wait a few days for my distro to update the package with the fix, even though upstream would've cut a release with the fix within a few hours.
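A minimal sketch of that kind of setup, suitable for a daily cron job (the install path is an assumption; `-U` is youtube-dl's built-in self-updater):

```python
# Hypothetical cron wrapper: self-update a private copy of youtube-dl
# in the home directory, sidestepping the often-stale distro package.
import subprocess
from pathlib import Path

YDL = Path.home() / "bin" / "youtube-dl"  # assumed install location

subprocess.run([str(YDL), "-U"], check=True)
```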
As someone who uses youtube-dl for all videos online, its presence in any repos isn't as important as it may seem, because it breaks from time to time for some websites and you need to update it in order to keep using it, which is only easy if you keep it in your home directory so you can just run `youtube-dl -U` once it breaks.
I completely disagree. I have, for example, been using NewPipe from F-Droid, which is regularly auto-built as a native Android program.
I would no longer expect F-Droid to keep doing this if they risk a DMCA letter that takes out their entire repo.
And I would assume a shit-ton of users use yt-dl code through some other GUI application, rather than directly through an EasyInstall/virtualenv/whatever. And will most Python repos still dare to host or link this code if the risk is a DMCA letter to their ISP?
NewPipe doesn't use youtube-dl; they have their own implementation of the stream extraction code[1]. Unless that codebase also has test cases which specifically try to download RIAA content, they would need to find a different justification for a DMCA takedown.
I realize that, since NewPipe is Java; it was just an example.
I really doubt the "it's just the test cases" justification, btw, since test cases would likely be fair use. The decryption stuff is the problem, and NewPipe does it too.
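For context, the "decryption" at issue is signature descrambling: YouTube's player JavaScript scrambles a signature parameter, and the downloader has to apply the inverse transformations to obtain a working stream URL. A toy sketch of the general shape (the concrete operations and their order below are invented; the real chain is derived from the player JS and changes over time):

```python
def descramble_signature(sig: str) -> str:
    # Toy example only. youtube-dl builds the real transform chain by
    # reading YouTube's player JS; these are the typical primitives.
    s = list(sig)
    s.reverse()              # reverse the whole string
    s[0], s[3] = s[3], s[0]  # swap two fixed positions
    s = s[2:]                # drop a prefix of fixed length
    return "".join(s)
```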
Calm down. This will just end up like DeCSS: packaged separately and left to the user to trigger the necessary download.
Distribution/integration is the easy part. The hard part is to anonymise and secure core developers, and to allow contributions to continue flowing in a safe manner.
But user loss implies contributor loss, and that is the problem. It's not about me being able to find it (I definitely will be able to :) ), or about guaranteeing a "safe harbor" for developers (there are plenty of organizations that are dying to host something like this). It's about a project that requires a shit ton of constant manpower not having it, because of it being "tainted".
It's not C++ and complex encryption; this is a Python web scraper - hardly rocket science. I don't think manpower will be significantly harder to come by than it was in the past - the opposite, in fact.
I did not say it requires experienced manpower; I said it requires a lot of it, and, importantly, constantly. It's easy to find a bunch of "fanatics" who will rise to the occasion to "defend their freedoms" right now. But that is not what these projects require. On the other hand, it's hard to find volunteers who will keep doing the _menial_ changes for years as the different websites change their HTML/layout. Anything less and the tool loses a huge chunk of its usefulness.
Yes, but this was the same before. If anything, the current visibility is a shot in the arm. I don't think that, once the dust settles, youtube-dl will end up with fewer regular contributors than it had before - likely the opposite, in fact.
It's not worth playing cat and mouse with scrapers: adapting the scraper will always be relatively trivial compared to the engineering effort of coming up with a new blocking strategy. And that's with brute-force scraping, which is a fair way behind the methods commercial scraping companies have been using for a long time.
There are smaller projects that get along fine with larger scopes, but I think the opposite will happen here. This has given them enough popularity to overtake other similar projects. This could in turn become widely popular.
I started downloading from YouTube because of this. I haven't come across RIAA-protected content, but who knows what the future holds.
A tarball of _ancient_ DeCSS code has some value, because you can use it to "decss" _all_ content made _before_ a certain date (e.g. most physical DVDs). So it makes sense to preserve the tarball by any method possible.
A tarball of an _ancient_ youtube-dl version is absolutely useless because the youtube HTML will have changed a million times in the meantime. You will not be able to use it for anything, neither for old nor for new content. Publishing/preserving youtube-dl tarballs is an absolute waste of effort.
The RIAA here is targeting existing project developers, and possibly also users, trying to scare them away. It is not trying to censor old tarballs of code that will quickly become useless.
> A tarball of an _ancient_ youtube-dl version is absolutely useless because the youtube HTML will have changed a million times in the meantime.
But it's not a tarball of an ancient version of youtube-dl. It's a tarball of a version of youtube-dl from like two days ago. So then it gets posted to some other host or some distributed thing and development continues over there, only with twice as much support because of all the media coverage.
Your argument that "You will not be able to use it for anything" because "youtube HTML will have changed a million times" is most definitely based upon that erroneous premise.
You come off as responding in bad faith. The argument is not specific to YouTube; YouTube is just the example. You're reaching to find something that is technically incorrect, even though the point is fairly clear. There's an obvious difference between a static target (like a DVD) and a moving target (like a web service). With a static target, an archive is useful for all items produced prior to the archive. With a moving target, an archive is useful only until the target moves.
It's not a reach when YouTube was specifically named as the reason that youtube-dl will be "absolutely useless". That was what was clearly stated.
Whereas it is a reach, ironically, for you to assume that all WWW sites are like YouTube, or even that they are all moving targets, especially given the discussion of "extractors" that I pointed to.
As I said before, this premise, which you have assumed just like AshamedCaptain, is highly erroneous. I suspect that neither of you has actually looked to see which of the hundreds of WWW sites that youtube-dl works with actually do have changing HTML, and how many of the "extractors" are still happily working years after they were written. (Sadly, the open issues list on GitHub, which would have helped to determine this, is gone, GitHub being a single point of failure.)
This is just an entirely erroneous assumption and hyperbole: that everything is like YouTube, that things will change "a million times", and that magically all of those hundreds of WWW sites will stop working and "you will not be able to use it for anything".
At best, it's hyperbole. It's certainly not "highly erroneous." The most useful websites, with the largest user bases, will also have the largest developer bases, and will change the most frequently. That's just kinda how the internet is. Consider: if a tool like this had been written 20 years ago, are there any websites it would still work on today? What about 15 years ago? You're not wrong that some of the smaller websites might take longer to change than the larger ones, but calling the notion that most websites change over time "entirely erroneous" is just silly.
I literally mentioned in the _original_ post that youtube-dl supports other sites. Excuse me for not repeating "YouTube and/or the other websites" in every post, as I did in the original one.
Not GP, but I understand what they are getting at. They were shortening to be concise. What was actually meant was...
"You will not be able to use it for anything because YouTube and every other video sharing website supported by youtube-dl will have changed a million times.
... which revision is equally based upon the erroneous premise that all of those WWW sites are the same as YouTube. Clearly, with all of the hundreds of "extractors", they are not.
I don't think so for youtube-dl. I think it's dead now. There were already hundreds of PRs waiting to be merged; so many videos already couldn't be downloaded on various sites.
The codebase was not the useful thing about this project: it was the constant upkeep and whack-a-mole-ing of the various site changes over the 1000+ supported sites.
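To make that concrete: each supported site is a small "extractor" class, and every layout change on any of the 1000+ sites means hand-patching one of them. A minimal sketch, assuming youtube-dl is installed (the site and its markup are made up):

```python
from youtube_dl.extractor.common import InfoExtractor


class ExampleSiteIE(InfoExtractor):
    # Hypothetical site; every supported site has a class like this.
    _VALID_URL = r'https?://(?:www\.)?example-videos\.test/watch/(?P<id>\d+)'

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(url, video_id)
        # Scraping the title and stream URL out of the page HTML is the
        # part that has to be redone whenever the site's markup changes.
        title = self._html_search_regex(
            r'<h1[^>]*>([^<]+)</h1>', webpage, 'title')
        video_url = self._search_regex(
            r'"streamUrl"\s*:\s*"([^"]+)"', webpage, 'stream URL')
        return {
            'id': video_id,
            'title': title,
            'url': video_url,
        }
```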
This event may spark renewed interest and help, but my money is on "slow death" as support for sites and videos decays.
I think you might be right, but I also don't know how viable this was before the DMCA.
I have never been able to get youtube-dl to work using the distro packages (on both Ubuntu and Mint). The packages always fall out of date because YouTube changes things, and you need to download it directly from the youtube-dl website regardless.
Not in the same light as preconfigured repos in Ubuntu et al.
I don't think pip is even installed by default under most distributions, and IIRC some actively discourage its use. It's also supply-chain insecure compared to trusted repos.