A sensible choice. Now if only open source developers would update their licenses, perhaps with a new GPL variant, to restrict the reselling of IP through AI models. These folks need to adhere to rules if we are to have a healthy ecosystem.


Reddit does not own its users' content, however - it would just be another reseller. All that's happening here is that Reddit has failed to monetise where others are succeeding, and is now positioning itself to get a piece of that pie.

The morally right thing for them to do would be to implement content visibility/privacy controls for their users, similar to what Facebook offers (strange feeling, referring to Facebook in this context).


My hope is that large players sealing off their content will motivate individuals to protect theirs. It raises awareness that their data is being harvested and sold on a scale never seen before, and then used against them. Ideally, those who make free software will be the first to understand the implications.

Basically, what I want is for all models trained on open source code or user-created content without proper licensing to also be open source and free.


Didn't this happen only after Facebook got burned by the Cambridge Analytica scraping a decade ago?

(Also when the Twitter APIpocalypse happened, which the article forgot to mention.)


The argument AI businesses make is that their use of copyrighted work is fair use, which would mean no license could prevent your IP from being used to train AI models.

If that holds up legally, the best you can do is try to stop your content from being scraped, or not release it at all.


If that argument holds, then indeed there is no incentive to create content. Although I'm not sure how such an argument can stand in a free society.


In a free society, everything is fair game for AI. What you want is actually the opposite of a free society.

