A sensible choice. Now if only open source developers would update their licenses, perhaps with a new GPL variant, to restrict the reselling of IP through AI models. These players need to adhere to rules if we are to have a healthy ecosystem.
Reddit does not own its users' content, however; it too would simply be a reseller. All that's happening here is that Reddit has failed to monetise where others are succeeding, and is now positioning itself to get a piece of that pie.
The morally right thing for them to do would be to implement content visibility/privacy controls for their users, similar to what Facebook offers (strange as it feels to hold up Facebook as the example here).
My hope is that large players sealing off their content will motivate individuals to protect theirs. It raises awareness that their data is being harvested and sold in ways never seen before, and then used against them. Ideally, those who make free software will be the first to grasp the implications.
Basically, what I want is for any model trained on open source code or user-created content without proper licensing to itself be open source and free.
The argument AI businesses make is that their use of copyrighted work constitutes fair use, which would mean no license could prevent your IP from being used to train AI models.
If that holds up legally, the best you can do is try to stop your content from being scraped, or not release it at all.