Hacker News

Who?



People who want to treat AI-generated works differently, even though, on objective examination, they are indistinguishable from human-made ones.


The sister thread notes that this requires inverting the process, so it is likely only useful to those hosting the particular AI model, not to an outside person who wants to check whether a specific image is AI-generated.


That seems silly, though, as presumably that is a different set of people than those generating the images.

Any security system that requires the adversary to be on board is doomed to fail. If the adversary were willing to play by the rules, you wouldn't need a security system.


That doesn't work for that purpose because nothing mandates that AI-generated works be watermarked, and latent diffusion models are easily run on a desktop GPU now.



