
Why is it perfectly reasonable to discriminate against the computer? Would the same apply to someone - with equivalent photographic memory - who listens to a ton of music and is thus inspired?


I think one difference is in how humans learn vs. how AI learns. A human can hear a very small amount of music and start mimicking it. AI -- at least in its present state -- needs to be trained on a huge quantity of music before it can do anything productive. What that difference means, exactly, is debatable, but it is a difference, and, I believe, one that calls for deeper reflection on laws surrounding AI use of copyrighted materials.

Another difference is one of scale. We've seen this in other areas, like surveillance. A random security camera here and there, or a random street photographer here and there, most people didn't really find objectionable. Besides which, any captured photographs or video had to be manually inspected for persons of interest. But start surveilling everyone in public, all the time, and using technology to track everyone automatically, and it starts to seem like a different construct.

A human producer putting out material that another human learns from, quotes from, paraphrases from, builds new things based on... well, that still requires time and dedication on the part of the learner. And there will be a correspondingly limited amount of new derivative material based on what the "teacher" producer created. AI systems offer the possibility for effectively unlimited derivative material, produced at an unprecedented rate.

Existing copyright laws, and existing social norms around creation of "intellectual property", have been formed with humans in mind. Humans who operate at a human rate of production, and who learn with a human style of learning. Some may be more efficient than others, but not so drastically different in form as AI.


AI-generated content can be either derivative or transformative. For example, if I use AI to paraphrase books or articles, that would be a derivative use.

But an AI that searches the web, news, and scientific papers for references and then outputs a Wikipedia-style article on a given topic would be a transformative use because it does a lot of work synthesizing multiple sources into a coherent piece, and only uses limited factual references.

Or we can do something more advanced. We solve a task with a student model, and in parallel we solve the same task with a teacher model empowered with RAG, web search, and code execution tools. Then you have two answers, and you use an examination prompt to extract what the student model got wrong as a new training example.

That would be transformative, and targeted. No need to risk collecting content that is already known by the student model. It would be more like "machine studying" or "machine teaching" because it creates targeted lessons for the student.
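The loop described above can be sketched roughly as follows. This is a minimal illustration, not a real implementation: the model calls are stubbed with placeholder functions (in practice they would be LLM API calls, with the teacher augmented by retrieval and tools), and all names here are hypothetical.

```python
from typing import Optional


def student_solve(task: str) -> str:
    # Placeholder for the plain student model's answer.
    return f"student answer to: {task}"


def teacher_solve(task: str) -> str:
    # Placeholder for the teacher model's answer, which in practice
    # would be produced with RAG, web search, and code execution tools.
    return f"teacher answer to: {task}"


def examine(task: str, student: str, teacher: str) -> Optional[str]:
    # Placeholder "examination prompt": compare the two answers and,
    # if they differ, distill the discrepancy into a targeted lesson.
    if student == teacher:
        return None  # the student already knows this; skip it
    return f"Task: {task}\nCorrection: {teacher}"


def build_lessons(tasks: list) -> list:
    # Collect only the lessons where the student actually got
    # something wrong, producing a targeted training set.
    lessons = []
    for task in tasks:
        s = student_solve(task)
        t = teacher_solve(task)
        lesson = examine(task, s, t)
        if lesson is not None:
            lessons.append({"prompt": task, "completion": lesson})
    return lessons
```

The key property is the filter in `examine`: content the student already reproduces correctly is never collected, so the resulting dataset consists only of targeted corrections rather than wholesale copies of the teacher's sources.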


Because computers don't have any of the limitations that most humans, even ones with photographic memories, have.


Why are limitations relevant?



