As soon as something can be automated, people start acting irrationally upset, especially if the thing is seen as even remotely "creative". Those people are going to have a bad time moving forward.



I don't see this being raised as an automation issue. The GP comment is concerned with their work, including licensed code, being used by Microsoft to train LLMs without any kind of agreement or even compensation.

There isn't clear legal precedent yet on whether training models is an acceptable use of licensed work, but that has nothing to do with automation.


If the legal precedent comes out as "yes, it is legal", as some countries have already decided (e.g. Japan), would GP really change their mind and become okay with it? (Note: the Copilot lawsuit has also mostly been dismissed, so this is already close to reality.)

I doubt it. I think the GP would still be concerned about it, and would petition to change the law. So it has nothing to do with legal precedent either, unless the GP's concern genuinely comes from the legalities rather than using them as an argument. But why would the GP continue to be concerned?

The first possible reason I can think of is automation. That is, "no one cared until the models became good enough". The GP might fear for their job, or have "artist envy".

The second possible reason is a distaste for corporations, and wanting one's due for contributing to them in any way, regardless of what the law says (again assuming that training is legal, as above). So this is more of a personal morals issue, one that I disagree with but must acknowledge. I must also point out that open-weight models exist.


Legal precedent doesn't generally cross national borders like that, unless you're talking specifically about international law and precedent set by bodies like the UN or ICC.

When it comes to petitioning to change the law, that's a step further than precedent. Precedent is really just legal cases that had to be hashed out because the laws either don't exist or aren't clear. A person would be well within their rights to disagree with legal precedent and try to get lawmakers to clarify or create laws that overrule court decisions.

Automation is a reasonable guess as to why some may be worried about LLMs and how they're trained; I just didn't see that in the GP comment. They commented specifically on concerns about their content being used to train models without any kind of agreement or financial compensation.


I should clarify: I meant "will the GP be less concerned if the precedent says training is legal in the U.S.?" The Japan case was just an example of what could happen.

My point was that, if the GP would disagree with such a precedent, then their worry goes beyond mere legalities. Specifically, "there isn't clear legal precedent yet whether training models is an acceptable use of licensed work" felt like a non sequitur to me, because it is unlikely they would care about the legal precedent.


We either allow copyright on abstract ideas, or we allow AI to learn and use those ideas.



