That seems like a really broad interpretation of "technically memorization" that could have unintended side effects (like, say, banning equations that could be used to generate specific lyrics), but I suppose some countries consider loading into RAM a copy already. I guess we're already at absurdity.
> but I suppose some countries consider loading into RAM a copy already. I guess we're already at absurdity
FYI, most do. Have a look at many software licenses. In particular Microsoft (who, as we know, invested a lot into OpenAI) will argue it is so.
I would also say it makes sense. If it weren't the case, you could just load a program onto lots of computers using only a single license/installation medium.
The law doesn't care what technical trickery you use to encode/compress copyrighted material. If you take data and, based on it, create an equation which can reproduce that data trivially, then yes, IMHO obviously, this form of embedding copyrighted data is still embedding copyrighted data.
Think about it: if that weren't the case, I could just transform a video into an equation system and then distribute the latest movies, books, whatever to everyone, without permission and without violating copyright, even though de facto I'm doing exactly what copyright law is supposed to prevent... (1)
Just because you come up with a clever technical trick to encode copyrighted content doesn't mean you can launder/circumvent copyright law, or any other law for that matter. Law mostly doesn't care about technical tricks but about outcomes.
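To make the "equation which can reproduce the data" point concrete, here's a minimal Python sketch (my own toy illustration, with made-up stand-in content): any byte string can be packed into a single integer plus a trivial decoder function, and that pair reproduces the original byte for byte. The representation changes; the content doesn't.

```python
# Toy sketch: any copyrighted byte string can be turned into "just a number"
# plus a trivial "equation" (decoder) that reproduces it exactly.
work = "these are some stand-in lyrics we pretend are copyrighted".encode()

# "Encode": the whole work becomes one (huge) integer.
n = int.from_bytes(work, "big")
length = len(work)

# The "equation": a function of n that trivially reproduces the original bytes.
def decode(n: int, length: int) -> bytes:
    return n.to_bytes(length, "big")

assert decode(n, length) == work   # byte-for-byte identical to the original
print(n)  # distributing this number plus the decoder is distributing the work
```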
Maybe even more importantly, LLMs are, under the hood, basically compression systems at their core: by not giving them enough entropy (capacity) to store the information verbatim, you force them to generalize, and with that they happen to create an illusion of sentience.
E.g. what is the simplest case of training a transformer? You put data in to create the transformer state (which has much smaller entropy), then reproduce that data from the state, and then you find a "transformation" for which this works as well as possible across a huge amount of different data. That is a compression algorithm!!! And sure, in reality it's more complex: you don't train it to compress one specific input but rather a dictionary of "expected" input->output mappings, where the output parts need to be fully embedded, i.e. memorized, in the algorithm in some form.
LLMs are basically obscure, multi-layered, high-dimensional lossy compression systems which compress a simple input->output mapping (i.e. a database) defined by all the entries in their training data. A compressed mapping which, due to the enforced limited entropy, needs to achieve its compression through generalization...
And since when does compression allow you to avoid copyright??
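As a toy illustration of the "training = memorizing an input->output mapping" claim, here is a minimal sketch (my own, assuming PyTorch; a tiny position-conditioned next-character model rather than a real transformer, and at this scale the weights are of course far larger than the text, so it only shows the mechanism, not an actual compression ratio): overfit on one string, then read the string back out greedily.

```python
# Toy sketch (assumes PyTorch): overfit a tiny next-character model on one
# string until greedy decoding reproduces it verbatim -- the text ends up
# "memorized" in the weights and can be read back out.
import torch
import torch.nn as nn
import torch.nn.functional as F

text = "these are some stand-in lyrics we pretend are copyrighted"
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
ids = torch.tensor([stoi[c] for c in text])

class NextChar(nn.Module):
    def __init__(self, vocab, seq_len, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)    # token embedding
        self.pos = nn.Embedding(seq_len, dim)  # position embedding, transformer-style
        self.out = nn.Linear(dim, vocab)
    def forward(self, x, pos):
        return self.out(self.emb(x) + self.pos(pos))

model = NextChar(len(chars), len(text))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x, pos, y = ids[:-1], torch.arange(len(text) - 1), ids[1:]

for _ in range(500):  # deliberately overfit
    opt.zero_grad()
    loss = F.cross_entropy(model(x, pos), y)
    loss.backward()
    opt.step()

# Greedy "decompression": feed the first character, read the rest back out.
with torch.no_grad():
    cur, decoded = ids[0].item(), [text[0]]
    for p in range(len(text) - 1):
        cur = model(torch.tensor([cur]), torch.tensor([p])).argmax(-1).item()
        decoded.append(chars[cur])
print("".join(decoded))  # with enough training steps this prints the training text
```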
So if you want it to be handled differently by the law because it isn't used as a compressed database, you have to special-case it in law.
But it is used as a compressed database: in this case, e.g., it was used to look up lyrics based on some clues. That's basically a lookup in a lossy, compressed, obscure database system, no matter how you would normally think about LLMs.
(1): And in case it's not clear, this doesn't mean every RNG is a violation just because under some unknown seed it probably would reproduce copyrighted content. Because the RNG wasn't written "based on" the copyrighted content.
Regarding "Because the RNG wasn't written 'based on' the copyrighted content":
Does that mean I can distribute the seed if I find one and this RNG wasn't trained on that content?
Does it prevent me from sharing that number on the internet?
It seems like there's a lot of subjective intent here, which I'm extremely skeptical of.
For an LLM also:
If it's lossy enough that it needs RAG to fix the results is that okay?
-------------------
In my opinion, actually getting the output is where the infringement happens. Having and distributing the LLM weights shouldn't be infringement (in my head) because of the enforceability of results. Otherwise you risk banning RNGs, or everyone being forced to prove they didn't train on copyrighted content.
> If it's lossy enough that it needs RAG to fix the results is that okay?
but then the only way RAG can "fix" the result is if the RAG system has stored the song text in its vector database (toy sketch of this below),
in which case the legal situation, and the solutions to fix the issue, are much clearer.
In a certain way, an LLM which only encodes language but not knowledge, and then uses RAG and similar techniques, is the most desirable (not just for copyright reasons but also for e.g. updateability, traceability, removability of misinformation, etc.).
Sadly, AFAIK that doesn't work, as language and knowledge details are too deeply interleaved.
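The toy sketch mentioned above (my own illustration, using a crude bag-of-words "embedding" in place of a real embedding model): a vector database is just (vector, original text) pairs, so retrieval can only "fix" the lyrics because the verbatim text sits in the store.

```python
# Toy RAG sketch: the "vector database" holds (vector, original chunk) pairs.
# Retrieval can only return correct lyrics because the lyrics are stored verbatim.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Crude bag-of-words stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Each entry keeps the original chunk, i.e. the (potentially copyrighted) text itself.
documents = [
    "these are some stand-in lyrics we pretend are copyrighted",
    "an unrelated recipe for lentil soup with garlic and cumin",
]
store = [(embed(doc), doc) for doc in documents]

def retrieve(query: str) -> str:
    q = embed(query)
    return max(store, key=lambda entry: cosine(q, entry[0]))[1]

# The LLM's answer gets "fixed" by pasting the retrieved chunk into the prompt,
# which is only possible because the chunk was stored word for word.
print(retrieve("what were those stand-in lyrics again"))
```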
> Does that mean I can distribute the seed if I find one and this RNG wasn't trained on that content?
Honestly, I think this falls outside the situations copyright law considers. But if you also consider that copyright law mostly doesn't care about technical implementation details, and that the "spirit of the law" (the intent of the lawmaker) matters in unclear cases, I have a best-guess answer:
Neither the RNG nor the seed by themselves are a copyright violation, but if you spread them with the intent to spread a non-licensed copy, you are still committing a copyright violation, and in that context the seed might, idk, be taken down from sharing sites even if by itself it isn't a copyright violation.
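To make the seed question concrete (my own toy sketch, not legal advice, and the names are made up): in practice you don't find such a seed by luck, you compute it from the work, e.g. by XORing the work against a fixed keystream. The generator alone knows nothing about the work, but the "seed" was derived from it, and generator + seed reproduce it byte for byte; that's exactly where the "based on" distinction bites.

```python
# Toy sketch: a "seed" that makes a generic generator emit a protected work is,
# in practice, computed FROM the work, so it effectively encodes the work.
import random

work = "these are some stand-in lyrics we pretend are copyrighted".encode()

def keystream(n: int, seed: int = 1234) -> bytes:
    """A fixed, content-agnostic pseudo-random byte stream."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(n))

# "Finding the seed": XOR the work against the keystream. This step is
# obviously "based on" the copyrighted content.
key = bytes(a ^ b for a, b in zip(work, keystream(len(work))))

# Anyone with the generic generator plus this "seed"/key gets the work back exactly.
recovered = bytes(a ^ b for a, b in zip(key, keystream(len(key))))
assert recovered == work
```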
The thing is, in the end you can transform _any_ digital content into
- "just a number",
- or "just an equation", an "equation system", etc.,
- or an image, a matrix, a graph, human-readable text, or pretty much anything else,
so fundamentally you can't have a clean cut, based on representation alone, between what can and can't be a copyright violation,
which is why it matters so much that the law acts on a higher abstraction level than what exactly happens technically.
And why the intent of the law matters so much in gray-area cases.
And why the law really shouldn't be a declarative definition of strict mathematical rules.