This is quite a narrow view of how generation works. AI can extrapolate from the training set and explore new directions. It's not just cutting pieces apart and gluing them back together.
Calling it “exploring” is anthropomorphising. The machine has weights that yield meaningful programs given specification-like language. It’s a useful phenomenon but it may be nothing like what we do.
Do you have any concrete examples you'd care to share? While this new wave of AI doesn't have unlimited powers of extrapolation, the post we're commenting on is asserting that this latest AI from Google was able to extrapolate solutions to two of AI's oldest problems, which would seem to contradict an assertion of "very limited".
Positively not. It is pure interpolation and not extrapolation. The training set is vast and supports an even vaster set of possible traversal paths; but they are all interpolative.
Same with diffusion and everything else. It is not extrapolation that you can transfer the style of Van Gogh onto a photograph; it is interpolation.
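To make that concrete, here is a toy picture of my own (a sketch, not a formal claim about any particular model): treat each training example as a point, and interpolation as convex mixing between points.

    import numpy as np

    # two "training examples" as points in some feature space
    van_gogh = np.array([1.0, 0.0])
    photo    = np.array([0.0, 1.0])

    # interpolation: a convex mix lands somewhere between the examples
    for w in (0.0, 0.3, 0.7, 1.0):
        print(w, w * van_gogh + (1 - w) * photo)

    # extrapolation would mean reaching a point no convex mix can hit,
    # e.g. np.array([2.0, -1.0]); with w restricted to [0, 1], the
    # mixes above can never get there

Style transfer, in this picture, is just finding a clever path between points that were already in the set.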
Extrapolation might be something like inventing a style: how did Van Gogh do that?
And, sure, the thing can invent a new style, as a mashup of existing styles. Give me a Picasso-like take on Van Gogh and apply it to this image ...
Maybe the original thing there is the idea of doing that; but that came from me! The execution of it is just interpolation.
This is no knock against you at all, but in a naive attempt to spare someone else some time: remember that, by this definition, it is impossible for an LLM to do novel things, and more importantly, you're not going to change how this person defines a concept as integral to one's being as novelty.
I personally think this definition is a bit tautological, but if you hold it, then yes, LLMs are not capable of anything novel.
I think you should reverse the question: why would we expect LLMs to even have the ability to do novel things?
It is like expecting a DJ remixing tracks to output original music, while overlooking that the DJ is not actually playing the instruments on the recorded music and so can't do something new beyond the interpolation. I love DJ sets, but it wouldn't be fair to the DJ to expect them to know how to play the sitar just because they open the set with a sitar sample interpolated with a kick drum.
A lot of musicians these days are using sample libraries instead of actually holding real instruments in their hands. It’s not just DJs or electronic producers. It’s remarkable that Brendan Perry of Dead Can Dance, for example, who played guitar and bass as a young man and once amassed a collection of exotic instruments from around the world, built recent albums largely out of instrument sample libraries. One of technology’s effects on culture that maybe doesn’t get talked about as much as outright electronic genres.
kid koala does jazz solos on a disk of 12 notes, jumping the track back and forth to get different notes.
i think that, along with the sitar player, is still interpolating. the notes are all there on the instrument. even without an instrument, it's still interpolating. the space that music and sound can occupy is all well-known wave math. if you draw a fourier transform view, you could see one chart with all 0s, and a second with all +infinity, and all music and sound is gonna sit somewhere between the two.
i don't know that "just interpolation" is all that meaningful to whether something is novel or interesting.
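here's roughly what i mean by the fourier view, as a toy sketch in python (the numbers are arbitrary, just for illustration):

    import numpy as np

    # a one-second "note": a 440 Hz sine sampled at 8 kHz
    rate = 8000
    t = np.arange(rate) / rate
    note = np.sin(2 * np.pi * 440 * t)

    # fourier view: how much energy sits at each frequency
    spectrum = np.abs(np.fft.rfft(note))
    freqs = np.fft.rfftfreq(len(note), d=1 / rate)

    # somewhere between the all-zeros chart (silence) and the
    # all-infinite one; here the energy is in a single bin
    print(freqs[np.argmax(spectrum)])  # ~440.0

any piece of music is just some other point in that same space.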
If he plucked one of the 13 strings of a koto, we wouldn't say he is just remixing the vibration of the koto. Perhaps we could say that, if we had justification. There is a way of using a musical instrument as just a noise maker to produce its characteristic sounds.
Similarly, a writer doesn't just remix the alphabet, spaces, and punctuation symbols. A randomly generated soup of those symbols could be thought of as their remix, in a sense.
The question is, is there a meaning being expressed using those elements as symbols?
Or is the mixing all there is to the meaning? I.e., the result says "I'm a mix of this stuff and nothing more".
If you mix Alphagetti and Zoodles, you don't have a story about animals.
That is not strictly true, because being able to transfer the style of Van Gogh onto an arbitrary photographic scene is novel in a sense, but it is interpolative.
Mashups are not purely derivative: the choice of what to mash up carries novelty, since two (or more) representations are mashed together which hitherto have not been.
uhhh can it? I've certainly not seen any evidence of an AI generating something not based on its training set. It's certainly smart enough to shuffle code around and make superficial changes, and that's pretty impressive in its own way, but not particularly useful unless your only goal is to launder somebody else's code to get around a licensing problem (and even then it's questionable whether that's a derivative work or not).
Honest question: if AI is actually capable of exploring new directions why does it have to train on what is effectively the sum total of all human knowledge? Shouldn't it be able to take in some basic concepts (language parsing, logic, etc) and bootstrap its way into new discoveries (not necessarily completely new but independently derived) from there? Nobody learns the way an LLM does.
ChatGPT, to the extent that it is comparable to human cognition, is undoubtedly the most well-read person in all of history. When I want to learn something I look it up online or in the public library but I don't have to read the entire library to understand a concept.
You have to realize AI is trained the same way one would train an auto-completer.
There's no cognition. It's not taught language, grammar, etc. None of that!
It's only seen a huge amount of text, which allows it to recognize answers to questions. Unfortunately, it appears to work, so people see it as the equivalent of sci-fi movie AI.
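To see what "trained like an auto-completer" means, here is a toy next-token model built from nothing but counts. This is a sketch of mine, not how any production system is implemented, but real LLMs optimize the same next-token objective with a neural network in place of the count table:

    from collections import Counter, defaultdict

    # a tiny corpus standing in for "a huge amount of text"
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # "training": count which token follows which
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def complete(token):
        # predict the most likely continuation, like an autocomplete
        return follows[token].most_common(1)[0][0]

    print(complete("the"))  # -> "cat" (ties broken by first occurrence)

Scale the corpus up to the internet and swap the count table for billions of weights, and that's the recipe.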
I agree and that's the case I'm trying to make. The machine-learning community expects us to believe that it is somehow comparable to human cognition, yet the way it learns is inherently inhuman. If an LLM was in any way similar to a human I would expect that, like a human, it might require a little bit of guidance as it learns but ultimately it would be capable of understanding concepts well enough that it doesn't need to have memorized every book in the library just to perform simple tasks.
In fact, I would expect it to be able to reproduce past human discoveries it hasn't even been exposed to, and if the AI is actually capable of this then it should be possible for them to set up a controlled experiment wherein it is given a limited "education" and must discover something already known to the researchers but not the machine. That nobody has done this tells me that either they have low confidence in the AI despite their bravado, or that they already have tried it and the machine failed.
There’s a third possible reason which is that they’re taking it as a given that the machine is “intelligent” as a sales tactic, and they’re not academic enough to want to test anything they believe.
Is it? I only see a few individuals, VCs, and tech giants overblowing LLMs' capabilities (and I'm still puzzled as to how the latter dragged themselves into a race to the bottom over it). I don't believe the academic field really is that impressed with LLMs.
No, it's not. I work on AI, and what these things do is much, much more than a search engine or an autocomplete. If an autocomplete passed the Turing test, you'd dismiss it because it's still an autocomplete.
The characterization you are regurgitating here is from laymen who do not understand AI. You are not just mildly wrong but wildly uninformed.
Well, I also work on AI, and I completely agree with you. But I've reached the point of thinking it's hopeless to argue with people about this: it seems that as LLMs become ever better, people aren't going to change their opinions the way I had expected they would. If you don't have good awareness of how human cognition actually works, then it's not evidently contradictory to think that even a superintelligent LLM trained on all human knowledge is just pattern matching while humans are not. Creativity, understanding, originality, intent, etc., can all be placed into a largely self-consistent framework of human specialness.
To be fair, it's not clear human intelligence is much more than search or autocomplete. The only thing that's clear here is that LLMs can't reproduce it.
Yes, but colloquially this characterization is deliberately used by laymen to deride AI and dismiss it. It is not honest about the on-the-ground progress AI has made, and it's not intellectually honest about the capabilities and weaknesses of AI.
I disagree. The actual capabilities of LLMs remain unclear, and there are a great many reasons to be suspicious of anyone whose paycheck relies on pimping them.
The capabilities of LLMs are unclear but it is clear that they are not just search engines or autocompletes or stochastic parrots.
You can disagree. But this is not an opinion. You are factually wrong if you disagree. And by that I mean you don’t know what you’re talking about and you are completely misinformed and lack knowledge.
The long-term outcome, if I'm right, is that AI abilities continue to grow and it basically destroys my career and yours completely. I stand not to benefit from this reality, and I state it because it is reality. LLMs improve every month. It's already to the point where, if you're not vibe coding, you're behind.
Let me be utterly clear. People with your level of programming skill who incorporate AI into their workflow are in general significantly more productive than you. You are a less productive, less effective programmer if you are not using AI. That is a fundamental fact. And all of this was not true a year ago.
Again, if you don't agree then you are lost and uninformed. There are special cases, projects where human coding is faster, but those are a minority.
Isn't that what's going on with synthetic data? The LLM is trained, then is used to generate data that gets put into the training set, and then gets further trained on that generated data?
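Concretely, the loop I have in mind looks something like this; the names below are hypothetical stand-ins for illustration, not any particular library's API:

    # hypothetical names, for illustration only -- not a real API
    corpus = load_human_text()            # the original training set
    model = train(corpus)                 # initial training run

    for step in range(3):
        drafts = [model.generate(p) for p in sample_prompts()]
        keep = [d for d in drafts if passes_quality_filter(d)]
        model = train(corpus + keep)      # further training on generated data

Though presumably the curation step matters: train on unfiltered self-generated text and the model would just drift toward its own noise.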
You didn't have to read the whole library because your brain has been absorbing knowledge from multiple inputs your entire life. AI systems are trying to temporally compress a lifetime into the time of training. And given that these systems effectively have a single input channel, streams of bits, they need immense amounts of data to be knowledgeable at all.