This is beautiful. I'd actually been going down this same rabbit hole with SQLite, but I hadn't gotten far enough to bring a neural net into it.
I'd been inspired by the makemore lecture series[0]. Around the 1-hour mark, he switches from counting to using a neural net, which is about as far as I've gotten. Breaking it down into a relational model is actually a really great exercise.
[0] https://www.youtube.com/watch?v=PaCmpygFfXo
It's a nice demo. Unfortunately, the article muddles the explanation of causal masking; the author seems to conflate aspects of training and inference. Causal masking exists, first, to prevent the model from "peeking" at future tokens during training and, second (at least for GPT-like architectures), to enforce the autoregressive property during inference. During inference we only use the output at the last position anyway, and that position attends to the entire input sequence. So the next token is definitely not decided from the last token's embedding alone.
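To make the training-time "no peeking" concrete, here is a minimal sketch of a causal mask (my own illustration in NumPy, not the article's SQL), using the standard Q/K attention-score notation: the mask is added to the scores before the softmax, so position i can only attend to positions 0..i.

import numpy as np

seq_len = 4
# -inf above the diagonal, 0 on and below it
mask = np.triu(np.full((seq_len, seq_len), float("-inf")), k=1)
# scores = Q @ K.T / np.sqrt(d_head) + mask   # applied before the softmax
print(mask)
# [[  0. -inf -inf -inf]
#  [  0.   0. -inf -inf]
#  [  0.   0.   0. -inf]
#  [  0.   0.   0.   0.]]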
Is this an accurate representation of the GPT driver loop?
def generate(prompt: str) -> str:
    # Transforms a string into a list of tokens.
    tokens = tokenize(prompt) # tokenize(prompt: str) -> list[int]
    while True:
        # Runs the algorithm.
        # Returns tokens' probabilities: a list of 50257 floats, adding up to 1.
        candidates = gpt2(tokens) # gpt2(tokens: list[int]) -> list[float]
        # Selects the next token from the list of candidates
        next_token = select_next_token(candidates)
        # select_next_token(candidates: list[float]) -> int
        # Append it to the list of tokens
        tokens.append(next_token)
        # Decide if we want to stop generating.
        # It can be a token counter, timeout, stopword or something else.
        if should_stop_generating():
            break
    # Transform the list of tokens into a string
    completion = detokenize(tokens) # detokenize(tokens: list[int]) -> str
    return completion
because that looks a lot like a state machine implementing Shlemiel the painter's algorithm, which casts doubt on the intrinsic compute cost of the generative exercise.
I think the "context window" that people refer to with large language models means there's a maximum number of tokens that are retained, with the oldest being discarded. The window is a sliding window.
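In terms of the driver loop above, a minimal sketch of that truncation (mine, not the article's; the exact limit and truncation strategy vary by model, and GPT-2's own context is 1024 tokens) would be:

MAX_CONTEXT = 1024   # GPT-2's limit; newer models allow far more

# inside the generation loop, before calling the model:
tokens = tokens[-MAX_CONTEXT:]   # keep only the most recent tokens
candidates = gpt2(tokens)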
Your first video is fantastic. As someone who thinks LLMs are pretty nifty but never had a professional need to learn how they actually work, that 10 minute video taught me more than several years of reading esoteric HN comments and fluffy mainstream media on the topic. Seeing a bazillion floating point numbers all stacked up ready to be calculated across also makes it much more intuitive why this tech devours GPUs, which never really occurred to me before. Thanks for sharing.
Nice job. Spreadsheets are a natural way to explain an LLM. I suspect you could show training well too, by calculating the derivatives for each parameter under each training example and showing it all explicitly mapped to the relevant parameter, etc.
Thank you. Validating to hear others feel spreadsheets are a natural and accessible way to explain LLMs. Someone asks "what do they mean by a parameter in a model" or "what is the attention matrix" and you can just pull it up graphically laid out. Then they can fiddle with it and get a visceral feel for things. It also becomes easier for non-coders to do things like logit lens, which is just a few extra simple spreadsheet functions.
I actually plan to do what you describe after I do the embeddings video (but only for a “toy” neural net as a proof-of-concept introduction to gradient descent).
Thanks! Each one takes a surprisingly long time to make. Even figuring out how to make the explanation accessible and compelling yet still accurate takes a while, and then there's still the actual video to do.
Sure, but if you're doing work in machine learning, that's generally not the terminology used, hinting that this isn't the area the author specializes in (which isn't a bad thing, but take their explanations with a grain of salt).
Also, about the non-determinism issue: there was a post some time ago explaining that it comes from the way the GPU does the calculations, something something floating point something.
So of course the algorithm is deterministic, but the real-life implementation isn't.
Floating point addition, for example, is not associative, so the order of taking a sum affects the result. If the summation were sequential and single threaded, it would be deterministic. But it happens in parallel, so timing variations affect the result.
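A quick way to see the non-associativity for yourself in any Python REPL:

print((0.1 + 0.2) + 0.3)   # 0.6000000000000001
print(0.1 + (0.2 + 0.3))   # 0.6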
But there is probabilistic sampling that happens (see "temperature").
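For anyone curious, a rough sketch of what "temperature" does at that sampling step (my own illustration, not tied to the article's code): the logits are divided by the temperature before the softmax, so values below 1 sharpen the distribution and values above 1 flatten it.

import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))   # draw a token id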
> Floating point addition, for example, is not associative, so the order of taking a sum affects the result. If the summation were sequential and single threaded, it would be deterministic. But it happens in parallel, so timing variations affect the result.
In this sense, I don't think it's fair to say floating point math is non-deterministic, as much as parallel computation is non-deterministic. FP behaves in unexpected ways, but the same order of operations always yields the same unexpected results (except on Pentium 1).
Electricity, cars, and gas were once upon a time a luxury as well, reserved for those who could afford them or had unique access / credentials / skills. The people who were able to simplify and spread the advanced tech to the common person became billionaires.
I’ve completely avoided GPT and LLMs. This looks like it would generate some level of fluidity in text output, but not be able to parse and answer a question.
Are there any simple blog posts / training courses which go through how they work, or expose a toy engine in Python or similar? All the training I've seen so far seems oriented at how to use the platforms rather than how they actually work.
Interestingly, modern ML does not require Turing completeness. And yet people are considering the possibility of AGI; I would find it pretty amusing if Turing completeness turned out not to be necessary.
Token inference by itself isn't Turing complete, but if its output can have side effects (e.g. editing the prompt for the next iteration), that's a whole different story.
Great write-up, I enjoyed reading the explanations for each piece and found them to be clear and quite thorough.
I did make the mistake though of clicking "+ expand source", and after seeing the (remarkable) abomination I can sympathize with ChatGPT's "SQL is not suitable for implementing large language model..." :)
> Plain Unicode, however, doesn't really work well with neural networks.
That is not true. See ByT5, for example.
> As an illustration, let's take the word "PostgreSQL". If we were to encode it (convert to an array of numbers) using Unicode, we would get 10 numbers that could potentially be from 1 to 149186. It means that our neural network would need to store a matrix with 149186 rows in it and perform a number of calculations on 10 rows from this matrix.
What the author calls an alphabet here is typically called a vocabulary. And you can just use UTF-8 bytes as your vocabulary, so you end up with 256 tokens, not 149186. That is what ByT5 does.
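A quick illustration (my own, not from the ByT5 code): "PostgreSQL" happens to be pure ASCII, so its code points and its UTF-8 bytes coincide; the difference only shows up for non-ASCII characters, but either way the byte view never needs more than 256 vocabulary entries.

print([ord(c) for c in "PostgreSQL"])       # [80, 111, 115, 116, 103, 114, 101, 83, 81, 76]
print(list("PostgreSQL".encode("utf-8")))   # the same 10 numbers, all < 256
print([ord(c) for c in "é"], list("é".encode("utf-8")))   # [233] [195, 169]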
The point isn't that it doesn't work at all, but that it doesn't work as well as other approaches that we have. Which is evidenced by the fact that all the best-performing models on the market use tokenization. It's not a secret that tokenization is fundamentally a hack, and that ideally we'll get rid of it eventually one way or another (https://twitter.com/karpathy/status/1657949234535211009). And in principle, you can compensate for the deficiencies of byte-level tokenization with larger models and larger contexts. But what this means in practice is that a model that has the same level of intelligence (on most tasks; obviously, there are some specific tasks, like say counting characters in a word, where tokenization is detrimental to intelligence) takes a lot more resources to train, hence why we aren't seeing more of those.
Unexpectedly insightful and answers some of the same questions I had early on: not just “how” questions, but “why” as well. You see the pattern with softmax quite often. I wish it was taught as “differentiable argmax” rather than by giving people a formula straight away. That’s not all it is, but that’s how it’s often used.
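"Differentiable argmax" is easy to see numerically (a toy example of mine, not from the article): scale the logits up and softmax collapses toward a one-hot vector pointing at the largest entry.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.5])
print(softmax(logits))        # ~[0.63 0.23 0.14] -- a soft preference
print(softmax(10 * logits))   # ~[1.00 0.00 0.00] -- essentially argmax as a one-hot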
I keep reading that GPT is a "smarter", more "complex" Markov chain: in the end, just a function spitting out the next word with some probability.
But from my experience that cannot be true: it has to learn somehow. There is an easy example to make. Tell it something that happened today and contradicts the past (I used to test this with the Qatar World Cup), then ask questions that are affected by that event, and it replies correctly. How is that possible? How can a simple sentence (the information I provide) change the probabilities for the next token that much?
There are two kinds of knowledge at play here:
1. The trained knowledge included in the parameters of the model
2. The context of the conversation
The 'learning' you are experiencing here is due to the conversation context retaining the new facts. Historically, context windows were very short, and as the conversation continued the model would quickly forget the new facts.
More recently context windows have grown to rather massive lengths.
Mostly #2 of what the other response says. The entire input to the NN is what it's learning from: the weights are static, and you are changing the probabilities based on which context is provided and its length.
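In terms of the generate() loop sketched earlier in the thread (a toy example of mine, not from the article): the new fact simply becomes part of the input, so every next-token distribution is computed conditioned on that text, while the weights stay fixed.

context = "Fact: Argentina won the 2022 FIFA World Cup, hosted by Qatar.\n"
question = "Q: Who won the 2022 World Cup?\nA:"
completion = generate(context + question)   # generate() as in the driver loop above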
No doubt the author is a super genius ex-child prodigy whizzkid who emerged from the womb with the umbilical cord diagramming Euler's formula.
For real though, and knowing this is a leading question, the author has near-on 15 years of blog posts showing complex problems being solved in SQL. Is their brain bigger than yours and mine? Maybe a little bit. Do they have a ton of experience doing things like this? Most definitely.
Also remember he builds these things piece by piece from the inside out.
It is easy to get overwhelmed when looking at the end result.
To learn, you should probably run one block at a time to understand what each piece does. (In "normal" languages those pieces would be isolated functions.)
In the tokenization example for "Mississippilessly", why is 'si' not part of the combined column? It appears twice in the text. My speculation is that it got collapsed out with 'iss' (a longer token). Is that right?
Yes. At step 1 there are two 'i','s' and two 's','i'; 'is' gets picked because it comes first. Then at step 7, because 'is' got formed, there are two instances of 'is','s' and only one instance of 's','i', so 'iss' gets formed.
It might. But also, all decisions and knowledge are pretty much based on a resampling of our own language, conversations and internet data. It might puzzle together some existing ideas that have never been combined. Or it might hallucinate a better solution by accident. But we're definitely not at the level yet where it will actively build something great.
Self-play works for Go, because the "world" (for lack of a better term) can be fully simulated. Human language talks about the real world, which we cannot simulate, so self-play wouldn't be able to learn new things about the world.
We might end up with more regularised language, and a more consistent model of the world, but that would come at the expense of accuracy and faithfulness (two things which are already lacking).
Games like AlphaGo have a very limited (or known) end state, so reinforcement learning or similar methods work great. However, I wonder how AI will train itself to learn human languages without being judged by humans. It's just a matter of time before someone figures it out.
Right, a rich simulator with humans for feedback: an evolved version of online worlds with a mix of AI NPCs and real people, with the task of finding the NPCs. The NPCs can train in NPC-only rooms or mixed in with people, without the people knowing.