Yes, I think so! That's super encouraging about holding the attention of a room of kids!
> It takes ~15 minutes and ~$10 to generate a script, depending on how fast OpenAI is feeling. So in a real scale v2 it would be very reasonable to explore this.
Yeah -- still a bit too slow and pricey to truly put into a CI pipeline that runs against every commit tho. :-/
Do you mind sharing your context window size? I've always wanted to use local LLMs for rapid iteration -- I think a 32k window isn't too difficult (Mixtral supports this out of the box, I think?), but I've heard of people pushing 100k tokens locally. Even so, that's peanuts compared to what hosted LLMs are doing, and if quality of writing is your bottleneck, you wouldn't want to stray too far from GPT-4 / Claude.
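For what it's worth, llama-cpp-python makes the window explicit, so it's easy to sanity-check what you're actually getting locally. A minimal sketch, assuming a local Mixtral GGUF file (the model path and prompt are just placeholders):

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The model path below is hypothetical -- point it at whatever GGUF you have on disk.
from llama_cpp import Llama

llm = Llama(
    model_path="./mixtral-8x7b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=32768,  # Mixtral's native 32k context window
)

# One cheap completion per iteration -- fast enough for a tight local edit loop.
out = llm("Rewrite this scene so it lands with a room of ten-year-olds:\n...", max_tokens=512)
print(out["choices"][0]["text"])
```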
> Man, I sure hope I get to build this further!
Yeah!! It really feels like you've latched onto a nugget of something here, and I'm excited to see what's next!