> I expect this will cost maybe few dozen dollars and train in few hours within next few years.
I wouldn't count on it. Nvidia's been cleaning up, but their best option for expanding right now is parallelization (bigger clusters, basically). Now that Blackwell is on TSMC, Nvidia, like Apple, is waiting for newer and denser nodes to upgrade to. A real "generational leap" in training cost is going to require some form of efficiency gain that we're not seeing right now. It's possible that Nvidia has something up their sleeve, but I'm not holding my breath.
> What I think will be interesting is when commodity hardware can run cheap inference from very capable, specialized models.
What's funny is, you basically already can. The problem now is integration, and in the case of video games, giving the AI a meaningful role to fill. With today's finest technology, you can enjoy an AI-generated roguelike that is nigh-incomprehensible: https://store.steampowered.com/app/1889620/AI_Roguelite/
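For what it's worth, here's a minimal sketch of what "cheap local inference" looks like today, assuming you've already downloaded a quantized GGUF model to run with llama-cpp-python (the model path below is a placeholder, not a recommendation):

```python
# Minimal local inference sketch using llama-cpp-python.
# Assumes a quantized GGUF model file already on disk; the path is hypothetical.
from llama_cpp import Llama

# Load the quantized model entirely on CPU; commodity hardware is fine for small models.
llm = Llama(model_path="./models/small-model-q4.gguf", n_ctx=2048)

# Single prompt completion; max_tokens keeps latency and cost bounded.
result = llm("Describe a rusty sword found in a dungeon.", max_tokens=64)
print(result["choices"][0]["text"])
```

That gets you tokens on a laptop, no GPU cluster required. The hard part, as above, is wiring that output into a game loop in a way that's constrained enough to be useful.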
As time goes on, I really think developers just aren't going to use AI for video games. Maybe I'm missing the "minecraft moment" for procedurally-generated stories here, but the sort of constraints needed to tell a story or create an interactive experience don't exist within LLMs. It's a stochastic nightmare of potential softlocks, contradictions, or outright offensive requests. Most places I've seen AI applied today aren't content creation, but automated moderation.