All else equal, this means the price per GB of VRAM stayed the same. In reality, though, other things improved too (like memory bandwidth), which I appreciate.
I just think that for home AI use, 32GB isn't that helpful. In my experience, and especially for agents, models only start to be useful at around 32B parameters; below that, they're good only for simple tasks.
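The back-of-the-envelope VRAM math makes this concrete. A rough weights-only sketch (ignoring KV cache and runtime overhead, which add a few GB more depending on context length):

```python
# Approximate weight memory for a model with `params_b` billion
# parameters quantized to `bits` per weight. Weights only; real
# usage also needs KV cache and runtime overhead.
def weight_gb(params_b: float, bits: int) -> float:
    return params_b * 1e9 * bits / 8 / 1e9

for params in (14, 32, 70):
    for bits in (4, 8):
        print(f"{params}B @ {bits}-bit: ~{weight_gb(params, bits):.0f} GB weights")

# 14B @ 4-bit: ~7 GB    14B @ 8-bit: ~14 GB
# 32B @ 4-bit: ~16 GB   32B @ 8-bit: ~32 GB
# 70B @ 4-bit: ~35 GB   70B @ 8-bit: ~70 GB
```

So a 32B model at 8-bit barely fits in 32GB with no room left for context, while 64GB comfortably holds a 70B model at 4-bit plus cache.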
Yes, home / hobbyist LLM users are not overly excited about this, but
a) they are never going to be happy,
b) it's actually a significant step up, given that the bulk are dual-card users anyway: this bumps them from 48GB to 64GB of VRAM (at the high end of the consumer segment), which _is_ pretty significant given the prevalence of larger models / quants in that space, and
c) vendors really don't care terribly much about the home / hobbyist LLM market, no matter how much people in that market wish otherwise.