The drip-feeding seems crazy to me. OpenAI is undermining its reputation by forcing almost everybody to use the older, lower-quality models. Even if customers are willing to pay for GPT-4, they're being told to wait at the back of the line.
Wait for what!? Christmas? When we can open our presents and find a GPT-4 inside?
It's like they took a leaf from Google's "how to guarantee the failure of a new product" marketing. That is: restrict access, ensuring that word-of-mouth marketing can't possibly work because none of your friends are allowed to try the product.
The announcement here is "general availability" of the GPT-4 model...
...but not the 32K context model. Not the multi-modal version with image input. No fine-tuning. Only one model (chat).
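For concreteness, here's roughly what "general availability" amounts to in practice: a minimal sketch, assuming the pre-1.0 openai Python package that was current at the time; the prompt is a placeholder.

```python
# Minimal sketch: the one GA model is the 8K-context chat model, "gpt-4".
# Assumes the pre-1.0 openai Python package; reads the key from the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",  # GA; "gpt-4-32k", image input, and fine-tuning are not
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response["choices"][0]["message"]["content"])
```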
As of today, I can only access GPT-3.5 via the Azure OpenAI Service and the OpenAI API account that I have.
What's the point of all these arbitrary restrictions on who can access what model!?
I can use GPT-4 via ChatGPT, but not the API. I can use an enhanced version of DALL-E via Bing Image Creator, but not the OpenAI API. Some vendors that have been blessed by the Great and Benevolent Sam Altman have access to GPT-4 32K; the rest of us don't.
Sell the product, not the access to it.
Don't be like the Soviet Union, where you had to "know someone" to get access.
This is the bottleneck. EUV photolithography is one of the hardest engineering challenges ever faced; it's like trying to drop a feather from space and guaranteeing it lands on a specific blade of grass. Manufacturing these GPUs at all requires stretching the limits of what is physically possible in multiple domains, let alone producing them at scale.
Thanks for this explanation! :) (As someone without knowledge of the hardware process, I appreciated it.)
It is SO amazing that we have such a driving force (LLMs/consumer AI) for this (instead of stupid cryptocurrency mining or high-performance gaming). This should drive innovation pretty strongly, and I am sure the next "leap" in processing hardware will put technology on a completely different level.
Not disagreeing, but just curious: why can't money buy enough GPUs? OpenAI's prices seem low enough that they could reasonably charge 2x or more to companies eager to get on the best models now.
They're giving people access to GPT-4 via Bing for free, but apparently can't accommodate paying API users!?
That makes no sense.
What makes much more sense -- especially if you listen to his interviews -- is that Sam Altman doesn't think you can be trusted with the power of GPT-4 via an API unless it has first been aligned to death.
It's exactly the same. If they could make 75 cents of margin selling the compute to someone else for $1, then every dollar of compute spent providing the free Bing chat service is 75 cents they lose.
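To spell that arithmetic out (the $1 price and 75-cent margin are the hypothetical numbers above; the volume is made up too):

```python
# Opportunity cost of giving compute away for free.
# All numbers are hypotheticals from the comment above.
price_per_unit = 1.00   # what a paying API customer would pay per unit of compute
cost_per_unit = 0.25    # what that unit of compute costs to provide
margin = price_per_unit - cost_per_unit  # 0.75

free_units = 10_000     # made-up volume served for free via Bing chat
forgone_profit = free_units * margin
print(f"Forgone profit: ${forgone_profit:,.2f}")  # Forgone profit: $7,500.00
```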
Why do you assume that the same amount of computing power would be used by someone else? There are only so many customers. You can't magically start selling more compute if you stop using it yourself.
They simply want control over the rollout of their product and how it is used. That, and perhaps opening the floodgates would produce scaling bottlenecks they'd rather stay ahead of than get behind.
So they open things up carefully and pull back when necessary, like when they limited use of the public GPT-4 version of ChatGPT. That doesn't seem too unreasonable. And yes, sure, some of it might be an attempt to manufacture scarcity to increase the hype. It's an old tactic, and hardly comparable to Soviet Russia.
There are no scaling issues to speak of. These AIs are stateless, which makes them embarrassingly parallel. They can always just throw more GPUs at it. Microsoft even had some videos where they bragged about how these models can be run on any idle GPU around the world, dynamically finding resources wherever they are available!
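To make the "embarrassingly parallel" claim concrete, here's a toy dispatcher sketch (all names are hypothetical): because no per-user state lives on a node, any replica can serve any request, and capacity grows by just adding entries to the pool.

```python
# Toy sketch: stateless inference means any GPU node can serve any request,
# so a dispatcher can round-robin over an arbitrarily large pool.
import itertools

replicas = ["gpu-node-1", "gpu-node-2", "gpu-node-3"]  # grow this list to scale
next_replica = itertools.cycle(replicas)

def handle_request(prompt: str) -> str:
    node = next(next_replica)  # no session affinity needed: nodes hold no state
    return f"[{node}] would run inference on: {prompt!r}"

for p in ["hi", "hello", "hey", "yo"]:
    print(handle_request(p))
```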
If there's not enough GPUs at a certain price point, raise prices. Then lower prices later when GPUs become available.
More GPUs currently don't exist. Nvidia is at capacity for production, and they have to compete with other companies who are also bidding on these GPUs. It's not an issue of raising the price point. The GPUs they want to buy have to be purchased months in advance.
If we guesstimate that every 100 customers need 1 NVIDIA GPU (a completely random guess), then OpenAI needs to buy another GPU for every 100 new customers using GPT-4. The problem is there's a GPU shortage, so it's hard to add more GPUs by just throwing money at the problem.
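Running that guesstimate forward shows how fast the numbers get uncomfortable (the 100-customers-per-GPU ratio is the random guess above):

```python
# Back-of-the-envelope capacity math using the made-up ratio above.
import math

CUSTOMERS_PER_GPU = 100  # the completely random guess from the comment

def gpus_needed(customers: int) -> int:
    return math.ceil(customers / CUSTOMERS_PER_GPU)

print(gpus_needed(100_000))    # 1,000 GPUs
print(gpus_needed(1_000_000))  # 10,000 GPUs
```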
Sam Altman was giving people access to GPT-4 APIs because they attended a conference.
"Lick my boots, in person, and you can be one of the privileged few" is very much the behaviour of a Communist dictatorship, not a capitalist corporation.
I can spin up an Azure VM right now in almost any country I choose... except China. That's the only one where I have to beg the government for permission.
You've clearly not experienced the reality of enterprise then. Your opinions are based on a limited understanding and knowledge of real-life situations when it comes to this sort of stuff.
> "Lick my boots, in person, and you can be one of the privileged few" is very much the behaviour of a Communist dictatorship, not a capitalist corporation
It's both, and more besides. Veblen goods are absolutely a thing in capitalism.
Not that "giving people access to GPT 4 APIs because they attended a conference" should be controversial enough to even get worked up about, let alone to compare to a dictatorship, but Google did much the same at developer conferences: I/O 2012 swag list included a Galaxy Nexus, a Nexus 7, a Nexus Q, and a Chromebox.