I think the GPTs market is a bit of a novelty thing, but it might work if the costs stay low. I've been making my own GPTs over the last few days, and I find the capability useful but very inconsistent.
If you give a GPT with a custom personality more complex tasks, it gets distracted and reverts to the default ChatGPT personality.
There aren't enough parameters and compute in these LLMs for the things we're pushing them to do. We need hardware to catch up and offer analog compute for neural nets so we can scale them way up, and of course the architectures need to evolve to make use of that capability. Only then do I see this becoming truly rock solid.
Also, using the context window for personalities is likely the wrong approach. You really want to fine-tune that into the model itself.
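To make that concrete, here's a minimal sketch of the difference using the openai Python client (the persona, file name, and model choices are just placeholders, not a recipe):

    # Persona via the context window: one system message that a long,
    # complex task can effectively crowd out.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are Captain Hook. Never break character."},
            {"role": "user", "content": "Walk me through refactoring this 2,000-line module."},
        ],
    )

    # Persona baked into the weights instead: fine-tune on examples where every
    # assistant reply is already in character. Each line of persona.jsonl looks like:
    # {"messages": [{"role": "system", "content": "You are Captain Hook."},
    #               {"role": "user", "content": "How do I sort a list in Python?"},
    #               {"role": "assistant", "content": "Arr, sorted(me_list) be the way."}]}
    training_file = client.files.create(file=open("persona.jsonl", "rb"), purpose="fine-tune")
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-3.5-turbo",
    )

The first approach depends on the model keeping a single system message in focus for the whole task; the second puts the behavior into the weights, which is why I'd expect it to hold up better under complex tasks.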
But yeah, GPT can't keep a secret. Then again, humans can't either, so I guess it's imitating us just fine.