I like the functionality and use it a lot but it has become so annoyingly unstable lately. Literally, every time I try to use it, it fails in one way or another. Perhaps my $20 / month isn't that interesting to them in the grand pursuit of AGI but I want to use the service, not merely fund research.
I don't know why anyone pays for 4 when 3.5 is already a billion times better than what we had 2 years ago. 3.5-turbo, at least from an API perspective, is an extremely cost-effective way to add more intelligent decision making to your applications and backend processes. We're going to use GPT-3.5-turbo to help us decide whether a specific thing is probably "this" or "that" or "one of the following"... It's much easier to use it that way than rolling our own crappy bag-of-words neural network on top of word2vec. A tiny bit slower, but worth it.
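Roughly the kind of call I mean, as a sketch with the current OpenAI Python SDK (the labels and example text are placeholders, not our actual setup):

    # Minimal sketch: gpt-3.5-turbo as a cheap "this or that" classifier via the API.
    # Labels and the example text are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    LABELS = ["bug report", "feature request", "other"]

    def classify(text: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            temperature=0,  # keep answers as deterministic as possible for classification
            messages=[
                {"role": "system",
                 "content": f"Classify the user's text as exactly one of: {', '.join(LABELS)}. "
                            "Reply with the label only."},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content.strip()

    print(classify("The app crashes whenever I open the settings page."))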
I use ChatGPT 4 daily (usually one of my custom GPTs). 3.5 is next to useless for anything I actually reach for ChatGPT for.
For pipeline / LLM project stuff, sometimes I can use 3.5, but most of the interesting things I take on require 4. 3.5 is so unreliable and is terrible at detailed / specific prompts.
Fine tuning 3.5 helps massively though. Especially for things like using it as a general classifier.
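If it helps, the fine-tuning flow is roughly the sketch below (file name and training data are illustrative; the job is created with the standard OpenAI SDK calls):

    # Rough sketch of fine-tuning gpt-3.5-turbo as a classifier.
    # train.jsonl holds one chat example per line, e.g.:
    # {"messages": [{"role": "system", "content": "Classify as spam or not_spam."},
    #               {"role": "user", "content": "WIN A FREE CRUISE"},
    #               {"role": "assistant", "content": "spam"}]}
    from openai import OpenAI

    client = OpenAI()

    training_file = client.files.create(
        file=open("train.jsonl", "rb"),
        purpose="fine-tune",
    )

    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-3.5-turbo",
    )
    print(job.id)  # poll this job; the resulting fine-tuned model id is used like any other model name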
4 is still better than 3.5, enough to justify the (low) cost. Most recently, 3.5 couldn't identify that February 29th is a valid date this year, but GPT-4 does.
Also, we're not just paying for GPT-4. The paid version comes with image input, so you can upload photos of log dumps or diagrams. It also does some image manipulation for me and passes me the code.
There's also DALL-E 3, but it's more of a bonus than something I'd pay for. PDF is cool too, but it's not the only service out there.
I'd likely have unsubscribed if it were only GPT-4, because the API version is better.
No, the hours it saves per month are worth a lot more than $20. It really depends on your use case, personally I find it superior to any other model I've tried for coding, debugging and troubleshooting server issues.
If you're talking about instability / outages like the sibling comment is, I can't help there, but if you're running into laziness (I never use the standard one for coding), I spent a while on this custom GPT and it works well.
Yeah, the status site lies. There's instability in the UI almost every 2nd or 3rd day where it just won't load and they almost never mention it.
It still saves me tons of time, especially as a non-JS developer who now has to write JS somewhat regularly, and I suspect they'll get this stuff worked out. There are worse problems to have than "we face such high demand our servers regularly melt down."
Funny that you posted this, I canceled it an hour ago.
If I need it, I'll use the GPT-4 API. Why did I cancel it?
It can't think anymore, it is arrogantly overconfident, it can't detect its own mistakes anymore, and it has goldfish memory. For the record, it was brilliant and the best LLM ever, but now it feels like GPT-2 levels of quality. It all started two weeks ago, when ChatGPT got silently updated (https://chat.openai.com/share/512002b1-ceb3-48b5-9a29-d44b63...). At the start of that conversation it is decent, but towards the end you can see it turn to gibberish (when the update happened). After creating a new chat, the quality seriously went down. I am looking for replacements. Not sure where everyone went; what are people using nowadays?
The fact that their now 1.5k token system prompt is forced into every response makes it not worth it, even though it's "unlimited". API makes a lot more sense for most purposes.
If people used it for real things instead of trying to assess its biases, less of this kind of thing would be needed. I am not particularly concerned about the political views of a CSV file.
Yeah, the main issue is that if they are charging per token, it's essentially them adding an extra fee per response, like a gas fee in Ethereum. But thinking about it as an API is really strange: I don't expect an API to do anything but respond to my query, yet LLMs act like an API that has to explain why it did (or didn't) do what you asked every time.
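Back-of-the-envelope on that "extra fee": a fixed ~1.5k-token system prompt sent with every request adds a constant input-token cost on top of whatever you actually ask. The price below is an assumed number purely for illustration, not OpenAI's actual rate:

    # Illustrative overhead of a ~1.5k-token system prompt per request.
    # PRICE_PER_1K_INPUT_TOKENS is an assumption, not a real published price.
    PRICE_PER_1K_INPUT_TOKENS = 0.01   # assumed USD
    SYSTEM_PROMPT_TOKENS = 1500

    overhead_per_request = SYSTEM_PROMPT_TOKENS / 1000 * PRICE_PER_1K_INPUT_TOKENS
    print(f"~${overhead_per_request:.3f} of input tokens per request, before you ask anything")
    print(f"~${overhead_per_request * 1000:.2f} across 1,000 requests")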
I paid for a subscription today, cancelled it today, and was refunded. It was freezing non-stop; it kept adding color to my black-and-white line art and then told me it had no control over that; then it was telling me I had run a single prompt too many times. I said it kept getting it wrong, that's why, and it would do it again, but it was slowing me down every time it refused my commands. The final straw was when it shut down, said I'd used too many requests, and told me to come back near midnight the next day... so I'd waste a whole day of not being able to use it because I'd be in bed by midnight. If I'm paying, I want to be able to use the thing for more than 20 images, and I don't want to have to argue with the chatbot about what I'm doing. It should have no say or thoughts about how many times I've run a prompt.
We switched to LibreChat [0]. It's a great app if you don't use GPT-4 often enough to hit $20 worth of tokens, and it also supports plugins. If you use more than $20, then just stay with GPT Plus.
I started getting too many responses that were clearly incorrect. I'd point out the correction, to which the response was, "You are indeed correct [rephrases the answer correctly]."
Okay, great, but what about all the responses where I don't actually know the topic enough to know it's incorrect?
No, it is labeled as GPT-4, but last time I checked it was using GPT-3.5 under the hood. I asked about something recent that GPT-4 should know (can't remember what exactly, maybe about Stable Diffusion), asked it not to use web search, and it answered like GPT-3.5.
Yes. I asked it to write a 50-word description of some text I gave it; it wrote a single 10-word sentence. I told it that was wrong and to do it again, this time writing a 50-word description. It failed again. On the 5th attempt I did it myself. This is a basic example. I've been using ChatGPT for over a year and have loved it up until now, but it feels like it was completely lobotomized. Half of every response is it repeating your question or prompt back to you.
I'm considering switching to Gemini when the mobile app is available in my country and the web app gets better.
Lately ChatGPT's confidence is really high and it hallucinates in the middle of really basic technical stuff. It will say what a command should be for something really mundane, like the `ip` command in Linux, and then sprinkle in stuff that doesn't exist in the middle... you start to lose confidence.
I think LLMs are better than search engines but if I have to fact check everything, I'll switch to another LLM or go back to a search engine.
I don't know what has changed in ChatGPT but it's worse lately.
I use a fork of https://github.com/Krivich/GPT-Over-API. I edited it to support recent models and added cost estimation that keeps track of the money spent across all requests. For most tasks GPT-3.5 is fine, but for more complex or recent-data-related tasks, GPT-4 performs better.
Why this over some fancy web UI? Well, this can be hosted locally and will not steal your OpenAI key, and it lets you set max_tokens and select history items on every request.
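The cost tracking is conceptually simple, roughly like this sketch: the chat completions response includes a usage object with token counts, and you multiply by your per-1K prices (the prices here are placeholders, not the fork's actual table):

    # Sketch of per-request cost tracking from the API's usage counters.
    # PRICES are assumed placeholder values per 1K tokens.
    from openai import OpenAI

    client = OpenAI()

    PRICES = {
        "gpt-3.5-turbo": {"in": 0.0005, "out": 0.0015},
        "gpt-4": {"in": 0.03, "out": 0.06},
    }

    total_spent = 0.0

    def ask(model: str, prompt: str) -> str:
        global total_spent
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        usage = resp.usage
        price = PRICES[model]
        total_spent += (
            (usage.prompt_tokens / 1000) * price["in"]
            + (usage.completion_tokens / 1000) * price["out"]
        )
        return resp.choices[0].message.content

    print(ask("gpt-3.5-turbo", "Summarize why token accounting matters in one sentence."))
    print(f"Running total: ${total_spent:.4f}")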
I canceled mine and I now pay for Gemini. Only thing I really use it for is brainstorming and coding and it’s better for me at both those by a long shot.
How are people using GPT-4 via the API? I signed up and loaded some money, but I still can't use GPT-4 through the API, only 3.5-turbo.
I still find ChatGPT 4 well worth the money. The coding is way better than 3.5. I wonder what their system prompt is; I don't get as good results from the API (maybe I am doing it wrong).
To avoid both instability and strict limitations, you can utilize the ChatGPT API. By adding the API key into clients like MindMac[0], you will gain access to a pleasant UI with numerous additional features.
Many of my customers canceled their ChatGPT subscription and switched 100% to API.
If you want to use GPTs or to generate images with Dall-E then ChatGPT Plus is a no brainer I guess.
But if your focus is on GPT-4, then I highly recommend using the API instead.
There are multiple pros of using an API Key:
- You pay for what you use. I've been using an API key exclusively and most of the time it costs me around $5-$10 a month
- Your data is not used for training. This is important for a privacy-minded user like me. (You can disable training in ChatGPT too, but you will lose chat history)
- No message limit. There are rate limits to prevent abuse, but generally you do not have the message cap that ChatGPT has
- You can choose previous GPT models, or other open-source models via services like OpenRouter
Depending on the application, you can also get these:
- Access to multiple AI services: OpenAI, Azure or OpenRouter
- Local LLMs via Ollama etc.
- Build custom AI workflows
- Voice search & text-to-speech etc.
- Deeper integrations with other apps & services
There are also a few cons:
- No GPTs support yet
- If you use DALL-E a lot, then ChatGPT Plus is more affordable. Generating images via the DALL-E API can be quite expensive
Edit: Some tips when using an API Key:
- You pay for the tokens used (basically the length of your question and the AI's answer). The price per chat message is not expensive, but usually you need to send the whole conversation back to OpenAI, which is what gets expensive. Make sure to pick an app that allows you to limit the chat context.
- Don't use GPT-4 for tasks that don't require deeper reasoning. I find that GPT-3.5-turbo is still very good at simple tasks like grammar fixes, improving your writing, translations...
- You can even use local LLMs if your machine can run Ollama
- Use different system prompts for different tasks: for example, I have a special prompt for coding tasks and a different system prompt for writing tasks. It usually gives much better results (a rough sketch combining this with the context-limiting tip is below).
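Here is that sketch, assuming the standard OpenAI SDK; the truncation rule, prompts, and model choice are just examples of the idea, not a recommendation:

    # Sketch combining two tips above: only resend recent turns of the conversation
    # (so you don't pay for the whole history every time) and swap system prompts per task.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPTS = {
        "coding": "You are a concise senior engineer. Answer with code first, prose second.",
        "writing": "You are an editor. Improve clarity and grammar without changing meaning.",
    }

    MAX_HISTORY_MESSAGES = 6  # arbitrary cap on context; tune for your budget

    def chat(task: str, history: list[dict], user_message: str) -> str:
        history.append({"role": "user", "content": user_message})
        trimmed = history[-MAX_HISTORY_MESSAGES:]  # only send the most recent turns
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # reach for gpt-4 only when the task needs deeper reasoning
            messages=[{"role": "system", "content": SYSTEM_PROMPTS[task]}, *trimmed],
        )
        answer = resp.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer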
Shameless plug: I've been building a native ChatGPT app for Mac called BoltAI[0], give it a try
I think that image generation, while very impressive at first, has already become a kind of cheap parlor trick. Everyone can easily spot images created by ChatGPT and it now just seems kind of cheesy.
Unstable due to what? Network error? Or the model itself providing bad results?
If it's about model output, I highly recommend custom GPTs. Taking 15 minutes to an hour to play around with a custom prompt to get it to work how you want is incredibly worth it.
I cancelled it about four or five months ago. I've brought up ChatGPT 3.5 like three times since then. Definitely don't use it enough to be worth paying for it right now.
The past few weeks ChatGPT's response hangs about 80% of the time for any query I make and I need to refresh the page. Has anyone else had this experience?
I am; it is extremely flaky. I use it mostly to make custom GPTs that read and analyze my own manuscripts. In the last few weeks I've had it refuse to read documents completely, read seemingly only the first pages, refuse to produce outlines or summaries, claim inability to parse what should be supported documents, and spew generic crap that is not based on the uploaded manuscript at all. It's starting to waste more of my time and energy than it saves.
I cancelled long ago. Tried it for 2 months, but it was a hassle to have to use it over a VPN since OpenAI is too afraid of Hong Kong and banned us. So, yeah, no. Not going to pay money for that.