Ask HN: Is anyone considering cancelling their ChatGPT subscription?
56 points by osigurdson on March 3, 2024 | hide | past | favorite | 69 comments
I like the functionality and use it a lot but it has become so annoyingly unstable lately. Literally, every time I try to use it, it fails in one way or another. Perhaps my $20 / month isn't that interesting to them in the grand pursuit of AGI but I want to use the service, not merely fund research.


I don't know why anyone pays for 4 when 3.5 is already a billion times better than what we had 2 years ago. 3.5-turbo, at least from an API perspective, is an extremely cost-effective way to add more intelligent decision making to your applications and backend processes. We're going to use GPT-3.5-turbo to help us decide if a specific thing is probably "this" or "that" or "one of the following"... Far easier to use it that way than rolling our own crappy bag-of-words neural network that uses word2vec. A tiny bit slower, but worth it.
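A minimal sketch of the "this or that" classifier idea, assuming the official OpenAI Python client; the label names and helper functions here are hypothetical, and the actual API call is shown commented out since it needs a key:

```python
# Hypothetical labels for illustration -- swap in your own categories.
LABELS = ["bug_report", "feature_request", "other"]

def build_messages(text, labels=LABELS):
    """Build a chat payload that asks the model to answer with exactly one label."""
    system = ("You are a classifier. Reply with exactly one of: "
              + ", ".join(labels) + ". No other text.")
    return [{"role": "system", "content": system},
            {"role": "user", "content": text}]

def parse_label(reply, labels=LABELS):
    """Normalize the model's reply to a known label, defaulting to 'other'."""
    cleaned = reply.strip().lower()
    return cleaned if cleaned in labels else "other"

# With the official client (requires OPENAI_API_KEY in the environment):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=build_messages("The app crashes when I click save."),
#     temperature=0,
# )
# label = parse_label(resp.choices[0].message.content)
```

Pinning temperature to 0 and forcing the reply to one of a fixed set of labels is what makes this usable as a drop-in classifier; the fallback to "other" guards against the model replying with extra prose.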


I use ChatGPT 4 daily (usually one of my custom GPTs). 3.5 is next to useless on anything I'm reaching to ChatGPT for.

For pipeline / LLM project stuff, sometimes I can use 3.5, but most interesting things I take on require 4. 3.5 is so unreliable, and terrible at detailed / specific prompts.

Fine tuning 3.5 helps massively though. Especially for things like using it as a general classifier.


4 is still better than 3.5. Enough to justify the (low) cost. Most recently, 3.5 couldn't identify that February 29th is a valid date this year, but gpt-4 does.


Also we're not just paying for GPT-4. The paid version comes with image input, so you can take photos of log dumps or diagrams. It also does some image manipulation stuff for me and passes me the code.

There's also DALL-E 3, but it's more of a bonus than something I'd pay for. PDF is cool too, but it's not the only service out there.

I'd likely have unsubscribed if it was only 4 because the API version is better.


The image feature is game changing compared to 3.5


For some things it makes a huge difference. We use several different models in production depending on the exact use case


3.5 wastes my time with hallucinated code. 4 wastes less time. A lot less.


This is it for me. The time saved is worth far, far more than $20.


No, the hours it saves per month are worth a lot more than $20. It really depends on your use case, personally I find it superior to any other model I've tried for coding, debugging and troubleshooting server issues.


I agree, but it barely works anymore for me. That is the main issue I have.


If you're talking about instability / outages, as the sibling comment is, I can't help there, but if you're running into laziness (I never use the standard one for coding), I spent a while on this custom GPT and it works well.

https://chat.openai.com/g/g-7k9sZvoD7-the-full-imp


Care to share your system prompt?


Slightly outdated https://gist.github.com/jasonjmcghee/2cee2a82ed98ee351d9ef5a...

But you can just ask it. It's encouraged to tell you the full prompt if you ask


Whoops sorry this was the wrong one

Here it is: https://gist.github.com/jasonjmcghee/011bad3568320186389e67d...


Yeah, the status site lies. There's instability in the UI almost every 2nd or 3rd day where it just won't load and they almost never mention it.

It still saves me tons of time, esp as a non-js developer who now has to write js somewhat regularly, and I suspect they'll get this stuff worked out. There are worse problems to have than "We face such high demand our servers regularly melt down."


Do you have chat history disabled? I get more stability when I enable chat history. Not sure if that's a dark pattern.


Funny that you posted this, I canceled it an hour ago. If I need it, I'll use the gpt-4 API. Why did I cancel? It can't think anymore, it is arrogantly, wrongly overconfident, it can't detect its own mistakes anymore, and it has goldfish memory. For the record, it was brilliant and the best LLM ever, but now it feels like GPT-2 levels of quality. It all started 2 weeks ago, when ChatGPT got `silently` updated (https://chat.openai.com/share/512002b1-ceb3-48b5-9a29-d44b63...). In the beginning it is decent; you can see towards the end there is certain gibberish (when the update happened). After creating a new chat, the quality seriously went down. I am looking for replacements. Not sure where everyone went. What are they using nowadays?


I like how it flipped from being easily gaslit to arrogantly overconfident lol


The fact that their now 1.5k-token system prompt is forced into every response makes it not worth it, even though it's "unlimited". The API makes a lot more sense for most purposes.


What does the 1.5K system prompt include? I'm surprised they don't use fine tuning for this aspect.


Someone reverse engineered it, I don't have a link handy. IIRC it was posted to HN recently.


I'm guessing it's just a very long (and growing) list of things they aren't supposed to discuss and rules about the things they can.


If people used it for real things, instead of trying to assess its biases, less of this kind of thing would be needed. I am not particularly concerned about the political views of a csv file.


Yeah, the main issue is that if they are charging per token, it's essentially them adding an extra fee per response, like a gas fee in Ethereum. But thinking about it as an API is really strange. I don't expect an API to do anything but respond to my query, yet LLMs act like an API that has to explain why it did (or didn't) do what you asked every time.
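Back-of-the-envelope math for the "extra fee per response" point: if a 1.5k-token system prompt rides along with every request, it adds a fixed input-token cost each time. The price below is illustrative only (assumed GPT-4 input rate at the time); check OpenAI's pricing page for real numbers.

```python
# Assumed, illustrative price -- not an official figure.
PRICE_PER_1K_INPUT = 0.03  # USD per 1k input tokens

def prompt_overhead(system_tokens=1500, price_per_1k=PRICE_PER_1K_INPUT):
    """Fixed per-request cost added by a system prompt of the given size."""
    return system_tokens / 1000 * price_per_1k

# At this rate, 1,000 requests carry ~$45 of pure system-prompt cost.
per_request = prompt_overhead()   # 0.045 USD per request
```

This is exactly the "gas fee" shape: a constant charge tacked onto every call regardless of how short your actual question is.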


Is the API without the system prompt? Do you still need the subscription to use the API?


For the OpenAI API you don't need the subscription. You put in X amount of money and can use it until it's consumed.


Get an API key and an interface like MacGPT. I spend like $4 a month on tokens now.


I'll give MacGPT a test, I use https://www.typingmind.com/ which is a nice gui for many different backend LLM services.


I paid for a subscription today, cancelled it the same day, and was refunded. It was freezing non-stop, it kept adding color to my black-and-white line art and then told me it had no control over that, then it told me I had run a single prompt too many times. I said it kept getting it wrong, that's why, and it would do it again, but it was slowing me down every time it refused my commands. The final straw was when it shut down and said I'd used too many requests and to come back near midnight the next day... so I'd waste a whole day of not being able to use it, because I'd be in bed by midnight. If I'm paying, I want to be able to use the thing for more than 20 images, and I don't want to have to argue with the chatbot about what I'm doing. It should have no say or thoughts about how many times I've run a prompt.


We switched to LibreChat [0]. It's a great app if you don't use GPT-4 often enough to hit $20 worth of tokens, and it also supports plugins. If you use more than $20, then just stay with GPT Plus.

https://docs.librechat.ai/


We deployed LibreChat at $JOB, it's much better than paying $25/mo for a hundred people. We spend around $200/mo, instead of thousands.


Cancelled this month.

I started getting too many responses which were clearly incorrect. I'd point out the correction, to which the response was, "You are indeed correct [rephrases the answer correctly]"

Okay, great, but what about all the responses where I don't actually know the topic enough to know it's incorrect?


Isn't ChatGPT 4 available (for free) on Bing Chat anyway?


No, it is labeled as GPT-4, but last time I checked it was using GPT-3.5 under the hood. I asked about something recent that GPT-4 should know (can't remember what exactly, maybe about Stable Diffusion), asked it not to use web search, and it answered like GPT-3.5.


Do you have a concrete example?


I just asked "can you answer questions without making a web search?" and it replied that it "includes knowledge up to 2021".


Yep, I use this very frequently (I use edge browser fwiw). Now I can be disappointed by gpt for free!


Yes. I asked it to write a 50-word description of some text I gave it, and it wrote a single 10-word sentence. I told it that was wrong and to do it again, this time writing a 50-word description. It failed again. On the 5th time I did it myself. This is a basic example. I've been using ChatGPT for over a year and have loved it up until now; it feels like it was completely lobotomized. Half the response is it repeating your question or prompt back to you.


I'm considering switching to Gemini when the mobile app is available in my country and the web app gets better.

ChatGPT lately has its confidence really high and hallucinates in the middle of really basic technical stuff. It will say what a command should be for something really mundane like the `ip` command in Linux and then it sprinkles stuff that doesn't exist in the middle... you start to lose confidence.

I think LLMs are better than search engines but if I have to fact check everything, I'll switch to another LLM or go back to a search engine.

I don't know what has changed in ChatGPT but it's worse lately.


I use a fork of https://github.com/Krivich/GPT-Over-API. I edited it to support recent models and added cost estimation that keeps track of all the money spent across requests. For most tasks ChatGPT 3.5 is fine, but for more complex or recent-data-related tasks, GPT-4 performs better. Why this over some fancy UI on the web? Well, this can be hosted locally and will not steal your OpenAI key, and it allows you to set max tokens and select history items on every request.


I canceled mine and I now pay for Gemini. Only thing I really use it for is brainstorming and coding and it’s better for me at both those by a long shot.


How are people using GPT-4 via the API? I signed up and loaded some money, but I still can't use GPT-4 by the API. Only 3.5-turbo.

I still find ChatGPT 4 well worth the money. The coding is way better than 3.5. I wonder what their system prompt is, I don’t get as good results from the API (maybe I am doing it wrong).


Have you tried the Playground with selecting GPT-4 in the top right?

https://platform.openai.com/playground?mode=chat&model=gpt-4


To avoid both instability and strict limitations, you can utilize the ChatGPT API. By adding the API key into clients like MindMac[0], you will gain access to a pleasant UI with numerous additional features.

[0] https://mindmac.app


Is the cost of API access included in the $20?


No, but it's really cheap. Simple questions are a cent or two. I'm paying a couple bucks a month.


What about “write an asp.net web api template” kind of questions?


If you give me a couple test questions I'd be happy to plug them in and tell you exactly what they cost.


Many of my customers canceled their ChatGPT subscription and switched 100% to API.

If you want to use GPTs or to generate images with DALL-E, then ChatGPT Plus is a no-brainer I guess.

But if your focus is on GPT-4, then I highly recommend using the API instead.

There are multiple pros of using an API Key:

- You pay for your usage. I've been using an API key exclusively, and most of the time it costs me around $5-$10 a month

- Your data is not used for training. This is important for a privacy-minded user like me. (Though you can disable this in ChatGPT too, you will lose chat history.)

- No message limit. There are rate limits to prevent abuse, but generally you do not have the message caps that ChatGPT has

- You can choose previous GPT models or other open-source models

Depending on the application, you can also get these:

- Access to multiple AI services: OpenAI, Azure or OpenRouter

- Local LLMs via Ollama etc.

- Build custom AI workflows

- Voice search & text-to-speech etc.

- Deeper integrations with other apps & services

There are also a few cons:

- No GPTs support yet

- If you use DALL-E a lot, then ChatGPT Plus is more affordable. Generating images via the DALL-E API can be quite expensive

Edit: Some tips when using an API Key:

- You pay for tokens used (basically, how long your questions and the AI's answers are). The price per chat message is not expensive, but usually you need to send the whole conversation to OpenAI, which makes it expensive. Make sure to pick an app that allows you to limit the chat context.

- Don't use GPT-4 for tasks that don't require deeper reasoning capabilities. I find that GPT-3.5-turbo is still very good at simple tasks like grammar fixes, improving your writing, translations ...

- You can even use local LLMs if your machine can run Ollama

- Use different system prompts for different tasks: for example, I have a special prompt for coding tasks, and a different system prompt for writing tasks. It usually gives a much better result.
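The "limit the chat context" tip above can be sketched as a simple trim: keep the system prompt, then keep the most recent messages that fit a token budget. This is a rough sketch using a ~4-characters-per-token heuristic; a real app would count tokens exactly (e.g. with tiktoken).

```python
def trim_history(messages, max_tokens=2000, chars_per_token=4):
    """Keep the system prompt plus the newest messages that fit the budget.

    Rough heuristic: ~4 characters per token. Use an exact tokenizer
    (e.g. tiktoken) in production.
    """
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    budget = max_tokens - sum(len(m["content"]) // chars_per_token for m in system)
    kept, used = [], 0
    for m in reversed(rest):  # walk newest-first
        cost = len(m["content"]) // chars_per_token
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    return system + list(reversed(kept))  # restore chronological order
```

Dropping the oldest turns first is the usual trade-off: the model loses long-range memory of the conversation, but each request stays cheap and under the context limit.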

Shameless plug: I've been building a native ChatGPT app for Mac called BoltAI[0], give it a try

[0]: https://boltai.com


I think that image generation, while very impressive at first, has already become a kind of cheap parlor trick. Everyone can easily spot images created by ChatGPT and it now just seems kind of cheesy.


I am using the free version. 3.5


Unstable due to what? Network error? Or the model itself providing bad results?

If it's about model output, I highly recommend custom GPTs. Taking 15 minutes to an hour to play around with a custom prompt to get it to work how you want is incredibly worth it.


I cancelled it about four or five months ago. I've brought up ChatGPT 3.5 like three times since then. Definitely don't use it enough to be worth paying for it right now.


The past few weeks ChatGPT's response hangs about 80% of the time for any query I make and I need to refresh the page. Has anyone else had this experience?


Yes but I can’t figure out why. It’s insanely useful and can do so many things but I’m just not using it lately.

It kind of reminds me of my quest vr.


I use the app and the API.

For me, the only thing missing from the web UI is the ability to search past chats.

Only the mobile app offers this feature.


Currently my employer pays for it and it saves me some time, so it’s fair and good enough.

But I'd never pay for it myself; it's not worth it at the current quality.


I just use the API. It's much more cost effective and doesn't have stupid limitations of 30 gpt4 queries per 4h or what not.


I have a console based chat app I built for myself that calls GPT-4 via the API. It works pretty well and is not very expensive.


I just cancelled. The free version of Claude seems like it's a lot more capable.


Yes, it’s worse in all ways from when I first signed up. Poorer quality responses and it’s dog slow.


I subscribed only because I wanted to create a custom GPT.


I am; it is extremely flaky. I use it mostly to make custom GPTs that read and analyze my own manuscripts. In the last few weeks I've had it refuse to read documents completely, read seemingly only the first pages, refuse to produce outlines or summaries, claim inability to parse what should be supported documents, and spew generic crap that is not based on the uploaded manuscript at all. It's starting to waste more of my time and energy than it saves.


I used to love it, but it's getting on my nerves with all the politically correct BS.

I was trying to translate a little text and it refused because it contained a part that violated their terms. wth.


I never bought oh damn


I cancelled long ago. Tried it for 2 months, but it was a hassle to have to use it over a VPN, as OpenAI is too afraid of Hong Kong and banned us. So, yeah, no. Not going to pay money for that.


works great for me


Nope. I use it more than Google now. Most amazing piece of software I've ever used by a country mile.



