I haven’t noticed much degradation, and I ask it fairly technical questions related to code and physics topics. What I have noticed, though, is needing to qualify my statements more to avoid having it go down the wrong chain of conversation and get stuck in a suboptimal context. By this I mean having to say things like “I have installed <package XYZ> per the documentation and am familiar with the API” or “assume the reader is already familiar with <foundational topic>, do not simplify your explanations.” Otherwise I get stuck in a situation where it talks to me like a kindergartener or spends 3-4 additional rounds of querying just getting through the basic boilerplate. Maybe this was always the case, and posts like these are just subconsciously influencing me.
Plugins like AskYourPDF and Wolfram have been a huge boost though. I can feed it whole textbook chapters and have it summarize sections, solve problems, generate new exercises, and generally help with learning in ways previously unavailable.
I have a theory about this. I think what we are witnessing is a GPT-4 conversation being silently switched over to GPT-3.5.
I've seen it mentioned somewhere that due to server load the model could be switched on you. I have had similar experiences. When this happens I start a new conversation and upload the full contents of my code or previous conversation.
My suspicion too. It seems like ClosedAI switches from GPT-4 to a faster model (e.g., `gpt-3.5-turbo`) for certain tasks, but they do it without telling the user.
Greg Brockman from OpenAI said at a round table chat a few weeks ago that ChatGPT has been heavily quantized since the end of Q1/early Q2 2023: https://www.reddit.com/r/mlscaling/comments/146rgq2/chatgpt_... ; I am still looking for the source document / source quote from which I read it, but the big switch from 'not so stupid' to 'pretty damn stupid' occurred with the 1 March 2023 model switch.
That's around the time that I noticed `gpt-3.5-turbo` becoming lower quality, whether in the UI (ChatGPT) or programmatically via `gpt-3.5-turbo` API calls.
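One way to guard against silent model swaps on the API side is to pin a dated snapshot instead of the floating alias. A minimal sketch (snapshot name as it existed in mid-2023; the request payload is only constructed here, not actually sent to any endpoint):

```python
def build_request(prompt: str, pinned: bool = True) -> dict:
    """Build a chat-completion request payload (not sent anywhere here)."""
    # "gpt-3.5-turbo-0301" was the frozen March 2023 snapshot; the bare
    # alias "gpt-3.5-turbo" floats to whatever revision OpenAI currently serves.
    model = "gpt-3.5-turbo-0301" if pinned else "gpt-3.5-turbo"
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Pinned request keeps the dated snapshot name in the payload.
print(build_request("Explain quantization in one sentence.")["model"])
```

With the alias, you can't tell from the client side whether the weights behind it changed; with a dated snapshot, at least the deprecation is announced rather than silent.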
The 10-20x lighter-weight version of the models that OpenAI is running now - the heavily quantized version - lets them and Microsoft save on far-and-away their largest expense: cloud expenditure. I expect the AMD GPU announcement with OpenAI will come to fruition in the next few years, since all of these LLM companies depend on large piles of GPU compute to train their models and no one wants to be beholden to NVIDIA or any other single GPU manufacturer.
You see 20 of these posts a day and I have yet to see one that provides before and after prompts. Given ChatGPT saves your history, I don’t even understand why this is difficult.
The problem can't be articulated in a single prompt response. There are no cases where it fails so obviously that a skeptic would go "oh, I see." It's not like you ask for a Python script and it returns a recipe for grilled snake. It's more like watching someone develop Alzheimer's over time: they just start forgetting small things and become harder and harder to work with.
Which, to prove to you, one would have to show more of their history than they likely feel comfortable with. I'm the last person to excuse concealment of evidence but the nature of the proof you're asking for is invariably intrusive.
But what if ChatGPT always had this issue, and it's only now becoming apparent because people are using it more? That sounds more likely to me. Maybe it's not like watching someone develop Alzheimer's, but more like watching a sloppy musician perform. At first it's impressive, but the more you listen, the more you notice he's constantly missing notes, and the longer he performs, the more mistakes pile up.
When they only confirmed that the frozen models in the API hadn't changed, and said nothing about the web interface, that felt like an admission by omission that GPT-4 under premium subscriptions had been lobotomized by sparsification, quantization, or something similar.