I suspect they might transparently fall back too; Opus 4.5 has been really reasonable lately, except right after it launched, and also around any service interruptions or problems reported on status.claude.ai. Once those issues resolve, for a few hours the results feel very "Sonnet," and it starts making a lot more mistakes. When that happens, I'll usually just pause Claude and prompt Codex and Gemini with the same issue to see what comes out of the black hole... then a bit later, Claude mysteriously regains its wits.
I just assume it went to the bar, got wasted, and needed time to sober up!
They don't ever fall back to cheaper models silently.
What Anthropic does do is poke the model to tell you to go to bed if you use it too long ("long conversation reminder") which distracts it from actually answering.
Sometimes they do have associations with things like the day of the year, and might be lazier in some months than others.
If they're real slimeballs, they can justify it by saying: you see, we use speculative decoding, so we first use a smaller, faster model, and then the answer is enhanced by the larger model, blah blah... "for the best user experience."
I didn't believe such conspiracy theories, until one day I noticed Sonnet 4.5 (which I had been using for weeks to great success) perform much worse, very visibly so. A few hours later, Opus 4.5 was released.