
> I find it a terrible business practice to be completely opaque and vague about limits. Even worse, the limits seem to be dynamic and change all the time.

Here are some things I've noticed about this, at least in the "free" tier web models since that's all I typically need.

* ChatGPT has never denied me a response, but I notice the output slows down during periods of increased demand. I'd rather have a good-quality response that takes longer than no response at all. After reaching the limit, the model quality is reduced and a message indicates when you can resume using the better model.

* Claude will pop up messages like "due to unexpected demand..." and will either downgrade to Haiku or reject the request altogether. I've even seen Claude yank a response back: it will be midway through a function when the output disappears and it asks you to try again later. Like ChatGPT, there's eventually a message about your quota freeing up at a later time.

* Copilot, at least the free tier found on Bing, tells you how many responses you can expect in the form of a "1/20" status text. I rarely use Copilot or Bing, but it demonstrates that it's totally possible to show this kind of status to the user - ChatGPT and Claude just prefer to slow down, drop model size, or reject the request.

It makes sense that the limits are dynamic, though. The services likely have a somewhat fixed capacity while demand ebbs and flows, so expanding and contracting availability on free tiers, and perhaps paid tiers as well, is a reasonable response.
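To make that concrete, a provider could scale a free-tier quota with spare capacity. This is purely a sketch of the idea; the class, numbers, and threshold names are my own assumptions, not anything the providers have published.

    from dataclasses import dataclass

    @dataclass
    class DynamicQuota:
        """Illustrative free-tier limiter: the quota shrinks as fleet load rises."""
        base_limit: int = 20   # requests per window when the fleet is mostly idle
        min_limit: int = 2     # floor so free users aren't cut off entirely

        def current_limit(self, utilization: float) -> int:
            """utilization: fraction of fleet capacity in use, between 0.0 and 1.0."""
            spare = max(0.0, 1.0 - utilization)
            return max(self.min_limit, round(self.base_limit * spare))

    quota = DynamicQuota()
    print(quota.current_limit(0.30))  # plenty of headroom -> 14 requests
    print(quota.current_limit(0.95))  # near saturation -> floor of 2

Under a scheme like this, the limit a user sees genuinely changes from hour to hour, which would explain why the services never quote a single fixed number.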



I believe the "1/20" indicator on Copilot was added back when it was unhinged to try to prevent users from getting it to act up, and it has been removed in the latest redesign



