Hacker News

When did people ever believe that model selection mattered when using Deep Research? The UI may be bad, but it was obvious from day one that it followed its own workflow.

Search within ChatGPT is far from obsolete. 4o + Search remains a significant advantage in both time and cost for real-time, single-step queries, e.g., "What is the capital of Texas?"




If you have not been reading every OpenAI blog post, you can't be blamed for thinking the model picker affects Deep Research, since the UI heavily implies that it does.


Hmm, I noticed it after two deep research tasks. No doubt the UI is bad, but it's surprising that folks here were confused for that long.


Single-step queries are far better handled by Kagi/Google search when you care about source quality, discovery, and good UX; anything beyond that is worth letting Deep Research do its thing in the background. I would go so far as to say that with Search + 4o you risk getting worse results than just asking the LLM directly - or at least that's been my experience.


YMMV as always, but I get the exact same answers from Kagi's Quick Answer and ChatGPT Search, sources included.



