> This feature is more like Google’s Deep Research, which basically goes off and does a whole lot of search and compute to produce something more like a full research report.
Of course. It is a response to their disastrous Operator demo, which did not justify the $200-per-month ChatGPT Pro subscription, with the release of DeepSeek making matters worse for them.
> This has nothing to do with open weight models like DeepSeek (note: DeepSeek, Llama, etc are NOT open source).
It obviously does. Even before they rushed this presentation, they made o3-mini available to ChatGPT free users, so this is a direct response to DeepSeek.
> This feature doesn’t just require the research on the model but also enormous compute. Plus anyone using such a feature for real work is not going to be using DeepSeek or whatever, but a product with trustworthy practices and guarantees.
It does nothing that Perplexity + DeepSeek-R1 can't already do.
So what is your point?