These marketing stunts from Perplexity made me stop using their product. To me, they signal that Perplexity doesn't believe in its own product, so there's no reason for me to believe in it either.
I've tried to use Perplexity after reading all of the hype, seeing it praised by so many VCs, and seeing it appear on so many different lists of essential AI tools.
Yet most of my Perplexity queries have produced poor results. It always feels like they optimized for minimizing latency and producing output that feels good instead of doing actual research. Most of the time it feels like the same quality of results I'd get from skimming the summaries at the top of a Google search page, if I didn't filter out the spammy sites.
The product could be more useful if it spent several minutes researching, but that would defeat the wow factor that I'm sure their product managers are prioritizing.
Perplexity had a business case for one hot minute there, before OAI, Anthropic and Google all added search to their models, but now that they have it, Perplexity doesn't have a reason to exist anymore. They're kind of the poster child for "if you don't have your own model, you're basically VC-funded market-fit research for the companies which do, who will go on to copy and crush you."
Even during ChatGPT's peak, when HN was buzzing with every other post describing how ChatGPT or some other LLM product had replaced Google for them, I could not honestly switch, or meaningfully reduce my Google usage.
Until Perplexity.
It was the AI product that actually reduced my Google usage. Even with AI mode directly built into Google homepage now, Perplexity is still better.
It has basically zero hallucination, each paragraph/entry is backed by a URL, and it has lower latency than any other LLM product.
I don't know why you find it bad. I use it daily, and for serious searches.
It has fundamentally changed the way I search the web/ask questions in the web.
3 minutes is too long for exploratory searches, where I'm not sure what I'm even looking for. And 3 minutes feels too short for deep research, where I'm expected to trust some complex result that I either don't know enough about myself (that's why I'm searching for it) or know well enough that the AI probably can't do anything I couldn't do within a couple of minutes anyway.
I think the sweet spot for AI results is around 10-30 seconds. It's fast enough that I'm willing to wait for the results even if I'm not sure I'm exploring the right topic. And it's also fast enough that even if I knew what to search for, it can give me summarized results faster than I could read on my own.
I remember when the hype first started around it, it was unusably slow and produced poor results. Granted, I haven't tried it lately to see if latency improved, but the gap between the hype and the state of the product at the time really turned me off.
I second this. Perplexity is the only AI I actually pay for. It absolutely excels at the kind of deep search into narrow domains where expertise is concentrated in forums and specialist sites. Things like mechanical work on obscure classic vehicles, vacuum tube electronics, company tax arcana. It's also very very good at those questions you sometimes wake up with, where something happened in the news six months ago and you think, "Whatever came of that?"
Its deep research and Pro modes are great at synthesizing thorough briefings on complex topics too, to get up to speed on a new client or job responsibility for example.
It's not a chatbot for me, it's a brilliant, tireless little research minion.
As always with any LLM, you should double-check its final, specific answers. It does occasionally hallucinate when information simply isn't available. Your research minion is just that: a minion. You have to supply the context. It's not a teacher or guru.
EDIT: the bottom line is, it came along at exactly the right time for me. Google's search results are pages of ads, and DuckDuckGo insists on showing page after page of content-farm blogspam for the types of topics I search for. It cuts right through all that crap for me.
> As always with any LLM you should double-check its final, specific answers. It does occasionally hallucinate when information simply isn't available.
It also sometimes completely botches it when information is available. For example, a while back someone cited Musk only scoring 730 on the math SAT as evidence that there is something wrong with the test.
I looked up Musk's age to figure out about when he would have taken the SAT then asked Perplexity what percentile a 760 would have been then. It gave me an answer that as far as I can tell was right (~90th).
I then wondered what my percentile was, so asked it what percentile 790 would have been when I took it. It told me it would have been 17.something, where that something had 5 digits.
That was obviously completely wrong because (1) there is no possible way it could have data that would justify giving an answer with 5 digits after the decimal point, and (2) the maximum possible score was 800 and scores were multiples of 10, so for 790 to have been the 17th percentile would mean that 83% of people who took the test scored a perfect 800.
I told it that this was clearly absurd.
It responded that I was completely right and said it was going to try again. On the retry it gave a reasonable answer that I knew from what I remembered was in the right ballpark and not given to ridiculous accuracy.
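The absurdity of that first answer can be checked in one line of arithmetic (the 17th-percentile figure is the one Perplexity reportedly gave; the score scale is the standard 200–800 SAT scale):

```python
# If a 790 (on a scale maxing out at 800, in steps of 10) were truly at
# the 17th percentile, then everyone above the 17th percentile -- that is,
# 100 - 17 = 83% of all test takers -- would have scored a perfect 800,
# since 800 is the only possible score above 790.
max_score = 800
claimed_percentile = 17  # the figure Perplexity reported
share_needing_perfect = 100 - claimed_percentile
print(f"{share_needing_perfect}% of takers would need a perfect {max_score}")
```

No real standardized test has 83% of takers at the maximum score, which is why the answer fails the smell test immediately.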
Couldn't agree more with this. A stock I hold suddenly started trending sharply upwards earlier in the year, and when I asked Perplexity to research why, it came back with a very detailed and well-cited explanation. It's far more efficient at distilling things down into a useful format than if I were to Google it myself.
Same, it outshines Gemini and ChatGPT and hallucinates far less. The tone is less eager too, making it feel more like a tool than an unpaid assistant.
"Remote material cooperation with evil" at best, which implicates virtually all human action. There is nothing immoral here. It's direct or formal cooperation that you need to worry about.
I don't disagree with the gist of their argument, but the fact that they try to whitewash an actual genocide [1] with "politics" is absurd.
1 - if anyone is confused, the UN convention on genocide explicitly lists taking children of an ethnic group to give them to another in the definition of genocide. Russia is quite openly and blatantly doing this.
My tax dollars are already funding genocide, so for like 10 cents to go towards Yandex a month, of which some fraction goes towards Russia's quixotic war effort (which is an international crime but not a genocide in intent or effect), is not something that's gonna keep me up at night. Almost every other purchase I make comes with harm roughly commensurate with that of Kagi. The damage to the environment Perplexity and Google (and Kagi) cause with unnecessary AI usage is a much bigger concern to me personally.
> which is an international crime but not a genocide in intent or effect)
Why not? Russia has kidnapped hundreds of thousands of children, gives them for adoption to Russians, and claims that Ukrainians are just confused Russians.
If it smells like a genocide, fits the definition of genocide... it's a genocide.
> The damage to the environment Perplexity and Google (and Kagi) cause with unnecessary AI usage is a much bigger concern to me personally.
Historically interesting note: Nazi Germany kidnapped thousands and thousands of Polish children and handed them to German parents to be raised as Germans. Their descendants usually don't even know about this.
Back to the topic at hand: you have to distinguish between material and formal cooperation with evil. I don't know the Kagi situation (if they're just indexing images for Yandex to improve search results, then I don't see how you have a real case here; even calling this remote cooperation with evil -- something that is generally impossible to avoid -- seems like a stretch). But let's say a company is doing something like making financial contributions to some organization doing something immoral. While you can boycott a company for that reason, you are not generally morally obligated to do so. And in practice, it usually has no effect. It's also unjust to saddle people with a burden of guilt they do not actually have. This is called rigorism.
Sorry, but this is not really enough of a concern to care about 2% of a $5 monthly fee going towards a company (Yandex) whose involvement with Russia's war seems iffy (Russia themselves fined the company for refusing to give user data to its state intelligence).
>Multiple things can be damaging at once.
Yes, but the point is that if I can tolerate some of the money Google or Amazon gets from me going to fund concentration camps abroad, in which camp residents receive a quarter of the calories per day (~250) that victims of Nazi concentration camps did, then I can tolerate this. Everyone has to draw a line somewhere, and I see no reason to draw it at Kagi but not Google/AWS/Microsoft/Apple/etc.
Both Kagi and Perplexity are customers of Brave, btw. See https://brave.com/api or just ask if you have questions. Will answer what I can for anyone curious.
Self-hosted SearXNG [1], pointed at the lot of them. All the results, none of the tracking, and some insight into which subjects are suppressed by which search engine.
I really enjoy Perplexity. I recommend taking advantage of one of the O2 resale deals out there so that it's like $7/yr instead of $240, and let the VCs eat the rest. I don't know of any better AI access deals out there. It's absurd and unsustainable.
These are consumer products that are basically commodities to all but the largest power users. If you loved their product, then this approach should make you ecstatic, as it's the only way they'll be able to survive as an independent.
OpenAI literally retired all their models, to the anger of people like you, because they know this is all basically a race to be the most familiar consumer assistant on a monthly subscription.
My reason is much more petty, but their refusal to let me sign in with either a password+2FA or a passkey, instead forcing me to open my email for a magic link, has pushed me away.
Magic links are an order of magnitude safer than passwords, and the majority of regular users will never set up 2FA, so this raises the baseline security for everyone.