Hacker News

Yes, definitely. We wrote a multi-step reasoning engine similar to o1. Prompting techniques vary depending on the route and the depth in the chain of response, and the LLM router was pretty important to get right. On input: synthetic question enhancement has yielded positive results. Asking the right question is subtly important, and we find it is sometimes better to slow a user down and help them improve their question rather than relying on multi-step reasoning from the get-go. On output: checking answers via mixture-of-experts and other techniques is often necessary -- "truthful but wrong" answers are where some tricky gotchas lie. Eval is hard.
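To make "route and depth" concrete, here's a minimal sketch of what a router that maps a question to a reasoning route and an allowed chain depth could look like. Everything here is hypothetical illustration -- route names, keywords, and the keyword-overlap scoring are stand-ins; a production router would typically score with a classifier or an LLM call rather than keyword matching.

```python
# Hypothetical LLM router sketch: score an incoming question against
# route descriptors and pick a route, which fixes the prompting strategy
# and the maximum reasoning depth for the chain.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    keywords: set        # trigger words (stand-in for a learned scorer)
    max_depth: int       # how many reasoning steps this route allows

ROUTES = [
    Route("lookup",      {"what", "when", "who"},        max_depth=1),
    Route("analysis",    {"why", "compare", "trend"},    max_depth=4),
    Route("computation", {"sum", "average", "forecast"}, max_depth=3),
]

def route(question: str) -> Route:
    tokens = set(question.lower().split())
    # Score each route by keyword overlap; ambiguous questions fall back
    # to the deepest route, since they tend to need the most steps.
    scored = [(len(tokens & r.keywords), r) for r in ROUTES]
    best_score, best = max(scored, key=lambda pair: pair[0])
    return best if best_score > 0 else max(ROUTES, key=lambda r: r.max_depth)
```

The same shape extends to the input side: a "question enhancement" route can bounce the question back to the user instead of dispatching any reasoning chain at all.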

One under-discussed property of fast databases is that they are especially necessary in an agent-centric world. E.g. if an agent is running recursive SQL to pathfind toward an answer, each query has to execute with minimal latency or you break the user experience. Our interface looks like a spreadsheet, and users mentally benchmark it against spreadsheet latency: they won't accept the 20-second query response times they might get from their data warehouse.
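A toy version of "recursive SQL to pathfind" is a recursive CTE walking an edge table to find everything reachable from a starting node. The schema and data below are made up for illustration; the point is that an agent issues many queries like this per answer, so per-query latency compounds directly into user-visible wait time.

```python
# Sketch: recursive CTE pathfinding over a hypothetical edge table,
# timed per query because agent loops multiply this latency.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE edges (src TEXT, dst TEXT);
    INSERT INTO edges VALUES
        ('orders', 'customers'),
        ('orders', 'products'),
        ('customers', 'regions');
""")

start = time.perf_counter()
rows = conn.execute("""
    WITH RECURSIVE reachable(node) AS (
        SELECT 'orders'                          -- starting node
        UNION
        SELECT e.dst                             -- follow outgoing edges
        FROM edges e JOIN reachable r ON e.src = r.node
    )
    SELECT node FROM reachable;
""").fetchall()
elapsed_ms = (time.perf_counter() - start) * 1000

reachable = {node for (node,) in rows}
```

If an agent runs, say, ten such hops to reach an answer, a 20-second warehouse round trip per hop is minutes of wall-clock time, while a fast engine keeps the whole chain inside spreadsheet-feeling latency.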



