We use our own implementation of function calling orchestrated by chain-of-thought. The CoT gives us more granular control over the function calls than zero-shotting and hoping the LLM selects the right functions.
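Roughly, the pattern looks like this (a simplified sketch, not our production code; the tool names and the call_llm stub are placeholders for a real model call):

    import json

    # Simplified sketch: a registry of callable tools with illustrative names.
    TOOLS = {
        "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
        "refund_order": lambda order_id: {"order_id": order_id, "refunded": True},
    }

    def call_llm(prompt: str) -> str:
        # Placeholder for a real model call; returns a canned CoT + call for the demo.
        return ("The user wants the status of order 42, so lookup_order is the right tool.\n"
                'CALL {"name": "lookup_order", "args": {"order_id": 42}}')

    def run(question: str):
        prompt = (
            "Available functions: lookup_order(order_id), refund_order(order_id).\n"
            "Think step by step about which function answers the question, "
            "then finish with one line of the form:\n"
            'CALL {"name": "<function>", "args": {...}}\n\n'
            f"Question: {question}"
        )
        reply = call_llm(prompt)
        # The reasoning lines are for the model's benefit; only the final CALL line is executed.
        call_line = next(line for line in reply.splitlines() if line.startswith("CALL "))
        call = json.loads(call_line[len("CALL "):])
        if call["name"] not in TOOLS:  # validate before executing instead of hoping it's right
            raise ValueError(f"Unknown function: {call['name']}")
        return TOOLS[call["name"]](**call["args"])

    print(run("What's the status of order 42?"))

The explicit reasoning step plus a constrained output format is what makes the call easy to validate before it runs.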
Do you prompt chain-of-thought, use a model trained on chain-of-thought, or use o1?
I ask because I'm interested in trying out function calling without the problem you mentioned, where zero-shot sometimes gets it wrong and you then have to validate the call and re-send it with a correction prompt if it's invalid.
Agree with you, and we're definitely trying to thread the needle!
We're generating the SQL to answer natural language questions, so folks can just get answers and results tables if that's all they need, with the option for power users to fiddle with the SQL either directly or via a query editor GUI.
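The core loop is roughly this shape (a stripped-down sketch, not our actual pipeline; the schema, prompt wording, and call_llm stub are stand-ins):

    import sqlite3

    SCHEMA = "CREATE TABLE orders (id INTEGER, customer TEXT, total REAL, placed_at TEXT)"

    def call_llm(prompt: str) -> str:
        # Placeholder for a real model call; returns a canned query for the demo.
        return ("SELECT customer, SUM(total) AS revenue "
                "FROM orders GROUP BY customer ORDER BY revenue DESC")

    def answer(question: str, db: sqlite3.Connection):
        prompt = (f"Schema:\n{SCHEMA}\n\n"
                  f"Write one SQLite SELECT statement that answers: {question}")
        sql = call_llm(prompt)
        if not sql.lstrip().upper().startswith("SELECT"):  # crude read-only guard
            raise ValueError("Refusing to run non-SELECT SQL")
        return sql, db.execute(sql).fetchall()

    db = sqlite3.connect(":memory:")
    db.execute(SCHEMA)
    db.executemany("INSERT INTO orders VALUES (?, ?, ?, ?)",
                   [(1, "Acme", 120.0, "2024-01-02"),
                    (2, "Acme", 80.0, "2024-02-01"),
                    (3, "Globex", 200.0, "2024-01-15")])
    sql, rows = answer("Which customers spent the most?", db)
    print(sql)    # power users can inspect or edit this
    print(rows)   # everyone else just gets the results table

Returning both the SQL and the results is what lets casual users stop at the answer while power users drop into the query editor.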
There are a ton of use cases for working with unstructured and semi-structured data, and that's coming down the pipe!
This post was briefly flagged after that initial comment, which is what dioptre was referring to, not AdGuard. My phrasing was too ambiguous; hope that clears it up!
I'm sorry, I missed the "delete" window. But may I ask how a comment here pointing out that the post was blacklisted, posted after it had already been blacklisted, could be the reason for it being blacklisted?
We thought there might be basic word filters that tripped the algorithm. “Scam” being the offending word here. (Turns out it was something else that tripped a flame war setting. Probably a comment that later got flagged by the mods.)
Anyway, fun fact: it turns out our domain used to be a scam erectile pills website!
Wherever possible, the chatbot output is deterministic, in that to answer a query we generate and run code or SQL against your data in real time. Our LLM orchestrates that, and finally evaluates whether the output correctly and adequately answers the question.
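That final evaluation step looks roughly like this (a simplified sketch; the prompt wording and call_llm stub are placeholders, not our actual implementation):

    def call_llm(prompt: str) -> str:
        # Placeholder for a real model call; canned verdict for the demo.
        return "YES - the table shows total spend per customer, which answers the question."

    def output_answers_question(question: str, sql: str, rows: list) -> bool:
        # Ask the model to judge whether the executed result actually answers the question.
        verdict = call_llm(
            f"Question: {question}\n"
            f"SQL that was run: {sql}\n"
            f"First rows of the result: {rows[:5]}\n"
            "Does this correctly and adequately answer the question? Reply YES or NO, then explain."
        )
        return verdict.strip().upper().startswith("YES")

    print(output_answers_question(
        "Which customers spent the most?",
        "SELECT customer, SUM(total) FROM orders GROUP BY customer",
        [("Acme", 200.0), ("Globex", 200.0)],
    ))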
We also extensively use synthetic data and examples to guide and constrain our models.
Another way we're ensuring good-quality output is to ensure good-quality _input_ -- by enriching the detail and specificity of the user's question, and asking the user to disambiguate when we determine the question is too broad.
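That pre-check is roughly this shape (again a simplified sketch, with made-up prompt wording and a stubbed model call):

    def call_llm(prompt: str) -> str:
        # Placeholder for a real model call; canned response for the demo.
        return ("TOO_BROAD: Which measure do you mean by 'doing well': "
                "revenue, order count, or margin?")

    def enrich_or_clarify(question: str):
        reply = call_llm(
            "If the question below is specific enough to answer with a single query, "
            "rewrite it with any implicit detail made explicit, prefixed with OK:. "
            "Otherwise reply with a clarifying question for the user, prefixed with TOO_BROAD:.\n\n"
            f"Question: {question}"
        )
        tag, _, body = reply.partition(":")
        return tag.strip(), body.strip()

    tag, body = enrich_or_clarify("Which customers are doing well?")
    if tag == "TOO_BROAD":
        print("Ask the user:", body)        # bounce a disambiguating question back
    else:
        print("Enriched question:", body)   # proceed to SQL generation with the sharper question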
It does considerably more than (poorly) managing the context window. It also (poorly) enables persistent document storage, knowledge retrieval, function calling and code execution.
It's more about UX, to reduce the perceived delay. LLMs generate their responses token by token, so they can be streamed; but if you wait until the LLM has finished inference before showing anything, the user is sitting around twiddling their thumbs.
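The fix is just to render tokens as they arrive; a toy sketch of the idea (the generator here fakes token-by-token output rather than calling a real model):

    import sys
    import time

    def fake_token_stream(text: str):
        # Stand-in for a streaming LLM API: yields the response one token at a time.
        for token in text.split():
            time.sleep(0.05)   # simulated per-token inference latency
            yield token + " "

    # Render tokens as they arrive instead of waiting for the full response,
    # so the user sees progress immediately even though total latency is unchanged.
    for token in fake_token_stream("Here is the answer, streamed while it is being generated."):
        sys.stdout.write(token)
        sys.stdout.flush()
    print()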
We're hiring iOS and Android developers to do great things at the bleeding edge of wearable technology and healthcare IT that will positively impact lives in the real world. Awesome equity for awesome devs.
We leverage smart glasses to connect surgeons in the operating room with their data and their teams. You'll be among our startup's first employees, so you'll have a big part to play in how this company gets built. We've just closed our seed funding round and finished a session at Stanford's StartX accelerator, so this is an awesome time to jump in now that we have momentum and will be growing quickly.
Looking for an awesome Senior Web Application Developer (C#, OO JavaScript, ExtJS, NoSQL, web services) and a QA & Support Technician (a new position, so help us design the role!).
We're building a CRM web app for the Recruitment Industry, from which we will ultimately extract an application platform. If you like working with cutting-edge tech, passionate geeks, and cake, come talk to us! Deets are here: http://beastcrm.com/about/jobs/