Thanks - well, one interesting thing is that by including humans in the workflows, it's not just the AI deciding what to think. Yes, everything is deeply researched and fully cited - but where it gets powerful is humans in the loop adding their wisdom and experience as well.
The company is already in revenue with major global customers. We've been in stealth and kept a low profile, but we're venture funded: two years of engineering and a lot of IP. The system is already in commercial use for deep financial analysis, market intelligence, and strategic planning at global companies - one of the first enterprise-scale applications of massively multi-agentic AI to high-level knowledge work. A typical project uses 50 to 60 million tokens. This is not a chatbot.
It's not an OpenAI wrapper, in fact. We use multiple models, but there is a large amount of code above the models. Read the arXiv paper and you will understand more: https://arxiv.org/abs/2403.02164
You cannot do what we are doing merely by wrapping a model... models cannot do this on their own. It's a massively multi-agentic system that also includes humans.
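For readers wondering what "a massively multi-agentic system that also includes humans" looks like mechanically, here is a minimal sketch - purely illustrative, not our actual design - in which `call_model`, the agent roles, and `human_review` are all hypothetical stand-ins:

```python
# Minimal human-in-the-loop multi-agent sketch (illustrative only).
# `call_model` is a hypothetical stand-in for any LLM API call.
from dataclasses import dataclass

@dataclass
class Finding:
    agent: str
    claim: str
    citation: str
    approved: bool = False

def call_model(agent_role: str, task: str) -> Finding:
    # Placeholder: a real system would fan out to one of several
    # models with a role-specific prompt and return cited claims.
    return Finding(agent=agent_role,
                   claim=f"[{agent_role}] draft analysis of: {task}",
                   citation="source TBD")

def human_review(finding: Finding) -> Finding:
    # The human gate: an analyst accepts or rejects each claim
    # before it flows to downstream agents.
    answer = input(f"Approve '{finding.claim}'? [y/n] ")
    finding.approved = answer.strip().lower() == "y"
    return finding

def run_pipeline(task: str) -> list[Finding]:
    roles = ["researcher", "critic", "synthesizer"]
    findings = [call_model(role, task) for role in roles]
    # Only human-approved findings survive into the final report.
    return [f for f in (human_review(x) for x in findings) if f.approved]

if __name__ == "__main__":
    report = run_pipeline("competitive landscape analysis")
    print(f"{len(report)} findings approved")
```

The structural point: model output is treated as a draft, and a human decision gates what propagates downstream.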
Congratulations, you reinvented the GAN and gave it a trendy "all you need" title. You can collect your award once you invent a time machine so you can impress people from 2012.
Can your AI do that yet? No sparks of AGI starting to fly yet? Shame. I hope OpenAI doesn't pull up their drawbridge; it would suck swimming in their moat.
Thanks for the correction - that is an interesting paper.
EDIT: a very interesting paper, since I am specifically interested in integrating neuro-symbolic cognition with large language models. I was paid to work, generally, in the field of symbolic AI in the 1980s and early 1990s, and in retrospect I feel like I wasted a lot of my time. I think it is likely that symbolic AI will make a comeback if combined with LLMs: LLMs plus function calling, integration with 'explain your steps' prompts, and so on.
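A minimal sketch of the LLMs-plus-function-calling pairing described above, assuming a hypothetical `ask_llm` stub in place of a real model API: the model only chooses a tool call, and sympy does the actual symbolic work.

```python
# Illustrative neuro-symbolic loop: the LLM drafts, the symbolic layer solves.
# `ask_llm` is a hypothetical stub standing in for a function-calling API.
import json
import sympy as sp

def ask_llm(question: str) -> str:
    # A real implementation would send `question` to a model configured
    # with a "solve_symbolically" tool; here we hard-code one tool call.
    return json.dumps({"tool": "solve_symbolically",
                       "args": {"equation": "x**2 - 5*x + 6", "var": "x"}})

def solve_symbolically(equation: str, var: str) -> list[str]:
    # The symbolic reasoning happens here, not in the model's weights.
    sym = sp.Symbol(var)
    return [str(root) for root in sp.solve(sp.sympify(equation), sym)]

def answer(question: str) -> list[str]:
    call = json.loads(ask_llm(question))
    if call["tool"] == "solve_symbolically":
        return solve_symbolically(**call["args"])
    raise ValueError(f"unknown tool: {call['tool']}")

print(answer("What are the roots of x^2 - 5x + 6?"))  # ['2', '3']
```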
Yes, Mark - I am familiar with your work and good reputation. And yes, things go in cycles: every decade AI seems to swing from neural to symbolic and back again. I too spent a decade on Fifth Generation computing and then a decade on the Semantic Web, and now I'm back to neural with LLMs - but this time adding symbolic on top. Because, as you mention, it is only by combining these approaches that real AGI can happen. Models on their own are what I call "instinctual intelligence": they don't think, they just react immediately with no intermediate cognition. That's not going to get us very far. Symbolic reasoning ABOVE the models is necessary to get to human-level insights.
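And a sketch of what "symbolic reasoning ABOVE the models" can mean in the simplest case - again with a hypothetical `model_answer` stub - where the model's instinctual answer is verified symbolically before it is trusted:

```python
# Illustrative "symbolic layer above the model": the model's immediate
# answer is never trusted directly; a symbolic check gates it.
import sympy as sp

def model_answer(question: str) -> str:
    # Stand-in for an LLM's instinctual, single-shot response.
    return "x = 5"  # suppose the model mis-solves the equation

def symbolic_check(equation: str, proposed: str) -> bool:
    # The layer above: parse the claim and verify it against the equation.
    sym = sp.Symbol("x")
    lhs, rhs = (sp.sympify(side) for side in equation.split("="))
    value = sp.sympify(proposed.split("=")[1])
    return sp.simplify(lhs.subs(sym, value) - rhs) == 0

proposed = model_answer("Solve 2*x + 3 = 11")
print(symbolic_check("2*x + 3 = 11", proposed))  # False -> reject and retry
```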
> ...finding an error in the original proof is uninteresting (from a mathematical perspective). Even if Gödel's original proof contained a minor error, there are plenty of modern (and computer-verified) proofs that establish the theorem.
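For reference, the theorem being defended here, in its standard modern (Gödel-Rosser) form, which is what the computer-verified proofs establish:

```latex
% Gödel's first incompleteness theorem, Gödel-Rosser form.
% T ranges over first-order theories; Q is Robinson arithmetic.
\[
  T \supseteq \mathsf{Q},\ T \text{ consistent and recursively axiomatizable}
  \;\implies\;
  \exists\, G_T :\; T \nvdash G_T \ \text{ and } \ T \nvdash \lnot G_T
\]
```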
Shockingly, there appears to be a fatal error in Gödel's famous Incompleteness Theorem. Read more at the link above, and for more details see also: https://www.jamesrmeyer.com/ffgit/godels_theorem