
It is definitively not possible. But the frontier models are no longer “just” LLMs, either. They are neurosymbolic systems (an LLM using tools); they just don’t say so transparently, because “the intelligence comes from something outside the model, not from endless scaling” is not a convenient narrative.

At Aloe, we are model agnostic and outperforming frontier models. It’s the architecture around the LLM that makes the difference. For instance, our system using Gemini can do things that Gemini can’t do on its own. All an LLM will ever do is hallucinate. If you want something with human-like general intelligence, keep looking beyond LLMs.
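
For the curious, here is roughly what “architecture around the LLM” can mean in miniature: a hypothetical tool-calling loop where the model only proposes actions and deterministic tools outside it do the real work. The `call_llm` stub, the JSON protocol, and the calculator tool are all invented for illustration; this is not Aloe’s (or any vendor’s) actual design.

```python
# Minimal, hypothetical sketch of a neurosymbolic loop: the LLM proposes,
# external tools execute, results are fed back into the transcript.
import json


def call_llm(prompt: str) -> str:
    """Stand-in for a real model call, with canned behavior for the sketch."""
    if "[tool" in prompt:  # a tool result is already in the transcript
        return "17 * 23 = 391."
    return json.dumps({"tool": "calculator", "args": {"expression": "17 * 23"}})


def calculator(expression: str) -> str:
    # The symbolic part: exact arithmetic the LLM would otherwise guess at.
    return str(eval(expression, {"__builtins__": {}}, {}))


TOOLS = {"calculator": calculator}


def run_agent(user_query: str, max_steps: int = 3) -> str:
    transcript = user_query
    for _ in range(max_steps):
        reply = call_llm(transcript)
        try:
            request = json.loads(reply)   # model asked for a tool
        except json.JSONDecodeError:
            return reply                  # plain text means final answer
        result = TOOLS[request["tool"]](**request["args"])
        transcript += f"\n[tool {request['tool']} -> {result}]"
    return transcript


print(run_agent("What is 17 * 23?"))  # -> "17 * 23 = 391."
```

The point of the sketch is that the exact computation (or search, or code execution) happens outside the weights; the LLM only decides when to delegate.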



It feels like we're slowly rebuilding the brain in pieces and connecting useful disparate systems like evolution did.

Maybe LLMs are the "language acquisition device" and language processing of the brain. Then we put survival logic around that with its own motivators. Then something else around that. Then again and again until we have this huge onion of competing interests and something brokering those interests. The same way our 'observer' and 'will' fight against emotion and instinct and pick which signals to listen to (eyes, ears, etc.). Or how we can see thoughts and feelings rise up of their own accord and it's up to us to believe them or act on them.

Then we'll wake up one day with something close enough to AGI that it won't matter much that it's just various forms of turtles all the way down, not a formal simulation of actual biological intelligence.
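
A toy sketch of that "broker of competing interests" idea, for what it's worth. The module names and urgency numbers are invented, and this makes no claim about how real brains (or any shipping system) work:

```python
# Independent modules each propose an action with an urgency score; a broker
# (the "observer") picks which signal to act on.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Proposal:
    source: str
    action: str
    urgency: float  # 0..1: how strongly this module wants to act


def language_module(stimulus: str) -> Proposal:
    return Proposal("language", f"describe: {stimulus}", 0.3)


def survival_module(stimulus: str) -> Proposal:
    urgency = 0.9 if "fire" in stimulus else 0.1
    return Proposal("survival", "withdraw", urgency)


def broker(stimulus: str, modules: List[Callable[[str], Proposal]]) -> Proposal:
    # Every proposal "rises up of its own accord"; the broker acts on the strongest.
    proposals = [m(stimulus) for m in modules]
    return max(proposals, key=lambda p: p.urgency)


print(broker("smell of fire", [language_module, survival_module]))
# Proposal(source='survival', action='withdraw', urgency=0.9)
```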


Then we’ll have to reinvent internal family systems to truly debug things. :)


It might feel like that's what we're doing, but that is not actually what we're doing.


This mirrors my thinking and experience completely. Based on seeing Aloe in action, your company is IMHO positioned extremely well for this future.


I’m confused: you wrote “model,” but then specified “system.” I assume you mean “system,” because the tools are not being back-propagated?


I read that as "the tools (their capabilities) are external to the model".

Even if a RAG / agentic model learns from tool results, that doesn't automatically internalize the tool. You can't get yesterday's weather or major recent events from an offline model unless it was updated in that time.

I do often wonder whether this is how the large chat and cloud AI providers cache expensive RAG-related data, though :) e.g. decreasing the likelihood of tool usage for certain input patterns once the model has been patched with recent, vetted interactions – if that's even possible?

Perplexity, for example, seems to have invested in some kind of activation-pattern-keyed caching... at least that was my impression when I first used it. It felt like decision trees, a bit like Akinator back in the day, but supercharged by LLM NLP.
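
Here's a speculative sketch of what such a cache might look like if keyed on query similarity rather than true activation patterns: vetted answers are stored against a query embedding, and the expensive retrieval/tool call is skipped when a similar query was answered recently. The bag-of-words "embedding" and the 0.8 threshold are stand-ins for illustration; no claim about what Perplexity or anyone else actually does.

```python
# Similarity-keyed answer cache that short-circuits expensive retrieval.
import math
from collections import Counter
from typing import Callable, List, Tuple


def embed(text: str) -> Counter:
    return Counter(text.lower().split())  # toy embedding for the sketch


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


cache: List[Tuple[Counter, str]] = []  # (query embedding, vetted answer)


def answer(query: str, expensive_rag: Callable[[str], str], threshold: float = 0.8) -> str:
    q = embed(query)
    for key, cached in cache:
        if cosine(q, key) >= threshold:
            return cached  # cache hit: no retrieval/tool call needed
    fresh = expensive_rag(query)
    cache.append((q, fresh))
    return fresh


print(answer("weather in Berlin today", lambda q: "12°C, cloudy (retrieved)"))
print(answer("today weather in Berlin", lambda q: "never called"))  # served from cache
```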


> At Aloe, we are model agnostic and outperforming frontier models.

What is your website?


A quick google gave: https://aloe.inc/


It's their name + `.inc`; see the user's post history.


Aloe looks super cool; just joined the waitlist.

Agree, context is everything.



