I don't know if there's really an answer to that, beyond noting that it never turned out to be more than the sum of its parts. It was a large ontology and a hefty logic engine. You put in queries and you got back answers.
The goal was that in a decade it would become self-sustaining. It would have enough knowledge that it could start reading natural language. And it just... didn't.
Contrast it with LLMs and diffusion and such. They make stupid, asinine mistakes -- real howlers, because they don't understand anything at all about the world. If it could draw, Cyc would never draw a human with 7 fingers on each hand, because it knows that most humans have 5. (It had a decent-ish ontology of human anatomy which could handle injuries and birth defects, but would default reason over the normal case.) I often see ChatGPT stumped by simple variations of brain teasers, and Cyc wouldn't make those mistakes -- once you'd translated them into CycL (its language, because it couldn't read natural language in any meaningful way).
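(By "default reason over the normal case" I mean the engine assumes the typical answer unless a more specific assertion overrides it. Here's a rough sketch of that idea in Python -- not CycL, and not Cyc's actual machinery, just an illustration with made-up facts:)

    # Toy sketch of default (defeasible) reasoning: a specific fact about an
    # individual overrides the general rule; otherwise the normal case applies.
    DEFAULT_FINGERS_PER_HAND = 5

    # Hypothetical specific assertions, e.g. a recorded injury or birth defect.
    specific_facts = {"bob": 4}

    def fingers_per_hand(person: str) -> int:
        """Return the asserted finger count, falling back to the normal case."""
        return specific_facts.get(person, DEFAULT_FINGERS_PER_HAND)

    print(fingers_per_hand("bob"))      # 4 -- the specific assertion wins
    print(fingers_per_hand("charlie"))  # 5 -- default over the normal case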
But those same models do a scary job of passing the Turing Test. Nobody would ever have thought to try it on Cyc. It was never anywhere close.
Philosophically I can't say why Cyc never developed "magic" while LLMs (seemingly) have. And I'm still not convinced that they're on the right path, though they have some legitimate uses right now. I tried to find uses for Cyc in exactly the opposite direction, guaranteeing data quality, but it turned out nobody really wanted that.
One sense that I've had of LLM / generative AIs is that they lack "bones", in the sense that there's no underlying structure to which they adhere, only outward appearances which are statistically correlated (using fantastically complex statistical correlation maps).
Cyc, on the other hand, lacks flesh and skin. It's all skeleton and can generate facts but not embellish them into narratives.
The best human writing has both, much as artists (traditional painters, sculptors, and more recently computer animators) have a skeleton (outline, index cards, Zettelkasten, wireframe) to which flesh, skin, and fur are attached. LLM generative AIs are too plastic; Cyc is insufficiently plastic.
I suspect there's some sort of a middle path between the two. Though that path and its destination also increasingly terrify me.
> because they don't understand anything at all about the world.
LLMs understand plenty, in any way that can be tested. It's really funny when I see making mistakes treated as evidence of a lack of understanding. I guess people don't understand anything at all either.
> I often see ChatGPT stumped by simple variations of brain teasers
Only if everything else is exactly the same as the basic teaser, and guess what? Humans fall for this too. They see something they memorized and go full speed ahead. Simply changing the names is enough to get it to solve it.