I have always thought of Cyc as being the AI equivalent of Russell and Whitehead's Principia--something that is technically ambitious and interesting in its own right, but ultimately just the wrong approach that will never really work well on a standalone basis, no matter how long you work on it or keep adding more and more rules. That being said, I do think it could prove to be useful for testing and teaching neural net models.
In any case, at the time Lenat started working on Cyc, we didn't really have the compute required to do NN models at the level where they start exhibiting what most would call "common sense reasoning," so it makes total sense why he started out on that path. RIP.
This assumes two things:
1. that NN models (LLMs) exhibit common sense reasoning today
2. that the approach to AI represented by Cyc and the one represented by LLMs are mutually exclusive
I don’t know about [1]. I asked GPT-4 an example question from the paper above:
“[If you had to guess] how many thumbs did Lincoln’s maternal grandmother have?”
Response:
There is no widely available historical information to suggest that Abraham Lincoln's maternal grandmother had an unusual number of thumbs. It would be reasonable to guess that she had the typical two thumbs, one on each hand, unless stated otherwise.
You didn’t ask something novel enough and/or the LLM got “lucky”. There are plenty of occasions where they just get it flat wrong. It’s a very bimodal distribution of competence – sometimes almost scarily superhumanly capable, and sometimes the dumbest collection of words that still form a coherent sentence.
ChatGPT is a hybrid system; it isn't "just" an LLM any longer, and what people associate with "LLM" is fluid and changes over time. So it is essential to clarify the architecture when making claims about capabilities.
I'll start simple: plain sequence-to-sequence feed-forward NN models are not Turing complete. A fixed-depth network performs a bounded number of sequential steps per input, so it cannot carry out inference chains of arbitrary length; full reasoning requires that kind of unbounded chaining.
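To make the bounded-steps point concrete, here's a minimal Python sketch (my own toy example, not anything from the thread or from Cyc): a fixed-depth forward pass executes the same number of layer applications no matter what the input asks for, while chaining an inference rule to a goal needs a loop whose iteration count depends on the input.

    import numpy as np

    # Fixed-depth feed-forward pass: runs exactly len(weights) sequential
    # steps, regardless of how long a chain of inference the input needs.
    def feed_forward(x, weights):
        for W in weights:                # constant number of steps, always
            x = np.maximum(0.0, W @ x)   # ReLU layer
        return x

    # Arbitrary chaining: apply a rule until a goal holds. The number of
    # iterations depends on the input, which a fixed-depth pass cannot
    # express for unbounded chain lengths.
    def chain(state, apply_rule, is_goal, max_steps=10_000):
        for _ in range(max_steps):       # input-dependent number of steps
            if is_goal(state):
                return state
            state = apply_rule(state)
        raise RuntimeError("no goal reached within max_steps")

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        weights = [rng.standard_normal((4, 4)) for _ in range(3)]  # depth fixed at 3
        print(feed_forward(rng.standard_normal(4), weights))

        # Halve a number until it drops below 1: the step count grows
        # with the input, not with any fixed network depth.
        print(chain(1024.0, lambda s: s / 2, lambda s: s < 1.0))

This is of course a caricature: autoregressive decoding gives an LLM extra sequential steps at inference time, which is exactly why the distinction between a bare feed-forward pass and the full system matters for the claim above.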