Doug Lenat has died (garymarcus.substack.com)
550 points by snewman on Sept 1, 2023 | 173 comments



Doug was at times blunt, but he was fundamentally a kind and generous person, and he had a dedication to his vision and to the people who worked alongside him that has to be admired. He will be missed.

I worked at Cycorp (not directly with Doug very often, but it wasn't a big office) between 2016 and 2020.

An anecdote: during our weekly all-hands lunch in the big conference room, he mentioned he was getting a new car (his old one was pretty old, but well-kept) and he asked if anybody could use the old car. One of the staff raised his hand sheepishly and said his daughter was about to start driving. Doug gifted him the car on the spot, without a second thought.

He also loved board games, and was in a D&D group with some others at the company. I was told he only ever played lawful good characters; he didn't know how to do otherwise :)

Happy to answer what questions I can


I would expect lawful good because it would be the most logical.


That's a very neutral thing to say.


Why is that?


I don’t know much about him. What makes you start by saying he’s blunt?


It was a part of his personality, as it is for many people who are intelligent and opinionated, and some can mistake that for unkindness. But I wanted to emphasize that in his case it wasn't.


I haven't read much by Lenat, and haven't seen anything that would lead me to believe he was particularly blunt, but I disagree with attributing bluntness to intelligence. Some of the most intellectually blunt people I've met were some of the most conversationally blunt, and some of the most brilliant were absolute social butterflies. Making people needlessly uncomfortable when communicating is a shortcoming regardless of your intellectual capacity. Even if difficulty communicating appropriately and intelligence are correlated somewhat through common neuropsychiatric profiles, the relationship is not causal.


> Making people needlessly uncomfortable when communicating is a shortcoming regardless of your intellectual capacity.

Emphasis on needlessly. Some bluntness is a good thing, even if it makes people uncomfortable. The alternative usually is prolonged awkwardness and beating around the bush, which likely will make people uncomfortable anyways. Just get it over with like an adult. Don't drag an uncomfortable thing out or even risk people not getting the message at all the first time around.


Sure— the word choice was deliberate. If someone is known as a blunt person broadly, chances are they sucked at communicating appropriately.


Germans are known as blunt. I don't think they are collectively being inappropriate.


All communication is relative to its cultural context. Refusal or inability to operate within cultural mores is very different than having a different baseline to begin with.


Bluntness is associated with intelligence because when your mind works faster than the room and you reach conclusions that the room is 15 minutes away from making, it feels blunt and crude to them.

The socially acceptable thing to do is wait for them to catch up but that gets old.


You're describing the justification for lifelong hubris, not intelligence. I've known a lot of very stupid people who assumed they were doing exactly that when in reality they jumped to a conclusion because they didn't understand the topic's complexities.

An article shared on HN recently: https://www.bihealth.org/en/notices/intelligent-brains-take-....


Whenever I hear things like "understand the topic's complexities" from people they are unable to articulate these supposed complexities and we almost always land on exactly the solution I proposed in the first place.

Don't get me wrong other intelligent people are on board with me during these episodes, and they are sometimes able to present their specific concern with the solution and that is valuable. I've been wrong before of course. On the other hand the "think of all the complexities" crowd just spout nuggets of wisdom like that until they are able to comprehend the solution, and that takes about 15 minutes in my experience.


Your anecdote contradicts the study discussed in that article. Given the nature of your argument, this sounds like a great time to exit this exchange.


He was just describing his experience which indeed only a sliver of people knows about, because it’s a very high IQ situation. The study you linked is just one study and cannot be said to encapsulate everyone’s experiences. I can confirm from my own experience that his experience is very possible.


Anecdotally, according to my 3 WISC tests and 10 years working in a research environment along with faculty and staff at a famously rigorous ivy league grad school, I assure you my sample size of interactions with people that have IQs that are several orders of magnitude higher than average is much larger than most. Do you have a specific criticism of the study's methodology or data?


My mind works extremely quickly; I have to stop myself from finishing other people's sentences. I don't think it's a sign of intelligence, just perceiving the world a little bit differently.


I suspect you and the person you're responding to have different ideas of what the word blunt means in this context?


Got it. I was merely curious if there were any particular stories, rumors, or legends about his bluntness (like there are about Linus).


No, it was never anything at that level. I would describe (pre-reformed) Linus as more than just "blunt"


May I ask, if you believe CYC cannot achieve intelligence / commonsense, what are the main reasons? Or what are the reasons you believe it could?


I interviewed with Doug Lenat when I was a 17-year-old high school student, and he hired me as a summer intern for Cycorp - my first actual programming job.

That internship was life-changing for me, and I'll always be grateful to him for taking a wild bet on literally a kid.

Doug was a brilliant computer scientist, and a pioneer of artificial intelligence. Though I was very junior at Cycorp, it was a small company so I sat in many meetings with him. It was obvious that he understood every detail of how the technology worked, and was extremely smart.

Cycorp was 30 years ahead of its time and never actually worked. For those who don't know, it was essentially the first OpenAI - the first large-scale commercial effort to create general artificial intelligence.

I learned a lot from Doug about how to be incredibly ambitious, and how to not give up. Doug worked on Cycorp for multiple decades. It never really took off, but he managed to keep funding it and keep hiring great people so he could keep plugging away at the problem. I know very few people who have stuck with an idea for so long.


That sounds awesome! Was coming back to work at Cycorp permanently ever in the works for you? Or did you think the internship was nice but you didn't want a career in the field?

Also - what exactly did you do in the internship as a 17 year old - what skills did you have?


I was certainly interested in working at Cycorp full-time. But after two summers there, I could tell that the technical approach they were taking was just not working.

My first summer, I was an ontologist, which was a unique role that only existed at Cycorp where they hired people to literally hand-enter facts like "A cat has four legs" into Cyc using formal logic. My second summer I programmed (poorly) in Lisp for them.


> I could tell that the technical approach they were taking was just not working.

Could you say more about that? How could you tell?


Perhaps other people with deeper AI knowledge can weigh in here too. But at the time, there were two things that tipped me off.

1) Cyc's reasoning fundamentally did not feel "human". Cyc was created on the premise that you could build AGI on top of formal logic inference. But after seeing how Cyc performed on real-world problems, I became convinced that formal logic is a poor model for human thought.

The biggest tell is that formal logic systems are very brittle. If there is any fact that is even slightly off, the reasoning chain fails and the system can't do anything. Humans aren't like that; when their information is slightly off, their performance degrades gracefully. (A small sketch of this follows at the end of this comment.)

2) Imagine a graph where time/money was on the x-axis, and Cyc's performance was on the y-axis. You could roughly plot this using benchmarks like SAT scores. It was clear if you extrapolated this that Cyc was never going to hit human-level performance; the curve was going to asymptotically approach something well below human-level performance.

As a side note, if you look at the performance of LLMs, I would argue that you get the opposite result for both criteria.
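
To make the brittleness point in (1) concrete, here is a minimal forward-chaining sketch in Python (purely illustrative; this is not Cyc's actual engine, and the facts and rule names are made up). One slightly corrupted fact silently kills the whole inference chain instead of degrading gracefully:

    # Minimal forward-chaining sketch (illustrative; not Cyc's engine).
    # Facts are hand-entered triples; two toy rules propagate class
    # membership and an inherited property.
    def forward_chain(facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            new = set()
            for (p1, x, c) in facts:
                if p1 != "isa":
                    continue
                for (p2, a, b) in facts:
                    # Rule 1: isa(x, C) & genls(C, D) -> isa(x, D)
                    if p2 == "genls" and a == c:
                        new.add(("isa", x, b))
                    # Rule 2: isa(x, C) & has_legs(C, n) -> has_legs(x, n)
                    if p2 == "has_legs" and a == c:
                        new.add(("has_legs", x, b))
            if not new <= facts:
                facts |= new
                changed = True
        return facts

    good = {("isa", "Tom", "Cat"), ("genls", "Cat", "Mammal"), ("has_legs", "Mammal", 4)}
    print(("has_legs", "Tom", 4) in forward_chain(good))    # True

    # One misspelled constant ("Mamal") and the chain fails outright
    # instead of degrading gracefully:
    bad = {("isa", "Tom", "Cat"), ("genls", "Cat", "Mamal"), ("has_legs", "Mammal", 4)}
    print(("has_legs", "Tom", 4) in forward_chain(bad))     # False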


I worked with Doug on Cyc from ~85-89 (we had overlapped at PARC but didn't interact much there). The first thing I did was scrap the old implementation and start from scratch, designing the levels system and all the bootstrap code.

It was a fun time with a small core team (mainly me, guha, and Doug) but over time I became dissatisfied with some of the arbitrariness of the KB. By the time I left the Cyc project (for my own reasons unrelated to work) I was somewhat negative towards the foundations of the project, despite the tight relationship we’d had and the fact it ran on my code! But over time I became smarter and came to appreciate once again its value. I had too much of a “pure math” view of things back then.

As I moved on to other things I lost touch with Doug and Mary, and I’m sorry for that.


Doug Lenat, RIP. I worked at Cycorp in Austin from 2000-2006. Taken from us way too soon, Doug nonetheless had the opportunity to help our country advance military and intelligence community computer science research.

One day, the rapid advancement of AI via LLMs will slow down and attention will again return to logical reasoning and knowledge representation as championed by the Cyc Project, Cycorp, its cyclists and Dr. Doug Lenat.

Why? If NN inference were so fast, we would compile C programs with it instead of using deductive logical inference that is executed efficiently by the compiler.


Exactly. When I hear books such as Paradigms of AI Programming are outdated because of LLMs, I disagree. They are more current than ever, thanks to LLMs!

Neural and symbolic AI will eventually merge. Symbolic models bring much needed efficiency and robustness via regularization.


If you want to learn about symbolic AI, there are a lot of more recent sources than PAIP (you could try the first half of AI: A Modern Approach by Russell and Norvig), and this has been true for a while.

If you read PAIP today, the most likely reason is that you want a master class in Lisp programming and/or want to learn a lot of tricks for getting good performance out of complex programs (which used to be part of AI and is in many ways being outsourced to hardware today).

None of this is to say you shouldn't read PAIP. You absolutely should. It's awesome. But its role is different now.


Some parts of PAIP might be outdated, but it still has really current material on e.g. embedding Prolog in Lisp or building a term-rewriting system. That's relevant for pursuing current neuro-symbolic research, e.g. https://arxiv.org/pdf/2006.08381.pdf.

Other parts like coding an Eliza chatbot are indeed outdated. I have read AIMA and followed a long course that used it, but I didn't really like it. I found it too broad and shallow.


It would be cool if we could find the algorithmic neurological basis for this. The analogy with LLMs is the more obvious one (multi-layer brain circuits), but a neurological analogue of symbolic reasoning must exist too.

My hunch is it emerges naturally out of the hierarchical generalization capabilities of multiple layer circuits. But then you need something to coordinate the acquired labels: a tweak on attention perhaps?

Another characteristic is probably some (limited) form of recursion, so the generalized labels emitted at the small end can be fed back in as tokens to be further processed at the big end.


The best thing Cycorp could do now is open source its accumulated database of logical relations so it can be ingested by some monster LLM.

What's the point of all that data collecting dust and accomplishing not much of anything?


It seems the direction of flow would be the opposite: LLMs are a great source of logical data for Cyc-like things. Distill your LLM into logical statements, then run your Cyc algorithms on it.


> It seems the direction of flow would be the opposite: LLMs are a great source of logical data for Cyc-like things. Distill your LLM into logical statements, then run your Cyc algorithms on it.

This is hugely problematic. If you get the premises wrong, many fallacies will follow.

LLMs can play many roles around this area, but their output cannot be trusted with significant verification and validation.


*without


LLM statements (distilled into logical statements) would not be logically sound. That's (one of) the main issues of LLMs. And that would make logical inference on these logical statements impossible with current systems.

That's one of the principal features of Cyc. It's carefully built by humans to be (essentially) logically sound, so that inference can then be run over the fact base. Making that stuff logically sound made for a very detailed and fussy knowledge base. And that in turn made it difficult to expand or even understand for mere civilians. Cyc is NOT simple.


Cyc is built to be locally consistent but global KB consistency is an impossible task. Lenat stressed that in his videos over and over.


My "essentially" was doing some work there. It's been years but I remember something like "within a context" as the general direction? Such as within an area of the ontology (because - by contrast to LLMs - there is one) or within a reasonning problem, that kind of thing.

By contrast, LLMs for now are embarrassing, with inconsistent nonsense provided within one answer, or an answer that doesn't recognize the context of the problem. Say, the work domain being a food label and the system not recognizing that or not staying within it.


> The best thing Cycorp could do now is open source its accumulated database of logical relations...

This is unpersuasive without laying out your assumptions and reasoning.

Counter points:

(a) It would be unethical for such a knowledge base to be put out in the open without considerable guardrails and appropriate licensing. The details matter.

(b) Cycorp gets some funding from the U.S. Government; this changes both the set of options available and the calculus of weighing them.

(c) Not all nations have equivalent values. Unless one is a moral relativist, these differences should not be deemed equivalent nor irrelevant. As such, despite the flaws of U.S. values and some horrific decision-making throughout history, there are known worse actors and states. Such parties would make worse use of an extensive human-curated knowledge base.


An older version of the database is already available for download, but that's not the approach you want for common sense anyway, no one needs to remember that a "dog is not a cat".


You are probably referring to OpenCyc. It provides much more value than your comment suggests.

I'd recommend that more people take a look and compare its approach against others. https://en.wikipedia.org/wiki/CycL is compact and worth a read, especially the concept of "microtheories".


OpenCyc is already a thing and there's been very little interest in it. These days we also have general-purpose semantic KB's like Wikidata, that are available for free and go way beyond what Cyc or OpenCyc was trying to do.


I think the military will take over his work. Snowden documents revealed that Cyc was being used to come up with terror attack scenarios.


> If NN inference were so fast, we would compile C programs with it instead of using deductive logical inference that is executed efficiently by the compiler.

This is the definition of a strawman. Who is claiming that NN inference is always the fastest way to run computation?

Instead of trying to bring down another technology (neural networks), how about you focus on making symbolic methods usable to solve real-world problems; e.g. how can I build a robust email spam detection system with symbolic methods?


> Instead of trying to bring down another technology (neural networks), how about you focus on making symbolic methods usable to solve real-world problems; e.g. how can I build a robust email spam detection system with symbolic methods?

I have two concerns. First, just after pointing out a logical fallacy from someone else, you added a fallacy: the either-or fallacy. (One can criticize a technology and do other things too.)

Second, you selected an example that illustrates a known and predictable weakness of symbolic systems. Still, there are plenty of real-world problems that symbolic systems address well. So your comment cherry-picks.

It appears as if you are trying to land a counter punch here. I'm weary of this kind of conversational pattern. Many of us know that tends to escalate. I don't want HN to go that direction. We all have varying experience and points of view to contribute. Let's try to be charitable, clear, and logical.


I am desperately vetting your comment for something I can criticize. An inadvertent, irrelevant, imagined infraction. Anything! But you have left me no opening.

Well done, sir, well done.


Thanks, but if I didn't blunder here, I can assure you I have in many other places. I strive to be mindful. I try not to "blame" anyone for strong reactions. But when we see certain unhelpful behaviors directed at other people, I try to identify/name it without making it worse. Awareness helps.


Without awareness we are just untagged data in a sea of uncompressed noise.


>> If NN inference were so fast, we would compile C programs with it instead of using deductive logical inference that is executed efficiently by the compiler.

> This is the definition of a strawman.

(Actually, it is an example of a strawman.) Anyhow, rather than a strawman, I'd rather us get right into the fundamentals.

1. Feed-forward NN computation ('inference', which is an unfortunate word choice IMO) can provably provide universal function approximation under known conditions. And it can do so efficiently as well, with a lot of recent research getting into both the how and why. One "pays the cost" up-front with training in order to get fast prediction-time performance. The tradeoff is often worth it. (A minimal numeric sketch of this point follows the list.)

2. Function approximation is not as powerful as Turing completeness. FF NNs are not Turing complete.

3. Deductive chaining is a well-studied, well understood area of algorithms.

4. But... modeling of computational architectures (including processors, caches, buses, and RAM) with sufficient detail to optimize compilation is a hard problem. I wouldn't be surprised if this stretches these algorithms to the limit of what developers will tolerate in compile times. This is a strong incentive, so I'd expect there is at least some research that pushes outside the usual contours here.
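
On point 1, here is a minimal NumPy sketch of one-hidden-layer function approximation. It uses the random-features shortcut (random tanh hidden layer, linear least-squares readout) purely to keep the example short; it is not how production networks are trained:

    # Toy illustration of function approximation by one hidden layer.
    # Random tanh features + least-squares readout (a shortcut for brevity,
    # not real training).
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)
    y = np.sin(x)

    n_hidden = 50
    W = rng.normal(scale=2.0, size=(1, n_hidden))   # random input weights
    b = rng.normal(scale=2.0, size=(1, n_hidden))   # random biases
    H = np.tanh(x @ W + b)                          # hidden-layer activations

    w_out, *_ = np.linalg.lstsq(H, y, rcond=None)   # fit only the readout
    y_hat = H @ w_out

    print("max abs error:", float(np.max(np.abs(y_hat - y))))  # small for this setup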


The point is that symbolic computation as performed by Cycorp was held back by the need to train the Knowledge Base by hand in a supervised manner. NNs and LLMs in particular became ascendant when unsupervised training was employed at scale.

Perhaps LLMs can automate in large part the manual operations of building a future symbolic knowledge base organized by a universal upper ontology. Considering the amazing emergent features of sufficiently-large LLMs, what could emerge from a sufficiently large, reflective symbolic knowledge base?


That's what I have settled on. The need for a symbolic library of standard hardware circuits.

I’m making a sloppy version that will contain all the symbols needed to run a multi-unit building.


If anybody wants to hear more about Doug's work and ideas, here is a (fairly long) interview with Doug by Lex Fridman, from last year.

https://www.youtube.com/watch?v=3wMKoSRbGVs&pp=ygUabGV4IGZya...


Thanks for the link. I watched the first part, and an interesting story/claim is that before Cyc started, many "smart people" including Marvin Minsky came up with "~1 million" as the number of things you would have to encode in a system for it to have "common sense".

He said they learned after ~5 years that this was an order of magnitude off -- it's more like 10 M things.

Is there any literature about this? Did they publish?

To me, the obvious questions are -

- how do they know it's not 100M things?

- how do they know it's even bounded? Why isn't there a combinatorial explosion?

I mean I guess they were evaluating the system all along. You don't go for 38 years without having some clear metrics. But I am having some problems with the logic -- I'd be interested in links to references / criticism.

I'd be interested in any arguments for and against ~10 M. Naively speaking, the argument seems a bit flawed to me.

FWIW I heard of Cyc back in the 90's, but I had no idea it was still alive. It is impressive that he kept it alive for so long.

---

Actually the wikipedia article is pretty good

https://en.wikipedia.org/wiki/Cyc#Criticisms

Though I'm still interested in the ~1M or ~10M claim. It seems like a strong claim to hold onto for decades, unless they had really strong metrics backing it up.


> how do they know it's not 100M things?

> how do they know it's even bounded? Why isn't there a combinatorial explosion?

I don't know - I'm in the middle of watching the interview too, but he's moved on from that topic already. I'd guess the 10M vs 1M (or 100M) estimate comes from the curve of total "assertions" vs time leveling off towards some asymptotic limit.

I suppose the reason there's no combinatorial explosion is that they're entering these assertions in the most general form possible, so considering new objects doesn't necessarily mean new assertions, since it may all be covered by the superclasses the objects are part of (e.g. few assertions that are specific to apples since most will apply to all fruit).
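
A toy sketch of that inheritance idea in Python (purely illustrative; the classes and assertions are made up): assertions hang off the most general class they apply to, so adding a new kind of object usually costs few or no new assertions.

    # Illustrative only: assertions live on the most general applicable class.
    superclasses = {"Apple": "Fruit", "Pear": "Fruit", "Fruit": "Food", "Food": None}
    assertions = {
        "Food":  ["is edible"],
        "Fruit": ["grows on plants", "contains seeds"],
        "Apple": [],                      # nothing apple-specific needed yet
    }

    def facts_about(cls):
        out = []
        while cls is not None:            # walk up the superclass chain
            out.extend(assertions.get(cls, []))
            cls = superclasses.get(cls)
        return out

    print(facts_about("Apple"))   # inherits everything via Fruit and Food
    print(facts_about("Pear"))    # a new fruit costs zero new assertions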


Enjoyed watching that. Doug sounds very impressive. RIP.


Just search for Doug Lenat on YouTube. I can guarantee that any one of the other videos will be better than a Fridman interview.


Hey you guys, please don't go offtopic like this. Whimsical offtopicness can be ok, but offtopicness in the intersection of:

(1) generic (e.g. swerves the thread toward larger/general topic rather than something more specific);

(2) flamey (e.g. provocative on a divisive issue); and

(3) predictable (e.g. has been hashed so many times already that comments will likely fall in a few already-tiresome hash buckets)

- is the bad kind of offtopicness: the kind that brings little new information and eventually lots of nastiness. We're trying for the opposite here—lots of information and little nastiness.

https://news.ycombinator.com/newsguidelines.html


Only about two of them will be more contemporary though, and both are academic talks, not interviews. I get that you don't like Lex Fridman, which is a perfectly fine position to hold. But there is something to be said for seeing two people just sit and talk, as opposed to seeing somebody monologue for an hour. The Fridman interview with Doug is, IMO, absolutely worth watching. And so are all of the other videos by / about Doug. shrug


I don't know this particular interview, but it's not necessarily about not liking Lex. I listened to many episodes of his podcast and while I appreciate the selection of guests from the CS domain, many of these interviews aren't very good. They are not completely terrible but they should have been so much better: Lex had so many passionate, educated, experienced and gifted guests, yet his ability to ask interesting and focused questions is not on the same level.


He's a shitty interviewer. Often doesn't even engage with his guest's responses, as if he's not even listening to what they're saying, instead moving mechanically to his next bullet-point. Which is completely ridiculous for what's supposed to be a long-format conversational interview.

The best episodes are ones where the guest drives the interview and has a lot of interesting things to say. Fridman's just useful for attracting interesting domain experts somewhere we can hear them speak for hours on end.

The Jim Keller episodes are excellent IMO, despite Fridman. Guests like Keller and Carmack don't need a good interviewer for it to be a worthwhile listen.


reading the bio of Lex Fridman on wikipedia.. "Learning of Identity from Behavioral Biometrics for Active Authentication" what?


Please don't go offtopic in predictable/nasty ways - more at https://news.ycombinator.com/item?id=37355320.


Makes sense to me. He basically made a system that detects when someone else is using your computer by e.g. comparing patterns of mouse and keyboard input to your typical usage. It would be useful in a situation such as if you left your screen unlocked and a coworker sat down at your desk to prank you by sending an email from you to your boss (or worse, obviously). The computer would lock itself as soon as it suspects someone else is using it instead of you.
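
Here is a rough Python sketch of the flavor of that idea (everything here, including the features, the z-score test, and the threshold, is a made-up simplification, not what the thesis actually does): build a baseline of the owner's inter-keystroke timing and flag sessions that deviate too far from it.

    # Made-up simplification of keystroke-dynamics authentication.
    import statistics

    def profile(intervals):
        """Summarize typical inter-keystroke intervals (seconds) as (mean, stdev)."""
        return statistics.mean(intervals), statistics.stdev(intervals)

    def looks_like_owner(baseline, session, z_threshold=3.0):
        """Accept the session unless its mean timing is a big outlier vs. the baseline."""
        mean, stdev = baseline
        z = abs(statistics.mean(session) - mean) / max(stdev, 1e-9)
        return z < z_threshold

    owner = profile([0.18, 0.22, 0.20, 0.19, 0.25, 0.21])
    print(looks_like_owner(owner, [0.20, 0.23, 0.19]))   # True: consistent with the owner
    print(looks_like_owner(owner, [0.55, 0.60, 0.58]))   # False: lock the screen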


Like anything reasonably complex, it means little to you if it's not your field - that said, I have no clue either.


It's fun reading through the paper he links just because I've always been enamored by taking a lot of those principles that they believe should be internal to a computer, and instead making them external to a community.

In other words, I think it would be so highly useful to have a browseable corpus of arguments and conclusions, where people could collaborate on them and perhaps disagree with portions of the argument graph, adding to it and enriching it over time, so other people could read and perhaps adopt the same reasoning.

I play around with ideas with this site I occasionally work on, http://concludia.org/ - really more an excuse at this point to mess around with the concept and also get better at Akka (Pekko) programming. At some point I'll add user accounts and editable arguments and make it a real website.


So basically a multi-person Zettelkasten? The idea with a Zettelkasten (zk for short) is that each note is a singular idea, concept, or argument that is all linked together. Arguments can link to their evidence, concepts can link to other related concepts, and so on.

https://en.m.wikipedia.org/wiki/Zettelkasten


Sort of, except that it also tracks truth propagation - one person disagreeing would inform others that that portion of the graph is contested. So the graph has behavior. And, the links have logical meaning, beyond just "is related to" - it respects boolean logic.

You can see some of the explanation at http://concludia.org/instructions .


You would need a highly disciplined and motivated set of people in the team. I have been on courses where teams do this on pen/paper and it is a real skill and it is all you do for days. Forget anything else like programming, finishing work, etc.


I'd be stoked if you wrote more about this experience and shared it somewhere.


> it respects boolean logic.

Intuitionist or classical?


Intuitionist. Truth is provability; the propagation model is basically digital logic. If you mark a premise to a conclusion false, the conclusion is then marked "false" but it really just means "it is false that it is proven"; vitiated. Might still be true, just needs further work.
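
A minimal illustrative sketch of that propagation idea in Python (made up for this comment, not the actual implementation): a conclusion counts as proven only while every premise it depends on is proven, so contesting one premise vitiates everything downstream without asserting it is false.

    # Sketch of "truth is provability" propagation; not the real implementation.
    premises_of = {
        "C1": ["P1", "P2"],   # C1 follows from P1 and P2
        "C2": ["C1", "P3"],   # C2 follows from C1 and P3
    }
    proven = {"P1": True, "P2": True, "P3": True}

    def is_proven(node):
        if node in premises_of:                       # derived node
            return all(is_proven(p) for p in premises_of[node])
        return proven.get(node, False)                # leaf premise

    print(is_proven("C2"))   # True: the whole argument graph holds
    proven["P2"] = False     # someone contests premise P2
    print(is_proven("C2"))   # False: C1 and C2 are now merely unproven, not refuted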


Isn’t this what Wikipedia is in essence? Ideas, concepts linked together, with supporting evidence


I don't think this is the goal of your project, so let me ask this way. Is there any similar project, where we provide truths and fallacies, combine them with logical arguments and have a language model generate sets of probable conclusions?

Would be great for brainstorming.


I've had the same idea (er, came to the same conclusion) but never acted on it. Awesome to see that someone has! Great name too.

I thought of it while daydreaming about how to converge public opinion in a nation with major political polarization. It'd be a sort of structured public debate forum and people could better see exactly where in the hierarchy they disagreed and, perhaps more importantly, how much they in fact agreed upon.


I have always thought of Cyc as being the AI equivalent of Russell and Whitehead's Principia--something that is technically ambitious and interesting in its own right, but ultimately just the wrong approach that will never really work well on a standalone basis, no matter how long you work on it or keep adding more and more rules. That being said, I do think it could prove to be useful for testing and teaching neural net models.

In any case, at the time Lenat started working on Cyc, we didn't really have the compute required to do NN models at the level where they start exhibiting what most would call "common sense reasoning," so it makes total sense why he started out on that path. RIP.


https://arxiv.org/pdf/2308.04445.pdf

"Getting from Generative AI to Trustworthy AI: What LLMs might learn from Cyc"

Lenat's last paper (July 31st, with Gary Marcus)

https://news.ycombinator.com/item?id=37354601

This may disabuse you of two ideas:

   1. that NN models (LLMs) exhibit common sense reasoning today
   2. that the approach to AI represented by Cyc and the one represented by LLMs are mutually exclusive


I don’t know about [1]. I asked an example from the paper above to GPT-4: “[If you had to guess] how many thumbs did Lincoln’s maternal grandmother have?”

Response: There is no widely available historical information to suggest that Abraham Lincoln's maternal grandmother had an unusual number of thumbs. It would be reasonable to guess that she had the typical two thumbs, one on each hand, unless stated otherwise.


You didn’t ask something novel enough and/or the LLM got “lucky”. There’s plenty of occasions where they just get it flat wrong. It’s a very bimodal distribution of competence – sometimes almost scarily superhumanly capable, and sometimes the dumbest collection of words that still form a coherent sentence.

The mildly entertaining YouTube video below discusses this. https://youtu.be/QrSCwxrLrRc


ChatGPT is a hybrid system; it isn't "just" an LLM any longer. What people associate with "LLM" is fluid. It changes over time.

So it is essential to clarify architecture when making claims about capabilities.

I'll start simple: Plain sequence to sequence feed-forward NN models are not Turing complete. Therefore they cannot do full reasoning, because that requires arbitrary chaining.


cGPT is exactly "just" an LLM though. A sparse MoE architecture is not an ensemble of experts.


You can't show reasoning in LLMs via the answers they get right, I assert (without citations).


The end of the article [1] reminds me to publish more of what I make and think. I'm no Doug Lenat and my content would probably just add noise to the internet but still, don't let your ideas die with you or become controlled by some board of stakeholders. I'm also no open-source zealot but open-source is a nice way to let others continue what you started.

[1]

"Over the last year, Doug and I tried to write a long, complex paper that we never got to finish. Cyc was both awesome in its scope, and unwieldy in its implementation. The biggest problem with Cyc from an academic perspective is that it’s proprietary.

To help more people understand it, I tried to bring out of him what lessons he learned from Cyc, for a future generation of researchers to use. Why did it work as well as it did when it did, why did it fail when it did, what was hard to implement, and what did he wish that he had done differently? ...

...One of his last emails to me, about six weeks ago, was an entreaty to get the paper out ASAP; on July 31, after a nerve-wracking false-start, it came out, on arXiv, Getting from Generative AI to Trustworthy AI: What LLMs might learn from Cyc (https://arxiv.org/ftp/arxiv/papers/2308/2308.04445.pdf).

The brief article is simultaneously a review of what Cyc tried to do, an encapsulation of what we should expect from genuine artificial intelligence, and a call for reconciliation between the deep symbolic tradition that he worked in with modern Large Language Models."


Right on.

> my content would probably just add noise to the internet

Maybe, but there is worse noise out there for sure. :) Anyhow, some unsolicited advice from me: don't replay this quote to yourself any more than necessary; it isn't exactly a motivational mantra masterpiece. Share what you think is important.

Why? Even small, "improbable" improvements to knowledge can matter. Given enough of them, statistically speaking, we can move the needle. Yeah, and we need to be able to find the relevant stuff; a big problem in and of itself.


Those are very kind words that I will keep.


Never met the guy but his work was one of my biggest inspirations in computing.

I feel it's appropriate to link a blog post of mine from 2018. It's a quick recap of Lenat works on the trajectory that brought him towards Cyc, with links to the papers.

http://blog.funcall.org//lisp/2018/11/03/am-eurisko-lenat-do...


Cyc ("Syke") is one of those projects I've long found vaguely fascinating though I've never had the time / spoons to look into it significantly. It's an AI project based on a comprehensive ontology and knowledgebase.

Wikipedia's overview: <https://en.wikipedia.org/wiki/Cyc>

Project / company homepage: <https://cyc.com/>


I worked with Cyc. It was an impressive attempt to do the thing that it does, but it didn't work out. It was the last great attempt to do AI in the "neat" fashion, and its failure helped bring about the current, wildly successful "scruffy" approaches to AI.

Its failure is no shade against Doug. Somebody had to try it, and I'm glad it was one of the brightest guys around. I think he clung on to it long after it was clear that it wasn't going to work out, but breakthroughs do happen. (The current round of machine learning itself is a revival of a technique that had been abandoned, but people who stuck with it anyway discovered the tricks that made it go.)


"Neat" vs. "scruffy" syncs well with my general take on Cyc. Thanks for that.

I do suspect that well-curated and hand-tuned corpora, including possibly Cyc's, are of significant use to LLM AI. And will likely be more so as the feedback / autophagy problem exacerbates.


Wow -- I hadn't thought of this but makes total sense. We'll need giant definitely-human-curated databases of information for AIs to consume as more information becomes generated by the AIs.


There's a long history of informational classification, going back to Aristotle and earlier ("Categories"). See especially Melville Dewey, the US Library of Congress Classification, and the work of Paul Otlet. All are based on exogenous classification, that is, subjects and/or works classification catalogues which are independent of the works classified.

Natural-language content-based classification as by Google and Web text-based search relies effectively on documents' self-descriptions (that is, their content itself) to classify and search works, though a ranking scheme (e.g., PageRank) is typically layered on top of that. What distinguished early Google from prior full-text search was that the latter had no ranking criteria, leading to keyword stuffing. An alternative approach was Yahoo, originally Yet Another Hierarchical Officious Oracle, which was a curated and ontological classification of websites. This was already proving infeasible by 1997/98 as a whole, though as training data for machine classification it might prove useful.


I'm so looking forward to the next swing of the pendulum back to "neat", incorporating all the progress that has been made on "scruffy" during this current turn of the wheel.


The GP had the terms "neat" and "scruffy" reversed. CYC is scruffy like biology, and neural nets are neat like physics.

See my sibling post citing Roger Schank who coined the terms, and quoting Marvin Minsky's paper, "Logical Versus Analogical or Symbolic Versus Connectionist or Neat Versus Scruffy" and the "Neats and Scruffies" wikipedia page.

https://news.ycombinator.com/item?id=37354564


The OPs usage seems a lot more intuitive to me, :shrug:. Neural nets don't seem at all "neat like physics" to me.

But I guess I also don't know enough about the CYC approach to say. Maybe neither of them fit what I think of as "neat".


Pamela McCorduck wrote in "Machines Who Think" (2004) that Cyc is "a determinedly scruffy enterprise". Robert Abelson credited the terms to his "unnamed but easily guessable colleague" Roger Schank in his 1981 essay "Constraint, Construal, and Cognitive Science" in the Proceedings of the 3rd Annual Conference of the Cognitive Science Society, and Marvin Minsky discusses the terms in Patrick Henry Winston's 1990 book "Artificial Intelligence at MIT, Expanding Frontiers, Vol 1", and his own 1991 AI Magazine article, "Logical Versus Analogical or Symbolic Versus Connectionist or Neat Versus Scruffy", but the long standing terms go back to the 70's:

https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...

https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...

"We should take our cue from biology rather than physics..." -Marvin Minsky

https://grandtextauto.soe.ucsc.edu/2008/02/14/ep-44-ai-neat-...

EP 4.4: AI, Neat and Scruffy

by Noah Wardrip-Fruin, 6:11 am

A name that does appear in Weizenbaum’s book, however, is that of Roger Schank, Abelson’s most famous collaborator. When Schank arrived from Stanford to join Abelson at Yale, together they represented the most identifiable center for a particular approach to artificial intelligence: what would later (in the early 1980s) come to be known as the “scruffy” approach. [7] Meanwhile, perhaps the most identifiable proponent of what would later be called the “neat” approach, John McCarthy, remained at Stanford.

McCarthy had coined the term “artificial intelligence” in the application for the field-defining workshop he organized at Dartmouth in 1956. Howard Gardner, in his influential reflection on the field, The Mind’s New Science (1985), characterized McCarthy’s neat approach this way: “McCarthy believes that the route to making machines intelligent is through a rigorous formal approach in which the acts that make up intelligence are reduced to a set of logical relationships or axioms that can be expressed precisely in mathematical terms” (154).

This sort of approach lent itself well to problems easily cast in formal and mathematical terms. But the scruffy branch of AI, growing out of fields such as linguistics and psychology, wanted to tackle problems of a different nature. Scruffy AI built systems for tasks as diverse as rephrasing newspaper reports, generating fictions, translating between languages, and (as we have seen) modeling ideological reasoning. In order to accomplish this, Abelson, Schank, and their collaborators developed an approach quite unlike formal reasoning from first principles. One foundation for their work was Schank’s “conceptual dependency” structure for language-independent semantic representation. Another foundation was the notion of “scripts” (later “cases”) an embryonic form of which could be seen in the calling sequence of the ideology machine’s executive. Both of these will be considered in more detail in the next chapter.

Scruffy AI got attention because it achieved results in areas that seemed much more “real world” than those of other approaches. For comparison’s sake, consider that the MIT AI lab, at the time of Schank’s move to Yale, was celebrating success at building systems that could understand the relationships in stacks of children’s wooden blocks. But scruffy AI was also critiqued — both within and outside the AI field — for its “unscientific” ad-hoc approach. Weizenbaum was unimpressed, in particular, with the conceptual dependency structures underlying many of the projects, writing, “Schank provides no demonstration that his scheme is more than a collection of heuristics that happen to work on specific classes of examples” (199). Whichever side one took in the debate, there can be no doubt that scruffy projects depended on coding large amounts of human knowledge into AI systems — often more than the authors acknowledged, and perhaps much more than they realized.

[...]

[7] After the terms “neat” and “scruffy” were introduced into the AI and cognitive science discourse by Abelson’s 1981 essay, in which he attributes the coinage to “an unnamed but easily guessable colleague” — Schank.

https://cse.buffalo.edu/~rapaport/676/F01/neat.scruffy.txt

    Article: 35704 of comp.ai
    From: engelson@bimacs.cs.biu.ac.il (Dr. Shlomo (Sean) Engelson)
    Newsgroups: comp.ai
    Subject: Re: who first used "scruffy" and "neat"?
    Date: 25 Jan 1996 08:17:13 GMT
    Organization: Bar-Ilan University Computer Science

    In article <4e2th9$lkm@cantaloupe.srv.cs.cmu.edu> Lonnie Chrisman <ldc+@cs.cmu.edu> writes:

        so@brownie.cs.wisc.edu (Bryan So) wrote:
        >A question of curiosity.  Who first used the terms "scruffy" and "neat"?
        >And in what document?  How about "strong" and "weak"?

        Since I don't see a response yet, I'll take a stab.  The earliest use of
        "scruffy" and "neat" that comes to my mind was in David Chapman's "Planning
        for Conjunctive Goals", Artificial Intelligence 32:333-377, 1987.  "Weak"
        evidence for this being the earliest use is that he does not cite any earlier
        use of the terms, but perhaps someone else will correct me and give an 
        earlier citation.

    One earlier citation is Eugene Charniak's paper in AAAI 1986, "A Neat
    Theory of Marker Passing".  I think, though, that the terms go way
    back in common parlance, almost certainly to the 70s at least.  Any of
    the "old-timers" out there like to comment?
[...]

    Article: 35781 of comp.ai
    From: fass@cs.sfu.ca (Dan Fass)
    Newsgroups: comp.ai
    Subject: Re: who first used "scruffy" and "neat"?
    Date: 26 Jan 1996 10:03:35 -0800
    Organization: Simon Fraser University, Burnaby, B.C.

    Abelson (1981) credits the neat/scruffy distinction to Roger Schank. 
    Abelson says, ``an unnamed but easily guessable colleague of mine 
    ... claims that the major clashes in human affairs are between the
    "neats" and the "scruffies".  The primary concern of the neat is
    that things should be orderly and predictable while the scruffy 
    seeks the rough-and-tumble of life as it comes'' (p. 1).

    Abelson (1981) argues that these two prototypic identities --- neat 
    and scruffy --- ``cause a very serious clash'' in cognitive science 
    and explores ``some areas in which a fusion of identities seems 
    possible'' (p. 1).

    - Dan Fass

    REF

    Abelson, Robert P. (1981).
    Constraint, Construal, and Cognitive Science.
    Proceedings of the 3rd Annual Conference of the Cognitive Science 
    Society, Berkeley, CA, pp. 1-9.
[...]

Aaron Sloman, 1989: "Introduction: Neats vs Scruffies"

https://www.cs.bham.ac.uk//research/projects/cogaff/misc/scr...

>There has been a long-standing opposition within AI between "neats" and "scruffies" (I think the terms were first invented in the late 70s by Roger Schank and/or Bob Abelson at Yale University).

>The neats regard it as a disgrace that many AI programs are complex, ill-structured, and so hard to understand that it is not possible to explain or predict their behaviour, let alone prove that they do what they are intended to do. John McCarthy in a televised debate in 1972 once complained about the "Look Ma no hands!" approach. Similarly, Carl Hewitt, complained around the same time, in seminars, about the "Hairy kludge (pronounced klooge) a month" approach to software development. (His "actor" system was going to be a partial solution to this.)

>The scruffies regard messy complexity as inevitable in intelligent systems and point to the failure so far of all attempts to find workable clear and general mechanisms, or mathematical solutions to any important AI problems. There are nice ideas in the General Problem Solver, logical theorem provers, and suchlike but when confronted with non-toy problems they normally get bogged down in combinatorial explosions. Messy complexity, according to scruffies, lies in the nature of problem domains (e.g. our physical environment) and only by using large numbers of ad-hoc special-purpose rules or heuristics, and specially tailored representational devices can problems be solved in a reasonable time.

Roger Schank

https://en.wikipedia.org/wiki/Roger_Schank

Robert Abelson

https://en.wikipedia.org/wiki/Robert_Abelson

Marvin Minsky

https://en.wikipedia.org/wiki/Marvin_Minsky

Neats and scruffies

https://en.wikipedia.org/wiki/Neats_and_scruffies

>Scruffy projects in the 1980s

>The scruffy approach was applied to robotics by Rodney Brooks in the mid-1980s. He advocated building robots that were, as he put it, Fast, Cheap and Out of Control, the title of a 1989 paper co-authored with Anita Flynn. Unlike earlier robots such as Shakey or the Stanford cart, they did not build up representations of the world by analyzing visual information with algorithms drawn from mathematical machine learning techniques, and they did not plan their actions using formalizations based on logic, such as the 'Planner' language. They simply reacted to their sensors in a way that tended to help them survive and move.[13]

>Douglas Lenat's Cyc project, initiated in 1984, one of the earliest and most ambitious projects to capture all of human knowledge in machine readable form, is "a determinedly scruffy enterprise".[14] The Cyc database contains millions of facts about all the complexities of the world, each of which must be entered one at a time, by knowledge engineers. Each of these entries is an ad hoc addition to the intelligence of the system. While there may be a "neat" solution to the problem of commonsense knowledge (such as machine learning algorithms with natural language processing that could study the text available over the internet), no such project has yet been successful.

[...]

>John Brockman writes "Chomsky has always adopted the physicist's philosophy of science, which is that you have hypotheses you check out, and that you could be wrong. This is absolutely antithetical to the AI philosophy of science, which is much more like the way a biologist looks at the world. The biologist's philosophy of science says that human beings are what they are, you find what you find, you try to understand it, categorize it, name it, and organize it. If you build a model and it doesn't work quite right, you have to fix it. It's much more of a "discovery" view of the world."[4]


I think this all convinces me that neither of the things we're discussing here is "neat". Certainly contemporary LLMs don't seem to fit that definition at all, despite being "mathematical".


It's not really up to you to redefine the meaning of well-known, long-standing historic technical terms. They're well defined, widely understood, frequently discussed terms that you would learn if you studied the history of AI, read any of the numerous papers and books and discussions about it that I already cited and quoted, and learned about the works of CogSci and AI pioneers like Robert Abelson, Roger Schank, Marvin Minsky, and others who've written numerous papers about it.

The wikipedia page about Neats and Scruffies that I linked you to is in my opinion well written, clearly defines the meaning of each term, and presents plenty of evidence and citations and background. I'll give you the benefit of the doubt that of course you've already read and understand it, so if you disagree with the history and citations on the wikipedia page and all the original papers and books and people cited and quoted, and can present better evidence and arguments to prove that you're right and they're all wrong, then you are free to go try to rewrite history by sharing your own definitions and citations, and correcting the errors on wikipedia. Good luck! I suggest you start by writing suggestions and presenting your evidence on the talk page first, instead of just directly editing the wikipedia page itself, to see what other experts in the field think and achieve consensus, or else it will likely be considered vandalism and be reverted.

You seem to be missing the point that the world is not strictly black and white, and ever since the terms were originally coined, the people who defined them and many other people have strongly recommended fusing both the "neat" and "scruffy" approaches, and LLMs actually do incorporate some ad-hoc "scruffy" aspects into their mathematical "neat" approach, and that's why they work so much better than simple perceptrons or neural nets. But they are still much more "neat" than "scruffy", and combining the two approaches does not flip the meaning of the two terms. I just discussed the fusion of scruffy and neat here, and quoted the original 41-year-old essay from 1982 by Robert Abelson that defined the terms and recommended fusing the two different approaches:

https://news.ycombinator.com/item?id=37359235

And also:

https://news.ycombinator.com/item?id=37354564

But before you go off and edit the Neats and Scruffies wikipedia page with your own definitions, please take the time to read the original essay by Robert Abelson that defines the terms first, like I did. In the link above, I cited it, tracked down the pdf, and quoted the relevant part of it for you, but you should probably do your homework first and read the whole thing before editing the wikipedia page about it. But be aware that it uses a lot of other technical terms and jargon that have well known definitions to practitioners in the field, so the common layman definitions of words you learned in grammar school may not apply.

Cyc is clearly the paradigm of "scruffy" and like biology, and perceptrons and neural nets are clearly the paradigm of "neat" and like physics, and that's how those terms have been widely used for more than four decades.


There's no need to be rude! I haven't proposed alternate definitions, I have merely commented on how well the words used for the AI jargon terms fit with the words in the english language. (And I've appreciated your links on the historical development, which I would not have sought out on my own.)

I think what's interesting in the jargon vs. plain definition tension here is related to what you noted in this most recent comment. It's that the words "neat" and "scruffy" - that is, just the english words, not the AI jargon terms - are not really symmetrical. A scruffy thing can easily become more neat while remaining scruffy, but introducing scruffiness into a neat thing tends to just make it scruffy. Neat is more totalizing.

So you say LLMs still fall into the "neat" camp - AI jargon this time - because of their mathematical core and lineage, and that's fair enough. But you also say that they incorporate "scruffy" techniques - jargon again - and I think that makes them - switching to the english words here - seem pretty scruffy, because the scruffy techniques are themselves scruffy, and incorporating all these different techniques is itself a scruffy thing to do.


Definitely would be nice to have a ChatGPT that could reference an ontology to fact check itself.


Why not combine the two approaches? A bicameral mind, of sorts?


I'm sure somebody somewhere is working on it. I've already seen articles teaching LLMs to offload math problems onto a separate module, rather than trying to solve them via the murk of the neural network.

I suppose you'd architect it as a layer. It wants to say something, and the ontology layer says, "No, that's stupid, say something else". The ontology layer can recognize ontology-like statements and use them to build and evolve the ontology.
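
A rough sketch of that layering in Python (generate(), extract_assertions(), and the tiny knowledge base below are all hypothetical stand-ins, not real APIs; the point is only the check-and-retry loop):

    # Hypothetical stand-ins throughout; the point is the veto-and-retry loop.
    KNOWN_FALSE = {("thumb_count", "typical_person", 3)}   # tiny stand-in for an ontology

    def generate(prompt, feedback=None):
        # Stub standing in for an LLM call; it revises the draft when given feedback.
        return "She had three thumbs." if feedback is None else "She had two thumbs."

    def extract_assertions(draft):
        # Stub standing in for parsing a draft answer into checkable triples.
        return [("thumb_count", "typical_person", 3)] if "three thumbs" in draft else []

    def answer(prompt, max_retries=3):
        feedback = None
        for _ in range(max_retries):
            draft = generate(prompt, feedback)
            bad = [a for a in extract_assertions(draft) if a in KNOWN_FALSE]
            if not bad:
                return draft                       # the ontology layer has no objection
            feedback = f"These claims conflict with the knowledge base: {bad}"
        return "I don't know."                     # refuse rather than emit nonsense

    print(answer("How many thumbs did Lincoln's maternal grandmother have?"))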

It would be even more interesting built into the visual/image models.

I have no idea if that's any kind of real progress, or if it's merely filtering out the dumb stuff. A good service, to be sure, but still not "AGI", whatever the hell that turns out to be.

Unless it turns out to be the missing element that puts it over the top. If I had any idea I wouldn't have been working with Cyc in the first place.


There are absolutely people working on this concept. In fact, the two day long "Neuro-Symbolic AI Summer School 2023"[1] just concluded earlier this week. It was two days of hearing about cutting edge research at the intersection of "neural" approaches (taking a big-tent view where that included most probabilistic approaches) and "symbolic" (eg, "logic based") approaches. And while this approach might not be the contemporary mainstream approach, there were some heavy hitters presenting, including the likes of Leslie Valiant and Yoshua Bengio.

[1]: https://neurosymbolic.github.io/nsss2023/


https://arxiv.org/pdf/2308.04445.pdf

is precisely Doug Lenat & Gary Marcus' thoughts on how to combine them (July 31st 2023, Lenat's last paper)


That's right, and left! ;) Fusing the "scruffy" and "neat" approaches has been the idea since the terms were coined by Roger Schank in the 70's and written about in 1982 by Robert Abelson in his Major Address of the Proceedings of the 3rd Annual Conference of the Cognitive Science Society in "Constraint, Construal, and Cognitive Science" (page 1).

His question is: Is it preferable for scruffies to become neater, or for neats to become scruffier? His answer explains why he aspires to be a neater scruffy.

"But I use the example as symptomatic of one kind of approach to the cognitive science fusion problem: you start from a neat, right-wing point of view, but acknowledge some limited role for scruffy, left-wing orientations. The other type of approach is the obvious mirror: you start from the disorderly leftwing side and struggle to be neater about what you are doing. I prefer the latter approach to the former. I will tell you why, and then lay out the beginnings of such an approach."

https://cse.buffalo.edu/~rapaport/676/F01/neat.scruffy.txt

    Article: 35781 of comp.ai
    From: fass@cs.sfu.ca (Dan Fass)
    Newsgroups: comp.ai
    Subject: Re: who first used "scruffy" and "neat"?
    Date: 26 Jan 1996 10:03:35 -0800
    Organization: Simon Fraser University, Burnaby, B.C.

    Abelson (1981) credits the neat/scruffy distinction to Roger Schank. 
    Abelson says, ``an unnamed but easily guessable colleague of mine 
    ... claims that the major clashes in human affairs are between the
    "neats" and the "scruffies".  The primary concern of the neat is
    that things should be orderly and predictable while the scruffy 
    seeks the rough-and-tumble of life as it comes'' (p. 1).

    Abelson (1981) argues that these two prototypic identities --- neat 
    and scruffy --- ``cause a very serious clash'' in cognitive science 
    and explores ``some areas in which a fusion of identities seems 
    possible'' (p. 1).

    - Dan Fass

    REF

    Abelson, Robert P. (1981).
    Constraint, Construal, and Cognitive Science.
    Proceedings of the 3rd Annual Conference of the Cognitive Science 
    Society, Berkeley, CA, pp. 1-9.
https://cognitivesciencesociety.org/wp-content/uploads/2019/...

[I'll quote the most relevant first part of the article, which is still worth reading in its entirety if you have time, since scanned two column pdf files are so hard to read on mobile, and it's so interesting and relevant to Douglas Lenat's work on Cyc.]

CONSTRAINT, CONSTRUAL, AND COGNITIVE SCIENCE

Robert P. Abelson, Yale University

Cognitive science has barely emerged as a discipline -- or an interdiscipline, or whatever it is -- and already it is having an identity crisis.

Within us and among us we have many competing identities. Two particular prototypic identities cause a very serious clash, and I would like to explicate this conflict and then explore some areas in which a fusion of identities seems possible. Consider the two-word name "cognitive science". It represents a hybridization of two different impulses. On the one hand, we want to study human and artificial cognition, the structure of mental representations, the nature of mind. On the other hand, we want to be scientific, be principled, be exact. These two impulses are not necessarily incompatible, but given free rein they can develop what seems to be a diametric opposition.

The study of the knowledge in a mental system tends toward both naturalism and phenomenology. The mind needs to represent what is out there in the real world, and it needs to manipulate it for particular purposes. But the world is messy, and purposes are manifold. Models of mind, therefore, can become garrulous and intractable as they become more and more realistic. If one's emphasis is on science more than on cognition, however, the canons of hard science dictate a strategy of the isolation of idealized subsystems which can be modeled with elegant productive formalisms. Clarity and precision are highly prized, even at the expense of common sense realism. To caricature this tendency with a phrase from John Tukey (1959), the motto of the narrow hard scientist is, "Be exactly wrong, rather than approximately right".

The one tendency points inside the mind, to see what might be there. The other points outside the mind, to some formal system which can be logically manipulated (Kintsch et al., 1981). Neither camp grants the other a legitimate claim on cognitive science. One side says, "What you're doing may seem to be science, but it's got nothing to do with cognition." The other side says, "What you're doing may seem to be about cognition, but it's got nothing to do with science."

Superficially, it may seem that the trouble arises primarily because of the two-headed name cognitive science. I well remember the discussions of possible names; even though I never liked "cognitive science", the alternatives were worse: abominations like "epistology" or "representonomy".

But in any case, the conflict goes far deeper than the name itself. Indeed, the stylistic division is the same polarization that arises in all fields of science, as well as in art, in politics, in religion, in child rearing -- and in all spheres of human endeavor. Psychologist Silvan Tomkins (1965) characterizes this overriding conflict as that between characterologically left-wing and right-wing world views. The left-wing personality finds the sources of value and truth to lie within individuals, whose reactions to the world define what is important. The right-wing personality asserts that all human behavior is to be understood and judged according to rules or norms which exist independent of human reaction. A similar distinction has been made by an unnamed but easily guessable colleague of mine, who claims that the major clashes in human affairs are between the "neats" and the "scruffies". The primary concern of the neat is that things should be orderly and predictable while the scruffy seeks the rough-and-tumble of life as it comes.

I am exaggerating slightly, but only slightly, in saying that the major disagreements within cognitive science are instantiations of a ubiquitous division between neat right-wing analysis and scruffy left-wing ideation. In truth there are some signs of an attempt to fuse or to compromise these two tendencies. Indeed, one could view the success of cognitive science as primarily dependent not upon the cooperation of linguistics, AI, psychology, etc., but rather, upon the union of clashing world views about the fundamental nature of mentation. Hopefully, we can be open minded and realistic about the important contents of thought at the same time we are principled, even elegant, in our characterizations of the forms of thought.

The fusion task is not easy. It is hard to neaten up a scruffy or scruffy up a neat. It is difficult to formalize aspects of human thought which are variable, disorderly, and seemingly irrational, or to build tightly principled models of realistic language processing in messy natural domains. Writings about cognitive science are beginning to show a recognition of the need for world-view unification, but the signs of strain are clear. Consider the following passage from a recent article by Frank Keil (1981) in Psychological Review, giving background for a discussion of his formalistic analysis of the concept of constraint:

"Constraints will be defined...as formal restrictions that limit the class of logically possible knowledge structures that can normally be used in a given cognitive domain." (p. 198).

Now, what is the word "normally" doing in a statement about logical possibility? Does it mean that something which is logically impossible can be used if conditions are not normal? This seems to require a cognitive hyperspace where the impossible is possible.

It is not my intention to disparage an author on the basis of a single statement infelicitously put. I think he was genuinely trying to come to grips with the reality that there is some boundary somewhere to the penetration of his formal constraint analysis into the vicissitudes of human affairs. But I use the example as symptomatic of one kind of approach to the cognitive science fusion problem: you start from a neat, right-wing point of view, but acknowledge some limited role for scruffy, left-wing orientations. The other type of approach is the obvious mirror: you start from the disorderly leftwing side and struggle to be neater about what you are doing. I prefer the latter approach to the former. I will tell you why, and then lay out the beginnings of such an approach.

[...]

To read why and how:

https://cognitivesciencesociety.org/wp-content/uploads/2019/...


As Roger Schank defined the terms in the 70's, "neat" refers to using a single formal paradigm (logic, math, neural networks, or LLMs), like physics. "Scruffy" refers to combining many different algorithms and approaches (symbolic manipulation, hand-coded logic, knowledge engineering, Cyc), like biology.

I believe both approaches are useful and can be combined, layered, and fed back into each other, so that each reinforces and complements the other's advantages and transcends its limitations.

Kind of like how Hailey and Justin Bieber make the perfect couple: ;)

https://edition.cnn.com/style/hailey-justin-bieber-couples-f...

Marvin L Minsky: Logical Versus Analogical or Symbolic Versus Connectionist or Neat Versus Scruffy

https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...

https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...

"We should take our cue from biology rather than physics..." -Marvin Minsky

>To get around these limitations, we must develop systems that combine the expressiveness and procedural versatility of symbolic systems with the fuzziness and adaptiveness of connectionist representations. Why has there been so little work on synthesizing these techniques? I suspect that it is because both of these AI communities suffer from a common cultural-philosophical disposition: They would like to explain intelligence in the image of what was successful in physics—by minimizing the amount and variety of its assumptions. But this seems to be a wrong ideal. We should take our cue from biology rather than physics because what we call thinking does not directly emerge from a few fundamental principles of wave-function symmetry and exclusion rules. Mental activities are not the sort of unitary or elementary phenomenon that can be described by a few mathematical operations on logical axioms. Instead, the functions performed by the brain are the products of the work of thousands of different, specialized subsystems, the intricate product of hundreds of millions of years of biological evolution. We cannot hope to understand such an organization by emulating the techniques of those particle physicists who search for the simplest possible unifying conceptions. Constructing a mind is simply a different kind of problem—how to synthesize organizational systems that can support a large enough diversity of different schemes yet enable them to work together to exploit one another’s abilities.

https://en.wikipedia.org/wiki/Neats_and_scruffies

>In the history of artificial intelligence, neat and scruffy are two contrasting approaches to artificial intelligence (AI) research. The distinction was made in the 70s and was a subject of discussion until the middle 80s.[1][2][3]

>"Neats" use algorithms based on a single formal paradigms, such as logic, mathematical optimization or neural networks. Neats verify their programs are correct with theorems and mathematical rigor. Neat researchers and analysts tend to express the hope that this single formal paradigm can be extended and improved to achieve general intelligence and superintelligence.

>"Scruffies" use any number of different algorithms and methods to achieve intelligent behavior. Scruffies rely on incremental testing to verify their programs and scruffy programming requires large amounts of hand coding or knowledge engineering. Scruffies have argued that general intelligence can only be implemented by solving a large number of essentially unrelated problems, and that there is no magic bullet that will allow programs to develop general intelligence autonomously.

>John Brockman compares the neat approach to physics, in that it uses simple mathematical models as its foundation. The scruffy approach is more like biology, where much of the work involves studying and categorizing diverse phenomena.[a]

[...]

>Modern AI as both neat and scruffy

>New statistical and mathematical approaches to AI were developed in the 1990s, using highly developed formalisms such as mathematical optimization and neural networks. Pamela McCorduck wrote that "As I write, AI enjoys a Neat hegemony, people who believe that machine intelligence, at least, is best expressed in logical, even mathematical terms."[6] This general trend towards more formal methods in AI was described as "the victory of the neats" by Peter Norvig and Stuart Russell in 2003.[18]

>However, by 2021, Russell and Norvig had changed their minds.[19] Deep learning networks and machine learning in general require extensive fine tuning -- they must be iteratively tested until they begin to show the desired behavior. This is a scruffy methodology.


Neats and scruffies also showed up in The X-Files in their first AI episode.


Why didn't it work out?


I don't know if there's really an answer to that, beyond noting that it never turned out to be more than the sum of its parts. It was a large ontology and a hefty logic engine. You put in queries and you got back answers.

The goal was that in a decade it would become self-sustaining. It would have enough knowledge that it could start reading natural language. And it just... didn't.

Contrast it with LLMs and diffusion and such. They make stupid, asinine mistakes -- real howlers, because they don't understand anything at all about the world. If it could draw, Cyc would never draw a human with 7 fingers on each hand, because it knows that most humans have 5. (It had a decent-ish ontology of human anatomy which could handle injuries and birth defects, but would default reason over the normal case.) I often see ChatGPT stumped by simple variations of brain teasers, and Cyc wouldn't make those mistakes -- once you'd translated them into CycL (its language, because it couldn't read natural language in any meaningful way).
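
The "default reasoning over the normal case" behavior described above can be illustrated with a toy sketch. This is not Cyc's actual machinery or CycL syntax; the individuals, predicates, and numbers below are invented for illustration only:

    # Toy default reasoning: fall back to the class default unless a more
    # specific exception has been asserted for the individual.
    DEFAULTS = {("Human", "fingersPerHand"): 5}      # the normal case
    EXCEPTIONS = {("Frodo", "fingersPerHand"): 4}    # hypothetical individual with an injury
    ISA = {"Alice": "Human", "Frodo": "Human"}       # individual -> class

    def fingers_per_hand(individual: str) -> int:
        key = (individual, "fingersPerHand")
        if key in EXCEPTIONS:                        # specific knowledge wins
            return EXCEPTIONS[key]
        return DEFAULTS[(ISA[individual], "fingersPerHand")]

    print(fingers_per_hand("Alice"))  # 5, the default
    print(fingers_per_hand("Frodo"))  # 4, the asserted exception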

But those same models do a scary job of passing the Turing Test. Nobody would ever have thought to try it on Cyc. It was never anywhere close.

Philosophically I can't say why Cyc never developed "magic" and LLMs (seemingly) do. And I'm still not convinced that they're on the right path, though they actually have some legitimate usages right now. I tried to find uses for Cyc in exactly the opposite direction, guaranteeing data quality, but it turned out nobody really wanted that.


One sense that I've had of LLM / generative AIs is that they lack "bones", in the sense that there's no underlying structure to which they adhere, only outward appearances which are statistically correlated (using fantastically complex statistical correlation maps).

Cyc, on the other hand, lacks flesh and skin. It's all skeleton and can generate facts but not embellish them into narratives.

The best human writing has both, much as artists (traditional painters, sculptors, and more recently computer animators) have a skeleton (outline, index cards, Zettelkasten, wireframe) to which flesh, skin, and fur are attached. LLM generative AIs are too plastic; Cyc is insufficiently plastic.

I suspect there's some sort of a middle path between the two. Though that path and its destination also increasingly terrify me.


>because they don't understand anything at all about the world.

LLMs understand plenty, in any way that can be tested. It's really funny when I see mistakes taken as evidence of a lack of understanding. By that standard, people don't understand anything at all either.

> I often see ChatGPT stumped by simple variations of brain teasers

Only if everything else is exactly like the basic teaser, and guess what? Humans fall for this too. They see something they've memorized and go full speed ahead. Simply changing the names is enough to get it to solve it.


Thanks - that was the kind of answer I wanted. Is there any work trying to "merge" the two together?


Had? Cycorp is still around and deploying their software.


Sounds similar to WolframAlpha?


Take a look at https://en.m.wikipedia.org/wiki/SHRDLU

Cyc is sort of like that, but for everything. Not just a small limited world. I believe it didn’t work out because it’s really hard.


If we are to develop understandable AGI, I think that some kind of (mathematically correct) probabilistic reasoning based on a symbolic knowledge base is the way to go. You would probably need to have some version of a Neural Net on the front end to make it useful though.

So you'd use the NN to recognize that the thing in front of the camera is a cat, and that would be fed into the symbolic knowledge base for further reasoning.

The knowledge base will contain facts like the cat is likely to "meow" at some point, especially if it wants attention. Based on the relevant context, the knowledge base would also know that the cat is unlikely to be able to talk, unless it is a cat in a work of fiction, for example.
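
A minimal sketch of that pipeline in Python; the classifier stub, the fact table, and the probabilities are all invented for illustration and not taken from any real system:

    from dataclasses import dataclass

    @dataclass
    class Percept:
        concept: str       # symbol emitted by the neural front end, e.g. "Cat"
        confidence: float  # classifier confidence in [0, 1]

    def neural_classifier(image_bytes: bytes) -> Percept:
        """Stand-in for a real vision model."""
        return Percept(concept="Cat", confidence=0.93)

    # Probabilistic facts keyed by (concept, context); the numbers are made up.
    KNOWLEDGE_BASE = {
        ("Cat", "real-world"): {"can_meow": 0.99, "can_talk": 0.001},
        ("Cat", "fiction"):    {"can_meow": 0.99, "can_talk": 0.35},
    }

    def expected_behaviors(percept: Percept, context: str) -> dict:
        """Weight each knowledge-base probability by the perceptual confidence."""
        facts = KNOWLEDGE_BASE.get((percept.concept, context), {})
        return {behavior: p * percept.confidence for behavior, p in facts.items()}

    percept = neural_classifier(b"...camera frame...")
    print(expected_behaviors(percept, "real-world"))  # can_meow ~0.92, can_talk ~0.001

The point is only the division of labor: perception stays statistical, while the facts and their context-dependent exceptions stay as inspectable symbols.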


At Leela AI we're developing hybrid symbolic-connectionist constructivist AI, combining "neat" neural networks with "scruffy" symbolic logic, enabling unsupervised machine learning that understands cause and effect and teaches itself, motivated by intrinsic curiosity.

Leela AI was founded by Henry Minsky and Cyrus Shaoul, and is inspired by ideas about child development by Jean Piaget, Seymour Papert, Marvin Minsky, and Gary Drescher (described in his book “Made-Up Minds”).

https://mitpress.mit.edu/9780262517089/made-up-minds/

https://leela.ai/leela-core

>Leela Platform is powered by Leela Core, an innovative AI engine based on research at the MIT Artificial Intelligence Lab. With its dynamic combination of traditional neural networks for pattern recognition and causal-symbolic networks for self-discovery, Leela Core goes beyond accurately recognizing objects to comprehend processes, concepts, and causal connections.

>Leela Core is much faster to train than conventional NNs, using 100x less data and enabling 10x less time-to-value. This highly resilient AI can quickly adjust to changes and explain what it is sensing and doing via the Leela Viewer dashboard. [...]

The key to regulating AI is explainability. The key to explainability may be causal AI.

https://leela.ai/post/the-key-to-regulating-ai-is-explainabi...

>[...] For example, the Leela Core engine that drives the Leela Platform for visual intelligence in manufacturing adds a symbolic causal agent that can reason about the world in a way that is more familiar to the human mind than neural networks. The causal layer can cross-check Leela Core's traditional NN components in a hybrid causal/neural architecture. Leela Core is already better at explaining its decisions than NN-only platforms, making it easier to troubleshoot and customize. Much greater transparency is expected in future versions. [...]


I think this is an interesting approach. A child may collectively spend hours looking at toy blocks, training themselves to understand how what they see maps to an object in three-dimensional space. But later on, the child may see a dog for only a few seconds and be able to construct an internal model of what a dog is. So the child may initially see the dog standing and pointing to the left, but later the child will be able to recognize a dog lying on the floor pointing to the right. And do that without thousands of training examples, because they have constructed an internal mental model of what a dog is. This model is imperfect, and if the child has never seen a cat before, that might be recognized as a dog too.


"The essentialist tradition, in contrast to the tradition of differential ontology, attempts to locate the identity of any given thing in some essential properties or self-contained identities"

Maybe essentialism just does not work.

https://iep.utm.edu/differential-ontology/


As far as I can tell it was more of an aspiration than a product. I worked with a consulting firm that tried to get into AI a few years back and chose Cyc as the platform they wanted to sell to (mostly financial) clients. I don't think a single project ever even started, nor was there a clear picture of what could be sold. I hate to think Lenat was a fraud because he certainly seemed like a sincere and brilliant person, but I think Cyc was massively oversold despite never doing much of anything useful. The website is full of technical language and not a single case study after 40 years in business.


Unfortunately, visiting cyc.com I only see a bunch of business BS, and the "Documentation" page shows nothing without logging in.


Weird, I interviewed with him in the summer of 2021, hoping to land an ontologist job at Cycorp. It went spectacularly badly because it turned out I really needed to brush up on my formal logic skills, but I was surprised to even get an interview, let alone with the man himself. He still encouraged me to keep reviewing logic and to apply again in the future, but I stopped seeing listings at Cycorp for ontologists and kept putting off returning to that aspiration, thinking Cycorp had been around long enough that there was no rush. Memento mori


Here's a 2016 Wired article about Doug Lenat, the guy who made Eurisko and Cyc: https://www.wired.com/2016/03/doug-lenat-artificial-intellig...


I've often thought that Cyc had an enormous value as some kind of component for AI, a "baseline truth" about the universe (to the degree that we understand it and have "explained" our understanding to Cyc in terms of its frames). AM (no relation to any need for screaming) was a taste of the AI dream.


>I've often thought that Cyc had an enormous value as some kind of component for AI

Same. I wonder if training an LLM on the database would make it more "grounded"? We'll probably never know as Cycorp will keep the data locked away in their vaults forever. For what purpose? Probably even they don't know.

>AM (no relation to any need for screaming)

heh.


Sad to hear of his passing, I remember building my uni project around OpenCyc in my one “Intelligent Systems” class many many years ago. It was a dismal failure as my ambition far exceeded my skills, but it was so enjoyable reading about Cyc and the dedicated work Douglas had put in over such a long time.


Even though he was a controversial figure, he was one of my heroes. Getting excited about Eurisko in the '80s and '90s was a big driver for me at the time! Rest in peace, dear computer pioneer!


He was a hero of knowledge representation and ontology. A bit odd that we learn about his sad passing from a Wikipedia article, while at the time of this comment there is still no mention on e.g. https://cyc.com/.


Thirteen hours later, still no mention on the Cycorp website. The press doesn't seem to have noticed either. Pretty odd.

The post originally pointed to Lenat's Wikipedia page; now it's an obituary by Gary Marcus which seems more appropriate.


I worked on Cyc in the early 90s, briefly [https://dl.acm.org/doi/pdf/10.1145/165529.993430 shows my MCC address on p.10 :)]. I cherish wonderful memories of being on that project.

Doug was amazing - bold, brilliant, visionary, charismatic.

RIP, dear leader.




Maybe it's a bit on the nose but I had his article summarized by Anthropic's Claude 2 100k model (LLMs are good at summarization) for those who don't have time to read the whole thing:

The article discusses generative AI models like ChatGPT and contrasts them with knowledge-based AI systems like Cyc.

Generative models can produce very fluent text, but they lack true reasoning abilities and can make up plausible-sounding but false information. This makes them untrustworthy.

In contrast, Cyc represents knowledge explicitly and can logically reason over it. This makes it more reliable, though it struggles with natural language and speed.

The article proposes 16 capabilities an ideal AI system should have, including explanation, reasoning, knowledge, ethics, and language skills. Cyc and generative models each have strengths and weaknesses on these dimensions.

The authors suggest combining symbolic systems like Cyc with generative models to get the best of both approaches. Ways to synergize them include:

Using Cyc to filter out false information from generative models (a toy sketch of this appears after the list).

Using Cyc's knowledge to train generative models to be more correct.

Using generative models to suggest knowledge to add to Cyc's knowledge base.

Using Cyc's reasoning to expand what generative models can say.

Using Cyc to explain the reasoning behind generative model outputs.
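
As a toy sketch of the first synergy (filtering), here is one way a knowledge base could veto model statements that contradict known facts. The triple format, the fact store, and the hard-coded claim extractor are invented for illustration and are not the paper's actual proposal:

    # Invented fact store: (subject, predicate) -> known value.
    KNOWN_FACTS = {
        ("human", "fingers_per_hand"): "5",
        ("cat", "speaks_natural_language"): "no",
    }

    def extract_claims(model_output: str) -> list:
        """Stand-in for claim extraction; a real system would parse the text."""
        return [
            ("human", "fingers_per_hand", "7"),        # contradicts the KB
            ("cat", "speaks_natural_language", "no"),  # agrees with the KB
        ]

    def filter_claims(claims):
        """Keep claims the KB agrees with or knows nothing about; drop conflicts."""
        kept, rejected = [], []
        for subject, predicate, value in claims:
            known = KNOWN_FACTS.get((subject, predicate))
            (rejected if known is not None and known != value else kept).append(
                (subject, predicate, value))
        return kept, rejected

    kept, rejected = filter_claims(extract_claims("...model text..."))
    print("kept:", kept)          # the claim about cats
    print("rejected:", rejected)  # the 7-finger claim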

Overall, the article argues combining reasoning-focused systems like Cyc with data-driven generative models could produce more robust and trustworthy AI. Each approach can shore up weaknesses of the other.

May he rest in peace.


Related. Others?

Cyc - https://news.ycombinator.com/item?id=33011596 - Sept 2022 (2 comments)

Why AM and Eurisko Appear to Work (1983) [pdf] - https://news.ycombinator.com/item?id=28343118 - Aug 2021 (17 comments)

Early AI: “Eurisko, the Computer with a Mind of Its Own” (1984) - https://news.ycombinator.com/item?id=27298167 - May 2021 (2 comments)

Cyc - https://news.ycombinator.com/item?id=21781597 - Dec 2019 (173 comments)

Some documents on AM and EURISKO - https://news.ycombinator.com/item?id=18443607 - Nov 2018 (10 comments)

One genius's lonely crusade to teach a computer common sense (2016) - https://news.ycombinator.com/item?id=16510766 - March 2018 (1 comment)

Douglas Lenat's Cyc is now being commercialized - https://news.ycombinator.com/item?id=11300567 - March 2016 (49 comments)

Why AM and Eurisko Appear to Work (1983) [pdf] - https://news.ycombinator.com/item?id=9750349 - June 2015 (5 comments)

Ask HN: Cyc – Whatever happened to its connection to AI? - https://news.ycombinator.com/item?id=9566015 - May 2015 (3 comments)

Eurisko, The Computer With A Mind Of Its Own - https://news.ycombinator.com/item?id=2111826 - Jan 2011 (9 comments)

Open Cyc (open source common sense) - https://news.ycombinator.com/item?id=1913994 - Nov 2010 (22 comments)

Lenat (of Cyc) reviews Wolfram Alpha - https://news.ycombinator.com/item?id=510579 - March 2009 (16 comments)

Eurisko, The Computer With A Mind Of Its Own - https://news.ycombinator.com/item?id=396796 - Dec 2008 (13 comments)

Cycorp, Inc. (Attempt at Common Sense AI) - https://news.ycombinator.com/item?id=20725 - May 2007 (1 comment)


*Getting from Generative AI to Trustworthy AI: What LLMs Might Learn from Cyc* - https://news.ycombinator.com/item?id=37354601 - RIGHT NOW

HN location for discussion of Lenat's last paper (with Gary Marcus) about ways to reconcile Cyc's strengths with LLMs.


Perhaps some here aren't familiar with the existence of a (relatively useless in my opinion) POV that pits symbolic systems against statistical methods. But it isn't a zero-sum game. Informed, insightful comparisons are useful, but "holy wars" are not. See also [1] for broad commentary and [2] for a particular application.

[1] https://medium.com/@jcbaillie/beyond-the-symbolic-vs-non-sym...

[2] https://past.date-conference.com/proceedings-archive/2016/pd...


This sibling thread is apropos too: https://news.ycombinator.com/item?id=37356435


Ahh, another one of the old guard has moved on. Here are two excerpts from the book AI: The Tumultuous History Of The Search For Artificial Intelligence (a fantastic read about the early days of AI) to remember him by:

"Lenat found out about computers in a a manner typical of his entrepreneurial spirit. As a high school student in Philadelphia, working for $1.00 an hour to clean the cages of experimental animals, he discovered that another student was earning $1.50 to program the institution's minicomputer. Finding this occupation more to his liking, he taught himself programming over a weekend and squeezed his competitor out of the job by offering to work for fifty cents an hour less.31 A few years later, Lenat was programming Automated Mathematician (AM, for short) as a doctoral thesis project at the Stanford AI Laboratory." p. 178

And here's an account of an early victory for AI in gaming against humans by Lenat's EURISKO system (https://en.wikipedia.org/wiki/Eurisko):

"Ever the achiever, Lenat was looking for a more dramatic way to prove teh capabilities of his creation. The identified the occasion space-war game called Traveler TCS, then quite popular with the public Lenat wanted to reach. The idea was for each player to design a fleet of space battleships according to a thick, hundred-page set of rules. Within a budget limit of one trillion galactic credits, one could adjust such parameters as the size, speed, armor thickness, autonomy and armament of each ship: about fifty adjustments per ship were needed. Since the fleet size could reach a hundred ships, the game thus offered ample room for ingenuity in spite of the anticlimactic character of the battles. These were fought by throwing dice following complex tables based on probability of survival of each ship according to its design. The winner of the yearly national championship was commissioned inter galactic admiral and received title to a planet of his or her choice ouside the solar system.

Several months before the 1981 competition, Lenat fed into EURISKO 146 Traveller concepts, ranging from the nature of games in general to the technicalities of meson guns. He then instructed the program to develop heuristics for making winning war-fleet designs. The now familiar routine of nightly computer runs turned into a merciless Darwinian contest: Lenat and EURISKO together designed fleets that battled each other. Designs were evaluated by how well they won battles, and heuristics by how well they designed fleets. This rating method required several battles per design, and several designs per heuristic, which amounted to a lot of battles: ten thousand in all, fought over two thousand hours of computer time.

To participants in the national championship in San Mateo, California, the resulting fleet of ninety-six small, heavily armored ships looked ludicrous. Accepted wisdom dictated fleets of about twenty behemoth ships, and many couldn't help laughing. When engagements started, they found out that the weird armada held more than met the eye. One interesting ace up Lenat's sleeve was a small ship so fast as to be almost unstoppable, which guaranteed at least a draw. EURISKO had conceived of it through the "look for extreme cases" heuristic (which had mutated, incidentally, into "look for almost extreme cases")." p. 182
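
The excerpt describes a two-level search: heuristics propose fleet designs, designs are scored by simulated battles, and the results are credited back to the heuristics that produced them. Here is a rough sketch of just that credit-assignment loop; the "game", the placeholder heuristics, and the scoring are all stand-ins, nothing like the real Traveller TCS rules or Eurisko's code:

    import random

    def battle(design_a: dict, design_b: dict) -> dict:
        """Toy battle: armor times ship count plus a dice roll decides the winner."""
        score_a = design_a["count"] * design_a["armor"] + random.uniform(0, 500)
        score_b = design_b["count"] * design_b["armor"] + random.uniform(0, 500)
        return design_a if score_a >= score_b else design_b

    # Each "heuristic" maps a budget to a fleet design (placeholder strategies).
    HEURISTICS = {
        "few_behemoths":      lambda budget: {"count": 20, "armor": budget // 200},
        "many_small_armored": lambda budget: {"count": 96, "armor": budget // 960},
    }

    def tournament(budget: int = 10_000, rounds: int = 1_000) -> dict:
        wins = {name: 0 for name in HEURISTICS}
        names = list(HEURISTICS)
        for _ in range(rounds):
            a, b = random.sample(names, 2)
            design_a, design_b = HEURISTICS[a](budget), HEURISTICS[b](budget)
            winner = a if battle(design_a, design_b) is design_a else b
            wins[winner] += 1   # battle results flow back to the heuristic
        return wins

    print(tournament())   # e.g. {'few_behemoths': 6xx, 'many_small_armored': 3xx}

Eurisko additionally mutated the heuristics themselves between runs (hence "look for almost extreme cases"); this sketch only shows how fitness flows from battles to designs to the heuristics behind them.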

If you're a young person working in AI, by which I mean you're less than 30, and if you have not already done so, you should read about AI history across the three decades from the '60s to the '90s.


I may be getting this wrong, but I think I remember hearing that his auto-generated fleets won Traveller so entirely, several years in a row, that they had to shut down the entire competition because it had been broken

Edit: Fixed wrong name for the competition


I think you mean "EURISKO won the Traveller championship so entirely..."

In which case, yes, something like that did happen. Per the Wikipedia page:

Lenat and Eurisko gained notoriety by submitting the winning fleet (a large number of stationary, lightly-armored ships with many small weapons)[3] to the United States Traveller TCS national championship in 1981, forcing extensive changes to the game's rules. However, Eurisko won again in 1982 when the program discovered that the rules permitted the program to destroy its own ships, permitting it to continue to use much the same strategy.[3] Tournament officials announced that if Eurisko won another championship the competition would be abolished; Lenat retired Eurisko from the game.[4] The Traveller TCS wins brought Lenat to the attention of DARPA,[5] which has funded much of his subsequent work.


Whoops yes :)


I still intend to integrate OpenCyc.


Doug was one of my childhood heroes, thanks to a certain book telling the story of his work on AM and Eurisko. My great regret is that I never got the chance to meet him or contribute to his work in any way. RIP Doug, you are a legend.


> I have spent my whole career […], Lenat was light-years ahead of me […]

Lenat is on a short list of people I expected/hoped to meet at some point when context provided the practical reason.

He has been a hero to me for his creativity and fearlessness regarding his symbolic vision.

So sad I will never meet him, but my appreciation for him will never die.


Worked with their ontologists for a couple of years. Someone once told me that they employed more philosophers per capita than any other software company. A dubious distinction, maybe. But it describes the culture of inquisitiveness there pretty well too


Oh, so sorry to hear that. Good summary of his work - the Cyc project - in the Twitter thread. I had missed that last paper - with Gary Marcus - on Cyc and LLMs.


Anyone know how he died? I can't find any information about it but someone mentioned heart attack on Reddit?


Very visceral oof. I don't remember a time when I knew about AI but not about Eurisko.


72 isn't really too old. Does anyone know what caused his death? Revenge of the COVID?


[flagged]


Why? He shared little with the wider community, contributed to mass surveillance with Cyc's government collaborations, and hasn't really done anything of note.

I don't dislike Lenat, but he doesn't fit the commercial value of people who get black bars, he doesn't fit the ideological one, and he doesn't fit the community-benefit one.


Didn't he:

- invent case based reasoning

- build Eurisko and AM

- write a discipline defining paper ("Why AM and Eurisko appear to work")

- undertake an ambitious but ultimately futile high risk research gamble with Cyc?


Case-based reasoning is VERY old. It shows up prominently in the Catholic tradition of practical ethics, drawing on Aristotelian thought. Of course in a more informal sense, people have been reasoning on a case-by-case basis since time immemorial.


That's not what is meant here by case-based reasoning; CBR is instead an AI method, prominent in the eighties and nineties, in which knowledge was represented in a semi-formal text representation and similarity was established by multi-dimensional associative indexing. One of the leading figures of the method was Roger Schank.
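
For the unfamiliar, the retrieval step of that method can be sketched in a few lines; the feature scheme and cases below are invented, and real CBR systems also adapt the retrieved solution and learn new cases:

    # Stored cases: (problem features, solution). All invented examples.
    CASES = [
        ({"symptom": "no_boot", "beeps": 3}, "reseat the RAM"),
        ({"symptom": "no_boot", "beeps": 0}, "check the power supply"),
        ({"symptom": "overheat", "beeps": 0}, "clean the fans"),
    ]

    def similarity(a: dict, b: dict) -> int:
        """Count matching attribute/value pairs (a crude associative index)."""
        return sum(1 for k, v in a.items() if b.get(k) == v)

    def retrieve(problem: dict):
        """Return the stored case most similar to the new problem."""
        return max(CASES, key=lambda case: similarity(case[0], problem))

    print(retrieve({"symptom": "no_boot", "beeps": 3})[1])   # -> "reseat the RAM"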


While futile from a personal and business perspective, it's certainly valuable and useful otherwise. Maybe that's implied here as you're listing contributions, but I wanted to emphasize that it wasn't a waste outside of that narrow band of futility.


I agree, and the fact that someone walked that path has been extremely valuable as well. I think we learned a lot from the cyc effort.


Consider giving more grace. Life is short, and kindness is free.


Why do people have to have 'commercial value' to get black bars? Why do people have to pass the ideological police? Why isn't serving as a visible advocate of a certain logical model enough?

I think my bias comes from having started my career in AI on the inference side and having (perhaps not so much long term :) seen Cyc as a shining city on a hill. Lenat certainly established that logical model even if we've since gone onto other things.


I believe the parent poster claims that a black bar should meet either a commercial, hacker-cultural, or open-source contribution one.


I got a lot of value out of some of the papers he wrote, and what bits of Building Large Knowledge-Based Systems I managed to read.


he is the patron hacker of players who use computers to break board games or war games


I think you don't understand the meaning of the black bar if "commercial value" is one of the metrics.


Steve Jobs received one - by which criteria, if not commercial (to other people) value?

It certainly wasn't for the warmth of his personality, his impeccable business ethics, or for his libre open-source contributions.


> by which criteria

Historic value.


And which category of important-enough-to-be-historic contributions has he made?


Take a moment to reflect on what you're doing right now.

You're turning a celebration of life for a _very_ recently departed figure into a pissing contest.

Extremely distasteful.


I think you're misunderstanding the direction and intent of this subthread.

You're right that talking about Jobs is off-topic, though.


«Contributions», debatable; «value», debatable; "impact", cannot be ignored.

It is probably best if we stick to Doug Lenat and postpone the meta to a more neutral occasion: Doug Lenat has just died.


Not even Warnock got a black bar when one was asked for as a mark of respect: [0]

I guess the black bar really is an ideological thing. Rather than being supposedly a 'mark of respect'.

Regardless, RIP Doug.

[0] https://news.ycombinator.com/item?id=37197852


Wouldn't it also be a mark of respect to check, before saying something that mean, whether it's true or not?

https://web.archive.org/web/20230821003655/https://news.ycom...


Wow, that is really mean!


While I respect Doug's intelligence, he showed a kind of perverse persistence in a failed idea, and I think it's telling that aspiring AI czar Gary Marcus admires the ruins of Cyc while neglecting to acknowledge that it represents a dead end in AI. Like science, the field of AI advances one funeral at a time. Doug pursued a pipe dream, and convinced others to do the same, despite the brittle and static nature of the AI he sought to build. Cyc was not a precursor to OpenAI, contrary to other comments in this thread. That would be like calling the zeppelin the precursor of the jet. It represents a different school of technology, and a much less effective one.


It's not going to be popular to highlight the less-than-hoped-for success of Lenat's greatest project.

But I think that is one of the things he should be admired for. How can anyone know how an ambitious approach will pan out without the great risk of going all in?

Anyone willing to risk a Don Quixote aspect to their career, in pursuit of a breakthrough, is someone who cares deeply about something beyond themselves.

And recognizing the limits of Lenat’s impact today doesn’t preclude both the direct and indirect impact on future progress.

I found him inspiring on multiple levels.


Wait a few more years and you will see that the effort to formalize this knowledge was not in vain. We will certainly see systems which make use of this data. Cycorp is a different company with a different approach than OpenAI; but the data produced by Cycorp will likely be useful for companies like OpenAI on their way to truly intelligent systems.



