Artificial intelligence just lost a leader (rjlipton.wpcomstaging.com)
152 points by furcyd on Feb 5, 2023 | 34 comments



A colleague of mine told me a story about Roger Schank, whom he knew well. I'd like to think this was apocryphal but I suspect it is not!

As soon as Schank's PhD students graduated, he would start talking trash about them. The colleague asked Schank why in the world he would do something like this, and the response was that Schank had realized that whenever he bad-mouthed someone, others would come to assume that the person's work had to be impressive. So this was his approach to promoting his students and increasing their hiring prospects.


Probably a bad sign?

It's concerning to see young scientists' job prospects being manipulated based on personal relationships rather than merit.

This form of tribalism goes against the principles of the academic system. If this behavior is deemed "pragmatism," it's a sign of a larger issue within the academic community.


That's not surprising. The academic community has more issues than Reader's Digest.

Academia is a small, petty world full of inflated egos, prone to political hysteria, with people eternally locked into zero-sum games. When I moved to the private sector, it felt like a breath of fresh air.

There, people will "only" try to rip you off, but since their reputations precede them, you orient yourself pretty quickly and stop cooperating with the nasties, unless they are as dominant as Google or Microsoft. You cannot do this in the rigid hierarchies of the academic world.


> the principles of the academic system

Lmao. Those being what? Signing your name to as many bullshit papers and articles as physically possible to get tenure, and then doing the bare minimum until you retire while your students hate you?

Today's academia has no principles left, it's a rat race to the bottom.


Wow, it's like the academic system is some sort of divine intervention and not just people doing people's stuff.


> It's concerning to see young scientists' job prospects being manipulated based on personal relationships rather than merit.

Try getting into a post-grad program at a prestigious institution with a recommendation from a nobody who teaches at a virtually unknown university. Your actual merit won't mean anything against the recommendation of a big name in the field, even if the student they are recommending has less merit.


That's hilarious. I suppose it probably is true that I would be impressed if somebody applying for a job had managed to elicit such great interest from their advisor that said advisor would publicly denounce them, risking their own reputation and career in the process. At a certain point a bad reference becomes a good one I guess.


> At a certain point a bad reference becomes a good one I guess.

"There is no such thing as bad publicity."


Kind of the inverse of the Talmud's rule that the sin of speaking badly about someone also covers praising them excessively, on the theory that this will usually lead to pushback.


Why would you want to hire someone who the leading expert in their work says is wrong?


Certain decision makers are attracted (in a hiring sense) to contrarians or underdogs. Followers rarely revolutionize an industry (or so I've been told the thought process goes).

To rely on a leading expert as an appeal to authority might get you blindsided.


I'm still struggling to grasp this logic. An incompetent person is more likely to be admonished publicly.


Researchers' logic stops at their research; they are bad at HR.


Ironically, this would be something that AI would not adapt to, I guess.


A huge loss.

Here is the chapter Roger Schank wrote for the 1997 book, The Third Culture (ed. John Brockman)

https://www.edge.org/conversation/information-is-surprises

At the bottom of the page are a number of comments about Schank and his work by other chapter authors: Steven Pinker, Danny Hillis, Marvin Minsky, etc. E.g.

> It was quite hard to persuade our colleagues to consider these kinds of theories. Sometimes, it seems, the only way to get their attention is by shocking them. Roger Schank is good at this. His original discussion of conceptual dependency used such examples as "Jack threatened to choke Mary unless she would give him her book." ... I once asked Roger why so many of his examples were so bloodthirsty. He replied, "Ah, but notice how clearly you remember them!" -- Minsky


I wasn't aware of this! It's funny, because when we were taught Conceptual Dependency (in a GOFAI course as part of my Masters), we used to joke about exactly this: so many of the examples are violent.

Yes, a great loss. I had hoped that Conceptual Dependency (CD) would be revived by marrying it to deep learning systems - I still hope someone works on it. I still own a copy of the CD book "Scripts, Plans, Goals and Understanding: An Inquiry Into Human Knowledge Structures".


Why is it a great loss? He was 77 years old (a fairly normal age to die), had lived a rich and productive life, had some real impact on his chosen field, and was no longer actively working. Why do we keep talking about people's deaths as a loss when they've already made their contributions to the world and leave behind a wide and/or deep legacy for the future?


IMHO, the longer someone is around to experience and relate to the world, the greater their loss to those who can never connect to their past. In this case, it’s also losing a living connection to the roots of AI and computer science.


It is a way of saying that all you said, and more, is now gone.


Yes, that's how I feel. I admit it was a trite expression that didn't capture what I meant.


It's a better loss than if he had died at 70 or even 60. But every year that went by with him was immeasurably great. Basically all loss of human life is a great loss. It's not really possible to measure the value of a human life other than in vague terms like "great", so when that life is gone the loss is similarly difficult to describe.


I just don't agree. Living things are born, they live, they die. It can be a loss when they die unexpectedly early, due to disease or injury. But this pattern of birth/life/death is the very essence of what living means, and I just don't think it is useful to think of it as a loss when one iteration of it completes. That's even more true when the life part of that cycle resulted in such contributions to the world.


I understand that some people have a spiritual understanding of life that requires death for meaning; I don't. I can accept that I'll almost certainly die one day, and I can apply the proper psychological and spiritual techniques to not be very stressed about that (finding meaning in death does this, which is almost the same as saying life had meaning because we die)... BUT I firmly believe death is a tragic disease we absolutely must continue to mitigate for those who wish to do so, indefinitely if possible.

I simply can't see how it wouldn't be better if everyone could get a few more decades or even centuries if possible. Imagine the value of two centuries of physics research, two centuries of artistic development and expression. Life is so precious I don't accept that just 80 years is anywhere near enough of it.


I can relate to this idea. IMO we have a bias towards the future and tend to think of the destruction of something as erasing all the value it ever had. But if you think of all of spacetime rather than your specific point in it, any life that has ended is exactly as real as those still ongoing, and has just as much "value", which isn't lost.

Considering a completely different meaning of "loss": there's certainly benefit to the constant renewing of the world by death and birth, but there's still a very real loss in our best and brightest growing old and dying (mainly due to the former), because of their lost experience (and hence skills, etc.). If everyone lived half as long but the birth rate doubled, surely the world would be a far worse place. So I do think "it is useful to think of it as a loss."

Thanks for discussing, regardless of some people apparently being displeased.


Roger Schank was one of the pioneers in Natural Language Understanding with his work on Conceptual Dependencies.

I remember reading, in the late 70s, about his SAM (Script Applier Mechanism) and PAM (Plan Applier Mechanism) and being amazed that a computer program could answer questions about natural language text. I still have my copy of his book "Inside Computer Understanding: Five Programs Plus Miniatures", which gave detailed explanations of how they were implemented.
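
(For readers unfamiliar with Conceptual Dependency: its core idea was to reduce sentences to a small inventory of action primitives such as PTRANS, physical transfer of location, and ATRANS, abstract transfer of possession. The sketch below is only a toy illustration in Python; the primitive names come from the theory, but the data layout and the question-answering lookup are my own assumptions, not how SAM or PAM actually worked.)

  # A toy sketch of a Conceptual Dependency style representation.
  # The primitive names (PTRANS, ATRANS) come from Schank's theory; the data
  # layout and the lookup below are hypothetical, not how SAM/PAM were built.

  # "John went to the store."  -> PTRANS (physical transfer of location)
  # "John gave Mary a book."   -> ATRANS (transfer of possession)
  story = [
      {"primitive": "PTRANS", "actor": "John", "object": "John", "to": "store"},
      {"primitive": "ATRANS", "actor": "John", "object": "book",
       "from": "John", "to": "Mary"},
  ]

  def who_has(obj, events):
      """Answer 'Who has X?' by finding the last transfer of possession of X."""
      owner = None
      for event in events:
          if event["primitive"] == "ATRANS" and event["object"] == obj:
              owner = event["to"]
      return owner

  print(who_has("book", story))  # -> Mary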


Also informative: the one comment on the article is from Alan Kay:

> Roger and I were “friends” — kind of “New York friends” — since we met at the Stanford AI project in late 69 or early 70 — so about 53 years or so. He was really great at thinking things out from scratch — like an ancient Greek — and some of his insights really helped make progress.


Can someone explain what Kay means by "New York friends"?

My Google-fu is weak this morning, so I'm just getting a bunch of stuff about the old television show.


Roger Schank happened to be the reason why I got interested in AI in the first place. I read a German translation of a popular science book in high school; I've since tried to find the title and author again without success. (Maybe Schank was a co-author?) It was about his group and how they watched Mork & Mindy on Friday afternoons because Mork made so many interesting, intelligent mistakes. This book and Schank & Riesbeck's Conceptual Dependency theory were the reason I decided to study "AI." Unfortunately, we only had a new line of study called cognitive science then, and the psychologists didn't really integrate well. So I ended up studying Philosophy and General Linguistics (at a center for computational linguistics).

Fast forward 30 years and I'm now organizing my own seminar series on Friday afternoons and participating in an EU project on XAI. However, I should have stayed in computational linguistics; going into philosophy was a mistake. Anyway, I wish I had met Schank to tell him how much his early work on CD influenced me.


> Like Marvin Minsky, he takes the strong AI view, but rather than trying to build an intelligent machine he wants to deconstruct the human mind. He wants to know, in particular, how natural language — one's mother tongue — is processed, how memory works, and how learning occurs.

Which led me to the "Chinese room" argument, which is rather interesting.

https://en.wikipedia.org/wiki/Chinese_room


A buddy of mine went to Yale in the late 80s, and used to tell some tall tales about (grad student) parties in Schank's basement, with free flowing booze and weed. Sounded like a guy who worked hard and partied hard.


To be honest, I wasn't smart enough to wrap my head around Schank's semantic graphs. However, I was (and am) inspired by this: "best way to make a machine think is to first make it teach" [1]

[1] https://www.wired.com/1994/08/schank/


> "best way to make a machine think is to first make it teach"

That applies to humans too. Richard Feynman said that if you can't explain something to someone else, you don't really understand it yourself.

"Feynman was a truly great teacher. He prided himself on being able to devise ways to explain even the most profound ideas to beginning students. Once, I said to him, “Dick, explain to me, so that I can understand it, why spin one-half particles obey Fermi-Dirac statistics.” Sizing up his audience perfectly, Feynman said, “I’ll prepare a freshman lecture on it.” But he came back a few days later to say, “I couldn’t do it. I couldn’t reduce it to the freshman level. That means we don’t really understand it.”

David Goodstein: http://calteches.library.caltech.edu/563/2/Goodstein.pdf


This is the first time I've read about Roger Schank. From what little I know about him, I suspect he believed, like me, that the quest to understand AI is also a quest to understand ourselves and what makes us human.


I believe he built the "Cognitive Arts" company for learning software, which was an early influence on my own company. I remember meeting him and showing him our games for teaching kids fractions. His response: "Why the hell should we teach kids fractions?"



