I missed the AMA, but glad to see it here and it looks like there are some team members monitoring this thread so I'll throw some questions out there.
I did my MS in AGI at the National Intelligence University with Ben Goertzel as my outside research advisor. My thesis was to determine what the defense implications of an AGI would be and who was making the best progress toward actually building one.
Since then the DoD has started to take an interest in AGI, and in fact today, during my one weekend a month drill at the Pentagon, I had a great long conversation with Maj. General Seng [1] who is heading up efforts around implementing ISR systems with more autonomous capabilities and exploring how an AGI would be utilized in defense.
One of our big open questions is: what is the "stack" for AGI, conceptually? I didn't come to any conclusion on this and had to make a lot of assumptions to close out the research. I would be curious to hear the OpenAI team's thoughts on it.
Will you all be coming to AGI-16 this year in New York?
I know you don't particularly subscribe to the "AGI is extremely dangerous" line of thinking, but let's say for the sake of argument we're a decade in the future and it's starting to become very evident that it in fact is dangerous—like in a recursively self-improving nightmare type way—and it could be realized via a few thousand AWS instances with the application of some very specialized knowledge.
What do you imagine the U.S. Government's reaction might be?
>I know you don't particularly subscribe to the "AGI is extremely dangerous" line of thinking
I think the question in my mind is "dangerous to whom, and when?"
Is AGI an existential threat? Probably. But on what time horizon? Through what mechanism? And can we evolve and collaborate with AGI instead of de-facto competing with it?
None of the AGI warning people (Bostrom, Yudkowsky, Barrat, et al.) have come up with a plausible chain of events that leads to human irrelevance or extinction. They always make a few assumptions and then claim "and then exponential growth happens," and boom, everyone's a paperclip.
The USG doesn't have a position at this point and it's ill-prepared to react. In reality, if there is some kind of world-ending gray goo scenario that an AGI creates, nothing the US or any other government can do will matter. I think that's about as likely as Roko's Basilisk being true though (i.e. a 0% chance).
>The USG doesn't have a position at this point and it's ill-prepared to react.
That's what I thought. Kind of bizarre, considering the USG usually tends to stay far ahead of the curve when dealing with technology that has potentially profound national security implications. Yet at the same time, programs such as DARPA SyNAPSE exist (though only at moderate funding levels).
>In reality, if there is some kind of world-ending gray goo scenario that an AGI creates, nothing the US or any other government can do will matter.
For sure. My hypothetical was intended more as taking place prior to any such disaster—in a world where AGI is viewed in the same light as WMDs, but where nothing catastrophic has yet happened. In that scenario, USG could potentially have great influence (for better or worse) over any outcome.
For my money, counter-proliferation measures would be ultimately worthless in that situation, except as a stop-gap to buy time for a larger project to solve AGI correctly.
>Kind of bizarre, considering the USG usually tends to stay far ahead of the curve when dealing with technology that has potentially profound national security implications.
It's basically impossible to have a plan of response for something that nobody even knows how to build. That said, we have a lot of crisis action plans that might apply, depending on what actions are actually happening. It would likely fall into the realm of contingency plans that have "Complex electromagnetic environments" as core assumptions.
I know that this is completely off topic, but for the sake of argument, let's imagine that we are thirty years into the future, we have invented efficient teleportation, and have colonized Mars. What do you think the government should do to encourage family ties to remain strong when loved ones are living on another planet?
While I appreciate you mocking me, I can't help but disagree with the implication that my post was completely off-topic.
AndrewKemendo has conducted research into AGI on behalf of the military. My hypothetical was intended as a near-term scenario in which the technology proved far more dangerous than originally thought. Asking him how he thinks USG would react to such a scenario doesn't strike me as unreasonable given his background.
Did you really expect to get a response other than "It's basically impossible to have a plan of response for something that nobody even knows how to build"?
I'm just tired of hearing unproductive questions like this to which any response other than "we don't know" is literally science fiction. Andrew's response would have applied equally to teleportation.
Why don't we talk instead about how methods such as deep learning actually do work, and what problems they have been successfully applied to?
>Why don't we talk instead about how methods such as deep learning actually do work, and what problems they have been successfully applied to?
Well we do, but it's clear to the community that we won't get to AGI with deep learning classifiers and systems alone. So the questions we are asking are of the form "what would a system look like that results in X kind of behavior?"
I don't disagree with your teleportation analogy either, but I think you weight it too heavily with impossibility. In fact there are serious people working on teleportation - at this point it's quantum state teleportation [1] but it's a start.
I would like to point out that AGI is incredibly abstract and entirely theoretical right now (and philosophical, depending on the researcher, e.g. Bostrom). Deep learning is very engineering driven and very much focused on working systems that produce real world results. Even though there is some work being done on theory, it is very shaky.
As such, as far as deep learning is concerned there is no stack for AGI yet, because it is a lofty goal that is so far away from what is currently possible.
I was hoping that someone out there knew something I didn't on this, because that was the conclusion I came to in my research: nobody even has a conceptual stack.
That said, I think OpenCog has the closest thing to a larger conceptual stack in mind, based on the AtomSpace approach (see the rough sketch below). I think the folks over at DeepMind might have some thoughts, and perhaps the Numenta people as well.
Their system is, I think, an AGI in the sense of an artificial general intelligence system, though not at human levels yet (apart from playing Breakout and Space Invaders). Here's Demis Hassabis talking about it: https://www.youtube.com/watch?v=0X-NdPtFKq0&feature=youtu.be...
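For anyone not familiar with the AtomSpace approach mentioned above: as I understand it, it's roughly a typed hypergraph of "atoms" (nodes and links), each annotated with a truth value, that the various reasoning and learning processes read from and write to. Here's a very rough sketch of the data structure, with class names of my own invention rather than OpenCog's actual API:

    # Very simplified sketch of the AtomSpace idea: a typed hypergraph of
    # nodes and links, each carrying a (strength, confidence) truth value.
    # Names here are my own simplification, not OpenCog's real API.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Node:
        type: str       # e.g. "ConceptNode"
        name: str       # e.g. "cat"

    @dataclass(frozen=True)
    class Link:
        type: str       # e.g. "InheritanceLink"
        targets: tuple  # the atoms this link connects

    class AtomSpace:
        def __init__(self):
            self.atoms = {}  # atom -> (strength, confidence)

        def add(self, atom, strength=1.0, confidence=1.0):
            self.atoms[atom] = (strength, confidence)
            return atom

        def incoming(self, atom):
            # every link that points at this atom
            return [a for a in self.atoms
                    if isinstance(a, Link) and atom in a.targets]

    space = AtomSpace()
    cat = space.add(Node("ConceptNode", "cat"))
    animal = space.add(Node("ConceptNode", "animal"))
    space.add(Link("InheritanceLink", (cat, animal)), strength=0.9, confidence=0.8)
    print(space.incoming(cat))  # -> [the InheritanceLink we just added]

The interesting part of their "stack" sits on top of a store like this - pattern matching, probabilistic inference, attention allocation - which is exactly where nobody has a complete answer yet.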
>Since then the DoD has started to take an interest in AGI, and in fact today, during my one weekend a month drill at the Pentagon, I had a great long conversation with Maj. General Seng [1] who is heading up efforts around implementing ISR systems with more autonomous capabilities and exploring how an AGI would be utilized in defense.
Isn't that basically the most obviously bad idea ever, so terribly stupid that there have been several movies chronicling how massively bad an idea it actually is to build "AGI" for military goals?
This idea is cliché precisely because it's the most obvious course of action for the military. Don't forget that it's the DoD that sponsored the previous AI summer.
Besides, if they don't do it, the enemy surely will!
Well no, mostly we don't, because the brains who happen to be the weapons still need a world to live in after they fight. The difference is that with artificial brains, hey, why would they give a crap what comes after the killing?
However, we mostly only arm brains that have an IQ of <120 and some human values baked in (e.g. empathy toward other humans and living beings, a family to protect, or dependence on other social structures that demand pro-social attitudes).
Scott Alexander expressed some important concerns about the project[0].
Edited:
<del>So far they've managed to label them as "coming from the LessWrong background" and subsequently dismiss via appeal to a strawman Paperclip Maximizer. It doesn't give me much confidence in them.</del>
<ins>Nevermind. I didn't realize this comment was not made by an OpenAI representative. Also, we could use a strikethrough formatting tag on HN. 'dang?</ins>
I hope they eventually address those points though.
The person who referred to "LessWrong background" and talked about paperclip maximizers wasn't (at least, so it looks to me) mocking or dismissing.
The last paragraph of that comment does say "the LessWrong folks tend to be overly dramatic in their concerns" but goes on immediately to add "But they do have a point that the problem of controlling something much more intelligent than yourself is hard [...] and, if truly super-human intelligence is practically possible, then it needs to be solved before we build it".
2. Currently, this is my main goal related to Artificial General Intelligence. One should know how far or near they are to creating Artificial General Intelligence.
I think there are legitimate research questions and concerns about AGI; unfortunately there's also a lot of fluff surrounding the area (e.g. singularity stuff, doomsday scenarios, etc.).
The way I see it, there's only one conceptual barrier to cross between current AI and "AGI-like" technology, which could be summed up as 'models which take themselves into account'.
Whilst it's trivial to have software modify itself, we don't have good models for predicting what those modifications will do (in a way which is more computationally efficient than just running them). An analogy is how encoding programs as ANNs lets us perform gradient descent, which we couldn't do if we encoded them as, e.g., a string of Java code (see the toy sketch below).
If we find a powerful software model which allows efficient prediction with white-box self-references, then I think lots of progress will be made quite quickly.
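To make that analogy concrete, here's a toy sketch (entirely my own construction, not tied to any real system): a "program" represented by continuous parameters can be nudged downhill on a loss by gradient descent, while the same behavior written as a string of Java source has no gradient to follow.

    # Toy illustration of the point above: gradient descent works on a
    # differentiable parameterisation, not on source text.
    def program(w, x):
        # a tiny differentiable "program": y = w[0]*x + w[1]
        return w[0] * x + w[1]

    def loss(w, data):
        return sum((program(w, x) - y) ** 2 for x, y in data) / len(data)

    def grad(w, data, eps=1e-6):
        # numerical gradient: nudge each parameter and see how the loss moves
        g = []
        for i in range(len(w)):
            hi, lo = list(w), list(w)
            hi[i] += eps
            lo[i] -= eps
            g.append((loss(hi, data) - loss(lo, data)) / (2 * eps))
        return g

    data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]  # target behavior: y = 2x + 1
    w = [0.0, 0.0]
    for _ in range(2000):
        w = [wi - 0.1 * gi for wi, gi in zip(w, grad(w, data))]
    print(w)  # converges towards [2.0, 1.0]

    # There is no analogous small, loss-reducing nudge you can apply to the
    # *text* "return 2 * x + 1;", which is why the encoding matters.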
Considering it's explicitly called out in their charter [1]:
>It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly.
and that basically the entire reason the project exists in the first place is to try to get a technical handle on AGI risks, I would have to say it makes perfect sense.
We're only a long way off if we continue to put tiny dollar figures into it. You should be thinking about it more like the nuclear issue. Would you rather more or less thought be put into it?
[1] http://www.af.mil/AboutUs/Biographies/Display/tabid/225/Arti...