
While I appreciate you mocking me, I can't help but disagree with the implication that my post was completely off-topic.

AndrewKemendo has conducted research into AGI on behalf of the military. My hypothetical was intended as a near-term scenario in which the technology proved far more dangerous than originally thought. Asking him how he thinks USG would react to such a scenario doesn't strike me as unreasonable given his background.




Did you really expect to get a response other than "It's basically impossible to have a plan of response for something that nobody even knows how to build"?

I'm just tired of hearing unproductive questions like this, to which any response other than "we don't know" is literally science fiction. Andrew's response would have applied equally to teleportation.

Why don't we talk instead about how methods such as deep learning actually do work, and what problems they have been successfully applied to?


> Why don't we talk instead about how methods such as deep learning actually do work, and what problems they have been successfully applied to?

Well, we do, but it's clear to the community that we won't get to AGI with deep learning classifiers and systems alone. So the questions we are asking are along the lines of "what would a system look like that results in X kind of behavior?"
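
For concreteness, here is a minimal sketch of what "deep learning classifier" refers to in practice: a tiny two-layer network trained on XOR with hand-written backprop in plain numpy. It's purely illustrative (toy data, made-up hyperparameters), not anyone's actual system:

    # Illustrative only: a two-layer network trained on XOR with plain numpy.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dataset: XOR, which a linear model cannot separate.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Parameters for a 2 -> 8 -> 1 network.
    W1 = rng.normal(scale=0.5, size=(2, 8))
    b1 = np.zeros(8)
    W2 = rng.normal(scale=0.5, size=(8, 1))
    b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for step in range(5000):
        # Forward pass.
        h = np.tanh(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)

        # Gradient of the binary cross-entropy loss w.r.t. the output logits.
        grad_logits = (p - y) / len(X)

        # Backward pass (chain rule, by hand).
        grad_W2 = h.T @ grad_logits
        grad_b2 = grad_logits.sum(axis=0)
        grad_h = grad_logits @ W2.T * (1 - h ** 2)
        grad_W1 = X.T @ grad_h
        grad_b1 = grad_h.sum(axis=0)

        # Gradient descent update.
        W1 -= lr * grad_W1; b1 -= lr * grad_b1
        W2 -= lr * grad_W2; b2 -= lr * grad_b2

    print(np.round(p, 3))  # predictions should approach [0, 1, 1, 0]

Everything interesting in practice is this same recipe scaled up: more layers, more data, better optimizers. It fits a differentiable function to examples, which is a long way from a general agent.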

I don't disagree with your teleportation analogy either, but I think you weight it too heavily toward impossibility. In fact there are serious people working on teleportation - at this point it's quantum state teleportation [1], but it's a start.

[1] http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.70....
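
For anyone curious, the protocol in [1] (assuming the link is the standard Bennett et al. PRL 70 paper) boils down to rewriting a product state in the Bell basis. With an unknown qubit |\psi\rangle = \alpha|0\rangle + \beta|1\rangle held by Alice (C) and a shared Bell pair |\Phi^+\rangle = \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle) between Alice (A) and Bob (B):

    |\psi\rangle_C \otimes |\Phi^+\rangle_{AB}
      = \tfrac{1}{2}\Big( |\Phi^+\rangle_{CA}\, |\psi\rangle_B
                        + |\Phi^-\rangle_{CA}\, Z|\psi\rangle_B
                        + |\Psi^+\rangle_{CA}\, X|\psi\rangle_B
                        + |\Psi^-\rangle_{CA}\, XZ|\psi\rangle_B \Big)

Alice measures C and A in the Bell basis and sends the two-bit outcome; Bob applies the corresponding Pauli correction (I, Z, X, or ZX) and is left holding |\psi\rangle. No matter or energy moves, only the quantum state, and the two classical bits keep it from being faster than light.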



