>The reason they don't have roadmaps to AGI is because they do not want AGI to be made before the Friendly AI problem has been solved

Right, which is an impossibility in my opinion. There is an inherent conflict between systems with asymmetric power and capability in a resource-constrained environment. Trying to get around that fundamental principle is an exercise in futility.

Could you elaborate? "Systems with asymmetric power" presumably refers to the AGI, or does it? Maybe you are referring to the AI box, or the utility function design, or the "nanny AI" that is meant to contain the AGI? I also don't know what "capability in a resource-constrained environment" refers to, because that could describe pretty much anything in our universe, or in any finite universe.
