Hacker News

The current AI approach is like a pure function in programming: no side effects, and given the same input you always get the same output. The "usage" and "training" steps are separate. There is no episodic memory and, in particular, no short-term memory.
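The analogy can be made concrete with a toy sketch. This is purely illustrative (the names and the state-update rule are made up, not any real framework's API): inference with frozen weights behaves like a pure function, while a system with internal memory does not.

```python
def pure_model(weights, x):
    # Same weights + same input -> always the same output; no side effects.
    return sum(w * xi for w, xi in zip(weights, x))

class StatefulMind:
    """A loop that keeps internal state between inputs."""
    def __init__(self):
        self.memory = 0.0

    def step(self, x=0.0):
        # State carries over, so the same input can yield different outputs.
        self.memory = 0.9 * self.memory + x
        return self.memory

w = [0.5, -1.0]
assert pure_model(w, [2.0, 1.0]) == pure_model(w, [2.0, 1.0])  # deterministic

mind = StatefulMind()
a = mind.step(1.0)
b = mind.step(1.0)  # same input, different output: the state changed
assert a != b
```

Note that `StatefulMind.step()` can even be called with no input at all and still produce evolving output, which is the "talking to itself" property the thread is pointing at.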

Biological networks that result in conscious “minds” have a ton of loops and are constantly learning. You can essentially cut yourself off from the outside world in something like a sensory deprivation bath and your mind will continue to operate, talking to itself.

No current popular and successful AI/ML approach can do anything like this.




Agreed, but I also wonder if this is a "necessary" requirement. A robot, perhaps pretrained in a highly accurate 3D physics simulation, which understands how it can move itself and other objects in the world, and how to accomplish text-defined tasks, is already extremely useful and much more general than an image classification system. It is so general, in fact, that it would begin reliably replacing jobs.


But it's not AGI.


Ok, so now we just have to define "AGI" then. Consider a robot which knows its physical capabilities; which sees the world around it through a view frustum and identifies objects by position, velocity, and rotation; which understands the passage of time and can, for example, predict future positions; and which can take text input and translate it into a list of steps to execute. Such a robot is functionally equivalent to an Amazon warehouse employee, yet we are saying it is not AGI.

What is an AGI then?
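One of the capabilities listed above, predicting future positions, is the kind of thing that is already mundane to implement. A minimal sketch, assuming constant velocity over a short horizon (function name and units are hypothetical):

```python
def predict_position(position, velocity, dt):
    # p' = p + v * dt, applied per axis.
    return tuple(p + v * dt for p, v in zip(position, velocity))

# An object at (1, 2, 0) moving at (0.5, 0, -1) m/s, two seconds ahead:
predict_position((1.0, 2.0, 0.0), (0.5, 0.0, -1.0), 2.0)  # -> (2.0, 2.0, -2.0)
```

The hard part of the debate is not this arithmetic, it's whether stringing together many such narrow capabilities counts as "general" intelligence.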


An Amazon warehouse worker isn’t a human, an Amazon warehouse worker is a human engaged in an activity that utilises a tiny portion of what that human is capable of.

A Roomba is not AGI just because it can do what a cleaner does.

“Artificial general intelligence (AGI) is the ability of an intelligent agent to understand or learn any intellectual task that a human being can.”


I think the key word in that quote is "any" intellectual task. I don't think we are far from solving all of the mobility and vision-related tasks.

I am more concerned, though, about whether the definition includes things like philosophy and emotion. These things can be quantified; for example, an AI that plays poker can estimate the aggressiveness (range of potential hands) of the humans at the table, rather than just the pure, isolated strength of its own hand. But that seems like a very hard thing to quantify in general, and as a result a hard thing to measure and program for.
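The poker example can be sketched in a few lines. This is a toy model, not how a real poker bot works: it just estimates how loosely a player enters pots from observed preflop actions, which is one crude proxy for how wide their range is.

```python
def entry_frequency(actions):
    # Fraction of hands where the player voluntarily put chips in
    # ("call" or "raise"). A higher value suggests a wider range of
    # possible hands, i.e. a looser/more aggressive read.
    voluntary = sum(1 for a in actions if a in ("call", "raise"))
    return voluntary / len(actions)

history = ["fold", "raise", "fold", "call", "raise", "fold", "fold", "raise"]
entry_frequency(history)  # 4 voluntary entries out of 8 hands -> 0.5
```

Quantifying something like emotion or philosophical judgment has no comparably obvious counter to increment, which is the point being made.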

It sounds like different people will just have different definitions of AGI, which is a different question from "can this thing do the task I need it to do (for profit, for fun, etc.)?"


I think you're on to something very practical here.

ChatGPT allows for conversation that is pretty remarkable today. It hasn't learned the way we humans have - so what?

I think a few more iterations may lead to something very, very useful to us humans. Most humans may just as well say ChatGPT version X is Artificial, and Generally Intelligent.


Totally agree. Better to evaluate something by its array of capabilities rather than by whether it fits a label with a murky definition :)



