
Sure, animal goals are self-regulative and defined in terms of their capacities alone.

If you can build a system like this, with no presumed domain distribution internally, then you have taken a step towards intelligence.



I think you are probably right about needing to be "in the world" (if by that you mean being able to interact with it and see how its state, and therefore the data you have about it, changes), but I feel you are being too hasty in ruling out any situation where humans have had any role in preparing the data.

In the evolution of natural intelligence, physics provides the domain distribution and Darwin has explained what the reward function is. It does not follow that no learning can occur in an environment where either or both of the domain and reward function are defined by humans.

As far as I can tell, this was the explicit goal of AlphaGo Zero. Even if we were to accept your position that the data it was started with "contained the solution", the fact that it does rather better than humans at finding that solution is, by itself, significant. (In this view, one might characterize the evolution of natural intelligence as finding, within the biosphere domain, a rather successful solution to the survival problem.)
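
For concreteness, here is a minimal sketch of that setup (in Python, using tic-tac-toe in place of Go, with made-up hyperparameters - none of this is AlphaGo Zero's actual code): the only human-supplied pieces are the rules (legal moves, win detection) and the terminal reward; everything the value table ends up "knowing" comes from self-play.

    import random
    from collections import defaultdict

    # The rules: the only thing a human writes down.
    LINES = [(0,1,2), (3,4,5), (6,7,8),
             (0,3,6), (1,4,7), (2,5,8),
             (0,4,8), (2,4,6)]

    def winner(b):
        for i, j, k in LINES:
            if b[i] != '.' and b[i] == b[j] == b[k]:
                return b[i]
        return None

    def legal_moves(b):
        return [i for i, c in enumerate(b) if c == '.']

    def after(b, m, p):
        # board resulting from player p playing move m (a rule, not data)
        return b[:m] + p + b[m+1:]

    V = defaultdict(float)    # learned values of board states; +1 is good for X
    ALPHA, EPS = 0.1, 0.1     # assumed learning/exploration rates

    def self_play_game():
        b, p, visited = '.' * 9, 'X', []
        while winner(b) is None and legal_moves(b):
            moves = legal_moves(b)
            if random.random() < EPS:       # occasional exploration
                m = random.choice(moves)
            else:                           # greedy w.r.t. learned values
                sign = 1 if p == 'X' else -1
                m = max(moves, key=lambda m: sign * V[after(b, m, p)])
            b = after(b, m, p)
            visited.append(b)
            p = 'O' if p == 'X' else 'X'
        # The reward function: the other human-supplied piece.
        reward = {'X': 1.0, 'O': -1.0, None: 0.0}[winner(b)]
        for s in visited:                   # Monte Carlo update toward the outcome
            V[s] += ALPHA * (reward - V[s])

    for _ in range(20000):
        self_play_game()

Nothing here is tuned or verified against AlphaGo Zero; the point is only structural. The human contribution is confined to the rules and the reward, and the value table is filled in entirely by play - whether that counts as "learning" is exactly the question.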

It would seem to me to be begging the question to say that the success of AlphaGo Zero is not learning because the environment was human-specified. (At the same time, as Go is a highly constrained environment, there is no reason to suppose that its successes indicate we are anywhere near AGI.)


I don't mind people preparing data to build intelligent machines -- I mind what data they are preparing, and subsequently, what their claims about these machines are.

If you can build a hand to grasp objects, and train its substructure to grasp this-way-and-that -- fine. So long as it can, in the end, also train itself... as we all do when we type.

Nature provides something to bootstrap learning, but it isn't "data" in the sense in which ML requires data. It isn't relevant, quantified, premeasured. The "data" nature provides is in our biochemistry... how we react to our environment, etc.

If AI research can produce an intelligence which is able to formulate the very terms of the problems it wishes to solve, great. I don't see "summarising the solution" as a strategy which is even in the ballpark.


To be more specific, do you regard the data AlphaGo Zero was initialized with as being relevant, quantified, and premeasured to the point that no learning occurred between then and its defeat of expert players? I don't think the data those players had as they learned to become experts was any less relevant, quantified, and premeasured - and quite possibly more so, if they read about tactics and strategy in the game.

While I agree that nature's data is less relevant, quantified, and premeasured than what current ML feeds on, I don't see that as establishing that there is a relevant qualitative distinction that renders this divide unbridgeable in principle. Every organism that senses its environment is processing data.

With regard to formulating the very terms of problems it wishes to solve, I have no difficulty in seeing that this has not been achieved yet, and personally, I don't expect it any time soon. At the same time, you seem to be very close to saying that no artificial system could do this because its goals are always, in some sense, those of its creators. To be clear, I would regard such a position as mostly avoiding the issues.



