
Reality is reality regardless of what you perceive.

Presumably the argument is that training a neural net from a basis of complete ignorance is inefficient because we have facts with which we can initialize the model.

As far as applicability to TFA goes, we can and probably should initialize or bias candidate-selection models so that their inferences reflect our values.
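
To make that concrete, here's a minimal sketch of the difference between "starting from complete ignorance" and "initializing with facts" for a toy linear candidate scorer. The feature names and weight values are entirely made up; the point is only that an informed init encodes prior beliefs where a random init encodes nothing.

    import numpy as np

    # Hypothetical features for a toy candidate-scoring model.
    FEATURES = ["years_experience", "referral", "typo_count"]

    rng = np.random.default_rng()

    # Random init: the model starts from complete ignorance.
    random_weights = rng.normal(0.0, 0.1, size=len(FEATURES))

    # Informed init: encode prior beliefs about sign and rough magnitude.
    informed_weights = np.array([0.5, 0.2, -0.3])

    def score(features, weights):
        # Simple linear scorer; training would adjust the weights from here.
        return float(features @ weights)

    print(score(np.array([5.0, 1.0, 2.0]), informed_weights))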



Sure, but if you randomly initialize the weights, you can keep shuffling the initial state to discover new local maxima. Baking in an informed starting state biases the results and requires a new biased start if you want to explore other regions of the loss surface. Basically, the student seems to have a reasonable approach, so how does the lesson follow from the preceding paragraph?
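
For example, here's a toy one-dimensional sketch (nothing to do with the actual model in TFA) of why random restarts can settle into different basins while a fixed informed start keeps committing to the same one:

    import numpy as np

    def loss(x):
        # Toy non-convex objective with several local minima.
        return np.sin(3.0 * x) + 0.1 * x**2

    def grad(x):
        return 3.0 * np.cos(3.0 * x) + 0.2 * x

    def descend(x, lr=0.01, steps=500):
        # Plain gradient descent from a given starting point.
        for _ in range(steps):
            x -= lr * grad(x)
        return x

    rng = np.random.default_rng(0)

    # Random restarts: each draw can settle into a different basin.
    random_runs = [descend(rng.uniform(-4.0, 4.0)) for _ in range(10)]

    # A single "informed" start commits to whichever basin it begins in.
    informed_run = descend(1.0)  # 1.0 is a made-up prior guess

    print(sorted(round(loss(x), 3) for x in random_runs))
    print(round(loss(informed_run), 3))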


I think it's ambiguous because tic-tac-toe is a solved problem, so presumably it's being done as an exercise to learn something about neural nets.

If the idea were to write an AI to win at a harder game, it would make more sense to add whatever biases you can; you might get better performance that way. Or maybe that's what they thought back when that story was written? Game AI then was nothing like how we think about it now.


Even randomness isn't unbiased. A random initialization still has local minima and maxima, just not in the same places on the next try.


> Reality is reality regardless of what you perceive.

Except that in nearly all nontrivial topics we only see a small sample of reality.

So even if we are lucky enough to be starting out with a set of only verifiable, reproducible, true facts, we are still biased in their selection.



