
I don't know what his reasons are but it makes sense to me. Yes, there are incredible results coming out of the AI world but the methods aren't necessarily that interesting (i.e. intellectually stimulating) and it can be frustrating working in a field with this much noise.



I don't want to come across as too harsh but having studied machine learning since 2015 I find the most recent crop of people excited about working on AI are deep in Dunning-Kruger. I think I conflate this a bit with the fascination of results over process (I suppose that befuddlement is what led me to physics over engineering) but working in ML research for so long it's hard to gin up a perspective that these things are actually teleologically useful, and not just randomly good enough most of the time to keep up the illusion.


What do you mean by "things that are actually teleologically useful"?

Fellow physicist here by the way


Like useful in an intentional way: purpose-built and achieves success via accurate, parsimonious models. The telos here being the stated goal of a structurally sound agent that can emulate a human being, as opposed to the accidental, max-entropy implementations we have today.


Sounds like an arbitrary telos, especially in a world where one of the most useful inventions in human existence has been turning dead dinosaurs into flying metal containers that transport us great distances.


Every goal is equally arbitrary, I'm speaking to the assumed ideology of the AI fanatics.


Is a guide dog teleologically useful?


Not if you’re taste testing ceviche


I see, so humans are also not usefully intelligent in an intentional way, because they also follow the 2nd law of thermodynamics and maximize entropy and aren't deterministic?


Pure, refined “but humans also”.


What do you mean by "Pure, refined"?

You're right that "but humans also" is better than my "and humans also"


Not OP, but I'm assuming he means that they are maddeningly black-boxy, if you want to know how the sausage is made.


I feel that way sometimes too.

But then I think about how maddeningly unpredictable human thought and perception is, with phenomena like optical illusions, cognitive biases, and a limited working memory. Yet it still produces incredibly powerful results.

Not saying ML is anywhere near humans yet, despite all the recent advances, but perhaps a fully explainable AI system, with precise logic, 100% predictable, isn’t actually needed to get most of what we need out of AI. And given the “analog” nature of the universe maybe it’s not even possible to have something perfect.


> But then I think about how maddeningly unpredictable human thought and perception is, with phenomena like optical illusions, cognitive biases, a limited working memory.

I agree with your general point (I think), but I think that "unpredictable" is really the wrong word here. Optical illusions, cognitive biases, and limited working memory are mostly extremely predictable, and make perfect sense if you look at the role that evolution played in developing the human mind. E.g. many optical illusions are due to the fact that the brain needs to recreate a 3-D model from a 2-D image, and it has to do this by doing what is statistically most likely in the world we live in (or, really, the world of African savannahs where humans first evolved and walked upright). Thus, it's possible to "trick" this system by creating a 2D image from a 3D set of objects that is statistically unlikely in the natural world.

FWIW Stephen Pinker's book "How the Mind Works" has a lot of good examples of optical illusions and cognitive biases and the theorized evolutionary bases for these things.



