
> They address a class of questions that were previously ‘hard for computers and easy for people’, or, perhaps more usefully, ‘hard for people to describe to computers’.

Those aren't the only problems. ML can also solve problems that were previously 'hard for people, with no good algorithm for computers': problems where a good labeled dataset exists, but there is no known algorithm to map from data to label. For example, the work on determining sexual orientation from images (https://osf.io/fk3xr/).

The problem with this approach is that you get predictive ability but no insight. It can still be of great value, though, and potentially great danger too.




To be clear, it's not obvious that the paper you linked is actually accurate. A lot of researchers consider that paper to be deeply flawed, and to show something other than what it claims to.


What are others saying it shows?



Absolutely. People are using ML for plenty of difficult problems. In operations research, a lot of time is spent coming up with heuristic solutions to hard optimization problems. This problem is plenty hard for people to do. There has been some recent work in using ML to create solution methods, like the paper Learning Combinatorial Optimization Algorithms over Graphs (https://arxiv.org/abs/1704.01665).


> For example, the work determining sexual orientation from images (https://osf.io/fk3xr/).

> The problem with this approach is you get predictive ability, but no insight.

Is it not possible to take one specimen and tweak it a little at a time until it classifies as a different category, and in that way find the border between categories?
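
As a rough illustration of what I mean (purely a sketch; the model object, step size, and probing direction are hypothetical placeholders):

    import numpy as np

    def probe_boundary(model, x, direction, step=0.01, max_steps=1000):
        """Nudge one example along `direction` until the predicted class flips.

        Returns the first point found on the other side of the decision
        boundary, or None if no flip happens within max_steps.
        """
        original_class = model.predict(x.reshape(1, -1))[0]
        x_probe = x.astype(float)
        for _ in range(max_steps):
            x_probe = x_probe + step * np.asarray(direction)
            if model.predict(x_probe.reshape(1, -1))[0] != original_class:
                return x_probe
        return None

Repeating this from many starting specimens and along many directions would trace out an approximation of the border between the categories.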


Yes. It always bothers me when people claim you can't get insight out of a nonlinear black-box model, because you absolutely can. It's just not sitting right in front of you, and it's not always as clear-cut as what you'd get from a linear regression model. But even a linear regression with quadratic interaction terms is already pushing the limits of interpretability, so this isn't a problem unique to neural networks. Extracting that insight is, however, limited by computational cost.

Partial dependence plots: https://journal.r-project.org/archive/2017/RJ-2017-016/RJ-20...
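
The idea behind a partial dependence plot is simple enough to sketch by hand: fix one feature at each value on a grid, average the model's predictions over the rest of the data, and plot the averages against the grid. A minimal sketch, assuming a fitted model with a `predict` method and a numpy feature matrix `X`:

    import numpy as np

    def partial_dependence(model, X, feature_idx, grid_points=20):
        """Average the model's predictions as one feature sweeps across its range."""
        grid = np.linspace(X[:, feature_idx].min(), X[:, feature_idx].max(), grid_points)
        averages = []
        for value in grid:
            X_mod = X.copy()
            X_mod[:, feature_idx] = value   # force every row to this grid value
            averages.append(model.predict(X_mod).mean())
        return grid, np.array(averages)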

Local interpretable model-agnostic explanations (LIME): https://www.oreilly.com/learning/introduction-to-local-inter...
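
And a rough sketch of how the `lime` Python package is typically used on tabular data; the class and argument names here are from memory and may differ across versions, and `clf`, `X_train`, and `feature_names` are placeholders:

    from lime.lime_tabular import LimeTabularExplainer

    # Assumes: clf is a fitted classifier with predict_proba, X_train is a
    # 2-D numpy array of training data, feature_names lists its columns.
    explainer = LimeTabularExplainer(
        X_train,
        feature_names=feature_names,
        class_names=["negative", "positive"],
        mode="classification",
    )

    # Explain one prediction: LIME perturbs this row, weights the perturbed
    # copies by proximity, fits a sparse local linear model, and reports the
    # top feature weights for that neighborhood.
    explanation = explainer.explain_instance(X_train[0], clf.predict_proba, num_features=5)
    print(explanation.as_list())

The weights it reports are a local, approximate explanation of one prediction, not a global description of the model.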


If there is good labeled data, is it still 'hard for people'?


Presumably the labels aren't coming from humans' visual classification of the images.



