
I've seen a lot of doctors chime in on various threads and say their jobs couldn't possibly be done by machine learning. The same thing was said about self-driving cars before the DARPA challenges: when some profs actually put their minds to it, it was done in a couple of years. If the data were available, there are probably quite a few people who could actually detect cancer in slides.


That's not at all what he said. He said that this particular research involved a simpler (for CS) problem than either the title or his day to day job tackles.

Do you have a background in bio or medicine or computer vision? It's very interesting to see two informed people disagree about applied computer science, so I'd love you to contribute something more specific to the thread.


Actually, he also said that he is not concerned about machine learning being able to detect cancer anytime soon. Also, if you've been following machine learning trends recently (past 5 years), you'll see that deep learning methods (Hinton, LeCun, Ng, Bengio) have actually made a huge leap over what came before, and are believed to be, in some sense, that "final" algorithm that can tackle any learning problem. These just haven't spread widely enough yet.


As a computer vision researcher, I'm not at all convinced that deep learning methods will be "final" in any sense. I know that in the past, neural networks were "final", and then graphical models were "final", and so on.

And while deep learning methods have indeed shown remarkable improvements recently, they're not yet state-of-the-art on the most important/relevant computer vision benchmarks.


As a computer vision researcher, it must pain you to see that all your learning is for naught when faced with deep learning methods that can get amazing performance from raw pixels (see the MNIST results, for example). Also see Ronan Collobert's natural language processing from scratch paper, which handily beats the past few decades of NLP research in parsing (in terms of efficiency, and probably performance soon too). Or see the Microsoft Research speech recognition work, which has beaten all previous results by a significant margin using deep learning.


Not at all! I'd love for vision to be solved, no matter what the method. I'm more than happy to move onto another field if that's the case.

But I don't think it is. MNIST data is not particularly challenging. It's great that deep learning methods work there -- they must be doing something right.
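To give a sense of scale, here is a minimal sketch (assuming scikit-learn and its OpenML mirror of MNIST, "mnist_784", are available): even a plain logistic regression on raw pixels reaches roughly 92% test accuracy with no feature engineering at all, which is part of why the benchmark is considered easy.

    # Minimal sketch: a linear classifier on raw MNIST pixels, no feature engineering.
    # Assumes scikit-learn and its OpenML copy of MNIST ("mnist_784") are available.
    from sklearn.datasets import fetch_openml
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # 70,000 digit images, each flattened to a 784-dimensional pixel vector.
    X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
    X = X / 255.0  # scale pixel intensities to [0, 1]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=10000, random_state=0)

    clf = LogisticRegression(max_iter=200)  # plain multinomial logistic regression
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))  # typically around 0.92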

Come back and taunt me when deep learning methods start getting state-of-the-art results on, e.g., Pascal VOC: http://pascallin.ecs.soton.ac.uk/challenges/VOC/


Getting the best results on the harder vision challenges is simply a matter of letting the computers run long enough. Collobert's work, for example, took 3 months of training. I don't see why vision challenges should be any different. Perhaps the vision researchers, of whom there are many more than the few deep learning groups, should try it.


Cars can currently drive themselves in certain limited environments: tracks at specially designated competitions. How long do you think it will be before the country has the physical and legal infrastructure to support general-purpose automated cars?

Two thought experiments: 1) Do you think the general public would support the use of self-driving cars on public streets as they operate today, even after seeing the DARPA results? 2) Do you think the general public would support the use of computers to diagnose cancer without involving human doctors anytime within the next 50 years?

Remember that when [specialized worker X] says their job can't be done by [new technology Y], they aren't just referring to the technology being unable to fulfill the task. There is a whole economic, political and sociological matrix on top of the job market that prevents technology from displacing workers, and certain regulated industries are more sheltered than others. The hospital is probably one of the most insulated working environments for technological advances (just take a poke at any of their EMR systems to see what I mean.)


Google's self driving car[1] has logged almost 200,000 miles on real roads. It has a better record than the average driver. A judge in California has deemed that Google is allowed to test on the road as long as they are responsible for the damages. Nevada has already passed laws saying that self driving cars are legal. So in answer to your question, we already have the physical and legal infrastructure to support general-purpose automated cars, and we have the technological capacity.

This shouldn't be a question of the general public supporting it; it should be a statistical question: are our silicon counterparts better equipped to do the job? If so, then we should have them do it. The day when computers can diagnose cancer better than humans is not far off, and we should welcome it as an indicator of more precise identification rather than shun it out of fear.

[1] http://news.discovery.com/autos/how-google-self-driving-car-...


So in answer to your question, we already have the physical and legal infrastructure to support general-purpose automated cars, and we have the technological capacity.

That is such a stretch from the four sentences before it. You are discussing 1) a prototype vehicle that is not available to consumers and requires supervision by a cadre of engineers and 2) a recent law in just one of the least populous states of the country. How about a few choice details from that article you cited:

"... with only occasional human intervention."

"Before sending the self-driving car on a road test, Google engineers [have to] drive along the route one or more times to gather data about the environment."

"...there are many challenges ahead, including improving the reliability of the cars and addressing daunting legal and liability issues."

You must have read it with unrestrained optimism. I also applaud your idealistic notion that statistics matter more than public opinion, but the country isn't run by scientists and mathematicians (that's actually a good thing in certain respects). The reality is the general public does have to support changes that affect society, like laws and the development of physical and legal infrastructure, and there are many ways of formulating reasonable policy arguments with or without statistics.


I think you are misunderstanding the nature of the problem. There is no easy way to assess how good some mythical algorithm will be at interpreting pathology slides. Therefore, you are in essence asking doctors and patients to accept another non-human opinion about what is going on. So why should I accept your algorithm's opinion? I would rather have a human who has enough insight to say they are not sure and can discuss the case with me, and also understands that life changing decisions are being made on the basis of what they say.

Anyway, pathologists are most useful in unusual or difficult cases, which by definition have little available data. You want me to trust an algorithm trained with some kind of statistical machinery on a dataset to interpret an edge case?




