Hacker News

Really disappointing to hear this. The ML revolution is very real, and so is the immense value it's capable of delivering... HOWEVER, it's really only in a narrow category of problems, and people don't want to admit that, so they try to shoehorn it into every corner of everything... not too dissimilar from blockchain.

The narrow problem space where ML has become revolutionary is classification problems where the cost of a false positive is marginal. In the industry we frequently call this "professional judgement", and anyone who has ever invoked that phrase in the course of their work should be concerned, because ML is coming for you. As for the false positive part: no one bats an eye when a surgeon loses a patient, but we're unlikely to accept the same from a computer any time soon.

The biggest area I can think of where this narrow problem space exists to be capitalized on is... search. Not surprising, then, that Google became a king of ML, because for them it was actually a revolutionary leap forward on their core problem.



And just like blockchain, the reason it gets forced into every field is VCs.

"We are going to challenge existing players in $market" gets you nothing; "We are going to disrupt $market with blockchain/ML" gets you an eight-digit seed round.


You don’t have to take the money, or at least take it on those terms. Entrepreneurs have agency. They should exercise it.


The most common strategy (I think) is to raise with a vague ML promise but execute using whatever makes the most sense in practice.


Seriously? A narrow category of problems?

ML has been used and is being used to significantly advance image processing, video processing, image classification, speech-to-text, natural language understanding, medical imaging interpretation, medical notes and differential diagnosis, warehouse management, shipping and delivery, transportation, networking, agriculture, biomedical research, insurance, law practice (document scanning), journalism, politics (through better polling, targeting, gerrymandering, whatever), probably other things I'm missing.


That list is flag planting of the first order — like a dog claiming territory as a kingdom after a few stray golden showers here and there.

Yes, ML has been applied to all those topics, but in narrow/superficial applications and with limited success (in most of those areas, anyhow). The applications have also been explored in relatively ad hoc ways, with little improvement in systematic understanding of any of those fields.


Please. The vast majority of the above are fields where ML failed spectacularly.

If you had any idea about medical diagnosis, biomedical research, supply chain optimization, politics and journalism you would know that machine learning is a laughing stock in these fields.

ML has had two big wins: (image & data) classification and NLP. It is stupid not to use ML for these problems, but it is equally stupid to try to fit ML into fields where it cannot work.


Let's not claim something has failed when it has just begun... Given today's hardware and given that it's a very new topic of research, IMHO the accomplishments are incredible. It's not yet production ready, but that doesn't mean another 10 years of progress won't get it there.


We need to invest in long term R&D to potentially achieve an ML breakthrough in one of the above fields instead of allocating enormous capital to ML unicorn businesses.

But to do so, we need to first openly admit the truth: ML is not working for the wide range of problems it is currently pitched for.


Seconded. Today's "narrow" applications are quite wide compared to the expert systems of decades ago. I wouldn't say we are in a second AI winter when cool new applications of DNNs pop up frequently on HN.


My impression of ML is that it shines where the "intuition of a master" is needed. For example, the mastery of a technician painter who has built an intuition for imitating Van Gogh's style can be achieved through AI.

Any intuitive skill that can be built through hard work and years of experience seems to be within the realms of what AI/ML can learn to do. Separating background from the subjects, guessing the 3D shape of an object from a 2D image etc. Anything that people can master through experience, including stuff like "sensing that there's something fishy but can't tell exactly what" kind of intuition.

I bet there will be welding machines that can help an amateur weld like a master by learning and imitating the way a master welder works.


In your view, why does society have an issue when a "machine" makes a decision vs a human? Can you think of legitimate areas where trust in machine outputs wouldn't be favoured over a human's?


ML systems will struggle when the question itself is ill-posed.

A human can say "I've been instructed to group these data into those categories, but this particular example doesn't fit into any of them" and then devise a way to handle special cases.

By construction, an ML system can't. At the end of the day, a classifier needs to assign one of the predefined labels to each example. At best, it might give you a confidence value or a probability distribution over the labels. However, interpreting those is usually outside the system itself.
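To make that last point concrete, here's a minimal sketch (plain Python with NumPy; the labels, logits, and 0.7 threshold are made-up illustrations, not anyone's production system) showing how the "doesn't fit any category" decision has to be bolted on outside the classifier, which itself only ever produces a distribution over the predefined labels:

```python
import numpy as np

def softmax(logits):
    """Turn raw classifier scores into a probability distribution."""
    z = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return z / z.sum()

def classify_or_abstain(logits, labels, threshold=0.7):
    """Return the most probable label, or None when the top probability
    falls below the threshold. Note the abstention logic lives in this
    wrapper, not in the model: the classifier itself always produces a
    distribution over the predefined labels."""
    probs = softmax(np.asarray(logits, dtype=float))
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return None  # caller must devise a way to handle the special case
    return labels[best]

labels = ["cat", "dog", "bird"]
print(classify_or_abstain([4.0, 0.5, 0.2], labels))  # confident: "cat"
print(classify_or_abstain([1.0, 0.9, 0.8], labels))  # ambiguous: None
```

The threshold is a judgment call made by a human outside the system, which is exactly the interpretation step the comment above describes.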




