
AI as a bunch of if-else statements is a very '80s-centric view of current AI.

My computer vision professor has us read a current paper every week. He really really dislikes the use of deep learning and has said in class he tries to look for papers that don't use it.

But nearly all of the current papers are using it, and every week he admits that the papers he presents are blowing old research out of the water with it. Throwing a bunch of GPUs and image data at a problem is surprisingly effective for a lot of problems that were previously extremely difficult. He begrudgingly admits this every week, but he does continue to find papers that get slightly better results by combining deep learning with traditional computer vision techniques. I think he is right that the current black-box approach, learning from data and augmented data with zero prior knowledge, is probably approaching a peak, but the cutting edge today is definitely not a lot of if-else statements.
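For concreteness, here is a minimal sketch (mine, not from any of the papers he presented) of that hybrid idea: feed a classical computer-vision result, here an OpenCV Canny edge map, into a small PyTorch CNN alongside the raw image, rather than making the network learn everything from raw pixels. The layer sizes, Canny thresholds, and toy input are arbitrary assumptions.

    # Hybrid of classical CV and deep learning: stack a Canny edge map
    # (classical step) with the grayscale image and feed both channels
    # to a small CNN. All sizes/thresholds are illustrative assumptions.
    import cv2
    import numpy as np
    import torch
    import torch.nn as nn

    class HybridNet(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            # 2 input channels: grayscale image + classical edge map
            self.features = nn.Sequential(
                nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, num_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    def preprocess(gray_image: np.ndarray) -> torch.Tensor:
        """Stack the raw grayscale image with its Canny edge map."""
        edges = cv2.Canny(gray_image, 100, 200)          # classical CV step
        stacked = np.stack([gray_image, edges]).astype(np.float32) / 255.0
        return torch.from_numpy(stacked).unsqueeze(0)    # shape (1, 2, H, W)

    if __name__ == "__main__":
        img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
        logits = HybridNet()(preprocess(img))
        print(logits.shape)  # torch.Size([1, 10])

The point of the sketch is only the structure: the hand-engineered prior does part of the work, and the learned part is smaller than it would otherwise need to be.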



Well, what's disappointing from a vision standpoint is that we are not making much progress in understanding vision. Instead, we are making progress in solving machine vision.

I think your professor isn't looking for results so much as a good explanation for the vision problem, a "unified theory" of vision that you could, in principle, code up from theory alone.


That is my impression of current AI research as someone who is almost a layman in the field: use traditional, proven methods to get an approximation, and then use deep learning and/or other machine learning algorithms (and training data) to refine those approximations.
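As a concrete illustration of that two-stage pattern (my own sketch, not anyone's published method): a cheap classical estimate, here an intensity-weighted centroid, followed by a tiny network that only has to learn a residual correction. The task, the patch handling, and the network size are all assumptions chosen to keep it short.

    # "Classical approximation first, learned refinement second."
    # Step 1 is a model-free centroid; step 2 is a tiny MLP predicting
    # a residual (dx, dy) correction. Everything here is illustrative.
    import numpy as np
    import torch
    import torch.nn as nn

    def classical_estimate(image: np.ndarray) -> np.ndarray:
        """Coarse, model-free guess: intensity-weighted centroid."""
        ys, xs = np.indices(image.shape)
        total = image.sum() + 1e-8
        return np.array([(xs * image).sum() / total, (ys * image).sum() / total])

    class RefineNet(nn.Module):
        """Tiny MLP that predicts a residual correction to the classical guess."""
        def __init__(self, patch_size: int = 16):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(patch_size * patch_size, 64), nn.ReLU(),
                nn.Linear(64, 2),  # (dx, dy) correction
            )

        def forward(self, patch):
            return self.net(patch.flatten(1))

    def refined_estimate(image: np.ndarray, model: RefineNet) -> np.ndarray:
        coarse = classical_estimate(image)                       # step 1: classical
        patch = torch.from_numpy(image[:16, :16]).float()[None]  # stand-in crop
        correction = model(patch).detach().numpy()[0]            # step 2: learned
        return coarse + correction

    if __name__ == "__main__":
        img = np.random.rand(16, 16).astype(np.float32)
        print(refined_estimate(img, RefineNet()))

With real training data, the network only has to model whatever the classical estimate gets wrong, which is usually a much easier problem than learning the whole mapping from scratch.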

It seems to be a very "organic" kind of AI, even sharing some of the flaws of human minds. Thinking about it can be spooky given how much it has evolved in just a few years.




