
Are the two devices running the same model? The article claims the DSP has higher confidence, but I don't see why that would be the case. I suppose one of them could run at a higher numerical precision, but that wouldn't make sense if they're comparing performance.



I've talked a little bit with some engineers at Qualcomm who worked on projects like this. My impression was that they make a lot of compromises when optimizing a computer vision algorithm for their hardware: the output is slightly altered, but it runs extremely fast with comparable accuracy. It's likely they're doing something similar here, which might explain the difference in confidences, but I highly doubt the DSP objectively classifies images better than the CPU. If anything, the "better performance" is an illusion because the model running on the DSP simply reacts more quickly.
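To make the precision point concrete: a toy numpy sketch (not Qualcomm's actual pipeline, and the int8 scheme here is a deliberately crude stand-in for real per-layer quantization) of how reduced precision can shift a model's confidence scores while usually leaving the top-1 class unchanged:

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    rng = np.random.default_rng(0)
    logits_fp32 = rng.normal(size=10).astype(np.float32)

    # Crude symmetric int8 quantization of the logits -- a stand-in
    # for the precision loss that accumulates through a quantized net.
    scale = np.abs(logits_fp32).max() / 127.0
    logits_int8 = np.round(logits_fp32 / scale).astype(np.int8)
    logits_deq = logits_int8.astype(np.float32) * scale

    p_fp32 = softmax(logits_fp32)
    p_quant = softmax(logits_deq)

    print("top-1 unchanged:", p_fp32.argmax() == p_quant.argmax())
    print("confidence shift:", abs(float(p_fp32.max()) - float(p_quant.max())))

So the quantized path can report a slightly different confidence for the same image without being objectively better or worse at classification.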


> FPS – the DSP captures more images (frames-per-second/FPS), thus increasing the app’s accuracy.

Over a given time-slice, the DSP is able to capture and process more images of the object, allowing it to be more precise in its predictions.
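In other words, more frames let you average out per-frame noise. A minimal sketch of that idea (the names `fused_prediction` and `noisy_frames` and the simulated noise level are mine, not from the article):

    import numpy as np

    def fused_prediction(per_frame_probs):
        # Average per-frame class probabilities; more frames -> lower variance.
        return np.mean(per_frame_probs, axis=0)

    rng = np.random.default_rng(1)
    true_probs = np.array([0.6, 0.3, 0.1])  # hypothetical underlying class mix

    def noisy_frames(n):
        # Simulate n noisy per-frame model outputs jittering around true_probs.
        p = np.clip(true_probs + rng.normal(scale=0.15, size=(n, 3)), 1e-6, None)
        return p / p.sum(axis=1, keepdims=True)

    for fps in (5, 30):
        print(fps, "frames ->", np.round(fused_prediction(noisy_frames(fps)), 3))

Run it and the 30-frame average sits noticeably closer to the underlying distribution than the 5-frame one, which is the sense in which a higher FPS can improve the app's effective accuracy even with the same model.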





