
Intel EOLed RealSense

https://www.crn.com/news/components-peripherals/intel-says-i...

https://www.theverge.com/2021/8/17/22629528/intel-realsense-...

It's typical Intel behavior: they LOVE to EOL products on sudden notice, without a milligram of effort to try to sell the business to someone who would maintain it.




I think depth cameras haven't really taken off.

One issue is that, with neural networks, you can accomplish many of the things a depth camera does without the depth camera.

For instance you can train a neural network to guess at a depth field for a scene. It's not going to deal with

https://en.wikipedia.org/wiki/Ames_room

but it's good enough for many purposes. Similarly, you can train a neural network to separate the speaker on a video call from the background, so you just don't need a depth camera.
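
A minimal sketch of that kind of NN background separation, assuming MediaPipe's pretrained selfie-segmentation model and a single webcam frame (my choice of tooling for illustration, not anything the parent named):

    # Sketch: blur the background of one webcam frame using a pretrained
    # segmentation net instead of a depth camera. Assumes mediapipe + opencv.
    import cv2
    import mediapipe as mp
    import numpy as np

    seg = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)

    cap = cv2.VideoCapture(0)          # default webcam
    ok, frame = cap.read()             # one BGR frame
    cap.release()

    # MediaPipe expects RGB; the result is a per-pixel "person" probability.
    mask = seg.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).segmentation_mask > 0.5

    blurred = cv2.GaussianBlur(frame, (55, 55), 0)
    out = np.where(mask[..., None], frame, blurred)   # keep person, blur the rest
    cv2.imwrite("fake_bokeh.png", out)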

On top of that, people just haven't found applications for depth cameras that are all that compelling. I guess "Windows Hello" uses something like a depth camera, but that creates a strong incentive for the camera's functionality not to be exposed to the end user: if you can see what the camera sees, you're in a much better position to abuse it.

The other thing is that depth cameras just haven't gotten better from a practical point of view. On paper the Xbox One Kinect is a lot better than the original Kinect, but in practice it doesn't perform any better.


Real depth cameras perform better and cost far less than the $1500 GPU you're going to need to run a monocular depth network. Monocular depth still struggles to generalize to environments even slightly different from the training set.

Try git-cloning Niantic Labs' monodepth2 (or whatever the current state of the art on Papers with Code is) and running it on your living room. It's not good.

https://github.com/nianticlabs/monodepth2

https://paperswithcode.com/task/monocular-depth-estimation
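
If setting up monodepth2 is a hassle, Intel's own MiDaS (also on the Papers with Code leaderboard) loads via torch.hub in a few lines, which makes the living-room test easy to try. A sketch using MiDaS's documented hub entry points; the image path is a placeholder:

    # Sketch: run a pretrained monocular depth model (MiDaS small) on one image.
    import cv2
    import torch

    model = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
    model.eval()
    transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

    img = cv2.cvtColor(cv2.imread("living_room.jpg"), cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        pred = model(transform(img))              # relative inverse depth, not meters
        pred = torch.nn.functional.interpolate(   # resize back to input resolution
            pred.unsqueeze(1), size=img.shape[:2],
            mode="bicubic", align_corners=False,
        ).squeeze()

    # Normalize for viewing; eyeball it against the room's actual geometry.
    depth = (pred - pred.min()) / (pred.max() - pred.min())
    cv2.imwrite("depth.png", (depth.numpy() * 255).astype("uint8"))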


There is a depth camera in every new iPhone.


> Similarly you can train a neural network to separate the speaker on a video call from the background, so you just don't need a depth camera.

These still look so fake, and are so prone to blurring out objects you're trying to hold up on camera, that I actually created my own virtual camera that blurs progressively more with distance, based on RealSense-measured depth, and looks far more realistic.

https://github.com/dheera/bokeh-camera
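
Not the actual code from that repo, but the core idea is roughly this: grab depth-aligned color from the camera with pyrealsense2, then blend each pixel toward a blurred copy as its distance from an assumed focal plane grows. The focal distance and falloff below are made-up illustrative values:

    # Sketch: depth-weighted background blur from one RealSense frame.
    import cv2
    import numpy as np
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    pipeline.start(config)
    align = rs.align(rs.stream.color)   # map depth pixels onto the color image

    frames = align.process(pipeline.wait_for_frames())
    depth = np.asanyarray(frames.get_depth_frame().get_data()) * 0.001  # typical D4xx scale: z16 -> meters
    color = np.asanyarray(frames.get_color_frame().get_data())
    pipeline.stop()

    focal_m = 0.8                       # assumed subject distance
    soft = cv2.GaussianBlur(color, (41, 41), 0)
    # Blend toward the blurred copy as |depth - focal| grows: progressive bokeh.
    w = np.clip(np.abs(depth - focal_m) / 1.5, 0, 1)[..., None]
    out = (color * (1 - w) + soft * w).astype(np.uint8)
    cv2.imwrite("bokeh.png", out)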


They're not that good but people will tolerate poor quality.


For video conference calls, where it's mostly an extra bonus? Yes.

RealSense was used for industrial operations; I was personally looking into them for packing items into transport containers (specific to the factory involved). Poor-quality depth information would mean jams involving a robot capable of goring through industrial enclosures, printers, and maintenance engineers.


People, maybe, but not the industry.


‘Industry’ couldn't care less about an API for depth sensors on the Windows platform, because industry can't accept an autonomous system that has a ‘failed to download software update’ dialog on the screen 80% of the time.

Intel’s pitch was always aimed at consumers or maybe light ‘enterprise’ such as an interior decorator who takes measurements with a tablet. Self-driving cars, industrial robots, etc. were always going to be based on a more robust platform.


You have a much more optimistic perception of the quality of industrial deployments than what I've heard from the people who work on or with them.




