It's typical Intel behavior: they LOVE to EOL products with sudden notice and not a milligram of effort to try to sell the business to someone who would maintain it.
Monocular depth estimation isn't perfect, but it is good enough for many purposes. Similarly, you can train a neural network to separate the speaker on a video call from the background, so you just don't need a depth camera.
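To illustrate what that replacement amounts to: once a segmentation network has produced a per-pixel foreground mask, the rest is just compositing. This is a minimal, hypothetical sketch in plain NumPy; the mask here is synthetic, whereas in practice it would come from a model such as a selfie-segmentation network.

```python
import numpy as np

def composite(frame, mask, background):
    """Blend the speaker onto a virtual background using a per-pixel
    foreground mask in [0, 1] (1 = speaker, 0 = background).
    frame/background: (H, W, 3) arrays; mask: (H, W) array."""
    m = mask[..., None]  # broadcast the mask over the RGB channels
    return m * frame + (1.0 - m) * background
```

The same blend works for background blur: pass a blurred copy of the frame as `background` instead of a replacement image.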
On top of that, people just haven't found applications for depth cameras that are all that compelling. I guess "Windows Hello" uses something like a depth camera, but that creates a strong incentive for the camera's functionality not to be exposed to the end user, because if you can see what the camera sees, you are in a much better position to abuse it.
The other thing is that depth cameras just haven't gotten better from a practical point of view. On paper the Xbox One Kinect is a lot better than the original Kinect, but in practice it doesn't perform any better.
Real depth cameras perform better and cost far less than the $1500 GPU that you're going to need to run the monocular depth network. Monocular depth still struggles to generalize to environments that are even slightly different from the training set.
Try git cloning Niantic Labs' monodepth2 (or whatever the state of the art is on Papers with Code) and running it on your living room. It's not good.
> Similarly you can train a neural network to separate the speaker on a video call from the background, so you just don't need a depth camera.
These still look so fake, and they tend to blur out objects that you're trying to hold up in the video, so I actually created my own virtual camera that blurs progressively more with depth, based on RealSense-measured depth, and it looks far more realistic.
For video conference calls, where it's mostly an extra bonus? Yes.
RealSense was used for industrial operations; I personally was looking into it for packing items into transport containers (specific to the factory involved). Poor-quality depth information would mean jams involving a robot capable of goring through industrial enclosures, printers, and maintenance engineers.
‘Industry’ couldn't care less about an API for depth sensors on the Windows platform, because industry can't accept an autonomous system that has a ‘failed to download software update’ dialogue on the screen 80% of the time.
Intel’s pitch was always aimed at consumers or maybe light ‘enterprise’ such as an interior decorator who takes measurements with a tablet. Self-driving cars, industrial robots, etc. were always going to be based on a more robust platform.
https://www.crn.com/news/components-peripherals/intel-says-i...
https://www.theverge.com/2021/8/17/22629528/intel-realsense-...