Hacker News
Best RealSense replacements? (github.com/intelrealsense)
96 points by paulkrush on Aug 18, 2021 | 82 comments



Incoming hearsay, but I've heard conflicting things from RealSense integrators on what Intel is actually doing with this product over the longer term. Someone mentioned that ongoing R&D is being shuttered, but the depth camera product would still be available for purchase for a while. It would be nice to get some more clarity on what's actually happening.


This kind of makes sense, as bigger customers would have to have some sort of availability contract. I guess we just have to wait for statements. RealSense cameras that are no longer being developed but promised to be available for purchase for 5-10 years might still be better than anything else out there, or at least bridge us through the chipageddon gap. Does Intel see the threat of 3D image sensors / solid-state / flash LiDARs taking over, and is it bailing out early? If that were the case, why not just keep the L515 LiDAR?


Intel is notorious for not giving reasonable EOL notices nor last-buy opportunities. I don’t trust them as a supplier. I would not make the assumption that customers were treated well. Until I hear from my contacts, I assume nothing. (Source: worked at Intel for 11 years. Later worked at customers that got gored by this practice a couple of times.)


+1. I got left high and dry when Intel got rid of the Edison.


Makes sense: if you have a product where you can make a profit on the manufacturing, why would you stop supplying it? Intel never broke out RealSense in any public financial data; I wonder if the R&D was not profitable and, since it's not a core part of Intel's business, the new CEO decided to shut it down.


This is true. We have had feedback from Framos (a big RealSense customer) saying basically that.


Recent and related:

Intel is giving up on its AI-powered RealSense cameras - https://news.ycombinator.com/item?id=28218354 - Aug 2021 (10 comments)


I'm out of the loop; why is it gone? Clicking on the repo, it seems perfectly active, with the last release only a few days ago. The repo is not archived and the readme doesn't mention it's deprecated. What's going on here?


Intel EOLed RealSense

https://www.crn.com/news/components-peripherals/intel-says-i...

https://www.theverge.com/2021/8/17/22629528/intel-realsense-...

It's typical Intel behavior: they LOVE to EOL products with sudden notice and not a milligram of effort to try to sell the business to someone who will maintain it.


I think depth cameras haven't really taken off.

One issue is that, with neural networks, you can accomplish many things a depth camera can do without the depth camera.

For instance you can train a neural network to guess at a depth field for a scene. It's not going to deal with

https://en.wikipedia.org/wiki/Ames_room

but it is good enough for many purposes. Similarly you can train a neural network to separate the speaker on a video call from the background, so you just don't need a depth camera.

On top of that, people just haven't found applications for depth cameras that are all that compelling. I guess "Windows Hello" uses something like a depth camera, but that creates a strong incentive for the camera's functionality not to be exposed to the end user, because if you can see what the camera sees you are in a much better place to abuse it.

The other thing is that depth cameras just haven't gotten better from a practical point of view. On paper the Xbox One Kinect is a lot better than the original Kinect, but in practice it doesn't perform any better.


Real depth cameras perform better and cost far less than the $1500 GPU that you're going to need to run the monocular depth network. Monocular depth still struggles to generalize to environments that are even slightly different from the training set.

Try git cloning niantic labs' monodepth2 (or whatever is the state of the art on Papers with Code) and running it on your living room. It's not good.

https://github.com/nianticlabs/monodepth2

https://paperswithcode.com/task/monocular-depth-estimation


There is a depth camera in every new iphone.


> Similarly you can train a neural network to separate the speaker on a video call from the background, so you just don't need a depth camera.

These still look so fake, and they tend to blur out objects you're trying to hold up on the video, so I actually created my own virtual camera that blurs progressively more over depth, based on RealSense-measured depth, and it looks far more realistic.

https://github.com/dheera/bokeh-camera


They're not that good but people will tolerate poor quality.


For video conference calls, where it's mostly an extra bonus? Yes.

RealSense was used for industrial operations; I personally was looking into them for packing items in transport containers (specific to the factory involved). Poor quality of depth information would mean jams involving a robot capable of goring through industrial enclosures, printers, and maintenance engineers.


People maybe but not the industry.


‘Industry’ couldn't care less about an API for depth sensors on the Windows platform, because industry can't accept an autonomous system that has a ‘failed to download software update’ dialog on the screen 80% of the time.

Intel’s pitch was always aimed at consumers or maybe light ‘enterprise’ such as an interior decorator who takes measurements with a tablet. Self-driving cars, industrial robots, etc. were always going to be based on a more robust platform.


You have a much more optimistic perception of the quality of industrial deployments than I've heard from people who work on or with them.



I remember in 2020 they were running a pretty aggressive ad campaign that made me think, "hmm, they seem starved for applications."


Your best bet is probably to order cheap IR or color industrial cameras and use stereo block matching. It's implemented in OpenCV and works okay-ish. For example, Basler dart USB3 cameras can be synchronized through the IO expansion port.

That said, maybe it's time for someone to produce a good replacement. In my opinion, the RealSense was always quite pricey when compared with what it could do, especially now that we can just run AI algorithms on the edge.


There is no way this is a replacement for the RealSense depth cameras. These are USB, bus-powered, factory-calibrated cameras doing the block matching in an ASIC, often with an IR projector to help the matching. This is like saying, oh well, I can't find a GPU, so I'll just compile Mesa for the CPU and my product will be fine.


Put an NVIDIA Jetson Nano in between and you also have hardware matching.

IR projector is $19: https://www2.mouser.com/ProductDetail/ams/AQAA-20?qs=DRkmTr7...


This seems a little bit like claiming that while you can't deliver an aircraft carrier right now, you do have several canoes available and since they both float it's basically the same thing.


Yes and no. Of course, a homemade module like this will be way less polished than a RealSense camera. But on the other hand, it'll be more flexible and possibly tailored to your problem. So a homemade module might actually have better performance.


That one only has 4700 dots, which severely limits the potential applications. The one in the iPhone face scanner has 30,000 dots. (Last time I looked, it was very hard to source IR projectors that you could use to make a reasonably accurate depth camera.)


It has about 5500 dots so it should be similar to the projector inside a RealSense D435.


That's like saying instead of riding in a car, you are going to get a wheelbarrow and be pulled by an even bigger car. It's absurd to say that is a replacement.


Not quite. It's like instead of buying a car, you buy wheels and a frame and train as a car mechanic. With a bit of effort and skill, you'll end up with a car, but it won't look like a factory-produced car.


There is no universe where what you described ends up being similar to an integrated product with a dedicated ASIC. What you are saying is a stab in the dark that isn't backed up by real results.


Quite a lot of hobbyists are going that route instead of purchasing a RealSense camera:

https://www.youtube.com/watch?v=IfklQ-O-FPc

Also, I believe Amazon went that way when designing their DeepRacer. Note that it has no IR projector, but two cameras and a commodity PC for processing.


Those results are not even remotely close to Intel's cameras, let alone the Azure Kinect, yet it will be much more power-hungry.

The resolution, noise, frame rate and even consistency (notice the flashing) are a tiny fraction of basic hardware solutions.


There are plenty of other 3D depth cameras out there, just not with Intel's brand recognition or the quick and easy purchase of one unit at US$200.


I have been using Intel's D435i for 6D object pose estimation and I have to say I was not really happy with the quality of the retrieved depth frames. The depth images are very noisy, and flat surfaces appear heavily wavy, making it difficult to use for tasks that demand a relatively exact 3D representation of the surrounding space, a problem that has been mentioned in issues by several other users. Just Google "ghost noise" or "wavy noise D435i".

Moreover, the "AI" implementation is actually not really AI; it's just a bunch of common filters for depth-noise reduction, such as a median filter. Just look up the filters. And lastly, the depth measurement becomes less accurate the further away the point is, making it necessary to apply a depth correction, in my case a 2nd-order correction polynomial.
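The 2nd-order correction described here amounts to fitting a quadratic from measured depth to ground truth. A sketch with made-up calibration pairs; a real calibration would collect these against a tape measure or a reference target:

```python
import numpy as np

# Hypothetical calibration data: depth reported by the camera vs.
# ground truth, with the error growing with distance (made-up values).
measured = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])    # metres
true = np.array([0.5, 1.02, 1.56, 2.12, 2.71, 3.33])   # metres

# Fit a 2nd-order correction polynomial: true ~ a*d^2 + b*d + c.
coeffs = np.polyfit(measured, true, deg=2)
correct = np.poly1d(coeffs)

# Apply the correction to new readings.
corrected = correct(np.array([1.2, 2.8]))
```

The same fit can then be applied per-frame to the whole depth image, since np.poly1d objects broadcast over arrays.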

All in all, I have to say I'd not use it again, especially not in industrial contexts. The Kinect v2 delivers a much more stable and reliable depth image, even though the RealSense programming interface is mostly nice to work with.


I second the L515, as it's much cleaner than the D435 (if you need this). That being said, the L515 is not like a scanning LiDAR, but it's a great tool at the price point.


The L515 is, AFAIK, based on a scanning MEMS mirror rather than the structured light of the 435, and together with its integrated "normal" camera it should give better results (it did for us).


The Kinect Azure outperforms the RealSense devices.

There are also devices based on the Infineon lidar, such as the pieye Nimbus 3D camera.


I just called about the Azure Kinect and they told me Oct 2022 for the next stock. Thanks, I will have to look into the Infineon lidar. I'm guessing the Velabit is a chipageddon story as well.


Sorry to hear about the Kinect delay. The device works very well in my experience.


Also very, very expensive.


$400 USD is 'very very expensive' ?


For a business no, for a hobby yes.


It costs about the same as the Intel RealSense L515, though.


Does the Kinect 2 still outperform the Azure Kinect at body tracking?

https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/...


There are also ToF cameras from Basler, we have their early model, and it works reasonably well, though hot and rather power-hungry. Based on that experience, their current models look promising: https://www.baslerweb.com/en/products/cameras/3d-cameras/


I've worked with these (Azure Kinect), and they are quite nice, but they are much bigger and heavier and require a lot more power.


I haven't used these specific cameras, but enjoyed working with the company as client.

https://en.ids-imaging.com/ensenso-stereo-3d-camera.html


Like you, I have used other cameras from this company, and I'd say they know what they are doing. Probably worth checking out.


Any idea on retail pricing, e.g. is there a comparable model to RealSense? There's no obvious "Buy" button on the site.


I wrote them, explained my job, and asked for a camera sample to test with our software (video security), and they sent one. It was intended for a specific client and a special use case, which it satisfied 100%. Whatever the cost was, it was bundled into the client's fees, so I have no idea. Afterwards we returned the sample hardware, but they sent it back to us saying "you might have future special-needs clients. Keep that camera, and we'll send you another for just asking."


Give that marketing team a bonus!


Same question. Couldn't find anywhere to click "buy"


Check out Luxonis’ offerings.

https://www.luxonis.com/


I don't understand at all how you got downvoted for posting this. Thanks for the tip, cheers!


I've really enjoyed using Luxonis's OAK-D camera. First Kickstarter I've gotten, and does what they said it would. They have a very active Discord channel as well.


Sounds interesting, thanks for sharing.

How is the quality compared to RealSense? Do I understand correctly that OAK-D camera requires two cables (USB-C and barrel jack power), not just one?


I can't directly compare quality, as I've not used RealSense devices, but the main difference is that the OAK-D uses stereoscopy without IR. I've been playing with it in various light conditions and it's surprisingly robust in low light, as long as features are visible in both cameras. It only requires one USB-C cable if you're using it over USB3 (i.e. a host that can source >1 A, IIRC). I use it with a Jetson Nano (albeit not to its full capacity yet) with just the USB.


Realsense is not getting out of the business completely. They are restructuring and dropping the LiDAR development. They will continue with their more popular cameras.


That's cool, where did you see this?


I work at a large robotics company on an active project using their technology. There have been high level meetings on this topic.


For a T265 tracking-camera replacement (visual pose and odometry), perhaps the Zed Mini? This paper [1] suggests comparable accuracy. I've not used it. However, the Zed computation is on the software/SDK side, not integrated.

Hmm, it looks like Structure is now supporting Linux?

[1] https://www.mdpi.com/2218-6581/9/3/56/htm


Is there some GPU or FPGA open-source implementation of a depth camera comparable to RealSense in output quality?

I've seen many projects that call themselves open source while utilizing RealSense, and I always scream internally: how can they call themselves open source when most of the work is being done by a non-generic, essential, and irreplaceable piece of proprietary technology?


if it were possible to make a camera with a GPU or FPGA, there would probably be something, but it isn't. FPGAs and GPUs aren't (and can't be coerced to become) light sensors or structured light emitters.


Well, you could probably get pretty far with two hardware-synchronized IR cameras, an IR projector, and an FPGA. But the thing is, most hobbyists will just use two color cameras and AI instead.


I thought that RealSense cameras were just ordinary synchronized global-shutter cameras with some stereophotogrammetry processor.


they use structured light; all the ones I have, anyway.


Note some nice discussion along the same line as this thread over on ROS discourse: https://discourse.ros.org/t/intel-cancelling-its-realsense-b...


Shameless self-promotion: I'm a co-founder of Chronoptics, and we design bespoke iToF depth cameras. I have been thinking we should release a module to fill the gap in the market caused by Intel's move.


Will you be able to do it within the same price bracket as the RealSense cameras?


Realsense cameras were cool, it's a shame to see them go :(


AIRY3D has been developing a passive, single sensor depth solution that has lightweight power and compute requirements. With a single CMOS sensor you get both 2D and depth realtime streams. I work at AIRY3D and we have advanced prototypes with some samples available for eval/purchase (limited supply). Definitely could be a replacement for RealSense for certain applications (eg under 1 meter range, and ideal for outdoors).



I hope development continues as an open source project. I just got an Intel L515 not that long ago and it is pretty great!


Take a look at the Stereolabs Zed cameras.


Zed sucks; they eat into your GPU instead of using an on-board ASIC, so you have no GPU left to do neural nets or anything else.

RealSense on the other hand does computations on-board and consumes very little CPU and no GPU.


But be aware that they are rolling shutter and only barely synchronized.


My company uses ZEDs now, and we are planning to use their new models, which are weatherproof; they currently seem to be the only inexpensive option for outdoor sensing that works out of the box. And we don't like them: you need your own GPU (at least they support Jetson & co.), there's not enough SW flexibility, and the USB-only connection limits the cable length.

Longer term, we plan to use our own software with either a pair of conventional machine vision cameras or, preferably, some ready-made assembly of a pair of cameras plus an SoC with full user programmability, since we like to be in control of our perception algorithms.


You can make extra-long USB connections with USB-over-fibre products like [1] (if you've got the money)

[1] https://www.amazon.co.uk/FIBBR-Cable-Female-Active-Extension...


Zed cameras have a very hostile EULA (borderline illegal, but IANAL); I would not recommend using them unless it's absolutely the only option.


Didn't know about that, could you point at specific bad parts?


Here is the license as of today:

```
<...>

2. License

a. The Software is licensed to You, not sold. You are licensed to use the Software only as downloaded from the stereolabs.com website, and updated by STEREOLABS from time to time. You may not copy or reverse engineer the Software.

b. As conditions to this Software license, You agree that:

    i.   You will use Your Software with ZED, ZED 2 or ZED Mini camera only and not with any other device (including). You will not use Unauthorized Accessories. They may not work or may stop working permanently after a Software update.

    ii.  You will not use or install any Unauthorized Software with an Authorized Accessory. If You do, Your ZED, ZED 2 or ZED Mini camera may stop working permanently at that time or after a later Software update.

    iii. You will not attempt to defeat or circumvent any Software technical limitation, security, or anti-piracy system. If You do, Your ZED, ZED 2 or ZED Mini camera may stop working permanently at that time or after a later Software update.

    iv.  STEREOLABS may use technical measures, including Software updates, to limit use of the Software to the ZED, ZED 2 or ZED Mini camera, to prevent use of Unauthorized Accessories, and to protect the technical limitations, security and anti-piracy systems in the ZED, ZED 2 or ZED Mini camera.

    v.   STEREOLABS may update the Software from time to time without further notice to You, for example, to update any technical limitation, security, or anti-piracy system.

<...>
```

The problem with this license (beyond the fact that it restricts a lot of user freedom) is that it's not specific. For instance, it's unclear whether improving depth quality by applying an extra median filter is a violation of clause iii, since it works around a technical limitation.
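For concreteness, the kind of post-processing at issue is something like a 3x3 median filter pass over the depth map to suppress speckle dropouts, sketched here on synthetic data (the flat-wall depth map and dropout pattern are made up for illustration):

```python
import numpy as np

# Synthetic depth map of a flat wall at 2 m, with sparse dropout
# speckle (zeros) like a real sensor produces.
rng = np.random.default_rng(42)
noisy = np.full((100, 100), 2.0, dtype=np.float32)
rows = rng.integers(0, 100, size=200)
cols = rng.integers(0, 100, size=200)
noisy[rows, cols] = 0.0

# 3x3 median filter implemented with plain NumPy (edge pixels are
# left untouched for simplicity): stack the 9 shifted views of the
# interior and take the per-pixel median.
stack = np.stack([noisy[1 + dy:99 + dy, 1 + dx:99 + dx]
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)])
filtered = noisy.copy()
filtered[1:99, 1:99] = np.median(stack, axis=0)
```

Isolated zero-valued outliers vanish because the median of a mostly-2.0 neighborhood is still 2.0, while the flat surface itself is unchanged.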

While I don't expect them to actually go after their users, the fact that EULA is so hostile was the reason we can't use it in our team.


OK, RealSense CTO put a comment in another issue: https://github.com/IntelRealSense/librealsense/issues/9648

tl;dr: RealSense Stereo cameras are mostly sticking around.



