Incoming hearsay, but I've heard conflicting things from RealSense integrators on what Intel is actually doing with this product over the longer term. Someone mentioned that ongoing R&D is being shuttered, but the depth camera product would still be available for purchase for a while. It would be nice to get some more clarity on what's actually happening.
This kind of makes sense, as bigger customers would have to have some sort of availability contract. I guess we just have to wait for official statements. RealSense cameras that are no longer being developed but are promised to be available for purchase for 5-10 years might still be better than anything else out there, or at least bridge us through the chipageddon gap. Does RealSense see the threat of 3D image sensors/solid-state/flash LiDARs taking over, and is it bailing out early? If that were the case, though, why not just keep the L515 LiDAR?
Intel is notorious for not giving reasonable EOL notices nor last-buy opportunities. I don’t trust them as a supplier. I would not make the assumption that customers were treated well. Until I hear from my contacts, I assume nothing. (Source: worked at Intel for 11 years. Later worked at customers that got gored by this practice a couple of times.)
Makes sense: if you have a product where you can make a profit on the manufacturing, why would you stop supplying it? Intel never broke RealSense out in any public financial data; I wonder if the R&D was not profitable and, since it's not a core part of Intel's business, the new CEO decided to shut it down.
I'm out of the loop; why is it gone? Clicking on the repo, it seems perfectly active: the last release was a few days ago. The repo is not archived, and the readme doesn't mention it's deprecated. What's going on here?
It's typical Intel behavior, they LOVE to EOL products with only sudden notice and not a milligram of effort to try to sell the business to someone who will maintain it.
but it is good enough for many purposes. Similarly you can train a neural network to separate the speaker on a video call from the background, so you just don't need a depth camera.
On top of that, people just haven't found applications for depth cameras that are all that compelling. I guess "Windows Hello" uses something like a depth camera, but that creates a strong incentive for the camera's functionality not to be exposed to the end user, because if you can see what the camera sees, you are in a much better position to abuse it.
The other thing is that depth cameras just haven't gotten better from a practical point of view. On paper the Xbox One Kinect is a lot better than the original Kinect, but in practice it doesn't perform any better.
Real depth cameras perform better and cost far less than the $1500 GPU that you're going to need to run the monocular depth network. Monocular depth still struggles to generalize to environments that are even slightly different from the training set.
Try git-cloning Niantic Labs' monodepth2 (or whatever is the state of the art on Papers with Code) and running it on your living room. It's not good.
> Similarly you can train a neural network to separate the speaker on a video call from the background, so you just don't need a depth camera.
These still look so fake, and tend to blur out objects that you're trying to hold up in the video, so much so that I actually created my own virtual camera that blurs progressively more with distance based on RealSense-measured depth, and it looks far more realistic.
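The idea of a depth-aware progressive blur is easy to sketch. Below is a toy NumPy version with made-up function names (a real pipeline would pull aligned color and depth frames from the RealSense SDK and write into a virtual camera device): pixels near the focus depth stay sharp, and blur strength increases with distance from it.

```python
import numpy as np

def box_blur(img, k):
    """k x k box blur of a 2D array, with edge padding (k odd)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    acc = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            acc += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return acc / (k * k)

def depth_blur(img, depth, focus, kernels=(1, 3, 7, 15)):
    """Blend progressively stronger blurs by distance from the focus depth."""
    layers = [box_blur(img, k) for k in kernels]
    err = np.abs(depth - focus)
    # Map depth error to a blur-level index: 0 = sharp, last = blurriest.
    idx = np.minimum((err / (err.max() + 1e-9) * len(kernels)).astype(int),
                     len(kernels) - 1)
    out = np.zeros(img.shape, dtype=float)
    for i, layer in enumerate(layers):
        out[idx == i] = layer[idx == i]
    return out
```

In practice you would smooth the transitions between blur levels (hard per-pixel selection leaves seams), but even this crude version respects real scene geometry, which is why it can look better than a learned foreground/background matte.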
For video conference calls, where it's mostly an extra bonus? Yes.
RealSense was used for industrial operations; I personally was looking into them for packing items in transport containers (specific to the factory involved). Poor-quality depth information would mean jams involving a robot capable of goring through industrial enclosures, printers, and maintenance engineers.
‘Industry’ couldn't care less about an API for depth sensors on the Windows platform, because industry can't accept an autonomous system that has a ‘failed to download software update’ dialog on the screen 80% of the time.
Intel’s pitch was always aimed at consumers or maybe light ‘enterprise’ such as an interior decorator who takes measurements with a tablet. Self-driving cars, industrial robots, etc. were always going to be based on a more robust platform.
Your best bet is probably to order cheap IR or color industrial cameras and use stereo block matching. It's implemented in OpenCV and works okay-ish. For example, Basler dart USB3 cameras can be synchronized through the I/O expansion port.
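For the curious, block matching itself is conceptually simple; OpenCV's `StereoBM` is a fast, refined version of the brute-force idea. Here is a pure-Python sketch over a single rectified scanline (toy code, not OpenCV's actual implementation): for each left-image pixel, slide a window across candidate disparities in the right image and keep the shift with the lowest sum of absolute differences (SAD).

```python
def disparity_row(left, right, window=1, max_disp=16):
    """Brute-force SAD block matching along one rectified scanline.

    left, right: lists of pixel intensities from the left/right camera.
    Returns, per left-image pixel, the disparity d that minimizes the
    sum of absolute differences over a (2*window+1)-pixel window.
    """
    n = len(left)
    disp = [0] * n
    for x in range(n):
        best_sad, best_d = None, 0
        for d in range(min(max_disp, x) + 1):  # candidate shifts
            sad = 0
            for w in range(-window, window + 1):
                xl, xr = x + w, x - d + w
                if 0 <= xl < n and 0 <= xr < n:
                    sad += abs(left[xl] - right[xr])
            if best_sad is None or sad < best_sad:
                best_sad, best_d = sad, d
        disp[x] = best_d
    return disp
```

Depth then follows from Z = f * B / d for focal length f and baseline B, which is where the hardware sync and factory calibration of a product like the RealSense earn their keep.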
That said, maybe it's time for someone to produce a good replacement. In my opinion, the RealSense was always quite pricey when compared with what it could do, especially now that we can just run AI algorithms on the edge.
There is no way this is a replacement for the RealSense depth cameras. Those are USB, bus-powered, factory-calibrated cameras doing the block matching in an ASIC, often with an IR projector to help the matching. This is like saying: oh well, I can't find a GPU, so I'll just compile Mesa for the CPU and my product will be fine.
This seems a little bit like claiming that while you can't deliver an aircraft carrier right now, you do have several canoes available and since they both float it's basically the same thing.
Yes and no. Of course, a homemade module like this will be way less polished than a RealSense camera. But on the other hand, it'll be more flexible and possibly tailored to your problem. So a homemade module might actually have better performance.
That one only has 4700 dots, which severely limits the potential applications. The one in the iPhone face scanner has 30,000 dots. (Last time I looked, it was very hard to source IR projectors that you could use to make a reasonably accurate depth camera.)
That's like saying instead of riding in a car, you are going to get a wheelbarrow and be pulled by an even bigger car. It's absurd to say that is a replacement.
Not quite. It's like instead of buying a car, you buy wheels and a frame and do a car mechanic training. With a bit of effort and skill, you'll end up with a car, but it won't look like a factory-produced car.
There is no universe where what you described ends up being similar to an integrated product with a dedicated ASIC. What you are saying is a stab in the dark that isn't backed up by real results.
There are plenty of other 3D depth cameras out there, just not with the brand recognition of Intel, nor the quick and easy purchase of one unit at US$200.
I have been using Intel's D435i for 6D object pose estimation and I have to say I was not really happy with the quality of the retrieved depth frames. The depth images are very noisy; flat surfaces appear heavily wavy, making it difficult to use for tasks that demand a relatively exact 3D representation of the surrounding space, a problem that several other customers have raised in issues. Just Google "ghost noise" or "wavy noise" D435i.
Moreover, the "AI" implementation is actually not really AI; it's just a bunch of common filters for depth-noise reduction, such as a median filter (just look up the filters). And lastly, the depth measurement becomes less exact the further away the point is, making it necessary to apply a depth correction, in my case a 2nd-order correction polynomial.
All in all, I have to say I'd not use it again, especially not in industrial contexts. The Kinect v2 delivers a much more stable and reliable depth image, even though the RealSense programming interface is mostly nice to work with.
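The 2nd-order correction mentioned above amounts to a least-squares quadratic fit of reference (ground-truth) depth against measured depth. A minimal pure-Python sketch under that assumption, with hypothetical function names (`numpy.polyfit(z_meas, z_ref, 2)` does the same in one line):

```python
def fit_quadratic(z_meas, z_ref):
    """Least-squares fit of z_ref ~ a*z^2 + b*z + c.

    Builds the 3x3 normal equations and solves them by Gaussian
    elimination with partial pivoting; returns (a, b, c).
    """
    s = [sum(z ** k for z in z_meas) for k in range(5)]  # sums of z^0..z^4
    t = [sum(r * z ** k for z, r in zip(z_meas, z_ref)) for k in range(3)]
    A = [[s[4], s[3], s[2]],
         [s[3], s[2], s[1]],
         [s[2], s[1], s[0]]]
    y = [t[2], t[1], t[0]]
    for i in range(3):  # forward elimination with pivoting
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p], y[i], y[p] = A[p], A[i], y[p], y[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            y[r] -= f * y[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        x[i] = (y[i] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return tuple(x)

def correct_depth(z, a, b, c):
    """Apply the fitted 2nd-order correction to a raw depth reading."""
    return a * z * z + b * z + c
```

Calibrate once against a flat target at known distances, then run every raw reading through `correct_depth`.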
I second the L515 as it's much cleaner than the D435 (if you need this). That being said, the L515 is not like a scanning LiDAR, but it's a great tool at the price point.
The L515 is, AFAIK, based on a scanning MEMS mirror rather than the structured light of the D435, and together with its integrated "normal" camera it should give better results (it did for us).
I just called about the Kinect Azure and they told me October 2022 for the next stock. Thanks, I will have to look into the Infineon LiDAR. I'm guessing the Velabit is a chipageddon story as well.
There are also ToF cameras from Basler, we have their early model, and it works reasonably well, though hot and rather power-hungry. Based on that experience, their current models look promising: https://www.baslerweb.com/en/products/cameras/3d-cameras/
I wrote them, explained my job, and asked for a camera sample to test with our software (video security), and they sent one. It was intended for a specific client and a special use case, which it satisfied 100%. Whatever that cost was, it was bundled into the client's fees, so I have no idea. Afterwards we returned the sample hardware, but they sent it back to us, saying "you might have future special-needs clients. Keep that camera, and we'll send you another for just asking."
I've really enjoyed using Luxonis's OAK-D camera. It's the first Kickstarter I've backed, and it does what they said it would. They have a very active Discord channel as well.
I can't directly compare quality as I've not used RealSense devices, but the main difference is that the OAK-D uses stereoscopy without IR. I've been playing with it in various light conditions and it's surprisingly robust in low light, as long as features are visible in both cameras. It only requires one USB-C cable if you're using it over USB3 (i.e. with a device that can source >1A, IIRC). I use it with a Jetson Nano (albeit not to its full capacity, yet) over just USB.
RealSense is not getting out of the business completely. They are restructuring and dropping the LiDAR development; they will continue with their more popular cameras.
For a T265 tracking-camera replacement (visual pose and odometry), perhaps the ZED Mini? This paper[1] suggests comparable accuracy. I've not used it. Note, however, that the ZED does its computation on the software/SDK side, not integrated in the device.
Hmm, it looks like Structure is now supporting linux?
Is there some open-source GPU or FPGA implementation of a depth camera comparable to RealSense in the quality of its output?
I've seen many projects that call themselves open source while utilizing RealSense, and I always scream internally: how can they call themselves open source when most of the work is being done by a non-generic, essential, and irreplaceable piece of proprietary technology?
If it were possible to make a camera with just a GPU or FPGA, there would probably be something, but it isn't. FPGAs and GPUs aren't (and can't be coerced to become) light sensors or structured-light emitters.
Well, you probably could get pretty far with two hardware-synchronized IR cameras, an IR projector, and an FPGA. But the thing is, most hobbyists will just use two color cameras and AI instead.
Shameless self-promotion: I'm a co-founder of Chronoptics, where we design bespoke iToF depth cameras. We have been thinking we should release a module to fill the gap in the market caused by Intel's move.
AIRY3D has been developing a passive, single-sensor depth solution with lightweight power and compute requirements. With a single CMOS sensor you get both 2D and depth streams in real time. I work at AIRY3D; we have advanced prototypes with some samples available for eval/purchase (limited supply). It could definitely replace the RealSense for certain applications (e.g. under 1 meter range; ideal for outdoors).
My company uses ZEDs now, and we are planning to use their new weatherproof models; they currently seem to be the only inexpensive option for outdoor sensing that works out of the box. And we don't like them: they need your own GPU (at least they support Jetson & co.), there's not enough SW flexibility, and the USB-only connection limits the cable length.
Longer term, we plan to use our own software with either a pair of conventional machine-vision cameras or, preferably, some ready-made assembly of a pair of cameras plus an SoC with full user programmability, since we like to be in control of our perception algorithms.
Quoting the Stereolabs ZED SDK EULA:

```
a. The Software is licensed to You, not sold. You are licensed to use the Software only as downloaded from the stereolabs.com website, and updated by STEREOLABS from time to time. You may not copy or reverse engineer the Software.
b. As conditions to this Software license, You agree that:
i. You will use Your Software with ZED, ZED 2 or ZED Mini camera only and not with any other device (including). You will not use Unauthorized Accessories. They may not work or may stop working permanently after a Software update.
ii. You will not use or install any Unauthorized Software with an Authorized Accessory. If You do, Your ZED, ZED 2 or ZED Mini camera may stop working permanently at that time or after a later Software update.
iii. You will not attempt to defeat or circumvent any Software technical limitation, security, or anti-piracy system. If You do, Your ZED, ZED 2 or ZED Mini camera may stop working permanently at that time or after a later Software update.
iv. STEREOLABS may use technical measures, including Software updates, to limit use of the Software to the ZED, ZED 2 or ZED Mini camera, to prevent use of Unauthorized Accessories, and to protect the technical limitations, security and anti-piracy systems in the ZED, ZED 2 or ZED Mini camera.
v. STEREOLABS may update the Software from time to time without further notice to You, for example, to update any technical limitation, security, or anti-piracy system.
<...>
```
The problem with this license (beyond the fact that it restricts a lot of user freedom) is that it's not specific. For instance, it's unclear whether improving depth quality by applying an extra median filter violates clause iii, since it defeats a technical limitation.
While I don't expect them to actually go after their users, the fact that the EULA is so hostile was the reason our team couldn't use it.
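For concreteness, the "extra median filter" at issue is nothing exotic: just a standard median filter run over the depth map after capture. A minimal pure-Python sketch with hypothetical names (not Stereolabs' code):

```python
import statistics

def median_filter(depth, k=3):
    """Median-filter a 2D depth map (list of lists) with a k x k window,
    clamping the window at the image borders."""
    h, w = len(depth), len(depth[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the (clamped) k x k neighborhood and take its median.
            window = [depth[yy][xx]
                      for yy in range(max(0, y - r), min(h, y + r + 1))
                      for xx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = statistics.median(window)
    return out
```

Whether running something this mundane on the SDK's output counts as defeating a "technical limitation" is exactly the ambiguity complained about.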