I used this technique commercially (about 5 years ago) to build voxel representations of the world in real-time from lidar data (Velodyne VLP-32C).
Each lidar return (a 3D point) provides a line of empty voxels from the lidar origin to the location where the laser hit a surface. Because you know the light followed that line uninterrupted, you know all the voxels on the line are empty.
The lidar is giving you thousands of points per second and the lines (derived from the points) carve the voxel space. The voxel space is rendered continuously. It's very satisfying to watch the space being carved out.
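The carving step described above can be sketched with a standard 3D DDA voxel traversal (in the style of Amanatides & Woo). A minimal version, assuming a dense NumPy occupancy grid and illustrative names (nothing here is from the original system):

```python
import numpy as np

def carve_ray(grid, origin, hit, voxel_size=1.0):
    """Walk the ray from the sensor origin to the lidar return with a
    3D DDA and mark every voxel it crosses as empty (0). The voxel
    containing the hit itself is left occupied."""
    o = np.asarray(origin, dtype=float)
    h = np.asarray(hit, dtype=float)
    pos = np.floor(o / voxel_size).astype(int)   # current voxel index
    end = np.floor(h / voxel_size).astype(int)   # voxel containing the hit
    direction = h - o
    length = np.linalg.norm(direction)
    if length == 0.0:
        return
    direction /= length
    step = np.sign(direction).astype(int)
    with np.errstate(divide="ignore", invalid="ignore"):
        # Ray distance needed to cross one voxel along each axis.
        t_delta = np.abs(voxel_size / direction)
        # Ray distance to the first voxel boundary along each axis.
        next_boundary = (pos + (step > 0)) * voxel_size
        t_max = np.where(direction != 0.0,
                         (next_boundary - o) / direction, np.inf)
    t = 0.0
    while t <= length and not np.array_equal(pos, end):
        if (pos >= 0).all() and (pos < grid.shape).all():
            grid[tuple(pos)] = 0                 # the beam passed through: carve
        axis = int(np.argmin(t_max))             # nearest boundary crossing
        t = t_max[axis]
        pos[axis] += step[axis]
        t_max[axis] += t_delta[axis]
```

Calling this once per lidar return progressively hollows out the free space while the hit voxels accumulate as the surface.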
I don't know of anyone else using this technique before me, and I don't know of anyone doing it now.
From a layman's point of view (i.e. knowing nothing about voxels), that sounds like a clever idea, and I'm tempted to try it for my next gaming/visualization side project.
I read or saw a similar paper. I believe they used (multiple) 3D cameras instead of a LiDAR to get the point cloud, but otherwise it was similar. They used an octree or a similar data structure for speed-up. What did you use?
They were describing the concept of carving out empty space to leave a 3D representation of the thing you're interested in, which was the primary method in your comment. This can be done with images by segmenting out the background and cutting along the edge/profile of the object for each 2D view, assisted by other 2D-to-3D methods.
It can be used for live 3d models with single (tracked and moving) or multiple (known and fixed) cameras.
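The silhouette-based variant described here (often called shape-from-silhouette or visual hull carving) can be sketched as follows; the `project` callback and camera model are assumptions for illustration, not from the paper being discussed:

```python
import numpy as np

def carve_by_silhouettes(grid, masks, project):
    """A voxel survives only if it projects inside the object's
    silhouette in every 2D view. `masks` are per-view boolean
    background-segmentation images; `project` maps (view index,
    voxel index) to (row, col) pixel coordinates for that view."""
    for idx in np.argwhere(grid):                # snapshot of occupied voxels
        for view, mask in enumerate(masks):
            r, c = project(view, idx)
            if not (0 <= r < mask.shape[0] and
                    0 <= c < mask.shape[1] and mask[r, c]):
                grid[tuple(idx)] = 0             # outside one silhouette: carve
                break
```

With fixed, calibrated cameras `project` is just each camera's projection matrix; with a single tracked camera it changes per frame.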
If it was the same one that I read, it was a neat paper, but I'm having trouble finding it within a few searches. I don't recall if they used a point cloud or voxels, but the difference between storing the 3D structure as a point cloud or as voxels doesn't seem to warrant your response. They're trivially converted to one another, in this context.
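For what it's worth, the conversion mentioned above really is about one line in each direction; a minimal sketch, where `voxel_size` is an assumed parameter:

```python
import numpy as np

def points_to_voxels(points, voxel_size=0.1):
    """Point cloud -> set of occupied voxel indices, by quantizing
    coordinates and dropping duplicates."""
    return np.unique(np.floor(np.asarray(points) / voxel_size).astype(int),
                     axis=0)

def voxels_to_points(indices, voxel_size=0.1):
    """Voxel indices -> one representative point per voxel (its center)."""
    return (np.asarray(indices) + 0.5) * voxel_size
```

The round trip is lossy only in that points within the same voxel collapse to its center, which is the point being made here.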