They were talking about the concept of carving out empty space to leave a 3d representation of the thing you're interested in, which was the primary method in your comment. With images, this can be done by segmenting the object from the background, then carving the volume with the object's edge/profile from each 2d view, optionally assisted by other 2d-to-3d methods.
It can be used to build live 3d models with either a single camera (tracked and moving) or multiple cameras (known and fixed).
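For concreteness, here's a minimal sketch of the carving idea in Python/NumPy: start with a solid voxel block and delete every voxel whose projection lands outside the object's silhouette in some view. The mask format, the 3x4 projection matrices, and the choice to carve voxels that project outside the image entirely are all my assumptions for illustration, not anything from the paper.

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, grid_min, grid_max, res=64):
    """Carve a voxel grid down to the silhouette-consistent volume.

    silhouettes: list of HxW boolean masks (True = object pixel)
    projections: list of 3x4 camera projection matrices, one per view
    grid_min, grid_max: 3-vectors bounding the volume of interest
    """
    # Regular grid of voxel centres inside the bounding box (homogeneous coords).
    axes = [np.linspace(grid_min[i], grid_max[i], res) for i in range(3)]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    pts = np.stack([xs, ys, zs, np.ones_like(xs)], axis=-1).reshape(-1, 4)

    occupied = np.ones(len(pts), dtype=bool)  # start with a solid block
    for mask, P in zip(silhouettes, projections):
        # Project every voxel centre into this view.
        uvw = pts @ P.T
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)  # column
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)  # row
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        # Carve away voxels that fall outside the image or the silhouette
        # in this view (assumption: out-of-frame counts as empty).
        hits = np.zeros(len(pts), dtype=bool)
        hits[inside] = mask[v[inside], u[inside]]
        occupied &= hits

    return pts[occupied, :3], occupied.reshape(res, res, res)
```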
If it was the same one I read, it was a neat paper, but I'm having trouble finding it within a few searches. I don't recall whether they used a point cloud or voxels, but the difference between storing the 3d structure as a point cloud or as voxels doesn't seem to warrant your response; in this context they're trivially converted to one another.
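To show what I mean by "trivially converted", here's a rough sketch of the back-and-forth (assuming an axis-aligned grid with a known origin and voxel size; the point-to-voxel direction is of course lossy at the grid resolution):

```python
import numpy as np

def voxels_to_points(occupancy, grid_min, voxel_size):
    """Occupied voxel indices -> point cloud of voxel centres."""
    idx = np.argwhere(occupancy)                  # (N, 3) occupied indices
    return grid_min + (idx + 0.5) * voxel_size    # centre of each voxel

def points_to_voxels(points, grid_min, voxel_size, res):
    """Point cloud -> boolean occupancy grid (points snapped to voxels)."""
    idx = np.floor((points - grid_min) / voxel_size).astype(int)
    idx = np.clip(idx, 0, res - 1)
    occ = np.zeros((res, res, res), dtype=bool)
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return occ
```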
But thanks for taking the time to post a comment!