
This seems really cool, but I don't quite understand what I'm looking at. Is this a processed version of the LIDAR data for the environment?

Also, why do the "pixels" get less dense at the edges of the view? I.e., as you rotate, the points that were previously at the center of the screen get sparser as they reach the edges of your view. My intuition is that if you sample points on a hemisphere equally (a difficult task in and of itself), you shouldn't get this kind of pixelation. So is something going on here, with either the orientation of the squares or the sampling, that causes the density/exposure to fall off with the cosine of the angle between the forward vector and the vector to the side?
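
(For what it's worth, equal-area sampling on a hemisphere is tractable via Archimedes' hat-box theorem; a minimal sketch, not from this project:)

    // Equal-area hemisphere sampling (hat-box theorem): a uniform
    // height z in [0, 1] plus a uniform azimuth gives uniform density
    // over the surface, with no cosine falloff toward the horizon.
    function sampleHemisphere(): [number, number, number] {
      const z = Math.random();                 // uniform height
      const phi = 2 * Math.PI * Math.random(); // uniform azimuth
      const r = Math.sqrt(1 - z * z);          // circle radius at height z
      return [r * Math.cos(phi), r * Math.sin(phi), z];
    }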




Yep - there is a way to grab the depth data from each Street View pano along with the image data. I plot each point in 3D space and grab the corresponding color from the image. The separation is uneven - it depends on what the depth camera sees, I guess - many of the points are marked as off at infinity.
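
The gist is something like this sketch (hypothetical types and field names, assuming an equirectangular layout; the real Street View depth payload is a packed format you have to decode first):

    // Turn a depth pano into a colored point cloud: each pixel gives a
    // direction (azimuth, elevation); the depth scales it into a 3D point.
    interface DepthPano {
      width: number;            // depth map width in pixels
      height: number;           // depth map height in pixels
      depth: Float32Array;      // meters; Infinity where nothing was hit
      color: Uint8ClampedArray; // RGBA image data at the same resolution
    }

    function panoToPoints(p: DepthPano): { positions: number[]; colors: number[] } {
      const positions: number[] = [];
      const colors: number[] = [];
      for (let y = 0; y < p.height; y++) {
        for (let x = 0; x < p.width; x++) {
          const d = p.depth[y * p.width + x];
          if (!isFinite(d)) continue; // skip the "off at infinity" samples
          const azimuth = (x / p.width) * 2 * Math.PI;
          const elevation = (0.5 - y / p.height) * Math.PI; // -pi/2..pi/2
          positions.push(
            d * Math.cos(elevation) * Math.sin(azimuth),
            d * Math.sin(elevation),
            d * Math.cos(elevation) * Math.cos(azimuth)
          );
          const i = (y * p.width + x) * 4;
          colors.push(p.color[i] / 255, p.color[i + 1] / 255, p.color[i + 2] / 255);
        }
      }
      return { positions, colors };
    }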


Interesting, so it's the data they provide that is sparse. If you expand the view to full screen and stare straight on at a wall, you can clearly see the 'sparse' pixels form circles around the camera on the ground and in the sky.

I don't really know how LIDAR works, so I don't know if it's something intrinsic to the process, or a decision made by the engineers.


Yeah, I've noticed that too - I wondered if it was an artifact of the way I render the points, but since the buildings look mostly right, I figured that's just the way it is.


That's a possibility. Are you rotating your squares in just the xy plane, or also along the "pitch"?

I find your project fascinating, thank you for sharing it!


Thank you so much. I use the built-in point cloud primitive, which I think is a list of billboard geometry, and all you can change is the size of each point.
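
Something along these lines, assuming a Three.js-style Points API (I haven't named the library above, so treat the exact names as assumptions):

    import * as THREE from "three";

    // A point cloud where every vertex is drawn as a screen-aligned
    // square; the material's `size` is the only per-point knob.
    function makePointCloud(positions: number[], colors: number[]): THREE.Points {
      const geometry = new THREE.BufferGeometry();
      geometry.setAttribute("position", new THREE.Float32BufferAttribute(positions, 3));
      geometry.setAttribute("color", new THREE.Float32BufferAttribute(colors, 3));
      const material = new THREE.PointsMaterial({
        size: 0.15,         // world-space size of each square
        vertexColors: true, // per-point colors sampled from the pano image
      });
      return new THREE.Points(geometry, material);
    }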


Okay, so it has nothing to do with the orientation of the point planes.

Having thought about it some more, I think this is a consequence of the reduction in surface area hit by the LIDAR rays as the square of the distance. Basically, the rays are cast in a spherical distribution (which has a surface area of 4πr^2), so the further out you go, the less of the environment the rays "capture", and you get those sparse pixels at a distance.
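
A quick back-of-the-envelope version of that (illustrative numbers only):

    // Inverse-square falloff: N rays spread over a sphere of radius r
    // cover 4*pi*r^2 of surface, so hit density drops with distance squared.
    function pointsPerSquareMeter(totalRays: number, distanceMeters: number): number {
      return totalRays / (4 * Math.PI * distanceMeters ** 2);
    }

    // A wall twice as far away gets a quarter of the point density:
    console.log(pointsPerSquareMeter(1_000_000, 5));  // ~3183 pts/m^2
    console.log(pointsPerSquareMeter(1_000_000, 10)); // ~796 pts/m^2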

So those circles are just contours of equal distance from the center of the LIDAR sphere, where the pixel density has dropped by the same amount. You can kind of see how, depending on the distance to the building walls, the surrounding "halo" of circular pixel density grows or shrinks.


Nice! I noticed that halo effect you mentioned - fascinating to get some background. Thanks for taking the time to post.



