That doesn't really look like lidar data to me; at least, it wouldn't be lidar mounted right on the Street View car. Maybe they use aerial lidar somehow? Or maybe the resolution is deliberately poor?
It is not a proper point cloud, but it is the data Street View uses to highlight whether you are looking at a wall or at the street. It is also used for the transitions from one scene to another.
You can see it is very basic geometry built from 90° angles, rather than an actual point cloud. Still pretty cool!
https://imgur.com/a/EgC0RbN
Oh yeah, I didn't mean to come off as judgemental or nitpicky. This is fantastic work! And this data can actually be pretty useful - say, if somebody wanted to build a racing game based on Street View, using the boundaries of the street for collisions, etc. So many fun possibilities!
Someone used it years ago to make a neat visualization where they rendered the Street View imagery as normal and then used the same 3D data to overlay foliage and create an apocalyptic environment. Seems to be gone now but there are some articles about it like this one: https://www.citylab.com/life/2014/03/epic-google-street-view...
Since the word 'lidar' can be taken out of the title without damaging it, I've done so. (Submitted title was "Show HN: Street View Simple – Explore Street View Lidar Data in a Browser")
Understood, but I think it might be LiDAR - perhaps just an early or unusual implementation? There are plenty of articles out there, like this one: https://arstechnica.com/gadgets/2017/09/googles-street-view-... that mention a specific LiDAR scanner, the Velodyne VLP-16 "Puck". Anyhow, it doesn't make much difference, and thank you for posting why it was changed.
Yeah, this definitely doesn't look like lidar data... unless it's really low quality. The buildings show spatial depth, but the cars and pedestrians are pretty much all in a circle (they have no depth).
I don't know much about lidar, but... is it possible Google has done this intentionally with some sort of algorithm? After all, pedestrians and vehicles are just "noise" in the context of mapping/visualising streets.
I think they have removed the cars from the lidar data on purpose, as there would be privacy issues if the cars were shown. The same would apply to pedestrians and shop facades.
Interesting - I've seen it touted as LiDAR data but since it's all a bit unofficial, I guess it could be anything. I'll see if I can dig up any old articles about it - it's been there for quite some time.
This seems really cool, but I don't quite understand what I'm looking at. Is this a processed version of the LIDAR data for the environment?
Also, why do the "pixels" get less dense at the edges of the view? I.e. as you rotate, the pixels that were previously at the center of the screen get more sparse as they reach the edges of your view? My intuition is that if you sample points on a hemisphere equally (a difficult task in and of itself), then you shouldn't get this kind of pixelation. So is there something going on here with either the orientation of the squares, or with the sampling, that causes the density/exposure to fall off with the cosine of the angle between the forward vector and the view direction?
Yep - there is a way to grab the depth data from each Street View pano along with the image data. I plot each point in 3D space and grab the corresponding color from the image. The separation is uneven - it depends on what the depth camera sees, I guess - many of the points are marked as off at infinity.
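Roughly, the plotting step looks something like this (a simplified TypeScript sketch - the real depth payload is undocumented, so the shape of the input here is my own invention):

    // Assumes the depth map is a width x height grid of metres over the
    // same equirectangular projection as the panorama image.
    function depthToPoints(
      depth: Float32Array, // depth[row * width + col]; Infinity = sky
      width: number,
      height: number,
      getColor: (u: number, v: number) => [number, number, number]
    ): { positions: number[]; colors: number[] } {
      const positions: number[] = [];
      const colors: number[] = [];
      for (let row = 0; row < height; row++) {
        for (let col = 0; col < width; col++) {
          const d = depth[row * width + col];
          if (!isFinite(d)) continue; // skip points marked as off at infinity
          // Equirectangular pixel -> spherical angles
          const azimuth = (col / width) * 2 * Math.PI;
          const polar = (row / height) * Math.PI;
          // Spherical -> Cartesian, scaled by the depth value
          positions.push(
            d * Math.sin(polar) * Math.cos(azimuth),
            d * Math.cos(polar),
            d * Math.sin(polar) * Math.sin(azimuth)
          );
          // Sample the matching pixel of the (higher-resolution) pano image
          colors.push(...getColor(col / width, row / height));
        }
      }
      return { positions, colors };
    }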
Interesting, so it's the data they provide that is sparse. If you expand the view to full screen and stare straight at a wall, you can clearly see the 'sparse' pixels form circles around the camera on the ground and in the sky.
I don't really know how LIDAR works, so I don't know if it's something intrinsic to the process, or a decision made by the engineers.
Yeah, I've noticed that too - I wondered if it was an artifact of the way I render the points, but since the buildings look mostly right, I figured that's just the way it is.
Thank you so much. I use the built-in point cloud primitive, which I think is a list of billboard quads, and all you can change is the size of each point.
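For reference, here's roughly what I mean, assuming a three.js-style setup (THREE.Points renders each vertex as a screen-aligned square, and PointsMaterial exposes little beyond the point size):

    import * as THREE from 'three';

    function makeCloud(positions: number[], colors: number[]): THREE.Points {
      const geometry = new THREE.BufferGeometry();
      geometry.setAttribute('position', new THREE.Float32BufferAttribute(positions, 3));
      geometry.setAttribute('color', new THREE.Float32BufferAttribute(colors, 3));
      const material = new THREE.PointsMaterial({
        size: 0.2,             // the one knob: size of each square
        vertexColors: true,    // use the per-point colors sampled from the pano
        sizeAttenuation: true  // points shrink with distance from the camera
      });
      return new THREE.Points(geometry, material);
    }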
Okay, so it has nothing to do with the orientation of the point planes.
Having thought about it some more, I think this is a consequence of the LIDAR rays spreading out over a surface area that grows with the square of the distance. Basically, the rays are cast in a spherical distribution (a sphere has surface area 4πr^2), so the further out you go, the less of the environment the rays "capture" per unit area, and you get sparse pixels at a distance.
So those circles are just drops in pixel density proportional to the distance from the center of the LIDAR sphere. You can kind of see how the surrounding 'halo' of circular pixel density grows or shrinks depending on the distance to the building walls.
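A quick sanity check of that intuition (TypeScript, with a made-up angular resolution):

    // At a fixed angular step between rays, the spacing between
    // neighbouring samples on a surface grows linearly with distance,
    // so the areal density of points falls off as the inverse square.
    const angularStep = (2 * Math.PI) / 512; // hypothetical: 512 rays per ring

    function sampleSpacing(r: number): number {
      return r * angularStep; // arc length between adjacent rays at range r
    }

    function arealDensity(r: number): number {
      return 1 / sampleSpacing(r) ** 2; // points per unit area, ~ 1/r^2
    }

    console.log(arealDensity(10) / arealDensity(5)); // 0.25: twice as far, 4x sparser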
The software is not open-source but both Mapillary and OpenStreetCam have very permissive licenses. I contribute to and use both services to improve OpenStreetMap.
OpenStreetCam itself isn't quite closed-source: it is on GitHub. But the app was built on top of Facebook tools, which many people will not want on their phones. See the notorious outstanding GitHub issue [0]. At least OSC's hard dependency on Google Play Services appears to have been removed, though – last time I looked into installing Mapillary, it still would not run on a bare Android like LineageOS.
I believe you can publish your own images and have them available via Google Maps but I've never done it. This might be a decent starting point: https://www.google.com/streetview/contributors/
This is very cool!
I'm thinking the depth data that is captured is of higher resolution. Is that true? Is this limited by the API, or is it a limitation of the browser?
Thank you! As far as I know (it's undocumented), the only source of depth data is very low resolution. The image data (where the color of each point comes from) is much, much higher resolution - a shame they're not on par with each other.
I've been very interested in getting access to the depth data for a VR project I'm working on. Is this something you could talk more about, perhaps over email (in my profile)?
Yes, of course - email sent. Edit: I spoke too soon - the message was blocked for unspecified reasons. My email is in my profile if you'd like to start a conversation.
Even with the full text I do not understand what I am seeing. The comparison to previous projects did not help because I do not know them either.
It appears to be a different rendering of the Street View data that will be loaded from Google servers. What is the purpose of this site? Just to show how Street View works internally?
Looking at this makes me wonder how Google combines the point data to generate clean polygons. I.e. when I hover my mouse over a wall in Street View, it correctly identifies the entire connected plane.
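I'd guess it's some flavour of plane fitting - purely speculating, something along these lines (a TypeScript sketch, not Google's actual method):

    type Vec3 = [number, number, number];

    // Plane: dot(normal, x) = offset, with normal assumed unit length.
    function distanceToPlane(p: Vec3, normal: Vec3, offset: number): number {
      return Math.abs(p[0] * normal[0] + p[1] * normal[1] + p[2] * normal[2] - offset);
    }

    // Everything within a tolerance of the same fitted plane gets treated
    // as one connected surface, e.g. the wall you hover over.
    function pointsOnPlane(points: Vec3[], normal: Vec3, offset: number, tol = 0.05): Vec3[] {
      return points.filter(p => distanceToPlane(p, normal, offset) < tol);
    }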
Thanks for the suggestion. I usually try to remember to at least do a smoke test in the other major browsers (Firefox, Safari, and Edge) but didn't get a chance this time.