Real-time 3D visualization of geospatial data with Blender (github.com/ptabriz)
116 points by based2 on Aug 15, 2017 | 24 comments



This is very cool. After a quick skim I noticed this relies on Blender's ops API. Using bpy.ops is generally considered bad practice because a lot of the bpy.ops operators depend on the state of the UI: things like which objects are selected and which object interaction mode is active. The alternative to bpy.ops is to write scripts that manipulate the datablocks directly. Using bpy.ops can save a lot of time as it maps more cleanly to the GUI, but if you use it too much things can spiral out of control. It's just something to be aware of.
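To make that concrete, here's a minimal sketch of the difference (assuming the default scene's cube; exact attribute names vary between Blender versions):

    import bpy

    cube = bpy.data.objects["Cube"]  # assumes the default startup scene

    # Operator route: delete() acts on whatever happens to be selected,
    # so the script has to set up UI state before calling it.
    bpy.ops.object.select_all(action='DESELECT')
    cube.select = True
    bpy.ops.object.delete()

    # Datablock route (an alternative to the three lines above): same
    # result, but no dependence on selection or interaction mode.
    # bpy.data.objects.remove(cube, do_unlink=True)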


The fact that the scripting seems to sit on top of the GUI state rather than the underlying scene graph is what turns me off Blender scripting.


You can access the underlying datablocks. The Blender Python API basically gives you access to everything, so it's up to you whether you want a script that works at a lower level or one that simply fires off GUI events.
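For example, a quick sketch of the lower-level style: everything the GUI edits is reachable as plain data, with no operators or selection involved (assumes the default scene's cube):

    import bpy

    obj = bpy.data.objects["Cube"]   # look the object up by name
    obj.location.x += 2.0            # same effect as grabbing it in the viewport
    obj.data.name = "CubeMesh"       # rename the underlying mesh datablock
    bpy.context.scene.render.resolution_x = 1920  # even render settings are data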


This is quite interesting. Does anyone know if Blender can work with particles too, or is it only 3D polygons?

Also, if the original poster is reading this: I am at FOSS4G with a 360 GoPro camera rig; perhaps we can go shoot some high-FPS immersive video of old Harvard buildings and brainstorm about how to get that into Blender.



Could Blender be used as a lidar point cloud annotation tool?


Sorry for the self-advertisement, but I think it fits really well.

If you are thinking about creating annotated point clouds, you could use our software (also based on Blender):

http://www.blensor.org/

It virtually scans scenes and can store the ID of the scanned object for each individual point in the point cloud.


I'm envisioning this as a source for streams of synthetic point cloud data.

Any idea if it can simulate specific Velodyne products? Just wondering if it could be used to compare the efficacy of one of the pucks vs. the larger kit for a specific use case, e.g., hang a virtual LIDAR off a virtual UAV and fly it over a simulated environment.


This was the reason we initially started this project, back when the HDL-64E cost around $72k. It supports the LIDARs with 64 and 32 lasers, and it also has a generic mode where you can set an arbitrary configuration of lasers (albeit a bit crude).

But it is now also used by other researchers to create synthetic point clouds for deep learning.


Thanks for linking to it, and it seems to be open source too! I'll try it out and try to join the community.

There is a person talking about LIDAR at the Boston self-driving car meetup tomorrow; I'll ask him what automakers usually use for this task and whether he was aware of this open source option.


Does it support billions of points, or at least a couple of million?


Millions are not a problem. Billions would probably be too slow, or you'd run into memory issues.


Yes.


Are you aware of plugins offering similar functionality, or any experiments in that area?


My girlfriend and I have been working on a startup for the last year to do real-time 3D geospatial vis on the web and mobile with a focus on AR.

This is a really cool approach! I may have to fork it to add support for our data mixing platform.


I had no idea Blender could even do this. Very cool.

Although, as with using Blender from the UI, reading through the code I feel like there is probably a large learning curve here.


I have only been doing Blender scripting for work for the past few days. You can pick it up pretty quickly because it has a nice feature: there is a scripting screen/mode, and as you do the manual mouse/keyboard steps in the normal UI, a little window prints out the equivalent Python. So when you want to script task X, you just do it once manually and cut and paste the commands into your script. Then you have to bang it into shape. I have been really impressed with Blender.
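For example, adding a cube and nudging it in the viewport echoes something like this (exact arguments vary by Blender version), which you can paste straight into a script:

    import bpy

    # Lines printed by Blender's Info window after doing the steps by hand:
    bpy.ops.mesh.primitive_cube_add(location=(0, 0, 0))
    bpy.ops.transform.translate(value=(1, 0, 0))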


I don't think it's just equivalent code; I suspect it's the exact code being run.


The Blender Python API is pretty complex; I personally wouldn't suggest learning it without learning Blender's GUI first to build an intuition for how things are structured.


It's cool that they used the Blend4Web engine to show the model on the web. Really, WebGL is the future of interactive 3D.


Hmm, this seems really useful for rendering lidar data points in a 3D mesh/map.
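If you want to experiment, here's a rough sketch of pulling raw points into Blender as a vertex-only mesh through the datablock API (the file path and its plain "x y z" format are made up for illustration; the linking call is the 2.7x-era API):

    import bpy

    # Read an ASCII point cloud: one "x y z" triple per line (hypothetical file).
    with open("/tmp/points.xyz") as f:
        verts = [tuple(float(v) for v in line.split()[:3]) for line in f]

    # A mesh with vertices only -- no edges, no faces -- shows up in the
    # viewport as a point cloud.
    mesh = bpy.data.meshes.new("pointcloud")
    mesh.from_pydata(verts, [], [])
    mesh.update()

    obj = bpy.data.objects.new("pointcloud", mesh)
    bpy.context.scene.objects.link(obj)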


This is amazing. Blender truly has infinite potential.


Very nice.


I'm really glad they went into the detail they did on the GitHub landing page. But, not knowing exactly what kinds of diagrams/visualizations can be done, it took me quite a while to find an example of what they were talking about. The very first thing I see there should be examples of the output that can be produced.



