This is very cool. After a quick skim I noticed this relies on Blender's bpy.ops API. Using bpy.ops is generally considered bad practice because a lot of the bpy.ops operators depend on the state of the UI - things like which objects are selected and which object interaction mode is active. The alternative to bpy.ops is to write scripts that manipulate the datablocks directly. Using bpy.ops can save a lot of time since it maps more cleanly to the GUI, but if you use it too much things can spiral out of control. It's just something to be aware of.
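For example, here is a minimal sketch of the difference, assuming Blender 2.8+ (the mesh data and names are just placeholders):

    import bpy

    # Operator approach: implicitly depends on UI state such as the
    # active collection, the current mode, and the 3D cursor.
    bpy.ops.mesh.primitive_cube_add(size=2.0, location=(0.0, 0.0, 0.0))

    # Datablock approach: builds an equivalent cube explicitly, with no
    # dependence on selection, mode, or what the UI happens to show.
    verts = [(-1, -1, -1), (1, -1, -1), (1, 1, -1), (-1, 1, -1),
             (-1, -1, 1), (1, -1, 1), (1, 1, 1), (-1, 1, 1)]
    faces = [(0, 1, 2, 3), (4, 7, 6, 5), (0, 4, 5, 1),
             (1, 5, 6, 2), (2, 6, 7, 3), (3, 7, 4, 0)]
    mesh = bpy.data.meshes.new("CubeMesh")
    mesh.from_pydata(verts, [], faces)
    obj = bpy.data.objects.new("Cube", mesh)
    bpy.context.scene.collection.objects.link(obj)

The second version behaves the same no matter what is selected or which mode the editor is in, which is what keeps larger scripts under control.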
You can access the underlying datablocks. The Blender Python API basically gives you access to everything, so it's up to you whether you want a script that works at a lower level or one that simply fires off GUI events.
This is quite interesting. Does anyone know whether Blender can work with particles too, or only with 3D polygons?
Also, if the original poster is reading this: I am at FOSS4G with a 360 GoPro camera rig; perhaps we could shoot some high-FPS immersive video of old Harvard buildings and brainstorm about how to get that into Blender.
I'm envisioning this as a source for streams of synthetic point cloud data.
Any idea if it can simulate specific Velodyne products? Just wondering if it could be used to compare the efficacy of one of the pucks vs. the larger units for a specific use case, e.g. hang a virtual LIDAR off a virtual UAV and fly it over a simulated environment.
This was the reason we initially started the project, back when the HDL-64E cost around $72k. It supports the Velodyne models with 64 and 32 lasers, but it also has a generic mode where you can set an arbitrary configuration of lasers (albeit a bit crudely).
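Stripped down to the idea, an arbitrary laser configuration is just a set of vertical angles swept around the azimuth. This is not our actual code, just a toy ray-casting sketch in plain bpy to show the concept (names and parameters are made up; assumes Blender 2.91+ for the scene.ray_cast signature):

    import math
    import bpy
    from mathutils import Vector

    def scan(scanner, vertical_angles_deg, h_steps=360, max_dist=120.0):
        """Cast one ray per (laser, azimuth) pair from the scanner's
        location and return the hit points in world coordinates.
        The scanner's rotation is ignored here for brevity."""
        depsgraph = bpy.context.evaluated_depsgraph_get()
        scene = bpy.context.scene
        origin = scanner.matrix_world.translation
        points = []
        for v_deg in vertical_angles_deg:      # one entry per laser
            v = math.radians(v_deg)
            for step in range(h_steps):        # full 360 degree sweep
                h = 2.0 * math.pi * step / h_steps
                direction = Vector((math.cos(h) * math.cos(v),
                                    math.sin(h) * math.cos(v),
                                    math.sin(v)))
                hit, loc, normal, index, obj, matrix = scene.ray_cast(
                    depsgraph, origin, direction, distance=max_dist)
                if hit:
                    points.append(loc.copy())
        return points

    # e.g. a crude 16-laser "puck" with a 30 degree vertical FOV
    angles = [-15.0 + i * 2.0 for i in range(16)]
    cloud = scan(bpy.data.objects["Scanner"], angles)
    print(len(cloud), "points")

A real simulation adds per-laser noise models, timing, and sensor motion on top of this, but the core is just a lot of ray casts.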
But it is now also used by other researchers to create synthetic point clouds for deep learning.
Thanks for linking to it, and it seems to be open source too! I'll try it out and look into joining the community.
There is a person talking about LIDAR at the Boston self-driving car meetup tomorrow; I'll ask him what automakers usually use for this task and whether he is aware of this open source option.
I have been doing Blender scripting for work for just the past few days. You can pick it up pretty quickly because it has a nice feature: there is a scripting screen/mode, and as you perform the manual mouse/keyboard steps in the normal UI, a little window prints out the equivalent Python. So when you want to script task X, you just do it once manually and cut and paste the commands into your script. Then you have to bang it into shape. I have been really impressed with Blender.
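For example, adding a cube and then dragging it in the viewport shows up in that window as something like this (the exact arguments vary between Blender versions, and the real translate call prints many more options):

    bpy.ops.mesh.primitive_cube_add(size=2, enter_editmode=False, location=(0, 0, 0))
    bpy.ops.transform.translate(value=(1.0, 0.0, 0.0))

Those lines can be pasted straight into a script and then parameterized.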
The Blender Python API is pretty complex; I personally wouldn't suggest learning it without learning Blender's GUI first, to build an intuition for how things are structured.
I'm really glad they went into as much detail as they did on the GitHub landing page. But not knowing exactly what kind of diagrams/visualizations it can produce, it took me quite a while to find an example of what they were talking about. The very first thing on that page should be examples of the output it can produce.