
In order to transfer learnings from virtual worlds to the real world more efficiently, I imagine a module that translates real-world footage into an abstract representation that is the same as the one used for rendered footage.

In other words, you would train a NN on footage of a walkthrough of a real building to output rendered footage of the same path (the 3D data from architects is already there). The middle layer of this quasi-autoencoder then becomes the basis for training fully simulated tasks, e.g. autonomous vehicles. In a way, it would be similar to colorization of b/w footage. Would that scale training data?
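
A rough sketch of how that quasi-autoencoder could look (PyTorch assumed; the layer sizes, frame shapes and the name Real2RenderNet are placeholders I made up, not anything from the comment): encode the real frame into a shared abstract representation, decode it into the matching rendered frame, and train on paired real/rendered walkthroughs. The bottleneck z is what you would feed downstream tasks trained purely in simulation.

    import torch
    import torch.nn as nn

    class Real2RenderNet(nn.Module):
        def __init__(self, latent_channels=64):
            super().__init__()
            # Encoder: real-world frame -> abstract representation
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, latent_channels, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            )
            # Decoder: abstract representation -> rendered frame
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(latent_channels, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, real_frame):
            z = self.encoder(real_frame)   # shared abstract representation
            rendered = self.decoder(z)     # predicted rendered frame
            return rendered, z

    model = Real2RenderNet()
    real_batch = torch.rand(8, 3, 128, 128)      # real walkthrough frames (dummy data)
    rendered_batch = torch.rand(8, 3, 128, 128)  # matching rendered frames (dummy data)
    pred, latent = model(real_batch)
    loss = nn.functional.mse_loss(pred, rendered_batch)
    loss.backward()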



Yeah, that is basically what we do with our home furnishings app - except we have to use SfM (structure from motion) to build the models.

The challenge is labeling - or auto-labeling - pixels.

One thing we are trying to work out is how you label and build nets on volumes rather than just pixels. I'm thinking it's going to be an order of magnitude harder.
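
For what it's worth, the jump from pixels to volumes is easy to see in code: the same dense-labeling idea, but Conv3d over an occupancy grid instead of Conv2d over an image, and the cubic number of cells is where the extra cost shows up. A minimal sketch (PyTorch assumed; the class count and grid size are made up):

    import torch
    import torch.nn as nn

    num_classes = 5  # hypothetical label set (wall, floor, sofa, ...)

    # 2D: label every pixel of an image
    pixel_net = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(16, num_classes, kernel_size=1),
    )

    # 3D: label every voxel of a volume built from SfM
    voxel_net = nn.Sequential(
        nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv3d(16, num_classes, kernel_size=1),
    )

    image = torch.rand(1, 3, 256, 256)     # H x W pixels
    volume = torch.rand(1, 1, 64, 64, 64)  # D x H x W occupancy grid

    pixel_logits = pixel_net(image)   # (1, num_classes, 256, 256)
    voxel_logits = voxel_net(volume)  # (1, num_classes, 64, 64, 64)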



