Deep nets are only loosely inspired by neurobiology. That's why LeCun calls them "convolutional nets" and not "convolutional neural nets" and prefers "nodes" over "neurons".
It is, however, possible to have a deep net produce 3D models/images: https://www.youtube.com/watch?v=QCSW4isBDL0 "Learning to Generate Chairs with Convolutional Neural Networks".
I also suspect a different part of cognition is used when humans are asked to recreate a "fire truck" than when they are asked to distinguish a "fire truck" from a "car". The former seems closer to using memory ("what did the last five fire trucks I saw look like?"). A fairly recent addition to deep nets is the use of external memory: http://arxiv.org/pdf/1410.5401.pdf "Neural Turing Machines". So the difference may quickly become less significant. A rough sketch of that memory-read idea is below.
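For a rough sense of what "making use of memory" means in the NTM paper: the net reads from an external memory matrix with differentiable, content-based attention rather than a hard lookup, so it can learn what to retrieve. Here is a minimal numpy sketch of that content-based read step; the function names and the toy "fire truck" vectors are my own illustration, not from the paper:

    import numpy as np

    def cosine_similarity(key, memory):
        # Similarity between a query key and every memory row.
        dot = memory @ key
        norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
        return dot / norms

    def read_memory(memory, key, sharpness=10.0):
        # Content-based addressing: soft attention weights over memory rows,
        # then a weighted read. Because the weights are differentiable,
        # a network can learn what to look up.
        scores = sharpness * cosine_similarity(key, memory)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ memory, weights

    # Toy usage: memory holds three stored feature vectors (think "the last
    # few fire trucks I saw"); a query close to one of them retrieves a
    # blend dominated by that row.
    memory = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0],
                       [0.9, 0.1, 0.0]])
    query = np.array([1.0, 0.05, 0.0])
    read_vec, weights = read_memory(memory, query)
    print(weights.round(3), read_vec.round(3))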