As a researcher, I was more interested in studying doodles produced by children than drawings produced by professional artists or designers, who may have been taught to draw a certain way, since doodles are perhaps more closely aligned with the way we naturally think.
I was also fascinated by trying to understand how we are able to translate a vague concept in our minds into a sequence of motor actions that doodles this concept onto a piece of paper. We also take feedback into account during the doodling process. For example, I compare what I have already doodled with what I actually want to draw, and decide what to doodle next based on this comparison.
So we thought one way to study this ability of going from concept -> sketch is to construct a very simple model of the doodling process and try to train the model to doodle. We model this "vague concept" as a vector of floating point numbers, and to model the "vagueness" we add noise to this vector, so the model must learn to work with noisy concepts. The model takes this floating point vector as an input and (randomly) samples an output sequence of simple motor actions that doodles out an object. The sampling process is random (the model's outputs are the parameters of a probability density function at each timestep), so the model can produce many different outputs given the same input. During the sampling process, the model feeds what it has just drawn back into its input, and processes this information to decide what to draw next.
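To make this sampling loop concrete, below is a minimal numpy sketch of the idea, not the actual model: the weights are random stand-ins for a trained decoder, the dimensions are made up, and the pdf is simplified to one Gaussian per pen coordinate plus a pen-up probability (a trained model would use a richer output distribution).

```python
import numpy as np

rng = np.random.default_rng(seed=0)

LATENT_DIM = 8    # assumed size of the "vague concept" vector
STATE_DIM = 16    # assumed size of the decoder's recurrent state
MAX_STEPS = 30    # assumed cap on the number of pen movements

# Hypothetical, untrained weights standing in for a learned decoder.
W_in = rng.normal(scale=0.1, size=(STATE_DIM, LATENT_DIM + 2 + STATE_DIM))
W_out = rng.normal(scale=0.1, size=(5, STATE_DIM))  # mu_x, mu_y, log_sx, log_sy, pen_logit

def vague_concept():
    """The 'vague concept': a latent vector with noise added,
    so the model must learn to work with noisy concepts."""
    concept = rng.normal(size=LATENT_DIM)
    return concept + 0.1 * rng.normal(size=LATENT_DIM)

def sample_doodle(z):
    """Autoregressively sample a doodle: at each step the decoder emits
    the parameters of a pdf over the next pen movement, we sample from
    that pdf, and feed the sampled movement back in as the next input."""
    state = np.zeros(STATE_DIM)
    prev = np.zeros(2)            # last (dx, dy) movement, starts at rest
    strokes = []
    for _ in range(MAX_STEPS):
        # One recurrent step: combine concept, last movement, and state.
        x = np.concatenate([z, prev, state])
        state = np.tanh(W_in @ x)
        # Decoder output = parameters of a pdf over the next movement.
        mu_x, mu_y, log_sx, log_sy, pen_logit = W_out @ state
        # Random sampling step: the same z can yield many different doodles.
        dx = rng.normal(mu_x, np.exp(log_sx))
        dy = rng.normal(mu_y, np.exp(log_sy))
        pen_up = rng.random() < 1.0 / (1.0 + np.exp(-pen_logit))
        strokes.append((dx, dy, pen_up))
        prev = np.array([dx, dy])  # feedback: what was just drawn
    return strokes

doodle = sample_doodle(vague_concept())
print(doodle[:3])  # first few sampled pen movements
```

The structural points of the description are all here: a noisy concept vector as input, pdf parameters as output at each timestep, random sampling from that pdf, and the sampled movement fed back in as the next input.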
We show that our simplified model of the doodling process is able to go from concept -> sketch, and also from sketch -> concept, and that the concepts can be augmented to alter the sketch the model produces in a meaningful way. We tried to make this model simple and robust enough so that, in the future, we can incorporate it into more complicated models that try to do more than just doodle a simple object.
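As a rough illustration of what "augmenting" a concept can mean, one simple operation is to move around in the space of concept vectors, for example by blending two of them and decoding the result. This continues the hypothetical numpy sketch above, reusing its `vague_concept` and `sample_doodle` functions:

```python
# Hypothetical augmentation: interpolate between two concept vectors
# and doodle from the blended concept.
z_a, z_b = vague_concept(), vague_concept()
for t in (0.0, 0.5, 1.0):
    z_mix = (1.0 - t) * z_a + t * z_b   # blend the two concepts
    print(t, sample_doodle(z_mix)[:2])  # the doodle changes with t
```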