It's not really "auto-draw" as much as it's a visual search in which you suggest shapes and it looks across the collection for visually similar icons. Impressive and fun, but not yet a huge advancement over just typing "house" or "cake" to search the image library.
Their description made it sound like a really cool lower-level tool, so obviously it ends up being a letdown.
Drawing/art programs generally have a line smoothing feature - just smoothing your wobbly lines as you draw, using relatively simple algorithms. The description here made me hope for something more "medium-level", halfway between the two. It wouldn't just smooth your lines - it would adjust them according to context, based on a corpus of more precise line drawings, and perhaps predict/suggest the next strokes. It might be difficult to pull off, though; implemented naively, it would probably just work against the artist.
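For what it's worth, the "relatively simple algorithm" kind of smoothing can be as little as a moving average over the stroke's sampled points. A minimal illustrative sketch (not any real app's implementation):

```python
def smooth_stroke(points, window=3):
    """Smooth a list of (x, y) points with a centered moving average."""
    if len(points) < window:
        return list(points)
    half = window // 2
    smoothed = []
    for i in range(len(points)):
        lo = max(0, i - half)
        hi = min(len(points), i + half + 1)
        xs = [p[0] for p in points[lo:hi]]
        ys = [p[1] for p in points[lo:hi]]
        smoothed.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return smoothed

# A wobbly horizontal stroke gets pulled toward a straight line:
stroke = [(0, 0), (1, 2), (2, -1), (3, 1), (4, 0)]
print(smooth_stroke(stroke))
```

The "medium-level" version I'm hoping for would replace the plain average with something corpus-driven, but the interface - points in, cleaner points out - would look much the same.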
This kind of functionality sort of exists in a basic form in the default Mail app on iOS, via a feature called Markup. Markup tries to guess whether you drew an arrow or a circle and suggests a clean version based on your drawing.
In some cases, yes, it's easier to search, but it does fill a use case. I'm not an illustrator, yet I often need icons. I tend to have a rough idea of what I want but can struggle to find the right keywords. Visual search here is very useful, and allows an element of play in finding the icon. (Hopefully play, not horrible frustration.)
I'm not sure what you mean by "compositional", but you can keep drawing after you've matched a shape and it will start suggesting matches for your new shape as well--though unfortunately it doesn't seem to auto-scale the icon you pick to the size of your original drawing, so unless they're already the size you want it's not as helpful as one might hope.
I guess GP means, I start drawing a cat, and it becomes a nicely drawn cat, then I add wings and a horn and a pistol and it becomes a flying laser unicorn cat.
I assumed this was the big idea in TFA, but it seems it's a collection of clip arts, with a terrible interface for looking them up.
> I guess GP means, I start drawing a cat, and it becomes a nicely drawn cat, then I add wings and a horn and a pistol and it becomes a flying laser unicorn cat.
Yes, exactly. And then you start sketching a few rough buildings, a beam, some comets, some explosions, and a helicopter: it becomes a drawing of a giant flying unicorn laser cat from space, attacking Tokyo.
OP's example is maybe a little odd, but I admit that the very first thing I tried was "compositional" as well. I drew a mountain, clicked the icon to make it into a mountain, and then drew a bike going up it. However, there was no way to have a drawing with both a mountain and a bike.
I could have sworn none of those controls were there an hour ago. At any rate, now that I can scale things here's a random icon I drew to demonstrate the point: https://www.autodraw.com/share/G2IIQMITW0BB
Indeed ... Google Docs has had this feature for some time. If you want to insert a special character or symbol, it gives you the option to draw it and then shows similar characters.
If you mean that the algorithm was non-trivial, that's probably true. But I don't see what else you could do with it besides recognising a hand-drawn shape.
It took me about an hour, after I got incredibly frustrated that it wouldn't let me draw anything. Can't draw a robot. Can't draw a sad face (only smiley face). Can't even draw a stick figure. Can't draw a speech bubble.
I felt like it was fighting with me over what it wanted to draw, while leaving very basic and fundamental shapes out. There were more things I couldn't draw; I can't even remember them all.
-> There's an undo button, it works well. But there should be a redo button. (Or the Apple-Y or Ctrl-Y keyboard shortcut for redo ought to work.)
-> See how my smiley face is too big on the right? Well, I can't make it smaller: even if I zoom way in (there's a zoom feature), I can't use the select tool to just select the smiley face (inside the jail) to reduce its size. I'd have to recreate the parts of this image separately.
-> There is no way to set line thickness on the clip art! This should be one of the easiest things to set - but you can only scale the whole image, not the line width. That makes it hard to work with.
Overall I found the experience frustrating.
I have a challenge for you guys though: for the most common hundred thousand or so words, use a machine learning algorithm on your own Image Search results, to try to come up with canonical ideas of what the objects in question might look like, after sorting them into categories based on similarity of recognized features. Then have the algorithm create an outline using the canonical idea it has derived for each category.
What I mean is that if someone Googles "hand" they might get: left hand, right hand, fist, middle finger, OK sign. There really are only so many ways to hold a hand, or visual meanings/memes for the idea of "hand", and other artists have already introduced a canonical version. (Likewise "stick figure" has a meme around it.)
So for each one of those, the algorithm could learn from every version it judges to be similar to the others -- and then draw its own for each one! (Computer algorithms are good at drawing in a learned style, even Van Gogh's, etc.)
Other simple examples include a "peace sign". If you Google image search "peace sign" you obviously get a very canonical shape. Why can't a machine learning algorithm draw its own?
This idea of deriving free, Creative Commons-licensed images (subject to a trademark search, of course) with a machine learning algorithm trained on a huge corpus of image-result data (in a fair-use way), without copying any image in particular, would be huge.
You have most of the interface to do this. It is a nice next challenge for you - and a very serious one. I suggest you do it!
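The "sort into categories based on similarity" step of this proposal is basically clustering. A toy numpy sketch, using fake 2-D feature vectors standing in for real image features (everything here is illustrative, not anyone's actual pipeline; real features would come from a CNN or contour descriptor):

```python
import numpy as np

def kmeans2(features, iters=20):
    """Toy 2-means: farthest-point init, then standard Lloyd iterations."""
    c0 = features[0]
    c1 = features[np.argmax(np.linalg.norm(features - c0, axis=1))]
    centroids = np.stack([c0, c1])
    for _ in range(iters):
        # assign each sample to its nearest centroid
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # recompute each centroid as the mean of its members
        centroids = np.stack([features[labels == j].mean(axis=0)
                              for j in range(2)])
    return centroids, labels

# Two fake "hand" variants: a tight cluster near (0, 0) (say, "fist")
# and one near (10, 10) (say, "open palm").
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 0.5, (20, 2)),
                   rng.normal(10, 0.5, (20, 2))])
centroids, labels = kmeans2(feats)
```

Each resulting centroid would be a "canonical idea" for one variant of the word, which a generative model could then render as an outline.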
What's the process for Google to make this sort of thing? Does some seven-figure exec say "we need to make it easier to draw bikes" and then Google gets its army of 10x engineers to make it happen?
For stuff like this, it usually starts from the bottom. Engineers have ideas, convince others to help them work on their ideas, build prototypes (alone or with others), sometimes get help from product managers to develop a business plan, then pitch it to senior leadership to get some funding.
It takes a lot of skill, tact, and product acumen to get things out the door -- probably the same set of skills as you would need outside Google. (Except that within, you have access to more and better resources, but the bar is much higher.)
Obviously, this doesn't mean that every idea will stick... a lot of them won't -- some don't make money, some don't provide real value, and some are just terrible ideas. But it's a much better process than just top-down alone.
[Edited because I'm dumb and can't count figures] I've mostly seen this kind of thing happen because some engineer(s) wanted to try an idea, not because it was imposed from above.
It looks like Google persistently feels guilty about earning enormous amounts of money without providing much value in return (ads). So they try to compensate by giving back. Most of the stuff they offer is honestly crap, but this one (AutoDraw) and things like Gmail are very decent.
This would be great for flowcharts and diagrams. Sketch out a rough diagram on a tablet, and then have the shapes and lines "snap" to crisp versions as soon as they are identified. Even better if I could draw it on a whiteboard, take a photo, upload it, and get a response back as soon as it's done being converted.
There are a bunch of apps that do this on iPad and Android... plus Microsoft's note-taking app, Lenovo/IBM's old X-series apps, and I'm sure others. Heck, the Newton did it.
If you're curious, try one of them out. It gets frustrating pretty quick.
Could you please name some? There seems to be an even bigger bunch of apps that do plain drawing, and it can be hard to find the needle (apps that convert rough sketches to clean line art) in the haystack (many apps for sketching; most just replicating the paper experience on a screen without adding functionality). Thanks.
Google needs a better way to lifecycle these things. Clearly this project will be cancelled, so rather than just reinforce its reputation for killing its projects, perhaps they need "experimental" projects that might even get spun out of the company. Or something like that.
Google keeps coming up with ways to use machine learning to do autofills, suggestions, etc.
A month ago Allo [0], then that article in Verge about computational photography [1], then cameras without lenses [2] and now this. There is no question that this is all very powerful and awesome, but it also raises some questions, like who is the creator of a photo / drawing? Is every photo / drawing going to look the same in the future?
Here is an illustration of what I am concerned about:
My wife downloaded Google's Allo (yet another chat app where you can change the font size. Innovative, I know). It also happens to suggest answers so you don't have to type as much.
Here is how it went:
She: Hi!
Me: Hi how r u
She: Where r u
She: Where r u now?
She: At home?
She: Working?
She: I missed u
Me: Working
Me: Missed u too
Me: What u doing?
She: How are u?
Me: Fine thank u
Me: What about u?
Me: What are u doing?
Me: Can i see u?
She: Working
Me: Oh
She: Yes
Me: Where r u from?
Me: Who are u?
And it kept going for a long, long time, neither of us actually saying anything real, but both of us learning a lot about what looks like an average socially awkward American teenager's conversation.
It had love, beauty, cuteness, gifs, it even made us add some daily love quote bot to our thread, but we never actually typed anything ourselves because it was so easy not to.
Of course we both knew it and thought it was funny, but I can't shake the weird feeling that something is very wrong with this, and that in the long term we are being brainwashed into a dumber, more superficial version of ourselves.
I was surprised how poorly it ran on my very modern phone. And then how tiny everything was on my desktop.
When I looked past that and tried to draw a cat, it wasn't all that useful. I mean cool, you saw I was drawing a face and gave me 50 options. But what am I supposed to do with that?
It feels like a rehash of what the Newton would do when you tried to draw stuff. But it does it better. I think if I could skip the "pick what I meant" step, it would be cool for whiteboarding in the office.
That's because your very modern phone has a very puny CPU compared to even the average desktop CPU. I'm surprised about how few people know that their "2 GHz multi-core" phone is 5-10x slower than an average 5 year old desktop on common tasks.
(Edit: hehe, as evidenced by this post being downvoted. The HN audience doesn't know any better either?)
Probably downvoted because you're wrong? (Not that I did.) But this is with the caveat that this is comparing a desktop Mac to an iPhone and I haven't the faintest clue about top Android phones, although I have the understanding that the A10 destroys the current Qualcomm SoCs.
The relevant quote:
>The most remarkable thing about this is how similar it looks to the Mac results above. Looking back at the old tests, the iPhone was orders of magnitude slower. An Objective-C message send, for example, was about 4.9ns on the Mac, but it took an eternity on the iPhone at nearly 200ns. A simple C++ virtual method call took a bit over a nanosecond on the Mac, but 80ns on the iPhone. A small malloc/free at around 50ns on the Mac took about 2 microseconds on the iPhone.
>Comparing the two today, and things have clearly changed a lot in the mobile world. Most of these numbers are just slightly worse than the Mac numbers. Some are actually faster! For example, autorelease pools are substantially faster on the iPhone. I guess ARM64 is better at doing the stuff that the autorelease pool code does.
>Reading and writing small files stands out as an area where the iPhone is substantially slower. The 16MB file tests are comparable to the Mac, but the iPhone takes nearly ten times longer for the 16-byte file tests. It appears that the iPhone's storage has excellent throughput but suffers somewhat in latency compared to the Mac's.
I think it's more because you're making a straw man argument. Nobody disagrees that they're slower. I'm saying it's not enjoyable on my phone despite them saying it's good on phones.
Just one of my many fun throwaway projects :P It used to be up at skrch.com, but went down a while ago. Couldn't figure out how to monetize/sell it so I moved on -- I'm still not sure what sector could use something like it. The original idea actually came about in a dream (true story!) and I wondered if I could actually implement it. Took me about a month or two as I had never used OpenCV before.
The search was done with a very simple histogram analysis algorithm and the image database had about 10,000 pictures from Flickr. Results were pretty decent, but sometimes hit and miss[1]. Database costs were pretty high as I don't think there's any database out there that has any way to efficiently hash 2d histograms (so everything was stored in memory). That could be a fun challenge.
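The approach described here can be sketched in plain numpy (the commenter used OpenCV; none of this is their actual code). Each image is reduced to a normalized 2-D histogram over two color channels, and search is nearest neighbor under a chi-square-style distance:

```python
import numpy as np

def hist2d(img, bins=8):
    """Normalized 2-D histogram over two channels of an RGB image."""
    h, _, _ = np.histogram2d(img[..., 0].ravel(), img[..., 1].ravel(),
                             bins=bins, range=[[0, 256], [0, 256]])
    return h / h.sum()

def chi2(a, b, eps=1e-10):
    """Chi-square-style distance between two normalized histograms."""
    return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))

def search(query, database):
    """Return the index of the database image closest to the query."""
    q = hist2d(query)
    return min(range(len(database)), key=lambda i: chi2(q, hist2d(database[i])))

# Fake "database": five images with clearly different brightness levels.
rng = np.random.default_rng(0)
db = [np.clip(rng.normal(40 * i + 20, 10, (32, 32, 3)), 0, 255)
      for i in range(5)]
# Querying with a noisy copy of db[2] should find db[2]:
noisy = np.clip(db[2] + rng.normal(0, 5, db[2].shape), 0, 255)
print(search(noisy, db))
```

The linear scan over the database is exactly why the in-memory cost bites at scale; the "efficiently hash 2-D histograms" problem the commenter mentions is about replacing that scan with an index.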
This reminds me of Chinese handwriting input methods, which have almost the exact same UI. You draw a character on the screen, and you get a selection of results at the top.
Since nobody has mentioned this yet: I found that the core search functionality is not very good.
I tried drawing a frowny face, a stick figure person, and a puppy face, and it didn't recognize any of them. I'm terrible at drawing, but I feel these are objects that have a universally-understood outline.
Fun idea but doesn't really work. Just sorta replaces your random doodle with a random piece of clip art. Any trace of your original drawing is gone. Disappointing.
Yes, that is the point of this tool, and it probably beats slapping together a bunch of clipart based on google image search, but I was hoping it was something more than that. The tagline is misleading because it's not really helping anyone draw -- it's just a visual search for sketches.
I sketched a really rough palm tree and it suggested a bunch of tree drawings, one of which was a palm. That's helpful, but everyone who wants a palm gets the same palm. Wouldn't it be great if the tool recognized that I was trying to draw a palm and then improved mine by adjusting it according to what it knows about sketches of palms (smoothing the lines, adjusting angles, etc)?
This could be so much more useful than it currently is. If anyone has used Microsoft Visio you know what a pain it is to find symbols while searching through a library, especially if it isn't associated with a common noun. Where are the simple arrows, Greek letters, schematic components? Perhaps they'll be integrating this type of technology into an actual useful product in the future, like a Google Drive version of Visio?
Somebody needs to feed the robot overlords more dirty body parts.
Joke aside, Adobe should not be worried just yet. It seems to be just an image tagging service or a terrible drawing application. When I draw a cat's face it suggests a cat's body. When I draw a rocket it suggests a glass of wine, and so on.
The QuickDraw game was fun and a good idea, but basing a drawing application on "topics" from that game seems like a bad choice.
Boring. Really, what's the point? It doesn't even make a connection between more than one "drawing". Try to draw a triangle, select the shape and then draw another triangle to make the two look like a square - you won't see a square option in the suggestions pane, since it doesn't see/remember your first triangle. My kid would probably like it though.
But art, even/especially hand-drawn scribbles, loses almost everything without the character and idiosyncrasies of the artist. This is little more than a fancy UI for a clip art library. Clip art sucks. Although it does have a purpose, limited as it may be.
Okay. This is great, but I feel like Google is off doing everything but looking after their current products. Google Inbox is in dire need of new features to bring it in line with competing mail products, and Gmail needs a facelift.
>Built by Dan Motzenbecker and Kyle Phillips with friends at Google Creative Lab.
Are Dan Motzenbecker and Kyle Phillips responsible for Google Inbox development? If not, it seems a bit silly to criticise a project made by <10 people in a company with 72,053 employees in total.
Wow, it's really annoying that there's no way to type text in and get that shape. I get that it's cool that it will (sometimes) recognize what I draw, but apparently I can't draw the Space Needle for the world! It would be nice to still be able to search their images with text manually instead of having to try to draw everything you want when you know you'll use what they have.
This is pretty incredible. The only really important thing that's missing is the ability to flip the images horizontally or vertically. Otherwise my cow is unable to wear a helmet: https://www.autodraw.com/share/T7HFJ9TVN91J
Awesome. As a board game designer, I could see myself using this to make decent-looking prototype cards much more easily. Although I'm sure I'll still need Illustrator to take it to the next level. But for a quick and dirty prototype, it should work great.
Much better than searching the web for hours for icons that have a similar-ish art style that have what I need.
I keep forgetting about the noun project. Also, those icons by solis look like they could be useful, thanks for letting me know about that. I was aware of an old card game design series he did, but not his Patreon. I've also picked up a few of his games that are still on my Wall of Board Game Shame and need to get played.
Two minutes' use and color me unimpressed. The first thing I drew should have been a slam dunk and it wasn't recognized. (I challenge anyone to draw a padlock that AutoDraw can recognize!)
Also "Fill" doesn't fill except with pre-defined shapes. What's with that?
I can do a passable Bart Simpson face and the top three recommendations were for Teddy Bears. I think that's quite cool. I might throw some other stuff at it later when I have a stylus to play with. Love finding out about these things.
It's really cute. I wonder if Google is going to use all the classification that people will do of their own sketches to teach its machines... to recognize hand-drawn sketches.
That was my first thought as well. But I do wonder about the quality of submissions given that most people can't draw with a mouse as well as they can with a writing utensil (assuming a drawing tablet or touchscreen is not used either).
One issue I see is that you can only export PNGs. It would have been really useful to be able to copy the selected drawing elements and paste them into a Google Drawing.
Interesting. It definitely supports the narrative of AI replacing or changing our jobs -- in this case, the designer's.
For less artistic folks like me, this tool is a godsend. How many times have you wanted to illustrate a simple diagram but couldn't draw or use Photoshop?
Having said that I can see Autodraw still needs more work done. It failed to recognize a phallus.
Really interesting idea that isn't yet implemented well enough to be that interesting in practice.
The idea of machine learning refining your drawings as you go, forming a sort of cooperative artistic partnership, is fascinating.
The idea of machine learning somewhat sloppily matching your drawings to pre-existing ones and just replacing them.....well, kinda just feels like image search copy/pasted into microsoft paint.
This is literally the "Write the kanji" feature which has been present in Google Translate for years, with a different 'character set.' Not exactly impressive.
1. Make a sketch of your choice
2. Pick the first AutoDraw suggestion (or randomly one of the first N)
3. Feed that to google image search
4a. Google's best guess for this image is a prompt for the next human sketch. Repeat from 2.
OR
4b. Pick a sketch-like image from the results of 3 and reproduce on the AutoDraw canvas. Repeat from 2.
I got an amazing result on my first attempt at 1 - 3: