Anime2Sketch: A sketch extractor for illustration, anime art, manga (github.com/mukosame)
273 points by lnyan on May 7, 2021 | 54 comments



I think I'm missing something. I get the Sketch-to-Photo synthesis that this is based on. It's really weird, neat stuff. But as a layman, I'm having trouble seeing the difference between the result of this anime-to-sketch synthesis and what I'd expect to get out of a simple edge detection. Is the difference that it's more clever about which details to ignore?


I only dabble in graphics, but generally simple edge detection needs really uniform tonality and no textures in the input to work well. Look, for example, at how in the more "sketchy" examples the linework that "looks right" is extracted from quite noisy input. Also, in the top example with the houses, the contrast difference that gets extracted to linework is lower than in the character areas.

So for the flat-shaded images with explicit black outlines, yes, there likely isn't much difference from edge detection. But when the image has lots of different contrasts and tonalities, this looks much more impressive.
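
To see this concretely, here's a minimal Canny run with OpenCV; the input path and thresholds below are placeholders, not anything from the repo:

    import cv2

    # Plain Canny on a shaded anime frame: it fires on soft shading
    # gradients and texture, not just the intended linework.
    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(img, (5, 5), 0)   # pre-blur tames some noise
    edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
    cv2.imwrite("edges.png", 255 - edges)        # invert: black lines on white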


Not actually. Here is what most common edge detection methods produce (the last one is Anime2Sketch): https://i.imgur.com/nt0D1ef.png


Your edge detection examples are really basic stuff: there are way more advanced studies and examples.

Search for example for "Coherent Line Drawing" by Henry Kang (image from the research: https://d3i71xaburhd42.cloudfront.net/92b2ec5ef4f58c4206fc5c...) or for "XDoG: An eXtended difference-of-Gaussians ..." by Holger Winnemoller, Jan Eric Kyprianidis, ... (https://ars.els-cdn.com/content/image/1-s2.0-S00978493120004...)
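
The core of XDoG, for instance, is just a thresholded difference of Gaussians. A rough sketch below; all constants are illustrative, not the paper's values:

    import cv2
    import numpy as np

    # DoG: subtract two Gaussian blurs (sigma ratio ~1.6), then apply a
    # soft tanh threshold in the spirit of XDoG. Parameters are made up.
    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255
    g1 = cv2.GaussianBlur(img, (0, 0), sigmaX=1.0)
    g2 = cv2.GaussianBlur(img, (0, 0), sigmaX=1.6)
    dog = g1 - 0.98 * g2
    lines = np.where(dog > 0, 1.0, 1.0 + np.tanh(40.0 * dog))
    cv2.imwrite("xdog.png", (np.clip(lines, 0, 1) * 255).astype(np.uint8))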


Okay, that's pretty significant. Neat, thanks!


link appears to be broken?


It's working for me. Here it is on a different hosting site, if that helps: https://i.ibb.co/pn5d01f/nt0-D1ef-d.webp


Nope, getting a timeout for that too.


It must be your connection then (or the image format). Last try: here it is again on Google Drive: https://drive.google.com/file/d/1rMagXGgth-pAXH2RQ-Ltj59jpgb...


I was actually going to ask if someone had done a comparison with edge detection.


Also not familiar, but presumably a temporal aspect weighs in, so whether something is a meaningful edge isn't strictly dependent on the content of a single frame?


My high-level guess: doing things the traditional way, you would do some linear cost minimization when deciding what counts as an edge over time. The neural network can handle nonlinearities in the optimization, so you can get better results for some set of inputs (in this case, some class of anime images).


This looks like a great tool to generate some Pokémon and Beyblade coloring pages for my kids. We went through everything in Google image results many moons ago.


I really want to see how this performs on Octonauts stills


Just a heads up, you should use higher quality (or better, just use PNG) for the output.

The default Image.save quality is low enough that the JPEG artifacts are more prominent than the line art itself.

L91 @ data.py: image_pil.save(image_path, format='PNG')
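
For anyone patching this locally, a minimal Pillow sketch (file names are placeholders; Pillow's default JPEG quality is 75):

    from PIL import Image

    im = Image.open("result.jpg")

    # PNG is lossless, so thin line art survives intact.
    im.save("result.png", format="PNG")

    # If JPEG is required, raise the quality explicitly; the default 75
    # visibly rings around high-contrast lines.
    im.save("result_hq.jpg", format="JPEG", quality=95)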


Then someone can use https://github.com/taivu1998/GANime to recolor them.


Do it iteratively, one after the other, to see if after a while the results become unrecognizable compared to the originals. Like those experiments that translated a text back and forth between languages to create gibberish.


This feels like a tool with lots of business cases.

Studios may be able to accelerate digitization and colorization.

The ability to convert stills to a fillable outline, or repurpose them for labels/marketing/branded coloring books (or apps), could be worth some money to those with a large content library.


Looks more like shaded art to unshaded line art rather than to sketch. Sketches are usually way messier, like a blueprint for the final product.


This is actually pretty impressive, and I can see it being really useful if it can generate clean line art from animation roughs.

It would be really interesting to see this in OpenToonz or the like.


Semi off-topic – is there a tool to turn a picture into a drawing? I sometimes see websites where people have created an avatar from their headshot that looks ‘toonish’.


I believe you're looking for something like https://github.com/taki0112/UGATIT.



How WSJ stipple drawings are made: https://youtu.be/sZzP9PQJXLs


Is there any machine learning tool to do that already?



Thank you!


Also check out u-2-net (https://github.com/xuebinqin/U-2-Net); there is a variant that can turn images into line drawings.


Interesting. I wonder how it fares with 3D renderings? I'm a Blender user, and unfortunately Blender's "Toon Shading" capabilities are not very good compared to, say, Cinema 4D's.


There are lots of very good techniques to do toon shading in Blender. Here are a few:

BEER NPR (MALT renderer) - https://blendernpr.org/beer/

Lightning Boy shader 2.0 - https://www.youtube.com/watch?v=8fHZcnTFYEI

Procedural hatching and manga shaders - https://gumroad.com/l/tcKOI


How is this technically different from photoshop filters? https://design.tutsplus.com/tutorials/sketch-photoshop-effec...


Photoshop just detects edges, and thus ends up detecting both sides of a drawn line or of a change in shading, for example. This does not appear to show such artefacts.
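
You can demonstrate the double-edge effect with a few lines of OpenCV on a synthetic stroke (a sketch of my own, not taken from either tool):

    import cv2
    import numpy as np

    # A drawn stroke is a band of dark pixels, so a gradient-based
    # detector fires on both of its boundaries: one line becomes two.
    img = np.full((64, 64), 255, dtype=np.uint8)
    cv2.line(img, (8, 32), (56, 32), color=0, thickness=5)
    edges = cv2.Canny(img, 50, 150)
    print((edges[:, 32] > 0).sum())  # typically 2: top and bottom of the stroke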


This project's output is much cleaner; the picture from the link you posted is noisy.


I wonder if this can be used for comic book inking. It looks like they have an example of that.

Typically the workflow is pencil drawing -> cleaned-up ink drawing (Japanese animation uses a similar process too). If this can speed up that process, it could save a lot of time.


Does not work as well as advertised :) I think the author clearly cherry picked their examples.


Can you provide some counter-examples, perhaps as issues in the repo?


Yup, will do that. I found several anime pictures that did not work remotely as well as the examples.


Could we see the pictures that did not work well?


Does anyone know a similar model that transforms normal images into Western Comic Book style? I've seen it a lot for Anime/Manga, but never for that classic style of 90's comic books.


I'm not super familiar with deep learning, but based on the fact that this is effectively extracting edges, and the ConvTranspose2d layers, I'm guessing it's some sort of convolutional neural net?
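
For intuition, here's a toy encoder-decoder in PyTorch with the general shape those layers suggest; the architecture and sizes are illustrative, not the repo's actual model:

    import torch
    import torch.nn as nn

    class TinySketchNet(nn.Module):
        def __init__(self):
            super().__init__()
            # encoder: strided convolutions downsample the image
            self.encode = nn.Sequential(
                nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),
                nn.LeakyReLU(0.2),
                nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
                nn.LeakyReLU(0.2),
            )
            # decoder: ConvTranspose2d layers upsample back to full size
            self.decode = nn.Sequential(
                nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
                nn.ReLU(),
                nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),
                nn.Tanh(),  # single-channel "sketch" output in [-1, 1]
            )

        def forward(self, x):
            return self.decode(self.encode(x))

    net = TinySketchNet()
    out = net(torch.randn(1, 3, 256, 256))
    print(out.shape)  # torch.Size([1, 1, 256, 256])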


It is pretty neat. Off-topic, what anime is that test from?



Vinland Saga, which I found by observing that the file name is "vinland_saga.gif" when I went to try putting the image in a search engine.


Off-topic, but this is a great open-source anime image reverse search engine: https://trace.moe/


Fantastic anime, I highly recommend it!


What could we use this for? The immediate thing that comes to mind is making a coloring book. I’m wondering if I could use it to make something original.


If I want to use this program, do I need a good GPU in my computer to run it, or do I just need to install the required software?


I believe that this will not be too GPU-intensive, but that will of course depend on the input resolution of the video.


The training is what requires a good GPU. For inference, a CPU should be fine.
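
A minimal sketch of the device-selection pattern in PyTorch, with a stand-in model rather than the actual Anime2Sketch weights:

    import torch
    import torch.nn as nn

    # Stand-in model just to show the pattern; the real generator
    # would be loaded and moved to the device the same way.
    model = nn.Conv2d(3, 1, kernel_size=3, padding=1).eval()

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device)

    with torch.no_grad():  # no gradients needed at inference
        x = torch.randn(1, 3, 512, 512, device=device)
        y = model(x)
    print(y.shape, "ran on", device)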


Is there any way for someone to post a Google Colab notebook with this?

I think this would be pretty cool if it supported any picture or video.


Can anyone show what happens if you feed it a regular video/photo?



If you find this interesting, you may also want to look into the Canny edge detector.


Anything preventing this from running with Python 3 on Windows?



