It's copy-pasting parts of the training images over and over.
In figure 8 of the technical report [0], compare the hair in images (0,0), (2,0), (3,0), (3,3), (4,4).
The paper suggests their method generates copyright-free images, yet they are very obviously derived from the input images and you can identify the parts of individual input images that are mashed together to form the output.
All in all, their method seems to be performing "obfuscated memorization," in the sense that the generated images are scrambled up just enough to fool their plagiarism-detector loss function.
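One way to probe this claim, if you have the training set, is a simple nearest-neighbor search: for each generated image, find the closest training image and eyeball the pair. This is only a minimal sketch under my own assumptions; the helper names and the plain pixel-space L2 distance are simplifications, not anything from the paper.

    # Hypothetical memorization check -- all names here are my own, not the
    # paper's. For each generated image, find the closest training image in
    # plain pixel space and inspect the pair by hand.
    import numpy as np
    from pathlib import Path
    from PIL import Image

    def load_gray(path, size=(64, 64)):
        # Downscale to a common size, flatten, normalize to [0, 1].
        img = Image.open(path).convert("L").resize(size)
        return np.asarray(img, dtype=np.float32).ravel() / 255.0

    def nearest_training_image(generated_path, training_dir):
        # Brute-force L2 nearest neighbor over the training set.
        g = load_gray(generated_path)
        best_path, best_dist = None, float("inf")
        for p in Path(training_dir).glob("*.jpg"):
            d = float(np.linalg.norm(g - load_gray(p)))
            if d < best_dist:
                best_path, best_dist = p, d
        return best_path, best_dist

Note that whole-image L2 only flags near-duplicates; catching the part-level copying alleged above (hair from one image, eyes from another) would need patch-level or feature-space comparison.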
But as the online article states, that figure represents a case where the model is explicitly set to "generate images [which] have similar major visual features with different attribute combinations": http://make.girls.moe/#/news
So some degree of repetition is to be expected, since the random noise is held fixed rather than resampled. And despite that, the images do still exhibit some variation if you look closely.
For the uninitiated (from Wikipedia): Moe (萌え, pronounced [mo.e]) is a Japanese slang loanword that refers to feelings of strong affection, mainly towards characters in anime, manga, and video games.
Unsurprisingly! They didn't exclude male characters, so if you ask for short hair, you'll be more likely to be drawing from a male-biased region of the latent (noise) space.
By the way, the Getchu and illustration2vec links on the news page are broken.
Edit: This part from the Tips page might be why it initially didn't generate great images:
The input of the model consists of two parts, the random noise part and the condition part. If you generate a good image, you can try fixing the noise part and using random conditions to get more good images. We have observed that a good random noise is important for better generation.
Edit 2: Actually, no. According to the news page, if the noise is fixed, the generated pictures will all be similar.
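To make the recipe concrete, here is a minimal sketch of "fix the noise, randomize the conditions." The 128-dimensional noise matches the technical report; the binary 34-tag condition layout and the generator stand-in are my own assumptions, not the project's actual code.

    # Sketch only: `generator` stands in for the trained model.
    import numpy as np

    rng = np.random.default_rng(42)

    # One noise vector, reused across all samples (the "fixed" part).
    fixed_noise = rng.normal(size=128).astype(np.float32)

    def sample_conditions(n_tags=34):
        # Random binary tag vector (hair color, eye color, etc.) -- layout assumed.
        return (rng.random(n_tags) < 0.25).astype(np.float32)

    # Concatenate the fixed noise with fresh conditions for each sample:
    # images = [generator(np.concatenate([fixed_noise, sample_conditions()]))
    #           for _ in range(16)]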
In order to make our model more accessible, we built this website interface with React.js for open access. We also made generation happen entirely on the browser side, by employing WebDNN and converting the trained Chainer model to a WebAssembly-based JavaScript model. For a better user experience, we wanted to keep the generator model small, since users need to download the model before generating, so replacing the DCGAN generator with an SRResNet generator made the model 4 times smaller. Speed-wise, even though all computations are done on the client side, on average it takes only about 6 seconds to generate a single image.
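For anyone curious what that conversion pipeline looks like, here is a rough sketch following WebDNN's documented Chainer workflow. TinyGenerator and the 162-dim input (128 noise + 34 tags) are placeholders I made up; only the converter and descriptor calls come from the WebDNN docs.

    # Rough sketch of the Chainer -> WebAssembly conversion via WebDNN.
    # TinyGenerator is a made-up stand-in for the real SRResNet generator.
    import chainer
    import chainer.functions as F
    import chainer.links as L
    import numpy as np
    from webdnn.frontend.chainer import ChainerConverter
    from webdnn.backend import generate_descriptor

    class TinyGenerator(chainer.Chain):
        def __init__(self):
            super().__init__()
            with self.init_scope():
                self.fc = L.Linear(162, 3 * 64 * 64)  # sizes assumed

        def __call__(self, z):
            return F.reshape(F.tanh(self.fc(z)), (-1, 3, 64, 64))

    # Trace the model once with a dummy input: 128-dim noise + 34 tags (assumed).
    x = chainer.Variable(np.zeros((1, 162), dtype=np.float32))
    y = TinyGenerator()(x)

    graph = ChainerConverter().convert([x], [y])
    exec_info = generate_descriptor("webassembly", graph)
    exec_info.save("./output")  # files the browser-side WebDNN runtime downloads

The WebAssembly backend is presumably what makes the roughly 6-second client-side generation possible without requiring a GPU.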
Well, Chrome updates automatically and it's obviously not safe to run an outdated browser. So you have a bigger issue than a random script not working on your browser.
Looking at current browser market shares on netmarketshare.com (first link that came up via a Google search), it looks like this version of Chrome accounts for 0.07% of traffic this month. Why are you pinned on such an old version?
Interestingly enough, the older version 45 has a 6.08% share. Is some operating system pinning Chrome 45 and perhaps providing security updates?
Well, "make.girls.moe" is the URL and name of the tool.
According to the technical report [1], they used character portraits from Getchu [2] for training data. A cursory glance shows that the overwhelming majority of the characters are female. As a result, the characters the tool generates are likely to appear female to our eyes.
After the code is open-sourced, perhaps someone should try to create make.boys.moe using character portraits from otome games [3].
Please don't propagate that sexist, binary view of gender. If you make the effort to criticise unequal treatment, please always also include genderless, non-binary, bigender or trigender, pangender, trans women, trans men, and people of any other gender.
It seems to be very easy for people who get the gender expression they need out of their assigned label and gender role to dismiss the importance of those things. What you're saying is about as smart as saying "people don't need glasses; I can see just fine."
No arrogance nor ignorance meant from my post - if inferred, I take it back.
The last line of the comment was aiming to bring my point across (which seems to have failed) - people do not need to label themselves as anything in order to be who they are.
Please re-read my post with a little more of a light heart - the sooner we all stop defining ourselves (and others) so seriously as members of tribes old and new, the sooner we can all speak to each other on a level playing field.
[0] http://make.girls.moe/technical_report.pdf