There are a few things to say about this:

1. If you train a conditional GAN to do image inpainting (for example, completing the right half of an image from the left half), the degree to which the model is copying and pasting from the training set becomes quite apparent: run the model on "given" parts taken from the test set and check whether the completions reproduce training images (a rough sketch of such a check appears after this list).

2. I disagree that an ideal GAN could just output the training set. I think the right conceptual framework is that any generative model is trying to produce a distribution close to the true data distribution, and we try to accomplish this using samples from that distribution. A model that memorizes the training set puts all of its mass on finitely many points, so it isn't actually that close to the true underlying data distribution. In likelihood-based models (for example the usual generative RNN) you can test this directly by evaluating likelihood on a held-out validation set - see the second sketch below.
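A minimal sketch of the copy-paste check from (1), assuming numpy arrays: `train_images` holds the training set, and `generator` is a hypothetical stand-in for the trained inpainting model. The idea is just a nearest-neighbor search in pixel space over the training set:

    import numpy as np

    def nearest_training_match(completion, train_images):
        # Pixel-space L2 distance from a generated completion to every
        # training image; a near-zero minimum across many test inputs
        # suggests the model is copying and pasting the training set.
        flat_train = train_images.reshape(len(train_images), -1)
        diffs = flat_train - completion.reshape(1, -1)
        dists = np.linalg.norm(diffs, axis=1)
        idx = int(np.argmin(dists))
        return idx, float(dists[idx])

    # Hypothetical usage: complete an unseen test image's masked half.
    # completion = generator(test_image_left_half)
    # idx, dist = nearest_training_match(completion, train_images)

Pixel L2 is a crude metric, but it is enough to catch verbatim copying; a feature-space distance would catch near-duplicates too.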
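And a sketch of the likelihood check from (2). Here `model.next_token_probs` is a hypothetical interface returning the model's distribution over the next token given a prefix; any autoregressive model exposes something equivalent. A memorizing model keeps improving its training likelihood while this validation number stalls or rises - the usual overfitting signature:

    import numpy as np

    def validation_nll(model, val_sequences):
        # Average negative log-likelihood per token on held-out data.
        total_nll, total_tokens = 0.0, 0
        for seq in val_sequences:
            for t in range(1, len(seq)):
                # P(next token | prefix), as predicted by the model.
                probs = model.next_token_probs(seq[:t])
                total_nll += -np.log(probs[seq[t]] + 1e-12)
                total_tokens += 1
        return total_nll / total_tokens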
