Hacker News | kcazyz's comments

Hi, author here. A quick glossary of the abbreviations used:

- GPT-2: a recent large language model (generates sentences conditioned on previous sentences)
- LM: language model
- PPL: perplexity (https://en.wikipedia.org/wiki/Perplexity)
- KL: Kullback–Leibler divergence (https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_diver...)
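
For readers who want to see the two metrics concretely, here's a minimal Python sketch of how each might be computed from a language model's token probabilities. The toy numbers are purely illustrative, not from the paper:

    import math

    # Toy next-token distributions over a 3-word vocabulary (illustrative values).
    p = [0.7, 0.2, 0.1]  # one model's predicted distribution
    q = [0.5, 0.3, 0.2]  # a reference distribution to compare against

    # Perplexity over a sequence: exp of the average negative log-likelihood
    # the model assigned to the tokens actually observed. Lower is better.
    observed_token_probs = [0.7, 0.2, 0.7]  # model probability of each observed token
    ppl = math.exp(
        -sum(math.log(pr) for pr in observed_token_probs) / len(observed_token_probs)
    )

    # KL divergence D(p || q): the expected extra cost (in nats here) of encoding
    # samples from p with a code optimized for q. Zero iff p and q are identical.
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

    print(f"perplexity: {ppl:.3f}")
    print(f"KL(p || q): {kl:.4f} nats")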


Cool demo. Is the source live? I'd like to see how it works with ipv4-over-twitter.

Edit: never mind, found it. For those looking, it's at https://github.com/harvardnlp/NeuralSteganography


Thank you for going the extra mile and including direct Wikipedia links.

