
Nice! Thanks for sharing. I hadn't seen that paper before. It looks like they take in a real-world video and then regenerate the mouth region to achieve lip sync. In our solution, we take in a single image and then generate the entire video.

I am sure there will be open-source solutions for fully generated real-time video within the next year. We also plan to provide an API for our solution at some point.



