I wonder how the hardware encoders and decoders compare to software implementations. They obviously use less CPU, but hardware codecs also generally compress less efficiently and have higher latency than software ones. Is Nvidia's hardware specially optimized for this use case?
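
For a rough apples-to-apples check, something like the sketch below should work: encode the same clip at the same target bitrate with x264 and with NVENC, then score both against the source with VMAF. This assumes an ffmpeg build with NVENC and libvmaf support; input.mp4, the 4M bitrate, and the preset names are placeholders (NVENC presets vary by ffmpeg version, p1-p7 on newer builds).

    import subprocess

    def encode(codec, preset, out):
        # Encode the same clip at the same target bitrate with the given codec.
        subprocess.run(
            ["ffmpeg", "-y", "-i", "input.mp4",
             "-c:v", codec, "-preset", preset,
             "-b:v", "4M", out],
            check=True)

    encode("libx264", "medium", "sw.mp4")   # software x264
    encode("h264_nvenc", "p4", "hw.mp4")    # NVIDIA NVENC

    # Score each encode against the source with VMAF
    # (first input is the distorted file, second is the reference).
    for f in ("sw.mp4", "hw.mp4"):
        subprocess.run(
            ["ffmpeg", "-i", f, "-i", "input.mp4",
             "-lavfi", "libvmaf", "-f", "null", "-"],
            check=True)

Wall-clock encode time falls out of the same runs, so the speed/quality trade-off is visible in one pass.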


