Hacker News

> If we had had to write our own image resizing kernel, this would have taken much longer

I don’t disagree, but this is very subjective.

You don’t need to invent anything; you only need to carefully implement a well-known approach, e.g. this one: https://developer.nvidia.com/gpugems/GPUGems/gpugems_ch24.ht...
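To make "carefully implement a well-known approach" concrete, here’s a minimal CPU reference for the kind of per-pixel resampling such a kernel performs: bilinear interpolation at each destination pixel. A GPU port maps the same per-pixel loop body onto threads. This is an illustrative sketch, not code from the linked chapter, and all names are mine.

```python
def resize_bilinear(src, src_w, src_h, dst_w, dst_h):
    """Resize a flat, row-major grayscale image with bilinear sampling.

    src: list of pixel values, length src_w * src_h.
    Returns a list of length dst_w * dst_h.
    """
    out = []
    x_ratio = src_w / dst_w
    y_ratio = src_h / dst_h
    for dy in range(dst_h):
        for dx in range(dst_w):
            # Map the destination pixel back into source coordinates.
            sx = min(dx * x_ratio, src_w - 1)
            sy = min(dy * y_ratio, src_h - 1)
            x0, y0 = int(sx), int(sy)
            x1 = min(x0 + 1, src_w - 1)
            y1 = min(y0 + 1, src_h - 1)
            fx, fy = sx - x0, sy - y0
            # Blend the four neighbouring source pixels.
            top = src[y0 * src_w + x0] * (1 - fx) + src[y0 * src_w + x1] * fx
            bot = src[y1 * src_w + x0] * (1 - fx) + src[y1 * src_w + x1] * fx
            out.append(top * (1 - fy) + bot * fy)
    return out
```

For large downscale factors you’d pre-filter (or widen the sample footprint) to avoid aliasing, which is exactly the detail the "carefully" part refers to.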

Also, there are third-party libraries for that; e.g., here’s one from the same company that makes the GPU JPEG codec: http://fastcompression.com/products/resizer/gpu-resizer.htm

> what about PNG, GIF, and WEBP?

As far as I understand, your goal was to cut server costs, right?

I assume the majority of pictures on the Internet are JPEGs. If you process those on the GPU, the 16 virtual CPUs you’ve already paid for are just sitting idle, waiting for the GPU to finish the job; use them for the remaining formats. No need to do everything on the GPU.
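The split described above can be sketched as a simple format-based dispatcher: one worker feeding the GPU with JPEGs, a thread pool keeping the otherwise-idle vCPUs busy with PNG/GIF/WEBP. Assuming hypothetical `gpu_resize`/`cpu_resize` backends (not a real API):

```python
import os
from concurrent.futures import ThreadPoolExecutor


def dispatch(jobs, gpu_resize, cpu_resize):
    """Route resize jobs by format: JPEG -> GPU path, everything else -> CPU pool.

    gpu_resize/cpu_resize are caller-supplied callables (hypothetical here).
    """
    # A single worker serializes access to the one GPU; batching would
    # go inside gpu_resize in a real pipeline.
    gpu_pool = ThreadPoolExecutor(max_workers=1)
    # The vCPUs that would otherwise wait on the GPU handle other formats.
    cpu_pool = ThreadPoolExecutor(max_workers=os.cpu_count() or 1)
    futures = []
    for job in jobs:
        if job["format"] == "jpeg":
            futures.append(gpu_pool.submit(gpu_resize, job))
        else:
            futures.append(cpu_pool.submit(cpu_resize, job))
    results = [f.result() for f in futures]
    gpu_pool.shutdown()
    cpu_pool.shutdown()
    return results
```

The point is only the routing: the expensive common case (JPEG) rides the GPU, and the long tail of formats costs you nothing extra because those cores are already paid for.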

P.S. Some other people have already implemented what I’m suggesting: http://code.flickr.net/2015/06/25/real-time-resizing-of-flic...



