Try GLSL, like on shadertoy.com, for a high fun-factor intro to GPU programming.
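For a sense of how little it takes to get started: this is essentially Shadertoy's default new-shader template. The GPU runs `mainImage` once per pixel, all in parallel, every frame.

```glsl
// Minimal Shadertoy fragment shader: paste into shadertoy.com and compile.
// Runs once per pixel; iResolution and iTime are built-in Shadertoy uniforms.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord / iResolution.xy;

    // Time-varying pixel color
    vec3 col = 0.5 + 0.5 * cos(iTime + uv.xyx + vec3(0.0, 2.0, 4.0));

    // Output to screen
    fragColor = vec4(col, 1.0);
}
```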
I enjoy coding in Python and JavaScript more than I enjoy coding in C++ or CUDA, if I'm looking only at the language itself, but I have to admit that it's also very fun to make something run 100 or 1000 times faster than it did before. That kind of fun helps me overlook language differences.
To me the quote sounds pretty funny, because a cloud of 500 CPUs running Swift, right now, is way more expensive and way less efficient than a single GPU. The current generation of GPUs has over 10k single-thread cores...
> To me the quote sounds pretty funny, because a cloud of 500 CPUs running Swift, right now, is way more expensive and way less efficient than a single GPU. The current generation of GPUs has over 10k single-thread cores...
The number of threads in a GPU cannot be compared to the number of CPU threads or CPUs, especially in 3D rendering. CPU threads are vastly more independent and powerful than those on a GPU. GPU thread divergence imposes significant burdens and limitations on the design of GPU kernels. Memory bandwidth is very costly as well. This is a very active and open research area for 3D rendering, and this Disney scene is designed to test those limitations (among other things). There is a reason why most of the 3D animated movies you see were rendered on CPUs.
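To make the divergence point concrete, here is a hypothetical Shadertoy-style sketch (the branch condition and loop body are invented purely for illustration). Neighboring pixels execute in the same warp/wavefront, so when they take different branches, the hardware runs both paths and masks out the inactive threads; the cheap pixels still pay for the expensive ones.

```glsl
// Hypothetical sketch of thread divergence (workload invented for illustration).
// Adjacent columns alternate branches, so every warp contains both cases and
// must execute BOTH paths serially: even columns wait out the odd columns' loop.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec3 col;
    if (mod(fragCoord.x, 2.0) < 1.0) {
        col = vec3(0.1);                  // cheap path: even columns
    } else {
        col = vec3(0.0);                  // expensive path: odd columns
        for (int i = 0; i < 256; i++) {
            col += 0.002 * vec3(sin(float(i) * 0.1 + fragCoord.y * 0.01));
        }
    }
    fragColor = vec4(col, 1.0);
}
```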
Those are good points; however, 3D rendering is one domain where it does make some sense to compare CPU & GPU threads, especially 3D game rendering. You certainly can compare threads between CPUs and GPUs for fp32 performance, if you have a pure compute workload without divergence. I work on OptiX, BTW, so I'm biased there, but there is also a reason why most commercial renderers are steadily moving toward the GPU. I predict it won't be long before your statement flips and most CG movies are rendered on GPUs.
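For contrast with the divergence sketch above, this is the kind of branch-free fp32 workload where per-thread throughput comparisons are most meaningful (the arithmetic is arbitrary, just uniform work):

```glsl
// Divergence-free sketch: every pixel runs the identical instruction stream
// (same loop count, no data-dependent branches), so warps stay in lockstep
// and raw fp32 throughput becomes the limiting factor.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord / iResolution.xy;
    float acc = 0.0;
    for (int i = 0; i < 64; i++) {
        acc = acc * 0.99 + sin(uv.x * float(i)) * cos(uv.y * float(i));
    }
    fragColor = vec4(vec3(0.5 + 0.5 * acc), 1.0);
}
```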