
My first two demos (Waveride and Armitage by Straylight if you search on Pouet) used a 3-array water surface simulation, cycling through the arrays like you mentioned. The first used a CPU implementation with a 128x128 grid; the second did 5 separate 128x128 surfaces on the GPU (render to texture). The second ran all 5 about 10x faster than a single one on the CPU, and that was with effectively no optimization on the GPU side. With some tweaking, I would've been able to do 10000^2 grids cheaply; the biggest bottleneck was that I moved the textures to and from the GPU each frame, which I could have eliminated with some use of vertex shaders and abuses therein.
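For reference, the 3-array scheme being described is the classic height-field wave update: two previous height grids produce the next one, and the three buffers are cycled each frame rather than reallocated. Here is a minimal CPU sketch in Python/numpy (coefficients and names are illustrative, not taken from the demos):

```python
import numpy as np

def step_wave(prev, curr, c2=0.5, damping=0.99):
    """One step of the classic 3-buffer height-field wave update:
    new = 2*curr - prev + c2 * laplacian(curr), with damping.
    c2 <= 0.5 keeps the 2D scheme stable."""
    lap = (np.roll(curr, 1, 0) + np.roll(curr, -1, 0)
         + np.roll(curr, 1, 1) + np.roll(curr, -1, 1) - 4.0 * curr)
    return (2.0 * curr - prev + c2 * lap) * damping

# Cycle three 128x128 grids: (prev, curr) -> (curr, new) each frame.
n = 128
prev = np.zeros((n, n), dtype=np.float32)
curr = np.zeros((n, n), dtype=np.float32)
curr[n // 2, n // 2] = 1.0   # drop a single "raindrop" in the middle
for _ in range(10):
    prev, curr = curr, step_wave(prev, curr)
```

On the GPU the same update runs as a fragment shader reading the two previous heightfield textures and rendering into the third, which is why the per-frame cost is so low.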

Anyway, saying that GPGPU is overkill here is pretty silly -- sure, you can do some water surface simulation on the CPU, but you hit the wall very, very quickly if you're doing anything but that.



I never read back the textures. The water surface is entirely generated and simulated on the GPU, and its heightfield is converted into a normal map and fed into the renderer every frame. There was literally zero performance overhead (we were CPU limited, not GPU limited).
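The heightfield-to-normal-map step mentioned here is a standard central-differences conversion; a shader does it per texel. A minimal CPU sketch (the function name and `strength` parameter are illustrative):

```python
import numpy as np

def heightfield_to_normals(h, strength=1.0):
    """Turn a height field into per-texel unit normals via central
    differences, the same math a fragment shader would run each frame
    to produce a normal map from the simulated water height texture."""
    dx = (np.roll(h, -1, 1) - np.roll(h, 1, 1)) * 0.5 * strength
    dy = (np.roll(h, -1, 0) - np.roll(h, 1, 0)) * 0.5 * strength
    n = np.stack([-dx, -dy, np.ones_like(h)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

# A flat heightfield yields straight-up normals (0, 0, 1).
normals = heightfield_to_normals(np.zeros((4, 4), dtype=np.float32))
```

Because both the simulation and this conversion stay on the GPU, nothing ever crosses back to the CPU.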

The lake surface looked utterly convincing; it was as if you were watching a rainstorm pour down on it. I wish I had a video.

The bottleneck you ran into was readback. Transferring data from GPU to CPU was, is, and always will be "expensive".

GPGPU is completely unrelated to this; it merely enables you to perform computations on the GPU faster, nothing more and nothing less.



