Hacker News
Fluid Simulation for Video Games (part 1 of 10) (intel.com)
90 points by spacemanaki on Aug 15, 2011 | hide | past | favorite | 16 comments


The algorithms are embarrassingly parallel, but he doesn't use the GPU because it's busy rendering. I wonder if Intel's increasingly powerful integrated graphics (usually supplanted by a discrete video card) could be used for this?

> In the endeavor to achieve real-time fluid motion, some other fluid simulations exploit general-purpose computing on graphics processing unit (GPGPU). However clever, such approaches do not help with current gaming hardware, because in video games, the GPU tends to be busy with rendering and has no time left over for simulation. http://software.intel.com/en-us/articles/fluid-simulation-fo...

Also: pretty videos http://www.youtube.com/mijagourlay#p/u/4/G9E8xEjGzk0 (download http://software.intel.com/file/23546/)


GPGPU is incredibly overkill for simulating a water surface.

Create two 256x256 RGB textures, call them foo and bar.

The R channel will store current height of wave. The G channel will store the previous height of wave (allowing you to derive velocity).

  ... each frame ...
  render_to(bar)
  sample_from(foo)
  /* insert fancy partial differential algorithm */
  foo,bar = bar,foo
In other words, you're "ping pong"-ing between two textures.

Now if you set an arbitrary texel to, say, 1.0 (maximum), that texel will propagate outwards as if it were a ripple.

Bonus: the result is a seamless tileable texture.

Look up "toyshop rain" for the exact details. But I used this method to augment an ingame lake surface's normal map (thunderstorm effect --- rippling raindrops propagating across the lake). This was in 2007, running on 2007 hardware. GPGPU need not apply.
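The two-buffer height-field recipe above can be sketched on the CPU with NumPy. This is my own illustration of the "ping pong" scheme, not the toyshop-rain shader: `foo` and `bar` stand in for the two textures, the `2*cur - prev` term recovers velocity implicitly from the previous height (the G channel in the description above), and the wrap-around Laplacian is what makes the result tile seamlessly.

```python
import numpy as np

N = 256      # grid size, standing in for the 256x256 texture
DAMP = 0.99  # slight energy loss per step so ripples die out
C = 0.45     # wave-speed factor; must stay <= 0.5 for stability in 2D

foo = np.zeros((N, N))   # "current" height buffer
bar = np.zeros((N, N))   # "previous" height buffer

def step(cur, prev):
    """One wave-equation update; returns the new height field."""
    # 4-neighbour Laplacian with wrap-around (periodic boundaries),
    # which is why the texture tiles seamlessly.
    lap = (np.roll(cur, 1, 0) + np.roll(cur, -1, 0)
           + np.roll(cur, 1, 1) + np.roll(cur, -1, 1) - 4 * cur)
    # 2*cur - prev encodes velocity implicitly from the previous height.
    return DAMP * (2 * cur - prev + C * lap)

foo[128, 128] = 1.0      # poke one "texel": a raindrop hits the lake
for _ in range(100):     # ... each frame ...
    foo, bar = step(foo, bar), foo   # ping-pong the two buffers
```

A fragment shader version is the same arithmetic per texel, rendering into `bar` while sampling `foo`, then swapping render targets.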


My first two demos (Waveride and Armitage by Straylight if you search on Pouet) used a 3-array water surface simulation, cycling through them like you mentioned. The first used an implementation on the CPU with a 128x128 grid, the second did 5 separate 128x128 surfaces on the GPU (render to texture). The second ran all 5 about 10x faster than running a single one on the CPU, and that was with effectively no optimization on the GPU side. With some tweaking, I would've been able to do 10000^2 grids cheaply; the biggest bottleneck was that I moved the textures to and from the GPU for each frame, which I could eliminate with some use of vertex shaders and abuses therein.

Anyway, saying that GPGPU is overkill here is pretty silly -- sure, you can do some water surface simulation on the CPU, but you hit the wall very, very quickly if you're doing anything but that.


I never read back the textures. The water surface is entirely generated and simulated on the GPU, and its heightfield is converted into a normal map and fed into the renderer every frame. There was literally zero performance overhead (we were CPU limited, not GPU limited).

The lake surface looked utterly convincing; it was as if you were watching a rainstorm pour down on it. I wish I had a video.

The bottleneck you ran into was readback. Transferring data from GPU->CPU was, is, and always will be, "expensive".

GPGPU is completely unrelated to this; it merely enables you to perform computations on the GPU faster, nothing more and nothing less.


"toyshop rain" http://www.youtube.com/watch?v=LtxvpS5AYHQ (but youtube video content seems to be down at the moment, at least here in Australia anyway)

BTW: The article (10 parts!) deals with other fluid effects besides surfaces; the first few paragraphs even distinguish between liquids and gases.


Thank you. Probably my favorite tech demo of all time. It gives me goosebumps every time I watch it. Likely watched it at least 30 times by now.

Everything about it is beautiful, from the art to the music to the sound effects to the incredible feeling of "being drenched". Natalya Tatarchuk is truly a hero.

Watch it all the way through, with headphones, in fullscreen, or don't watch it at all!

The experience was so much more intense back in 2007; it was breathtaking.

(Nooooo I don't wanna get old! Make it stop!)


In case you haven't seen it, From Dust uses fluid simulation as a game mechanic. It's a god game, with flowing water that erodes rivers etc., at a constant 30 fps. There's an Xbox demo, and a PC demo to come. Here's a video: http://www.youtube.com/watch?v=gSOQGazo7Oo&t=33s (I think the fluid grid is slightly coarser in the actual game, but water basically flows the same). BTW: I think it's fantastic as a toy, less good as a game.

However, from what you're saying, the fluid simulation isn't as expensive as I thought.


Wow, that's incredible! Thanks for sharing.

Sorry, I miscommunicated --- fluid simulation isn't expensive as long as it doesn't affect gameplay, because then you can do it all on the GPU and never transfer it back to the CPU (to perform checks like "is the water touching me?").

In this case, From Dust is simulating the fluids on the CPU, then building geometry / textures on the fly and uploading that to the GPU and rendering. It's quite an impressive tech demo, both from a technical and artistic standpoint.


There's the possibility of shared memory between CPU and integrated GPU, as John Carmack noted, skipping the transfer. Apparently, the Xbox does this already.

Big extrapolation: If intel can pull this off, they may be able to own the next platform, of GPU-based computation. I suggest that's the next platform because it seems to be the only place where many-core code is really happening. And many-core is the only way to get Moore power, since clock rates hit a wall.


It has the "uncanny valley" effect. I feel like any moment gumby is going to show up. Very creepy.


http://http.developer.nvidia.com/GPUGems/gpugems_ch38.html

^This is another great tutorial for real-time fluid simulation. If anyone's interested, I adapted this method to simulate fire as well. (demo here: http://www.youtube.com/watch?v=MIi62cwjqMA)


Are fluid simulations for video games different from "actual" fluid simulations? The article just seemed like a discussion about fluid dynamics, but when it says things like "... for games", I always think it's about making something look realistic without actually solving the real equations.


I think that most fluid simulations for games would be pretty far from physically accurate just for the sake of speed.

Doing a full simulation at any sort of resolution would totally kill your framerate, and you've got a lot of other stuff going on as well.

A relevant quote from Dwarf Fortress developer ToadyOne (regarding the fluid system in DF): "The only thing impressive about it is probably that it runs at all while everything else is going on".


I felt the same about the title. Games today have no need for realistic fluids, so the trick is to render only the surface.


Yes, actual fluid dynamics are extremely slow and not needed just for the visuals.

Read this seminal paper for the details:

http://www.dgp.toronto.edu/people/stam/reality/Research/pdf/...

Sample quote:

We believe that a better alternative is to use the physics of fluid flows which have been developed since the time of Euler, Navier and Stokes (from the 1750’s to the 1850’s). These developments have led to the so-called Navier-Stokes Equations, a precise mathematical model for most fluid flows occurring in Nature. These equations, however, only admit analytical solutions in very simple cases. No progress was therefore made until the 1950’s when researchers started to use computers and develop numerical algorithms to solve the equations. In general, these algorithms strive for accuracy and are fairly complex and time consuming. This is because the applications that require these solvers have to be physically accurate. It is obviously crucial that the stresses and drag on an airplane or a bridge are calculated precisely.

In computer graphics and in games on the other hand what matters most is that the simulations both look convincing and are fast. In addition it is important that the solvers aren’t too complex so that they can be implemented on standard PCs, game consoles or PDAs. In this paper we present a set of algorithms which meet these requirements. To achieve these goals we depart from conventional wisdom in computational physics and develop algorithms custom tailored for creating visual effects. Unlike physically accurate solvers which have strict bounds on their time steps, our algorithms are stable, and never “blow up.”
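The "never blow up" property quoted above comes from Stam's semi-Lagrangian advection: instead of pushing quantities forward through the grid (which can overshoot at large time steps), each cell traces backwards along the velocity field and interpolates the value it came from. Since every new value is a convex combination of old values, the field stays bounded no matter the time step. A minimal NumPy sketch of that one step (my own illustration, not the paper's code):

```python
import numpy as np

def advect(field, vx, vy, dt):
    """Semi-Lagrangian advection of a scalar field by velocity (vx, vy).

    Each cell back-traces to its departure point and bilinearly
    interpolates the old field there. Interpolation never produces a
    value outside the range of its four inputs, so the scheme is
    unconditionally stable.
    """
    n, m = field.shape
    ys, xs = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
    # Back-trace departure points, clamped inside the grid.
    x0 = np.clip(xs - dt * vx, 0, m - 1.001)
    y0 = np.clip(ys - dt * vy, 0, n - 1.001)
    xi, yi = x0.astype(int), y0.astype(int)
    fx, fy = x0 - xi, y0 - yi
    # Bilinear interpolation of the four surrounding cells.
    return ((1 - fy) * ((1 - fx) * field[yi, xi] + fx * field[yi, xi + 1])
            + fy * ((1 - fx) * field[yi + 1, xi] + fx * field[yi + 1, xi + 1]))

# A unit blob carried right by a uniform velocity field lands two cells over,
# even with a time step far larger than an explicit scheme could survive.
blob = np.zeros((16, 16)); blob[8, 4] = 1.0
vx = np.ones((16, 16)); vy = np.zeros((16, 16))
out = advect(blob, vx, vy, 2.0)   # out[8, 6] == 1.0
```

The full solver in the paper interleaves this advection with diffusion and a pressure projection, but the advection step is the part that buys the stability.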


Thanks. This is very useful info.



