What? Isn't motion blur always done in post anyway? As far as I know, motion blur is just a property of the camera: it's caused by afterimages that show up in the picture when things move too fast. It has nothing to do with the actual 3D world out there.
Yes, everyone knows where motion blur comes from in a real camera, but in a rendered scene there is no real camera; the renderer is what applies simulated motion blur. Some optical effects, like depth of field, must be simulated in order to render realistic motion blur. Motion blur is quite often added to stylized animation in an equally stylized sort of arcing cloud, which requires the renderer to also have knowledge of the literal 3d model that's being animated.
> Motion blur is quite often added to stylized animation in an equally stylized sort of arcing cloud, which requires the renderer to also have knowledge of the literal 3d model that's being animated.
You mean smears? That's the animation technique that literally deforms geometry; it's a kind of motion blur, and of course it's based on the 3D model. But I don't see how a gas simulation benefits from smears.
You need information about the motion in order to simulate motion blur. It makes perfect sense for this to be generated by the 3D renderer rather than trying to guess at the motion after the fact when all you have are 2D frames.
Does 3D motion blur look better? The standard for realistic motion blur is probably real cameras, no? Real cameras don't need to guess the motion; it's just an afterimage captured in a particular frame while the shutter is open.
Bit late to the party, but I can shed some light on this. Since a real camera has the shutter open for some duration, any moving light will smear across the sensor. If you similarly "smear" a path traced object by stochastically randomizing the position of the object while the path tracer is gathering samples for the frame, you get exact physically plausible motion blur without having discrete "ghosts" corresponding to sampled subframes.
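For anyone curious what that looks like in practice, here's a minimal toy sketch of the idea (the names, motion, and numbers are all made up for illustration, not taken from any particular renderer): every sample gets its own random time inside the shutter interval, the object is evaluated at its position for that time, and the pixel value is the average of the samples. Because the sampled times are continuous, the smear comes out smooth rather than as a stack of discrete ghosts.

```python
import random

# Toy sketch of per-sample shutter-time sampling (hypothetical names/values).
# Each radiance sample picks its own random time while the shutter is open,
# and the moving object is evaluated at its position for that time.

SHUTTER_OPEN, SHUTTER_CLOSE = 0.0, 0.5   # 180-degree shutter: open for half the frame
SAMPLES_PER_PIXEL = 256

def object_center_at(t):
    # Hypothetical linear motion: the object slides from x=0.2 to x=0.8
    # over the course of one frame (t in [0, 1]).
    return 0.2 + 0.6 * t

def sample_radiance(pixel_x, t):
    # Stand-in for the path tracer's radiance estimate: the pixel is bright
    # if it overlaps the object at this sample's shutter time.
    half_width = 0.05
    return 1.0 if abs(pixel_x - object_center_at(t)) < half_width else 0.0

def render_pixel(pixel_x):
    total = 0.0
    for _ in range(SAMPLES_PER_PIXEL):
        t = random.uniform(SHUTTER_OPEN, SHUTTER_CLOSE)  # new random time per sample
        total += sample_radiance(pixel_x, t)
    return total / SAMPLES_PER_PIXEL

# Pixels the object sweeps through while the shutter is open come out
# partially bright (motion blurred); pixels it only reaches after the
# shutter closes stay dark.
print([round(render_pixel(x / 10.0), 2) for x in range(11)])
```

The appeal is that this reuses the samples the path tracer is already taking per pixel, rather than multiplying the number of full renders.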
I don't have any knowledge of this specific implementation, but my guess would be that it is a question of optimization. If you were to implement something similar to what's physically happening in a camera with a long exposure, you'd have to do a large number of oversamples (e.g. generate 10 sub-frames for every output frame) and merge them together. That's a ton of extra rendering. If, on the other hand, you can get the 3D renderer to generate "smeared geometry" (based on its knowledge of the motion speed, direction, and virtual shutter duration) and render each output frame once, that will get you faster render times.
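To make that cost comparison concrete, here's a toy sketch of the brute-force oversampling approach (hypothetical names, not this renderer's actual code): render the scene at several evenly spaced times inside the shutter interval and average the results. Every subframe is a full render, so the cost scales linearly with the subframe count, and with too few subframes you get visible discrete ghosts instead of a smooth smear.

```python
# Toy sketch of brute-force subframe accumulation (hypothetical names).
# Each subframe is a full render of the scene at one instant inside the
# shutter interval; the subframes are then averaged to approximate the exposure.

SUBFRAMES = 10  # e.g. 10 full renders per output frame

def render_at_time(t):
    # Stand-in for a complete render of the scene at time t; this is the
    # expensive call that gets repeated SUBFRAMES times per output frame.
    width, height = 4, 3
    return [[0.0 for _ in range(width)] for _ in range(height)]

def render_frame_with_blur(shutter_open=0.0, shutter_close=0.5):
    accum = None
    for i in range(SUBFRAMES):
        # Spread subframe times evenly across the open-shutter interval.
        t = shutter_open + (shutter_close - shutter_open) * (i + 0.5) / SUBFRAMES
        frame = render_at_time(t)
        if accum is None:
            accum = [[0.0] * len(row) for row in frame]
        for y, row in enumerate(frame):
            for x, value in enumerate(row):
                accum[y][x] += value
    # Average the subframes to approximate the open-shutter exposure.
    return [[value / SUBFRAMES for value in row] for row in accum]
```

Smeared geometry (or per-sample time sampling, as described upthread) approximates the same exposure integral without multiplying the number of full renders.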