I too have been writing a raytracer in Rust[0] recently. I tend to use raytracers as my go-to project when learning a new language, especially those that are OOP, since it touches a few different areas that are common, such as:
+ Protocols/traits/interfaces and using them for polymorphism, think intersectable/primitive types (there's a rough sketch of this after the list).
+ It typically involves understanding both heap- and stack-allocated memory in the language, as well as the general memory model, to build scene graphs etc.
+ Building a small linear algebra library, which usually touches things like low-level operations and performance, as well as operator overloading if the language supports it.
+ Writing images to disk via pixel buffers
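To make the first point concrete, here's a rough sketch of the shape that tends to take in Rust. The names (Intersectable, Ray, Hit, Scene) are illustrative placeholders, not lifted from my actual code:

    #[derive(Clone, Copy)]
    struct Vec3 { x: f64, y: f64, z: f64 }

    #[derive(Clone, Copy)]
    struct Ray { origin: Vec3, direction: Vec3 } // direction assumed normalised

    struct Hit { distance: f64, point: Vec3 }

    // Every primitive implements this trait, so the tracing code never
    // needs to know which concrete shape it is looking at.
    trait Intersectable {
        fn intersect(&self, ray: &Ray) -> Option<Hit>;
    }

    // The scene is just a list of trait objects: spheres, planes and
    // triangle meshes can all live in the same Vec.
    struct Scene { objects: Vec<Box<dyn Intersectable>> }

    impl Scene {
        // Return the closest hit along the ray, regardless of primitive type.
        fn closest_hit(&self, ray: &Ray) -> Option<Hit> {
            self.objects
                .iter()
                .filter_map(|obj| obj.intersect(ray))
                .min_by(|a, b| a.distance.partial_cmp(&b.distance).unwrap())
        }
    }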
Primarily, though, I think raytracers are very fun projects because you can produce nice-looking results quickly, which I find helps with motivation and passion for the project. I'm pretty pleased with some of my renders already[1]
With a big emphasis on "small"... for a raytracer you don't need much more than a 3D vector with addition, multiplication, and dot product. For reference, the "linear algebra library" in my tiny raytracer [1] is 6 lines of code (and two are method closing braces): http://gabrielgambetta.com/tiny_raytracer_full.js (functions dot and A_minus_Bk)
For where I'm at currently you don't need that much. This is it at the moment: https://github.com/k0nserv/rusttracer/blob/master/src/math.r.... In the process I learned about Rust macros, operator overloading, traits, derive, tests, and the difference between Copy and Clone. It was a good first step in Rust for sure. It's also one of those things that lends itself to TDD very nicely, which cuts down the need to debug anything.
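To give a feel for it, here's a rough sketch of the kind of thing I mean, simplified and not the actual code from that file:

    use std::ops::{Add, Mul, Sub};

    // derive gives Copy/Clone/Debug for free, and the std::ops traits
    // give natural `a + b` / `a * k` syntax.
    #[derive(Debug, Clone, Copy, PartialEq)]
    pub struct Vector3 {
        pub x: f64,
        pub y: f64,
        pub z: f64,
    }

    impl Vector3 {
        pub fn dot(self, other: Vector3) -> f64 {
            self.x * other.x + self.y * other.y + self.z * other.z
        }

        pub fn length(self) -> f64 {
            self.dot(self).sqrt()
        }
    }

    impl Add for Vector3 {
        type Output = Vector3;

        fn add(self, other: Vector3) -> Vector3 {
            Vector3 { x: self.x + other.x, y: self.y + other.y, z: self.z + other.z }
        }
    }

    impl Sub for Vector3 {
        type Output = Vector3;

        fn sub(self, other: Vector3) -> Vector3 {
            Vector3 { x: self.x - other.x, y: self.y - other.y, z: self.z - other.z }
        }
    }

    // Scalar multiplication: v * 2.0
    impl Mul<f64> for Vector3 {
        type Output = Vector3;

        fn mul(self, scalar: f64) -> Vector3 {
            Vector3 { x: self.x * scalar, y: self.y * scalar, z: self.z * scalar }
        }
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn dot_of_orthogonal_vectors_is_zero() {
            let a = Vector3 { x: 1.0, y: 0.0, z: 0.0 };
            let b = Vector3 { x: 0.0, y: 1.0, z: 0.0 };
            assert_eq!(a.dot(b), 0.0);
        }
    }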
I also agree. Learning a new language by building a raytracer is a lot of fun.
What also helps a lot is that you can use only spheres to create a Cornell box (use very large spheres for the walls). And ray-sphere intersection is 'easy'.
Then the next step is path tracing. This will help you learn a lot about handling recursive processes (with or without actual recursion).
Other areas that you learn:
+ How scope is handled
+ (In dynamic languages) how the conversion between floats and ints works.
Intersecting with an infinite plane is also pretty easy, so you can build a Cornell box using just planes. The massive sphere trick is neat though, I'd never thought of that before.
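For reference, a rough sketch of that plane test (my own notation, not from any particular renderer): a plane through point p with normal n is hit by the ray o + t*d at t = ((p - o) · n) / (d · n), provided d · n isn't (almost) zero.

    // Minimal sketch of an infinite-plane intersection; all names are made up.
    #[derive(Clone, Copy)]
    struct Vec3 { x: f64, y: f64, z: f64 }

    fn dot(a: Vec3, b: Vec3) -> f64 {
        a.x * b.x + a.y * b.y + a.z * b.z
    }

    fn sub(a: Vec3, b: Vec3) -> Vec3 {
        Vec3 { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }
    }

    // Returns the distance t along the ray (origin + t * direction) at which
    // it hits the plane through `point` with normal `normal`, if it does.
    fn intersect_plane(origin: Vec3, direction: Vec3, point: Vec3, normal: Vec3) -> Option<f64> {
        let denom = dot(direction, normal);
        if denom.abs() < 1e-9 {
            return None; // ray is (nearly) parallel to the plane
        }
        let t = dot(sub(point, origin), normal) / denom;
        if t > 0.0 { Some(t) } else { None } // only count hits in front of the origin
    }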
Totally forgot about recursion. The recursive steps for reflection and refraction aren't strictly speaking path tracing, are they? I honestly don't know that distinction well. Multithreading was definitely a lot easier in Rust than when I gave it a shot in my Swift version.
I think path tracing is just a form of ray tracing. The only difference is that at the point where the ray hits an object you continue along another path, and collect all the light energy back to the pixel.
Path tracing doesn't trace towards light sources with shadow rays; instead it just sends several rays in different directions and accumulates the resulting colors of those rays.
Path tracing is just like ray tracing with randomized deviations of the rays and reflections, and this is repeated many times to reduce the noise.
This can also be done with shadow rays to some extent, for soft shadows.
(But it seems like every person you ask knows a different meaning for terms like raycasting, path tracing and physical based rendering)
I'm not sure this is true. The shadow ray pass is used to determine how much light is returned to the pixel. So something like:
* shoot a ray from the pixel
* when the ray hits an object check how much light that point receives
* bounce the ray according to the hit material properties
* repeat the bounces X times and gather how much light the pixel will receive.
Because most bounces are in a random direction it's best to sample a pixel multiple times. And of course more bounces give a more realistic global illumination result.
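Roughly, in code, something like this: a heavily simplified, grayscale sketch where the scene intersection and the random bounce direction are stubbed out, and none of the names come from a real renderer.

    #[derive(Clone, Copy)]
    struct Vec3 { x: f64, y: f64, z: f64 }

    struct Hit {
        point: Vec3,
        normal: Vec3,
        emitted: f64, // light the surface gives off by itself
        albedo: f64,  // fraction of incoming light it reflects
    }

    // Stand-in for the real scene intersection test.
    fn closest_hit(_origin: Vec3, _dir: Vec3) -> Option<Hit> {
        None
    }

    // Stand-in for picking a random direction in the hemisphere around `normal`.
    fn random_bounce(_normal: Vec3) -> Vec3 {
        Vec3 { x: 0.0, y: 1.0, z: 0.0 }
    }

    // The four steps above: shoot a ray, find the hit, gather the light
    // there, bounce, and repeat until we run out of depth.
    fn trace(origin: Vec3, dir: Vec3, depth: u32) -> f64 {
        if depth == 0 {
            return 0.0;
        }
        match closest_hit(origin, dir) {
            None => 1.0, // treat a miss as hitting a uniformly bright sky
            Some(hit) => {
                let bounce_dir = random_bounce(hit.normal);
                let incoming = trace(hit.point, bounce_dir, depth - 1);
                hit.emitted + hit.albedo * incoming
            }
        }
    }

Because the bounce direction is random in a real renderer, you run this many times per pixel and average the results, which is the multi-sampling mentioned above.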
It's going to depend on the material of your object. If it's perfectly reflective, you'll want to trace different rays than if it's perfectly diffuse, and if it's a rough surface (and how rough).
Definitely, but I find it can also be massively difficult to figure out why something is wrong. It's not as clear-cut as, say, backend web programming. For example, this is a progression of me implementing a camera that can move: https://twitter.com/K0nserv/status/846488675794014208
> raytracing an image takes much longer than the polygon-based rendering done by most game engines.
Minor nitpick, but it has nothing to do with the fact that it renders polygons. Ray tracing can also render polygons. More precisely, game engines use rasterization which works by projecting triangles onto the screen rather than tracing rays through the screen.
It also depends very much on the geometric complexity of your scene. With hundreds of millions of polygons it's not difficult for raytracing to outperform rasterization, especially if most of those polygons are instanced.
Indeed, with hundreds of millions of polygons, a rasterisation method will generally have to splat them all onto the screen one by one (minus some clever occlusion pre-processing). By contrast, a ray-tracer has the ability to shove all the objects into an R-tree or kd-tree, efficiently search for only those objects that intersect the ray, and produce the objects in guaranteed order of distance from the camera.
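As a rough sketch of what that search looks like, here's a plain unordered BVH query; a real implementation would also visit children front-to-back to get that distance ordering, and all the names and layout here are made up for illustration:

    #[derive(Clone, Copy)]
    struct Vec3 { x: f64, y: f64, z: f64 }

    #[derive(Clone, Copy)]
    struct Ray { origin: Vec3, inv_dir: Vec3 } // 1/direction, precomputed

    #[derive(Clone, Copy)]
    struct Aabb { min: Vec3, max: Vec3 }

    impl Aabb {
        // Standard slab test: does the ray pass through this box at all?
        fn hit(&self, ray: &Ray) -> bool {
            let mut t_min = f64::NEG_INFINITY;
            let mut t_max = f64::INFINITY;
            for (lo, hi, o, inv) in [
                (self.min.x, self.max.x, ray.origin.x, ray.inv_dir.x),
                (self.min.y, self.max.y, ray.origin.y, ray.inv_dir.y),
                (self.min.z, self.max.z, ray.origin.z, ray.inv_dir.z),
            ] {
                let t0 = (lo - o) * inv;
                let t1 = (hi - o) * inv;
                t_min = t_min.max(t0.min(t1));
                t_max = t_max.min(t0.max(t1));
            }
            t_max >= t_min.max(0.0)
        }
    }

    enum Bvh {
        Leaf { bounds: Aabb, object_ids: Vec<usize> },
        Node { bounds: Aabb, left: Box<Bvh>, right: Box<Bvh> },
    }

    impl Bvh {
        // Collect only the objects whose subtrees the ray can actually touch;
        // everything else is skipped without ever being looked at.
        fn candidates(&self, ray: &Ray, out: &mut Vec<usize>) {
            match self {
                Bvh::Leaf { bounds, object_ids } => {
                    if bounds.hit(ray) {
                        out.extend_from_slice(object_ids);
                    }
                }
                Bvh::Node { bounds, left, right } => {
                    if bounds.hit(ray) {
                        left.candidates(ray, out);
                        right.candidates(ray, out);
                    }
                }
            }
        }
    }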
Yes, but on the other hand rasterization is implemented in hardware on GPUs, which gives it a great performance advantage. Also, sorting before rasterizing allows most triangles to be discarded before any fragments are created. Besides, creating and traversing an acceleration structure definitely does not make ray tracing free. Traversal in particular is not exactly cache-friendly. Also, rays passing closely by geometry, but not hitting it, will traverse rather deep into the tree and then backtrack, which is rather costly. Another advantage of rasterization is that it is data-parallel at the vertex level. The disadvantage is that it is far less flexible in what you can render compared to ray tracing; it's only really practical for camera rays.
I'm a bit sceptical of this claim. You can also produce acceleration structures for rasterized polygons, create hierarchical level-of-detail representations of your scene, and render whatever LOD is necessary. This reduces the number of polygons that have to be rendered considerably. The claim that raytracing is faster for tens of millions of polygons due to acceleration structures always seems to miss the point that acceleration structures can also be applied to rasterization.
"This requires a bit more geometry. Recall from last time that we detect an intersection by constructing a right triangle between the camera origin and the center of the sphere. We can calculate the distance between the center of the sphere and the camera, and the distance between the camera and the right angle of our triangle. From there, we can use Pythagoras’ Theorem to calculate the length of the opposite side of the triangle. If the length is greater than the radius of the sphere, there is no intersection."
The two sides he describes have the camera in common, so the "opposite" side of that triangle is the line from the center of the sphere to the right angle. I don't see how this helps...
Edit: OK, I finally get it, but I think he should just label some of these lengths on the diagram with letters (a, b, c, etc.) and then show how they are related by stating Pythagoras' theorem explicitly...
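Something like: let L be the vector from the camera to the sphere centre (the hypotenuse), a = L · D the distance from the camera to the right angle measured along the ray direction D, and b the opposite side, from the sphere centre to that right-angle point. Pythagoras then gives b^2 = |L|^2 - a^2, and if b > r the ray misses. A rough sketch in code (my own variable names, not the article's):

    #[derive(Clone, Copy)]
    struct Vec3 { x: f64, y: f64, z: f64 }

    fn dot(p: Vec3, q: Vec3) -> f64 {
        p.x * q.x + p.y * q.y + p.z * q.z
    }

    fn sub(p: Vec3, q: Vec3) -> Vec3 {
        Vec3 { x: p.x - q.x, y: p.y - q.y, z: p.z - q.z }
    }

    // `origin` is the camera, `dir` the normalised ray direction,
    // `center`/`radius` describe the sphere. Returns the distance along the
    // ray to the nearest intersection, if any.
    fn intersect_sphere(origin: Vec3, dir: Vec3, center: Vec3, radius: f64) -> Option<f64> {
        let l = sub(center, origin);        // camera -> sphere centre (hypotenuse)
        let a = dot(l, dir);                // camera -> right angle, along the ray
        let b_squared = dot(l, l) - a * a;  // Pythagoras: opposite side, squared
        if b_squared > radius * radius {
            return None;                    // the ray passes the sphere by
        }
        // The ray enters the sphere a little before the right-angle point and
        // leaves a little after it; that offset is another Pythagoras triangle.
        let offset = (radius * radius - b_squared).sqrt();
        let t = a - offset;                 // nearest of the two crossings
        if t > 0.0 { Some(t) } else { None }
    }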
Of course, because we have the plane fixed at 1 unit in front of the camera, instead of moving that distance we have an adjusting factor that we multiply the coordinates by to change our field of view.
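Concretely, that adjusting factor usually works out to tan(fov / 2). A rough sketch of building a camera ray direction that way (my own names, nothing from the article):

    // Image plane fixed at z = -1; `fov` is the vertical field of view in radians.
    fn ray_direction(px: u32, py: u32, width: u32, height: u32, fov: f64) -> (f64, f64, f64) {
        let aspect = width as f64 / height as f64;
        let fov_adjust = (fov / 2.0).tan(); // the "adjusting factor"

        // Map the pixel centre into [-1, 1] on both axes (y flipped so up is +y).
        let ndc_x = (px as f64 + 0.5) / width as f64 * 2.0 - 1.0;
        let ndc_y = 1.0 - (py as f64 + 0.5) / height as f64 * 2.0;

        // Widening the field of view just scales how far out on the plane we aim.
        let x = ndc_x * aspect * fov_adjust;
        let y = ndc_y * fov_adjust;
        let z = -1.0;

        // Normalise so downstream code can assume a unit direction.
        let len = (x * x + y * y + z * z).sqrt();
        (x / len, y / len, z / len)
    }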
Thanks, I looked at scratchapixel but only found the stuff about how pinhole cameras work.
> Despite that, it also happens to be the simplest way to render 3D images.
I'm not sure I would claim that -- with a line drawing routine in hand (a for loop), you can have 3D perspective renderings of objects with a few matrix vector multiplies.
Line drawing algorithms can be surprisingly tricky. I don't think what you describe would be easier than a basic raytracer, which would be about a page of code, and the most complex math involved is the quadratic formula.
Surprisingly tricky being something around two or three dozen lines of C for filling a triangle (with some constraints), after it's been transformed appropriately.
But that is not the only way. You can also iterate over the image plane and test whether each coordinate is within the triangle. It ends up being very similar to the code you'd have for ray tracing a triangle.
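Something along these lines, roughly: a 2D sketch using the sign of the cross product against each edge, with coordinates assumed to be already projected to screen space and all names made up.

    fn edge(ax: f64, ay: f64, bx: f64, by: f64, px: f64, py: f64) -> f64 {
        // > 0 if (px, py) is to the left of the edge a -> b, < 0 if to the right.
        (bx - ax) * (py - ay) - (by - ay) * (px - ax)
    }

    fn point_in_triangle(p: (f64, f64), a: (f64, f64), b: (f64, f64), c: (f64, f64)) -> bool {
        let e0 = edge(a.0, a.1, b.0, b.1, p.0, p.1);
        let e1 = edge(b.0, b.1, c.0, c.1, p.0, p.1);
        let e2 = edge(c.0, c.1, a.0, a.1, p.0, p.1);
        // Inside if the point is on the same side of all three edges
        // (either winding order).
        (e0 >= 0.0 && e1 >= 0.0 && e2 >= 0.0) || (e0 <= 0.0 && e1 <= 0.0 && e2 <= 0.0)
    }

    // Iterate over every pixel and mark the ones covered by the triangle.
    fn fill_triangle(width: u32, height: u32, a: (f64, f64), b: (f64, f64), c: (f64, f64)) -> Vec<bool> {
        let mut covered = vec![false; (width * height) as usize];
        for y in 0..height {
            for x in 0..width {
                let p = (x as f64 + 0.5, y as f64 + 0.5); // pixel centre
                if point_in_triangle(p, a, b, c) {
                    covered[(y * width + x) as usize] = true;
                }
            }
        }
        covered
    }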
This is correct. RenderMan is now a path tracer, which is more physically correct in a lot of aspects (light bounces, caustics, conservation of energy, etc.). Before that it relied on the Reyes architecture with ray tracing extensions, according to Wikipedia, but Pixar stopped supporting it in 2016.
To be really pedantic about it, RenderMan is an API. Photorealistic RenderMan (aka PRMan) was the Reyes implementation of said API, and the new path tracer is called RIS.
0: https://github.com/k0nserv/rusttracer
1: https://raw.githubusercontent.com/k0nserv/rusttracer/master/...