Was pleasantly surprised to learn that the author of this blog post was none other than the CTO of Stripe! Glad to see a CTO who's still hands-on and even willingly blogs about making so-called 'beginner' mistakes with the float/integer behaviour in Rust. Big props.
Nice! This made me think that working in tech at Stripe is likely to be more fun and productive than in other comparable places. I believe in experts leading, and I also believe in leaders showing their human face - the author did both :)
This is something I like seeing as well. Show your mistakes and let the rest of us learn from them. Real development is messy and shit happens; it's good to show beginners that this is normal and expected.
Not only that, but it was clear that 1/2 == 0 was wrong 21 years ago, dating from PEP 238 -- Changing the Division Operator. https://www.python.org/dev/peps/pep-0238/
> The classic division operator makes it hard [in a dynamic language like Python] to write numerical expressions that are supposed to give correct results from arbitrary numerical inputs ... Another way to look at this is that classic division makes it difficult to write polymorphic functions that work well with either float or int arguments; all other operators already do the right thing. No algorithm that works for both ints and floats has a need for truncating division in one case and true division in the other.
In Python 2.0, this is incorrect, unless you could guarantee that at least one of distance and time was not an integer. The correct solution is something like:

x = x*1.0 # Coerces int to float, preserves complex and -0.0
while other valid-seeming solutions are subtly broken:
x = float(x) # Broken if x is complex
x = x+0.0 # Broken if x is -0.0
Why you would pass in a complex number, I've no idea. But perhaps the imaginary component is 0?
>>> 50/(30+0j)
(1.6666666666666667+0j)
In C this isn't a problem because the "double distance" and "double time" in the argument list ensure the algorithm is dealing with doubles.
In addition, though the PEP doesn't say it, there's some influence from the Alice programming language, which started in the 1990s and built on Python. Quoting "Alice: Lessons Learned from Building a 3D System For Novices" at http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.43.... :
> Although we resisted changing the Python implementation, our user testing data forced us to make two changes. First, we modified Python’s integer division, so users could type 1/2 and have it evaluate to 0.5, rather than zero. Second, we made Python case insensitive. Over 85% of our users made case errors and many continued to do so even after learning that case was significant.
Feedback from Alice helped influence van Rossum's decision to change division. As for "Case Insensitivity", see slide 44 from the OSCON presentation, with that as the title and "[ducks :-]" as the sole content.
Oh, I did something like this a while back as well! Aside from ray tracing, my renderer also supports ray marching, so it can render some cool fractals[1]. Writing path tracers is so much fun, love the write-up!
(I also went through and love this book.) It isn't really written in any particular programming language; it has some pseudocode to describe a few things, but it primarily describes tests to write. It's a really excellent book for kicking the tires of a new language you want to try out.
This article hit home... I took a similar approach and spent a bit of time over the summer familiarizing myself with the language by writing a ray tracer. (Super simple, not as fancy as the one in the article.)
After spending as much time lately as I have in Scala and Python, my immediate reaction to Rust was quite positive. Python has performance issues from 1985 and the build ecosystem seems borderline chaotic. (Made me seriously miss Maven, etc.) Scala seems a lot like C++ - an amazing intellectual accomplishment, a great place to spend all of your time, but not so good as a part time language. (I spend a bunch of my time in Scala mentally expanding out shorthand notation the way I might be mentally macroexpanding in a Lisp.)
Rust, in contrast, seems to have struck a nice balance between expressive power and runtime performance. Expressively, it has a lot of what I like about Scala with a syntax that makes more sense to my C-style upbringing. Performance seems to be everything I'd expect from the fully compiled language that it is. (In terms of performance, there's no way Python would've let me get away with some of what I got away with in my Rust ray tracer.)
Given that the language gave such a positive initial impression, the questions I still have are more about what it feels like in the large, i.e., working with a significantly sized team jointly on a codebase that might last 1, 5, 10 or more years. (Even then, I'm pretty optimistic.)
I guess this thread is the best place to ask for this:
Do you know of good open source real time (preferably gpu based) ray tracing engines out there? That can be used for games? I see a bunch of lists but want to get your expert opinion.
Every time I read another one of these articles I have to ask myself what kind of a programmer am I if I've never written even a basic ray tracer. Seems like a fun hobby project as you can iterate for eternity. I even initially got into programming thru playing with Povray for school art projects.
True, but almost everything is. The Book's entry on statements is like four paragraphs, while the entire rest of that chapter is about expressions.
For example all Rust's loops are expressions. This isn't very useful in many cases, but breaking an ordinary loop can yield a value, so e.g. you can loop checking all the integers until you find one you like, and then return it as the value of the loop. Nice.
Detail that sometimes bites me: the `loop` looping construct is an expression that can yield a value. `for` and `while` technically are expressions, but they can only ever yield `()` (unit type).
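A quick sketch of what break-with-value looks like in practice (the search condition here is arbitrary, just for illustration):

```rust
// Find the first integer above `start` divisible by both 3 and 5;
// the whole `loop` expression evaluates to the value given to `break`.
fn first_fizzbuzz_after(start: i32) -> i32 {
    let mut n = start;
    loop {
        n += 1;
        if n % 3 == 0 && n % 5 == 0 {
            break n; // the loop evaluates to `n`
        }
    }
}

fn main() {
    println!("{}", first_fizzbuzz_after(1)); // prints 15
}
```

Note there's no `return` needed: the `loop` is the function's tail expression.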
Coming from Python, I feel that something like this:
let x = for v in iter {
    if cond(v) {
        break v;
    }
} else {
    default_v
};
would be nice, but this was explicitly rejected by the core team. So for now anyway, only `loop` can break with value.
The thing is, `loop` gets to break with a value because it otherwise doesn't end (so the type of the loop when it doesn't break is never, aka `!`, which is compatible with any type), but a `for` loop will always end anyway, so what's the value of the `for` loop when that happens?
In your example code, I see there's an else clause on the for loop which is presumably special syntax so as to provide the value if it doesn't break, but that feels pretty clumsy to me and I'm not sure there are many cases where it's more readable than what I'd do now.
If I could see a nice way to do it without such clumsy special syntax, I'd favour this, but with the else syntax (or some equivalent) it feels like it doesn't pull its weight.
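For what it's worth, the case the proposed syntax targets is already expressible with iterator adapters; `find` plus `unwrap_or` covers the "break with value, else default" pattern (the condition and default here are placeholders):

```rust
fn main() {
    let default_v = -1;
    // "Break the first matching v out of the loop, else use default_v",
    // written with today's iterator adapters instead of for/else syntax.
    let x = (1..100).find(|v| v % 17 == 0).unwrap_or(default_v);
    println!("{}", x); // 17
}
```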
Cool! Just to clarify the idea: it's not rendering on screen (I was at first curious how you'd do that from Rust in a simple example), but rather saving image frames.
When I wrote my ray tracer, I set up a simple websocket server to stream pixels to a canvas element.[1] It's a trivial way to do this without a ton of GUI code.
Interesting. I'd guess you could do it without GUI, but you still need access to something like WSI / Vulkan to actually get to the screen if you want to do it directly.
In the decade I spent working on RenderMan at Pixar, I learned just how immensely useful it is to have an image viewer running in a separate process, talking to the renderer over a socket or pipe. (The Image Tool, or "It", is RenderMan's viewer.) Having it stay up even if you kill the render or it crashes for some reason, and being able to flip back and forth to easily compare test renders across recompiles, is game-changing.
If I were to start writing a new renderer, the first thing I'd do is to hook it up to an external image viewer over some protocol. These days, I find myself liking TEV (https://github.com/Tom94/tev) a lot as a simple open-source image viewer that supports this and most other basic features that I'd want. See the links in the README for Python and Rust implementations of its protocol.
“Minimal code to put pixels on the screen?” is a question I’ve seen often enough that I’ve made a little gist to link whenever it comes up. https://gist.github.com/CoryBloyd/6725bb78323bb1157ff8d4175d... It requires https://www.libsdl.org/ But, making a window and putting some pixels in it, on all the world’s varied platforms, is the premier feature of that lib.
Yeah, I figured you could use SDL. But probably more interesting to do it more directly with Rust and some WSI functions to get the needed surface (though it would be a lot more code).
With multithreaded rendering, is it possible to use SDL?
Anecdotally, in my CPU-based ray tracers (I have one in C++ and one in Python so far), I have found that SDL-based rendering causes noticeable enough slowdowns that it becomes quite useless once the initial novelty wears off.
I've checked the code under the hood of SDL and it does pretty much the best you can do for fast uploads of an image from the CPU to the GPU.
My CPU-based ray tracer uses the code I linked. SDL_LockTexture, go wide with threads writing into the locked buffer, SDL_UnlockTexture. If I skip the actual ray tracing, I get 1400 FPS uploading a 1024x1024 image.
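The "go wide with threads" part can be sketched in safe Rust without SDL at all; `chunks_mut` hands each scoped thread a disjoint band of the (hypothetically locked) pixel buffer, so no locking between threads is needed. Sizes and the "render" itself are stand-ins:

```rust
use std::thread;

const W: usize = 256;
const H: usize = 256;

// Fill a pixel buffer in parallel row bands, the way you'd fill the
// buffer handed back by SDL_LockTexture (SDL itself omitted here).
fn render_parallel(threads: usize) -> Vec<u32> {
    let mut pixels = vec![0u32; W * H];
    let rows_per_band = H / threads;
    thread::scope(|s| {
        for (band_idx, band) in pixels.chunks_mut(rows_per_band * W).enumerate() {
            s.spawn(move || {
                for (i, px) in band.iter_mut().enumerate() {
                    let y = band_idx * rows_per_band + i / W;
                    let x = i % W;
                    // Trivial gradient instead of real ray tracing.
                    *px = ((x as u32) << 8) | (y as u32);
                }
            });
        }
    });
    pixels
}

fn main() {
    let pixels = render_parallel(4);
    // Bottom-right pixel: x = 255, y = 255.
    assert_eq!(pixels[W * H - 1], (255u32 << 8) | 255);
}
```

After the threads join (the scope guarantees it), you'd call SDL_UnlockTexture and present.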
Antarctica looks too large, so I'm guessing that the implemented uv-mapping and the projection used to create that rectangular Earth texture are not the same, hence the distortion.
p.s. the Earth is also rotating the wrong direction!
3D games have basically never used ray tracing. They're starting to, a little bit. Some old 3D games (think Doom-era) used an algorithm called "ray casting", which is similar to ray tracing but much more primitive and much faster - among other simplifications, instead of one ray per pixel, you only have one ray per column.
Doom’s predecessor Wolfenstein 3D was famous for using ray casting, and the Doom engine inherited ray casting and extended it to handle different floor heights. https://lodev.org/cgtutor/raycasting.html
That's Wolfenstein 3D though. It's been a few years since I last checked Doom's source code, but I don't remember seeing anything about raycasting in it. And a quick search leads me to things like [1], which seems to confirm it. But then all the replies to my comment seem sure that it used raycasting, so maybe I'm missing something.
Maybe you’re right! I could be wrong; I was only repeating what I’ve heard secondhand and read in blog posts. Looking at the code right now, there is a ray casting function called P_CheckSight() that is used for collision and enemy sight tests, but it isn’t part of the core rendering algorithm. Carmack did say “I used the BSP tree for rendering things” but also that the basic rendering concept is “horizontal and vertical lines of constant Z”. It seems entirely possible that this engine is a sort of hybrid of what we think of as “ray casting”: not exactly what someone assumes when hearing that phrase, but not entirely different either.
The early Doom 1 and 2, being software renderers, used ray casting (386 protected mode FTW, to boot!). The later versions, though, were part of the first wave of games to leverage GPU cards. I don't know if those newer releases still kept any ability to render purely in software, though.
Ray tracing is extremely costly for real time rendering and is almost never used unless there is some trickery. We have some limited amount of it now, especially since the Nvidia RTX series of GPUs, but rasterization is still king.
Among the trickery being used is ray casting, used in early 3D games like Doom, which is a kind of 2D ray tracing that works by column instead of by pixel. Real-time ray tracing techniques, in particular signed distance field raymarching, are a staple of the demoscene; this is made possible by using mathematically defined objects.
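To make the raymarching idea concrete, here's a minimal sphere-tracing sketch against a single mathematically defined sphere (all names, constants, and tolerances are illustrative, not from any particular demo):

```rust
// Signed distance from point `p` to a sphere of radius `r` at the origin:
// negative inside, zero on the surface, positive outside.
fn sphere_sdf(p: [f32; 3], r: f32) -> f32 {
    (p[0] * p[0] + p[1] * p[1] + p[2] * p[2]).sqrt() - r
}

// March a ray from `origin` along unit vector `dir`. Because the SDF
// bounds the distance to the nearest surface, we can safely step by
// its value each iteration ("sphere tracing").
fn raymarch(origin: [f32; 3], dir: [f32; 3]) -> Option<f32> {
    let mut t = 0.0f32;
    for _ in 0..128 {
        let p = [
            origin[0] + t * dir[0],
            origin[1] + t * dir[1],
            origin[2] + t * dir[2],
        ];
        let d = sphere_sdf(p, 1.0);
        if d < 1e-4 {
            return Some(t); // close enough: treat as a surface hit
        }
        t += d;
        if t > 100.0 {
            break; // escaped the scene
        }
    }
    None
}

fn main() {
    // A ray aimed straight at the unit sphere from z = -5 hits at t ~ 4.
    let hit = raymarch([0.0, 0.0, -5.0], [0.0, 0.0, 1.0]).unwrap();
    println!("hit at t = {}", hit);
}
```

A real renderer would do this once per pixel and shade using an SDF-derived normal, but the core loop is just this.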
For prerendered graphics (e.g., Pixar movies), the dominant technique is path tracing, a randomized variant of ray tracing that produces a noisy image which is progressively refined. It is even more costly than ray tracing, but is much better at global illumination.
An interesting caveat is that ray tracing has a high up-front cost, but its rendering time is less sensitive to scene complexity than traditional polygon rasterization. Beyond a certain level of scene complexity, ray tracing can be faster; it's just that games are generally limited by what can fit in the memory of a modern graphics card.
Regarding Pixar, they actually avoided ray tracing until they decided they really did need accurate reflections for the movie Cars. The reason is that traditional rendering is memory parallel: you can render a scene that won't fit in memory on a single computer by spreading the scene across a cluster. With ray tracing, the memory access patterns aren't predictable, so you have to have the whole scene fit in memory on every compute node. This doesn't matter for games because games don't divide the graphics computation across multiple machines.
I suspect the nested option type in this particular implementation has to do with the step-by-step implementation in the guide, as you gradually add various features one at a time. As a result, you might end up with some features in your code that reflect less-general behaviors from earlier chapters, if you don't fully refactor them as you go along.
I don't think there are actually any cases in which you can usefully do something with the color but without the ray (the reason that the ray output is optional is that the ray might be absorbed rather than reflected, but in this case I think its color becomes meaningless, or becomes the equivalent of (0.0, 0.0, 0.0)). However, someone implementing this based on the tutorial might not think of it that way, because the different features in question were added separately at different times.
In the original C++ example in the tutorial, there is a boolean return value indicating whether there is an output ray or not, which I think corresponds to the Some case for the option type here. As there is ultimately only one boolean, not two, and as material implementations are expected to set both the ray and the color when scattering occurs, I think it's correct that you could combine the vector and color return values into a single struct wrapped in a single option type, at least with the implementation strategy that the tutorial is suggesting.
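That refactor could look something like this; the type and field names below are made up for illustration, not the article's actual code:

```rust
// Stand-in types for illustration.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Ray { dir: [f64; 3] }
#[derive(Debug, Clone, Copy, PartialEq)]
struct Color { r: f64, g: f64, b: f64 }

// Instead of a nested Option<(Option<Ray>, Color)>, mirror the C++
// boolean return with a single Option around a struct: either the ray
// scattered (with an attenuation color) or it was absorbed.
struct Scatter {
    ray: Ray,
    attenuation: Color,
}

// Toy material: absorbs or scatters based on a flag standing in for
// whatever probabilistic test a real material would do.
fn scatter_or_absorb(absorbed: bool) -> Option<Scatter> {
    if absorbed {
        None // absorbed: no ray, and a color would be meaningless anyway
    } else {
        Some(Scatter {
            ray: Ray { dir: [0.0, 1.0, 0.0] },
            attenuation: Color { r: 0.5, g: 0.5, b: 0.5 },
        })
    }
}

fn main() {
    assert!(scatter_or_absorb(true).is_none());
    let s = scatter_or_absorb(false).unwrap();
    assert_eq!(s.ray.dir, [0.0, 1.0, 0.0]);
    assert_eq!(s.attenuation, Color { r: 0.5, g: 0.5, b: 0.5 });
}
```

This makes the "ray and color come and go together" invariant unrepresentable to violate, which is exactly the kind of thing Rust's type system is good at.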
I worked through about 90% of the guide in Python and then about 60% in Rust; this article definitely makes me want to pick it up again.
Edit:
> The `Some((None, Srgb))` case would be a non-reflective surface, it changes the color if hit by a ray but does not reflect the ray further.
While this feels like a plausible guess, I don't believe it actually aligns with the strategy suggested in the tutorial. See section 8:
Diffuse materials (non-specular reflection) are implemented by having them scatter incident light in a random direction, possibly with some attenuation and some change of color. The change of color, though, is only meaningful when rays are scattered. See also section 9.3
(talking about how you can either scatter every ray and attenuate its intensity, or scatter a fraction of rays with no attenuation but absorb some rays at random, with the same statistical result on the output)
To each their own. After more than 40 years of programming, and being quite bored, I've found Rust to be a breath of fresh air that has made me interested in coding again.