Sphere Rendering: Flat Planets (emildziewanowski.com)
246 points by skilled 7 months ago | 48 comments



The author dismisses cubemaps pretty quickly, but imo it's the simplest solution & it's what I did when rendering dynamic gas giants on my own personal project a number of years back*. Using a cubemap doesn't result in a 6x increase in memory usage: you're just splitting one large rectangular texture into 6 smaller faces, so the total texture detail is the same. The nice part about a cubemap is you don't have to worry about pole pinching at all, and you can use a 3- or 4-dimensional noise function to easily create a seamless flow field for texture animation/distortion.

* https://www.junkship.net/News/2016/06/09/jupiter-jazz
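
To make the idea concrete, here's a minimal sketch (not the code from that project) of the mapping from a direction on the sphere to a cube face plus face UV. It follows the usual OpenGL-style face layout; orientation conventions vary between engines, so treat the signs as an assumption:

    // Maps a unit direction to a cube face index (+X, -X, +Y, -Y, +Z, -Z) and
    // a (u, v) pair in [0, 1] on that face. This is just the gnomonic projection
    // onto whichever face the major axis of the direction points at.
    static (int face, double u, double v) DirectionToCubeFace(double x, double y, double z)
    {
        double ax = Math.Abs(x), ay = Math.Abs(y), az = Math.Abs(z);
        int face;
        double ma, sc, tc;

        if (ax >= ay && ax >= az)      { face = x > 0 ? 0 : 1; ma = ax; sc = x > 0 ? -z : z; tc = -y; }
        else if (ay >= ax && ay >= az) { face = y > 0 ? 2 : 3; ma = ay; sc = x; tc = y > 0 ? z : -z; }
        else                           { face = z > 0 ? 4 : 5; ma = az; sc = z > 0 ? x : -x; tc = -y; }

        // Remap from [-1, 1] on the face to [0, 1] texture coordinates.
        return (face, 0.5 * (sc / ma + 1.0), 0.5 * (tc / ma + 1.0));
    }

Animating the flow field is then just a matter of evaluating the 3D/4D noise at the direction itself (plus time), so neighbouring faces automatically agree at their shared edges.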


If you want to avoid weird seam artifacts, using 2 hemispheres of stereographic projection is probably better still. Each hemisphere projects to a disk, but you can just fill out the texture to the boundaries of a square, duplicating a bit of each hemisphere into the corners of the other's texture (or you could leave those parts blank if you want). There's a 1:2 difference in scale from the center to the edge of each disk, so you could argue this is slightly wasteful of pixels for a given minimum required level of detail. But the projection is conformal, so it's considerably less tricky to figure out how to sample it when deciding the color for destination pixels drawn at steep perspective, and the stereographic projection is very cheap to compute in both directions (1 division + some additions and multiplications per projected point), even cheaper than the gnomonic projection used for a cubemap.
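
For reference, here is a minimal sketch of the forward and inverse maps for one hemisphere (the other hemisphere is the same with z negated), showing the single division each way:

    // Northern hemisphere of a stereographic atlas: project from the south pole
    // (0, 0, -1) onto the equatorial plane, so the hemisphere lands inside the
    // unit disk. One division per point in each direction.
    static (double X, double Y) Stereographic(double x, double y, double z)
    {
        double s = 1.0 / (1.0 + z);          // the one division
        return (x * s, y * s);
    }

    static (double x, double y, double z) StereographicInverse(double X, double Y)
    {
        double r2 = X * X + Y * Y;
        double s = 1.0 / (1.0 + r2);         // the one division
        return (2.0 * X * s, 2.0 * Y * s, (1.0 - r2) * s);
    }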

If you want something conformal that has less scale variation and wastes fewer corner pixels than a pair of stereographically projected hemispheres, and is still not too conceptually tricky, you can use a pair of slightly overlapping Mercator projections at right angles to each other, covering the sphere like the two pieces of leather covering a baseball. Each one can have a rectangular texture. There are some NOAA papers suggesting this approach for the grids used to solve the differential equations in weather simulations of the Earth.

The most pixel-efficient projection I know starts by breaking the sphere into an octahedron, then taking each octant to be covered in a grid of hexagonal pixels, using "spherical area coordinates" in each octant to determine the grid. Each octant can then be represented in an ordinary square-pixel image by a half square ("45–45–90 right triangle"), so the result is something like this <https://observablehq.com/@jrus/sac-quincuncial> with a hexagon grid like <https://observablehq.com/@jrus/sphere-resample> (scroll a few examples down from the top of the page). But figuring out the details about how to sample the texture when you need to cross edge boundaries, etc., makes using this quite a bit more fiddly than the 2 stereographic projection version. And there will be some seam artifacts.



And if you want to further minimise distortion, you could group the triangular faces of an icosahedron into 10 rhombuses, each covered by one square texture.

It's more math, though, and usually not worth it unless you're already planning to subdivide the surface further for some other reason.


Thank you for posting that. I might use that curl-noise to generate clouds on a planetary-scale game I'm working on right now.


I’ve since done a bunch more on planetary cloud rendering (I still need to do a proper write-up); it’s a combination of volumetric noise, flow fields & atmospheric scattering.

https://x.com/mr_sharpoblunto/status/1653986502106570757


unrelated: if you want to talk about that particular type of game shoot me an email (see profile)


Texture pinching at the poles is an extreme version of a form of distortion that is actually present over the whole surface. The distortion is usually only obvious at the poles, but it can also become visible elsewhere if you use adaptive subdivision of the sphere into triangles, because the distortion will change as the subdivision changes.

The problem is that the sphere is divided into quads, each represented by two triangles. The two triangles have equal area in UV space, but one of them is smaller than the other in 3D space (the one whose horizontal edge is closer to the pole). Despite this difference, the UVs are interpolated linearly across each triangle, which means half of the texture is shrunk and half is stretched. At the pole this becomes extreme because one of the triangles actually has zero area in 3D space, so only half of the texture is actually rendered, which causes obvious seams between the triangles.

The right solution is to calculate the UV coordinates per pixel in the pixel shader, instead of per vertex with linear interpolation. Done properly, the poles will be seamless.
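
As a sketch of what "per pixel" means here (written as plain C# rather than shader code): take the unit direction from the sphere's centre to the shaded point, typically an interpolated object-space position, and derive the equirectangular UV from it directly:

    // Per-pixel UV from the unit direction n = (nx, ny, nz) at the shaded point,
    // instead of interpolating vertex UVs linearly across the triangle.
    static (double u, double v) SphereUV(double nx, double ny, double nz)
    {
        double u = 0.5 + Math.Atan2(nz, nx) / (2.0 * Math.PI);
        double v = 0.5 - Math.Asin(ny) / Math.PI;
        return (u, v);
    }

Note the longitude coordinate still wraps from 1 back to 0 along one meridian, so mipmapped sampling generally needs explicit texture gradients (or similar care) to avoid a thin seam there.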


Hi! Could you provide some more information please (either hints/keywords to help me search, or links if you have them handy)? I've recently started dabbling a bit in rendering, shaders and game dev, so I'd love to know more!


See the Rasterisation algorithms[1] section of Wikipedia's Texture mapping article.

[1]: https://en.wikipedia.org/wiki/Texture_mapping#Rasterisation_...


Thank you!


Doesn't scaling W (instead of leaving it as 1 in the XYZ entries) fix this?


Yes, I think it could probably be fixed this way, if you calculate the W coordinates appropriately.


Are spheres ever rendered with multiple rendering poles to decrease the pinching you describe?

As an example, a sphere with a “true north” pole that is rendered with one rendering pole at “true north” and another at 0°, 0°. When the user is looking at the sphere sidelong, the former is used, and if the user looks at it from closer to the “true north” pole, the equatorial render is used.



Reminds me to revisit displacement mapping — probably not going to be a replacement for the problem the author is trying to solve, but simpler and kind of fun.

I wrote a kind of cool music visualizer for SoundJam perhaps 25 years ago that I called "Eclipse". The input data was an array of levels (probably integers) across some range of audible frequencies — a left and a right channel.

Think of the eclipsed sun with corona ejections — that was what I was going for. The music data was the "ejections". The frequency of the data determined where around the disc of the sun it would appear.

Over time the ejecta moved away from the sun and soon disappeared as they "cooled" to black — the initial color of the ejecta being white for the strongest signals, then yellow, orange, red, and brown when weaker. (Think of the black-body curve.)

I had to keep a circular buffer of the sound data values (an array of arrays) large enough to represent how much time the ejecta would "live" before disappearing to black.

In any event, the whole display of the "eclipse" ejecta was just a displacement map. I had pre-calculated a bitmap where the value for each "pixel" was an offset into the buffer of sound level values. "Pixels" close to the surface of the sun would have offsets to the new data coming in; pixels further out would have offsets into the tail of the buffer that was about to expire. With the circular, radial aspect of the ejecta, there was some math involved in generating the displacement values in order to map from an essentially radial space to a Cartesian one.

With that established, the main loop simply pulled in new sound values, overwrote the oldest values in the circular buffer with the new ones, then iterated row- and column-wise over the displacement map, grabbing the corresponding sound data value, mapping it to a color in a fixed palette, and pushing that color into the display buffer.
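
A hypothetical sketch of that loop (all the names here are invented for illustration): the displacement map stores, per pixel, how old a sample to read and which frequency bin it came from, so each frame is one circular-buffer lookup and one palette lookup per pixel:

    // Hypothetical fields: _history is the circular buffer of per-frame levels,
    // _displacement the precomputed per-pixel (age, bin) offsets, _palette the
    // fixed white -> yellow -> orange -> red -> black colour ramp.
    int[][] _history;                      // [frames][frequencyBins]
    (int age, int bin)[,] _displacement;   // [width, height]
    Rgba32[] _palette;
    Rgba32[,] _display;
    int _newest, _width, _height;

    void DrawFrame(int[] newLevels)
    {
        _history[_newest] = (int[])newLevels.Clone();   // overwrite the oldest frame
        _newest = (_newest + 1) % _history.Length;

        for (int y = 0; y < _height; y++)
        {
            for (int x = 0; x < _width; x++)
            {
                var (age, bin) = _displacement[x, y];   // age 0 = newest samples
                int frame = ((_newest - 1 - age) % _history.Length + _history.Length) % _history.Length;
                int level = Math.Min(_history[frame][bin], _palette.Length - 1);
                _display[x, y] = _palette[level];
            }
        }
    }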

Although not as "flashy" as other visualizers, there was, I thought, a calm beauty to it. And it very much represented the music data (as opposed to later visualizers that seemed unable to settle down even when presented with silence).


Wow, this article is cool, but it loads a lot of shaders the more you scroll down the page, and unless you are on a very powerful computer, your browser might slow to a crawl.


What is a very powerful computer? Would any laptop no older than 10 years with integrated graphics have trouble rendering these?


FWIW my laptop from last year with integrated graphics had a conniption. Was barely able to see the animated gas giant picture and wasn't able to scroll past it due to the lagging page. I don't know if it's the graphics rendering or something else going on.


Interesting how differently people interpret "realism". This was really stark in the realm of games when resources were more limited - yet some games managed to be more immersive with blocky pixelated low-palette graphics than today's big-budget productions.

> the gas giant’s surface texture will be generated in realtime using a pixel shader and a render-to-texture approach

It's fascinating to watch someone descend into a rabbit hole, starting with an impractical, unbounded approach, finding ways to invite performance bottlenecks, and carrying on, and on and on, meandering towards... did OP eventually get this working?

I guess this is a difference between a project with real-life constraints, and a hobby. Let's use a pre-rendered, animated texture for the gas giant, and move on to the rest of the project - come back to cosplay Slartibartfast once everything is up and running.


I was thinking, maybe games were more immersive with lower fidelity because it left something for our imagination to do. And that made us more engaged.


Reminds me of this article on a similar approach: https://bgolus.medium.com/rendering-a-sphere-on-a-quad-13c92...


Do no GPUs or 3D libraries include a function to do this by just horizontally walking each line of a circle and mapping the X,Y from the texture to the "3D" position on the sphere?

It seems like a magnificent waste of resources to rotate and project a million vertices for the triangles of this thing when a sphere is a circle, the algorithm for drawing a circle is simple, and you only need to walk horizontally and then drop down a line and keep going until you've drawn every line.

I remember doing stuff like this in the late 80s to precompute magnifying lenses, like the one in Second Reality.


Probably due to the fact that spherical linear interpolation would be slower and more specialised than linear interpolation, which the entire render pipeline is already constructed to handle. You can draw a perfect, anti-aliased sphere using a single shader program and 1 triangle.



GPUs don't rasterise line-by-line like old software renderers, and they basically only know how to rasterise triangles. If you're rendering on GPU then I think the approach in the article is pretty good, despite its apparent complexity.

This is a great series of blog posts about how GPU based rendering is structured (long but excellent if you're interested):

https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-...

Part 6 is about rasterisation.


> GPUs ... basically only know how to rasterise triangles.

www.shadertoy.com would like a word.


Pixel shaders are covered in parts 8 and 9 of the blog series I linked to.

What aspect of shadertoy disagrees with what I wrote?


All of it that shows rendering beyond what GPUs "only know".


I said "rasterisation", not "rendering".

Rasterisation is one specific step of the rendering process (+). GPUs rasterise triangles. If you're writing a software renderer you can include algorithms to directly rasterise other shapes such as circles, which is what started this whole thread. But if you're rendering with a GPU and relying on its built in rasterisation then triangles is where you start, even if circles is what you ultimately want.

(+) It's also possible to render without the rasterisation step at all - for example, a pure ray tracing based renderer doesn't have an explicit rasterisation step as it's normally thought of in graphics terms.


Ah, so you meant the only rasterisation they know is triangles. OK, got you.


GPT gave me this, which seems to work when I ran it:

    // Orthographic software rasterisation of a textured sphere: walk the pixels
    // inside the circle, reconstruct z from the circle equation, rotate the point
    // around the vertical axis, then convert it to equirectangular UVs.
    public void DrawSphere(int centerX, int centerY, int radius, double angle)
    {
        for (int y = -radius; y <= radius; y++)
        {
            for (int x = -radius; x <= radius; x++)
            {
                if (x * x + y * y <= radius * radius)
                {
                    // Depth of the front-facing point of the sphere at this pixel.
                    double z = Math.Sqrt(radius * radius - x * x - y * y);

                    // Spin the sphere around its vertical (y) axis.
                    double xRot = x * Math.Cos(angle) - z * Math.Sin(angle);
                    double zRot = x * Math.Sin(angle) + z * Math.Cos(angle);

                    // Longitude and latitude of the surface point, mapped to [0, 1].
                    double u = 0.5 + (Math.Atan2(zRot, xRot) / (2 * Math.PI));
                    double v = 0.5 - (Math.Asin(y / (double)radius) / Math.PI);

                    int textureX = (int)(u * _textureWidth) % _textureWidth;
                    int textureY = (int)(v * _textureHeight) % _textureHeight;

                    Rgba32 color = _texture[textureX, textureY];
                    SetPixel(centerX + x, centerY + y, color);
                }
            }
        }
    }


Are all the downvotes because I used GPT to produce the code? The code worked just fine, so it's not bad code.


FYI, the impostor planet as a 2D pixel shader on a backdrop is a godsend for those doing procedural universes where you have lots of planets. Due to memory and GPU bandwidth, I restrict my spherical-cube-based planets to 1 instance and draw the nearest bodies using the backdrop technique. When in space, there’s a distance in the middle of two bodies where both are rendered as impostors, before settling on the nearest body for spherical quad-tree goodness. It’s not perfect but the illusion is near flawless. This also makes it rather trivial to add light physics to your camera lens for long-distance galaxies and bodies, since they are on a plane: such as when the moon comes up and looks bigger than it is, or a galaxy in the distance being warped by gravity.


Love the gas giant and the corresponding page: https://emildziewanowski.com/flowfields/


An icosphere would unwrap a lot more smoothly, and has regular vertex positions and element sizes. Not trivial to unwrap, but doable.


Great explanations and visuals!

Triplanar mapping may have worked as well. It would have fixed the seam and also the polar region, and given the simple texture patterns in the examples it may well have held up.
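
For anyone unfamiliar: triplanar mapping samples the same tiling texture along each of the three world axes and blends the results by how closely the surface normal faces each axis, so there is no UV unwrap and hence no seam or pole pinching, at the cost of three texture reads. A minimal sketch using System.Numerics, with sample2D standing in as a hypothetical bilinear texture lookup:

    using System;
    using System.Numerics;

    static Vector3 TriplanarSample(Vector3 p, Vector3 n, Func<float, float, Vector3> sample2D)
    {
        Vector3 w = Vector3.Abs(n);
        w /= (w.X + w.Y + w.Z);              // blend weights, summing to 1

        Vector3 xProj = sample2D(p.Y, p.Z);  // projection along the X axis
        Vector3 yProj = sample2D(p.X, p.Z);  // projection along the Y axis
        Vector3 zProj = sample2D(p.X, p.Y);  // projection along the Z axis

        return xProj * w.X + yProj * w.Y + zProj * w.Z;
    }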


Most of the images of moons in this post resemble false-color imagery, and don’t reflect how these moons would actually appear to human eyes. Just fyi.


Man, HNers have an expert opinion on everything. This person appears to be weighing in on the color accuracy of fictional artwork moons.


From TFA:

> I’m taking a similar approach in my project. The skybox I’m working on will feature an animated moon and a gas giant, adding some extra visual flair. Both planets will spin, and in addition, the gas giant will feature moving atmospheric currents. Normally, these motions are too subtle to be seen, but I’ll accelerate them for greater visual impact.

Sounds like they're attempting to create background imagery for a video game or movie or something. Maybe they're fine adjusting the colors just like they adjusted the wind speeds. But they should know that they're trading off accuracy.


Yea! And Picasso's paintings don't use an accurate perspective projection and don't reflect how his subjects would actually appear to human eyes, either!


That page literally hung my Win10 PC. I had to power cycle to recover!


That’s definitely not an issue with the page.


I think that there's something up with that page. There may be a pathological path when suitable GPU support is not available.

My machine slowed down tremendously, with a YouTube video in another tab pausing entirely until I managed to close the tab. I tried again with Task Manager open and saw a process named "System Interrupts" with 75% of the CPU before I choked off the tab again.


I agree there's something wonky with the page. My argument is different: A web page should not be able to take down the computer, no matter what. Web pages are untrusted; this is a DoS threat.

Really, it also shouldn't be able to choke the browser.


Page unscrollable (presumably) without enabling third party JS.


This feels very complicated and a lot like yak shaving. I would have just created a shader program with some n-D Perlin noise on top.

Effects like swirl etc. are easy, and you can use a range of parameters for the noise input: polar angle, azimuth, time, 3D coordinates, ... Configuration could be as easy as adding things like hue, etc.


You are absolutely right. That's how I do it in my game project. Just ignore the ignorant downvoters.



