How to generate uniformly random points on n-spheres and in n-balls (extremelearning.com.au)
168 points by egorpv 9 months ago | 81 comments



I actually needed this at work once! We needed to fuzz people's addresses in a mapped view for analytics, without revealing PII. It ended up never being shipped, but we needed to fuzz geographic data and the thinking was like:

1. Truncate your lat longs to some arbitrary decimal place (this is very very stupid, you end up with grid lines [1])

2. The above method ^^ but everyone tries basically doing a random angle + a random length along that angle, which doesn't generate uniform results in a circle, as the article mentions [2]. So then you try generating points in a box that encloses your circle and rejecting anything outside the circle, but that smells bad. So you do some googling and find method 6 listed in the article (Good! and Fast!)

3. Realize that fuzzing points is stupid, what you really want is to view points in aggregate anyways, so you try heat maps

[1]: Ok you always end up with grid lines, but truncating to like 1-6 decimal places produces very obvious grid lines to the human eye

[2]: Try this in pyplot! You'll see right away with ~100 points that it's not uniform in the way you expect (see the sketch below)
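
For anyone who wants to reproduce footnote [2] quickly, here's a rough sketch (assuming numpy and matplotlib): the naive radius clumps points at the center, while taking the square root of a uniform variate spreads them uniformly over the disk.

    import numpy as np
    import matplotlib.pyplot as plt

    n = 1000
    theta = np.random.uniform(0, 2 * np.pi, n)
    u = np.random.uniform(0, 1, n)

    fig, axes = plt.subplots(1, 2, figsize=(8, 4))
    for ax, r, title in [(axes[0], u, "naive: r = u"),
                         (axes[1], np.sqrt(u), "correct: r = sqrt(u)")]:
        ax.scatter(r * np.cos(theta), r * np.sin(theta), s=2)
        ax.set_title(title)
        ax.set_aspect("equal")
    plt.show()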


I needed something related, n roughly evenly distributed points on the surface of a sphere. Ended up using a Fibonacci spiral.

https://extremelearning.com.au/how-to-evenly-distribute-poin...
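
A minimal sketch of the Fibonacci-spiral idea (my paraphrase of the linked article, assuming numpy; not the author's exact code):

    import numpy as np

    def fibonacci_sphere(n):
        """n roughly evenly spaced points on the unit sphere via a Fibonacci spiral."""
        golden_angle = np.pi * (3.0 - np.sqrt(5.0))     # ~2.39996 radians
        i = np.arange(n)
        z = 1.0 - (2.0 * i + 1.0) / n                   # evenly spaced heights
        r = np.sqrt(1.0 - z * z)                        # radius of each latitude circle
        theta = golden_angle * i
        return np.column_stack((r * np.cos(theta), r * np.sin(theta), z))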


Hey me too, very interesting problem. I also used this method: https://www.sciencedirect.com/science/article/abs/pii/S00104...


Force directed layout is my favorite for this.


Neat. My n was on the order of thousands, so that would have taken a while, but it could have been cached.


...but then the algorithmic complexity goes from O(N) to O(N^2), or at least O(N log N), since points have to interact with each other.

(N denotes the number of points you need to generate)


I confess I've never used this in a situation where time complexity was at all relevant. For one thing, it is possible that a result only needs to be computed once for each N (and per shape, since this approach works on a wider range of shapes than just spheres).


The rejection method smells pretty good to me, in the sense that it should be pretty obvious to anybody with, like, middle school level math I think (right?).

It might fail for higher dimensions, but lots of programs only run on a 3D sphere of a planet, haha!


Fail is an understatement: the ratio between the two volumes is basically (n/2)! (4/pi)^(n/2), which is also the expected number of tries you'll need. The time doesn't merely grow exponentially; it grows faster than exponentially.

I don't actually know of any useful algorithms with worse asymptomatic behaviour.
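
To put concrete numbers on that growth, here's a rough sketch (assuming scipy) of the expected tries per accepted sample when rejecting from the enclosing cube [-1, 1]^n:

    # Expected tries = volume of [-1, 1]^n divided by the volume of the unit n-ball,
    # i.e. Gamma(n/2 + 1) * (4/pi)^(n/2); computed in log space to avoid overflow.
    import numpy as np
    from scipy.special import gammaln

    for n in (2, 3, 5, 10, 20, 50):
        log_tries = gammaln(n / 2 + 1) + (n / 2) * np.log(4 / np.pi)
        print(f"n = {n:2d}: expected tries ~ {np.exp(log_tries):.3g}")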


Sure, but for the most-used real-world geometry, n = 2 or 3. So how fast it grows really doesn't matter.

Most maps are 2-dimensional and this is fine.


When I saw "n" in the title I assumed they wanted something that worked reasonably for large n. Rejection is no good because of the notorious "curse of dimensionality". So my idea was to choose a suitable distribution on the radius, then draw from it and choose the angles at random (not sure what those angles are called). You might have to delete the point at the center for that to work.


The article explicitly discusses both. Versions for low dimensions that have favorable properties, and those for high dimensions.

The method you propose is not used in high dimensions in practice, as it would involve evaluating on the order of n^2 trigonometric functions, and it is also far harder to implement than the methods discussed in the article.


Maybe n=4 if you have a timestamp as another dimension.


Do your timestamps occupy a hypersphere?


A 1-dimensional ball is a line segment, so sure.


Quibble: I don’t think “fail” is an understatement; it is as strong a word as anything else for something that doesn’t work.

Actually, it is an overstatement I think, or something orthogonal; the algorithm is still correct, just wildly inefficient. Fail should be reserved for algorithms that produce the wrong result, not ones that produce the right result very slowly, however slowly.


> I don't actually know of any useful algorithms with worse asymptomatic behaviour.

Note: "asymptomatic" does not mean the same as "asymptotic"


Most of my code has bad asymptomatic bugs but they never show up in testing because the tests don’t have data sets big enough to show the symptoms!


I doubt that the number of dimensions is the parameter that grows. It’s going to be constant for 99.99% of cases.


I'd just have used Uber's hexagonal tiling library. https://www.uber.com/blog/h3/


This is what I landed on! It actually works very very well for aggregate data.


A long time ago, I had the same problem while ray tracing using Monte Carlo techniques. Using the Mersenne Twister fixed the clustering and grid-like patterns in the random samples. https://en.m.wikipedia.org/wiki/Mersenne_Twister


What if someone lives in a really remote location so they're the only ones in the heat map cell?


I should think that after defining a uniform distribution of points, you could cluster them as needed to form larger, lower-resolution chunks according to population size, binning everyone in that region to, say, the central-most point. That could then be adaptive as the populations change.


This is the correct answer. You need an aggregation threshold that restricts precision when population count is too low. In this case, the cell size needs to increase until there are at least N users.


Another user here mentioned Uber's h3, which is actually what I used. You end up being able to anonymize over arbitrarily large geographic areas using something like a tile, rather than a point


Assign everyone a fixed random offset that is large enough to obscure the address within some reasonable radius but small enough not to drastically skew the heat map at a lower zoom level.


As an RA, before starting grad school, I hacked together some code to choose a random point on a sphere, for some psychophysics experiment I was doing.

Fortunately, I never used it for anything, because I made the classic naive mistake of simply choosing a random theta in 0,2pi and phi in -pi,pi, which ends up with points biased towards the poles.

Somehow *12 years later* my subconscious flagged it up and I woke up in the middle of the night realizing the issue. Even though I'd never revisited it since then!

https://github.com/dmd/thesis/commit/bff319690188a62a79821aa...
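
For anyone landing here with the same bug, one standard fix (not necessarily what that commit does) is to keep the uniform azimuth but draw z, the cosine of the polar angle, uniformly from [-1, 1]; a minimal sketch assuming numpy:

    import numpy as np

    def random_point_on_sphere():
        """Uniform point on the unit sphere: uniform azimuth, z uniform in [-1, 1]."""
        theta = np.random.uniform(0.0, 2.0 * np.pi)   # azimuth
        z = np.random.uniform(-1.0, 1.0)              # cosine of the polar angle
        r = np.sqrt(1.0 - z * z)
        return np.array([r * np.cos(theta), r * np.sin(theta), z])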


A friend of mine made this mistake a long time ago when sampling points in a circle. He was baffled and couldn't understand why there was a concentration of points around some areas of the circle. My explanation: draw a square around this circle; trace a horizontal line passing through its center; now draw a diagonal line; can you see that a randomly sampled point is more likely to be closer to the diagonal than to the horizontal line? He said "sure!" and I asked "why?", to which he promptly answered "because it's longer." He then asked what he should do about it, and I simply said: "just ignore the points that are too far." He immediately understood that points inside the square but outside the disk were the reason for the concentration of the points.
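
In code, the "just ignore the points that are too far" advice is only a couple of lines; a rough sketch assuming numpy:

    import numpy as np

    def random_point_in_disk():
        """Rejection sampling: draw from the enclosing square, keep points inside the disk."""
        while True:
            x, y = np.random.uniform(-1.0, 1.0, size=2)
            if x * x + y * y <= 1.0:          # accepted with probability pi/4 per try
                return x, y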


> simply choosing a random theta in 0,2pi and phi in -pi,pi, which ends up with points biased towards the poles.

Am I right in thinking that it’s because circles of constant theta on the sphere contain the same ‘expected’ number of points (since phi is uniformly distributed), and these circles get smaller (and in the limit, shrink to a point) as one travels towards the poles — hence, higher density of points…?

…makes me realise that spherical polar coordinates, as natural as they seem, aren’t in some way homogeneous in the sense that whilst the mapping (theta, phi) —> (x, y, z) is continuous, it isn’t uniformly so… or something like that. I don’t know enough (possibly differential) geometry to articulate it properly, but it feels like the problem of getting genuinely uniform sampling from a sphere (or indeed any manifold) is somewhat equivalent to having a ‘nice’ coordinate system (one that respects the symmetry of the sphere). Basically, polar coordinates are prejudiced and treat points differently depending on how close to the poles they are.

By definition, our sampling in the domain space (two intervals) is uniform; the problem comes when we project through a coordinate system that doesn’t respect this.

Which better solution did you use? I’m having trouble reading your code on my current device.


> whilst the mapping (theta, phi) —> (x, y, z) is continuous, it isn’t uniformly so… or something like that.

The term is "measure preserving". The mapping is continuous but it doesn't preserve the length of intervals projected through it.


Yes. That seems close to what I was looking for. Thanks.


> Which better solution did you use?

I didn't, since I haven't touched or needed this code since 2003. I just put a warning saying don't use it.


I learned the easiest way to do d-dimension sampling in Foundations of Data Science: see https://news.ycombinator.com/item?id=34575637 or https://www.cs.cornell.edu/jeh/book.pdf?file=book.pdf

I don't think it's a good idea to introduce over 20 different methods before talking about the correct one, which works for any number of dimensions, say n, and whose correctness is easy to see:

* Generate a random vector by sampling n standard normal distributions: `vector = np.random.randn(n)`

* Key step: show that the vector has a uniform direction.

The proof is as follows: the probability density function of a standard normal is `p(x) = 1/sqrt(2pi) * exp(-x^2/2)`, and the probability density function of the vector is the product of the densities of its individual dimensions: `p(x1)p(x2)...p(xn) = (2pi)^(-n/2) * exp(-(x1^2+x2^2+...+xn^2)/2)`, since `exp(a)exp(b) = exp(a+b)`.

Now it's easy to see that the probability density depends only on the squared length x1^2+x2^2+...+xn^2, not on the direction of the vector. Whatever rotation you apply after sampling this vector, since a rotation preserves x1^2+x2^2+...+xn^2 by definition, you get the exact same probability density function and therefore the same distribution of vectors; hence the direction is uniform.

* Now that the direction of the vector is uniformly sampled, decide the radius separately: for the n-sphere the radius is just 1, and for the n-ball the volume enclosed within radius r is proportional to r^n, so you sample a uniform number from [0,1] as the (fractional) volume and take the nth root as the radius: `radius = np.random.uniform(0, 1)**(1/n)`

* Normalize the vector to the chosen radius: `vector = vector / np.sqrt((vector**2).sum()) * radius`

I think Section 2 of this book provides a much better perspective on the problem of generating uniform random points, since it also provides intuition behind the geometry of high dimensions and properties of the unit ball, etc.
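
Putting the steps above together, a minimal sketch (my own consolidation, assuming numpy; not the book's code):

    import numpy as np

    def sample_sphere(n):
        """Uniform point on the surface of the unit sphere in R^n."""
        v = np.random.randn(n)                  # isotropic Gaussian => uniform direction
        return v / np.linalg.norm(v)

    def sample_ball(n):
        """Uniform point inside the unit ball in R^n."""
        radius = np.random.uniform(0.0, 1.0) ** (1.0 / n)   # nth root => uniform in volume
        return radius * sample_sphere(n)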


I can't believe I didn't already know this


This is the way.


Another way to generate uniformly random points on a 2D disk that the author forgot to mention: let A be an n x n complex matrix whose elements are iid copies of a fixed random variable with unit variance. Let lambda_i be its ith eigenvalue, and let x_i = 1/sqrt(n) real(lambda_i) and y_i = 1/sqrt(n) imag(lambda_i). As n approaches infinity, the distribution of the (x_i, y_i) converges almost surely to the uniform distribution over the unit disk.

Tao, T., Vu, V., and Krishnapur, M. (2010) Random matrices: universality of ESDs and the circular law. The Annals of Probability. 38(5) 2023-2065.
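
For the curious, a quick sketch of what this looks like at finite n (assuming numpy and matplotlib, using complex Gaussian entries as one choice of unit-variance random variable); it's only an approximation, and of course not a practical way to sample:

    import numpy as np
    import matplotlib.pyplot as plt

    n = 1000
    # iid complex Gaussian entries with unit variance
    A = (np.random.randn(n, n) + 1j * np.random.randn(n, n)) / np.sqrt(2)
    eigs = np.linalg.eigvals(A) / np.sqrt(n)    # rescale eigenvalues by 1/sqrt(n)

    plt.scatter(eigs.real, eigs.imag, s=1)
    plt.gca().set_aspect("equal")
    plt.show()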


It's very inefficient, both in terms of runtime and in terms of wasted entropy.


Indeed, haha.


If you're interested in things like this, you might like to read the paper of Diaconis, Holmes, and Shahshahani on sampling from a manifold. https://arxiv.org/abs/1206.6913


2-ball (disk) distribution is kind of interesting in gaming when simulating gun projectile hit locations.

If the circle represents the area in which a simulated projectile will hit, you probably don't want a truly random distribution of points but instead have a bias towards the middle of the circle. A real gun shot multiple times at a fixed target will probably (assuming perfect aim but some variation on every shot) have more shots hit the middle of the pattern than the edges.

Some early Valve shot code actually had a purely random distribution, but at some point an alternate version got written and shared at https://developer.valvesoftware.com/wiki/CShotManipulator

Ironically, the biased version is based on exactly the simple method that people sometimes use by mistake when they want a uniform point distribution in a circle: just picking a random radius and theta leads to a biased distribution. Wolfram MathWorld has a good writeup on it at https://mathworld.wolfram.com/DiskPointPicking.html


> truly random distribution

> purely random distribution

Nitpick: you don’t mean random; you mean uniform.


Game players live and die (virtually) by the fabled RNG and pray to RNGesus even. So though mathematically uniform is the right word, gamer-wise, random is a pretty good word to use.


I was always surprised at how easily you get biased sampling when generating random points, even with good input randomness. Something I often saw students do was essentially normalize({2rand() - 1, 2rand() - 1, 2rand() - 1}) or variations of that (where rand() is "good", not the literal rand(3)), and there are numerous other ways that are more subtly wrong. IIRC the nominally correct way for a sphere specifically is something like a=rand() b=rand() and the random point is something like { cos(a)sin(b), sin(b), cos(a)cos(b) } [1]

I think the best illustration of "reasonable choices of what random values should be used leading to biased results" is Bertrand's paradox which I was introduced to via numberphile/3blue1brown: https://www.youtube.com/watch?v=mZBwsm6B280 and am just glad that nothing I have ever needed random sampling for has ever been important :D

[1] please don't use this blindly, I'm really just going off very old recollection, if you need it google random sphere sampling :D
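
For illustration, here's a side-by-side of the cube-normalizing mistake and the Gaussian-normalizing fix, as a rough sketch assuming numpy (the helper names are mine):

    import numpy as np

    def biased_point():
        """The common mistake: normalizing a point from the cube over-represents
        directions toward the cube's corners."""
        v = 2.0 * np.random.rand(3) - 1.0
        return v / np.linalg.norm(v)

    def uniform_point():
        """Normalizing an isotropic Gaussian vector gives a uniform direction."""
        v = np.random.randn(3)
        return v / np.linalg.norm(v)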


I used to think “normal distributions are everywhere”, but the more math and science I watch on YouTube, the more the central limit theorem pops up. It’s the CLT that’s everywhere; it just brings the normal distribution along as its +1.


Indeed. The fundamental insight behind the CLT, i.e. that the sample average is approximately normally distributed even when the population distribution is not normal, is intuitive, yet the theorem is magical.


When you have more than one variable, the outcomes get clumpy.


I find it hard to believe the dropped coordinates approaches were first noticed in 2010 and proven in 2017.


Some years ago I saw a blue and red image that caught my attention. The red dots seemed to be floating around the blue ones. I later found out that this has a name: chromostereopsis [1]. So I decided to make my own image and needed to distribute some points in a circle; this is how I did it [2].

[1] - https://en.wikipedia.org/wiki/Chromostereopsis

[2] - https://jsfiddle.net/victorqribeiro/vxf2ajzm/48/


I encountered the need for uniform random points on hyperspheres, and found this solution (with Python code) very helpful: https://stackoverflow.com/a/59279721.

Currently, I am porting my codebase to Rust and did that part over the weekend, so if anyone is interested in this exact implementation, I'd be willing to share it (as a crate if necessary).


The linked SO question is not about uniform random points. The poster explicitly excludes answers involving uniform random distribution on a hypersphere.


I wonder if CS people have studied the method of Lubotzky, Phillips and Sarnak to produce well-distributed points on spheres: https://onlinelibrary.wiley.com/doi/abs/10.1002/cpa.31603907...


I find it amusing that you need a normal distribution of scalars to generate a uniform distribution of vectors on an n-sphere.


Is this the same as uniform SO(N) sampling? I had to do something similar to this for some deep learning training method I was working on. I gave up.


If you are looking for low-discrepancy (fills the space "smoothly") quasi-random numbers, check out Sobol and Niederreiter sequences.


No don't. Look at plastic numbers instead. They are much better.


Off topic: isn't "uniformly random" a contradiction of terms?


No. Random just means you can't predict what value will be chosen, but doesn't tell you how likely different values are.

If I roll a 12-sided die, that's random. If I roll two 6-sided dice and add the result, that's also random, but it has a different distribution of values.

In the two-dice version, there's one way to get a result of 2, but _several_ ways to get 7. You'll get 7 way more often than you'll get 2.

In the one-die version, each outcome is equally likely. You're exactly as likely to get 2 as you are to get 7 or any other value in the range of possibilities.

The one-die version is a uniform distribution. The two-dice version is not uniform.
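
A quick simulation of that, assuming numpy:

    import numpy as np

    rolls = np.random.randint(1, 7, size=(100_000, 2)).sum(axis=1)   # two d6, values 2..12
    one_die = np.random.randint(1, 13, size=100_000)                 # one d12, values 1..12
    print(np.bincount(rolls)[2:])     # counts peak around 7
    print(np.bincount(one_die)[1:])   # counts are roughly flat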


Random is not precise; it does not specify a distribution, though uniform is commonly assumed. A better term is "uniformly distributed".


See also this method based on optimizing nearest-neighbor distances after initializing with a Fibonacci spiral:

https://extremelearning.com.au/how-to-evenly-distribute-poin...


I'm definitely not at all qualified to talk about this, but... aren't "uniform" and "random" antonyms?


No, 'uniform' refers to the distribution, which need not be uniform, e.g.

https://numpy.org/doc/stable/reference/random/generated/nump...

Or in god's own words (TAOCP section 3.4):

Applications of random numbers often call for other kinds of distributions, however; for example, if we want to make a random choice from among k alternatives, we want a random integer between 1 and k. If some simulation process calls for a random waiting time between occurrences of independent events, a random number with the exponential distribution is desired. Sometimes we don't even want random numbers — we want a random permutation (a random arrangement of n objects) or a random combination (a random choice of k objects from a collection of n).

In principle, any of these other random quantities can be obtained from the uniform deviates U0, U1, U2, ...; people have devised a number of important "random tricks" for the efficient transformation of uniform deviates. A study of these techniques also gives us insight into the proper use of random numbers in any Monte Carlo application.


"All distributions are uniform" is one of the two cardinal crimes of a school level of statistics understanding, the other being "all probabilities are independent".


"Uniform" here indicates that all values have an equal probability of being picked. The distribution is flat.


Nope, uniform is a description of the probabilities of a random distribution.

The easiest example most people are aware of in practice is throwing two 6-sided dice vs one 12-sided die. The outcome of both is random, but people seem to know fairly intuitively that the two-dice case is more likely to produce middle values than extreme ones, while the single 12-sided die doesn't have that behavior.


The article explains this quite early on, as I understand it, it means that every spot has the same chance of being picked.


No. I would go so far as to say, if it’s not uniformly random, it isn’t random.

My reasoning is, if you are counting cards in a game, you are giving yourself an advantage. But what really happens is that from your point of view cards will be drawn less and less uniformly randomly. Or put in another way, if you know the distribution is normal, you can bet on the result being near the center and come out on top, but if it’s a uniformly random distribution, all bets are off.


Random (in statistics) means you cannot know the next value ahead of time. That's different from saying the next value is uniformly likely to be any value; the two concepts are actually orthogonal. E.g. if I had a bag with 99 red balls and 1 blue ball and randomly selected one, you may assume there is a high chance it will be a red ball, based on the odds, but you aren't able to actually know which ball will be drawn until it's drawn.


The second pick will have better odds of a blue ball, assuming the first one picked was red and the ball was not put back, though, correct?

Even more so if you’ve gone 50 picks like that, and are now down to 49 red balls and one blue ball.

That is what card counting helps you with - knowing the odds based on your current state, as compared to the initial state.


Depends on whether the balls are placed back or not. Similar to cards: any game which never replenishes the deck and lets you draw all the cards down to the last one makes the last pick (but not the prior ones) non-random.


I literally stated ‘if the balls don’t get put back’.

And the last card was still random, btw. But can be guessed now with perfect precision due to the process of elimination.

Random at shuffle doesn’t mean unguessable or unpredictable as the game goes on.


Sorry, meant to say the more generic "replenished" but typed a bit of your comment while reading it at the same time :p. That's what I was getting at with the next part: it doesn't matter _how_ it's replenished, just that the pool doesn't dwindle to 1 option for any reason. E.g. instead of putting them back you could pull and mix from multiple decks/ball sources as the game progresses, and even though 51 cards/99 balls are discarded you don't know what n+1 is for certain yet.

I disagree that the last card is "still" random, at least in the sense of statistics. It "was" random up until the measure of uncertainty of the next event reaches 0 (i.e. entropy reaches 0). At that point it's no longer a "guess": there is no uncertainty, and the remaining pattern is always 100% predictable, in that regardless of which preceding events occurred to get there, it can always be known what the next value is without uncertainty. Since there ceases to be any uncertainty in what the remaining pattern will be, there ceases to be randomness in the next value generated. That the card's value was not known at n=0 does not affect whether the n+1 card still is/isn't random when n=51. In another form: that you didn't previously know the value of card n=52 with past information has no influence on whether the value is random or not with new information. Statistical randomness is all about what you know of the future predictability, not about how something came about.

This is also true of events which fall into predictable patterns at any point along the path. E.g. if I had a (relatively useless) hardware random number generator that generated random numbers 0-127 once per second until it generated a 0 at n=17, at which point it ceased being able to pull randomness from its dead circuits and always produced 0 afterward, the first 17 values were all statistically random at the time of their draw, but n=[18,inf) are all now predictable and no longer random from that point on.


I think you’re getting confused? Or we’re talking past each other?

Knowing the outcome of a die roll after the roll doesn’t make the roll of the die non-random.

Any more than the die coming up 6 a bunch of times in a row does. Though maybe it would be a good idea to check the weight/balance of it, hah.

Knowing what the last card has to be after all the other cards have been played doesn’t make the process which made that card the last card any less random either.

Picking cards off a deck isn’t a randomization/non-randomization event; shuffling the deck is what does that.

Probability however, which directly impacts gameplay in some kinds of games, is directly impacted by knowledge exposed during gameplay.

So for instance, the second to last card (and also the last card) can be known with about 50/50 odds (probability wise) if someone is good at counting, which is damn good! Way better than 1/2704 odds of guessing a sequence of two cards at the start of the deck.

In the example of the broken hardware random number generator, knowledge of the defect can be used to attack a system that assumes a continuous probability distribution across its outputs, if the attacker knows this.

The same as an attacker at a casino could probably manipulate a game to make money if they knew something was broken in the deck shuffling and the same two cards were always at some position in a ‘new’ deck.


> Knowing what the last card has to be after all the other cards have been played doesn’t make the process which made that card the last card any less random either.

Exactly, same page then. The card not being random in the end-of-game frame of reference places no limitation on its having been random in the beginning-of-game frame of reference. In the end-of-game reference the last card is never random; it's only ever the remaining card. In the beginning-of-game reference, which card will end up being the last card is still random at that point, despite it ceasing to be from the later frame.


But if you’re given a double-or-nothing bet option on each draw, you know there’s a range of strategies that vary from maximum winnings to most likely winnings. If you want a good story, you go for max. If you need a few thousand to head off a problem with your car, you bet a different way.


The uniform distribution is in some sense the most random distribution you can choose, given that you have a finite number of cards. Formally, it's the maximum entropy distribution on a finite domain.

But what if the domain isn't finite? E.g. the entire real line? Then we can't define a uniform distribution, because the total probability can never integrate to one. We can still define a maximum entropy distribution, but only by making some more assumptions. The normal distribution is the maximum entropy distribution on the real line, given a particular mean and variance.

Other examples: on the positive real line, the exponential distribution is maximum-entropy for a given mean. Poisson distribution is the equivalent on the non-negative integers.
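
A tiny numerical check of the finite-domain claim, assuming scipy, using the two-dice distribution from elsewhere in the thread:

    import numpy as np
    from scipy.stats import entropy

    # Two distributions over the outcomes 2..12
    uniform = np.full(11, 1.0 / 11.0)
    two_dice = np.array([1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1]) / 36.0

    print(entropy(uniform))    # ~2.40 nats, the maximum possible (log 11)
    print(entropy(two_dice))   # ~2.27 nats, strictly smaller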


There are plenty of things that are random but not uniformly random. The hardware RNGs that are used for physically based random number generation are not uniform, including the great lava lamp wall, Geiger counters, etc.

The problem is if you are doing something that requires uniform random values, and your random source is not uniform, then things will go wrong.


A simple example of non-uniform randomness is the two-dice roll.

Rolling two dice gives you a number which is random in [2, 12], but not uniform -- 7 is far more likely than 2 or 12.


By that criterion, wouldn't a discrete distribution with infinite support be even more random?

Finite support somehow seems less random because you can still say something interesting about its mean and bounds in the same way you can say something about the mean and variance of a normal distribution.


> Or put in another way, if you know the distribution is normal, you can bet on the result being near the center and come out on top, but if it’s a uniformly random distribution, all bets are off.

That’s just because the payout distribution and real distribution are different, and there’s a place the payout is at a loss.

If you bet on a uniform distribution of 2 through 12 that paid out according to a normal distribution, you’d make a killing betting on 2 or 12 (the “outliers”). If you roll 2d6 (normalish) and get paid according to a uniform distribution, you’ll make a killing betting on 7.


A normal random variate is just a uniform random variate that's gone through a one-to-one function.
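
Concretely, that one-to-one function is the inverse of the normal CDF; a minimal sketch assuming scipy:

    import numpy as np
    from scipy.stats import norm

    u = np.random.uniform(0.0, 1.0, size=100_000)   # uniform variates
    x = norm.ppf(u)                                  # inverse normal CDF: standard normal variates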



