A practical use for space-filling curves (gatech.edu)
192 points by RiderOfGiraffes on Feb 19, 2011 | 31 comments



Another good example is using Morton order for things like texture maps in graphics - it is easy to derive a memory address by interleaving the bits of the x and y coordinates:

http://www.devmaster.net/forums/showthread.php?t=10125

http://en.wikipedia.org/wiki/Z-order_curve

From distant memory - switching to a variant of this from simple row-major-order textures gave a >20% increase in a scene-rendering benchmark (software pipeline, full scene, no game logic). Prior to that I had been doing daft things like storing textures in column- or row-major order depending on their 'usual' orientation.
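
Roughly, the interleave looks like this (a from-memory sketch in C, with made-up names, not the exact code I used):

    #include <stdint.h>

    /* Spread the bits of a 16-bit value apart: ...abcd -> .a.b.c.d */
    static uint32_t part1by1(uint32_t v)
    {
        v &= 0x0000ffff;
        v = (v | (v << 8)) & 0x00ff00ff;
        v = (v | (v << 4)) & 0x0f0f0f0f;
        v = (v | (v << 2)) & 0x33333333;
        v = (v | (v << 1)) & 0x55555555;
        return v;
    }

    /* Z-order (Morton) index of texel (x, y): interleave the bits of x and y. */
    static uint32_t morton2d(uint32_t x, uint32_t y)
    {
        return part1by1(x) | (part1by1(y) << 1);
    }

The texel then lives at base + morton2d(x, y) * texel_size, so texels that are close in both x and y tend to land on the same cache line.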


Yes, that's texture swizzling.

It's useful any time you are accessing higher-rank arrays in a spatially coherent manner without any directional bias.

Bit interleaving only works with power-of-two dimensions. That's limiting. But for the cache benefits you don't need self-similarity at all levels. For less regularly sized arrays, you can play this trick with only some slices of lower bits, and then it's usually called tiling.
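
For example (a generic illustration, not any particular piece of hardware's layout): with 8x8 tiles, only the low three bits of x and y select the texel inside a tile, and the tiles themselves are stored row-major, so the width only has to be a multiple of the tile size rather than a power of two.

    #include <stddef.h>

    #define TILE_W 8
    #define TILE_H 8

    /* Offset (in texels) of texel (x, y) in a tiled layout: tiles are stored
       row-major, and texels inside each tile are stored row-major too. */
    static size_t tiled_offset(unsigned x, unsigned y, unsigned width_in_tiles)
    {
        unsigned tile_x = x / TILE_W, in_x = x % TILE_W;   /* x >> 3, x & 7 */
        unsigned tile_y = y / TILE_H, in_y = y % TILE_H;   /* y >> 3, y & 7 */
        size_t   tile   = (size_t)tile_y * width_in_tiles + tile_x;
        return tile * (TILE_W * TILE_H) + (size_t)in_y * TILE_W + in_x;
    }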

Scientific computing people also do this whenever they want to lay out a matrix in memory so that it efficiently supports both row-coherent and column-coherent access, which is required for basic operations like matrix multiplication. You can formulate tiled matrix multiplication in terms of smaller tile vs tile multiplications.
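
For example, the classic loop-blocking formulation looks something like this (an illustrative sketch; real code would tune the tile size to the cache and handle sizes that aren't a multiple of it):

    #include <stddef.h>

    #define TILE 64   /* tile edge, to be tuned to the cache */

    /* C = A * B for n x n row-major matrices, n assumed to be a multiple of
       TILE for brevity. Each TILE x TILE block is reused while it is in cache. */
    void matmul_tiled(const double *A, const double *B, double *C, size_t n)
    {
        for (size_t i = 0; i < n * n; i++)
            C[i] = 0.0;

        for (size_t ii = 0; ii < n; ii += TILE)
            for (size_t kk = 0; kk < n; kk += TILE)
                for (size_t jj = 0; jj < n; jj += TILE)
                    /* multiply the (ii,kk) tile of A by the (kk,jj) tile of B */
                    for (size_t i = ii; i < ii + TILE; i++)
                        for (size_t k = kk; k < kk + TILE; k++) {
                            double a = A[i * n + k];
                            for (size_t j = jj; j < jj + TILE; j++)
                                C[i * n + j] += a * B[k * n + j];
                        }
    }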

The earliest GPUs supported only square, power-of-two-sized textures using swizzling. Later ones like the GeForce FX supported non-square, non-power-of-two textures using a simple memory unit with tiling. But the total number of tiles in use was a limited and non-scalable resource, and once you exceeded the limit you fell off a performance cliff. Finally, the last two generations of GPUs (starting with the G80 microarchitecture in NVIDIA's case) have a more scalable multi-level tiling approach that combines the benefits of both swizzling and tiling, although with higher hardware complexity than either.

My coworker wrote a great blog post on fast multi-level tiling in software: http://fgiesen.wordpress.com/2011/01/17/texture-tiling-and-s...


I came here to say this. Not just graphics, but indexing arrays in general on the GPU for use in OpenCL or CUDA. We are working on optimizing our neighbor searches using this.

A good book: Foundations of Multidimensional and Metric Data Structures by Hanan Samet 2006

A good paper: Interactive SPH Simulation and Rendering on the GPU by Goswami


Samet's book is the definitive reference, but I have trouble understanding his explanations. They do a good job of summarizing what's distinctive about each technique relative to the others, but a poor job of explaining what the techniques are. This is a pitfall of technical writing, especially by experts: it's easy to produce something that's intelligible only if you already know the material, which defeats the purpose (assuming your purpose is to explain and not merely to compile). Still, the book is indispensable as a reference.


When I was building OBsearch.net I used this book extensively. I grant you that some explanations are not so detailed. On the other hand, the explanations of the most relevant techniques are quite good. Check for instance the section that explains LSH or the explanation about Navarro's SAT.


:-)

I remember using Morton Order for memory layout of dense matrix operations. Very handy and much better than row or column major order. At the time, I was using this for wavelet operations.

Some of the nastiest C code I ever wrote from a readability perspective. The experience permanently scarred me - prior to doing it, I wrote nice, clean C. After, I couldn't read my own code after a week...


There are lots of practical uses. One that immediately comes to mind is cache-friendly layout of power-of-two textures, usually called texture swizzling. Another application is designing clock trees on VLSI chips to minimize clock skew; that's actually a space-filling tree rather than a curve, but it's closely related.


"here is the tradeoff: Use our heuristic and you get a reasonable route immediately. Alternatively, configure a network of 110 processors, then spend two months computing the shortest route to save a month of driving."

Or, of course, use one of the thousands of other heuristics, most of which will likely provide as good an answer in as short a time as this heuristic. I mean, why provide comparisons to comparable heuristics? That would be like doing science.


If you're interested in such a comparison, Google is your friend. Here, I've tried this search:

http://www.google.com/search?q=comparison+of+heuristics+for+...

It depends on what you actually want, but here are two links that might be the sort of thing you're looking for:

http://www.agent.ai/doc/upload/200302/stut00.pdf

http://www.psych.lancs.ac.uk/people/uploads/TomOrmerod200712...

But let me just say that I found your reply to be unnecessarily nasty in its tone - snippy and snarky. You seem to be criticising the authors simply because they produced a brief paper describing a particular heuristic, and demanding that they go further than they have in this paper. Perhaps they already have; perhaps it's work in progress.

I find it frustrating when people criticise a link largely because it is what it is and isn't something else. If you want something on comparisons, go find it. Or alternatively, do the work yourself. Then regardless, come and share your results.


It's a bit annoying to me mainly because this sort of thing is really common among heuristics developers (even in papers submitted to conferences/journals). They develop a heuristic for a problem, and then show that it beats the exact, guaranteed-optimal algorithm on running time. But that's only the right comparison if it's the first-ever heuristic for the problem! Otherwise it's a bit of a straw man, because its real competitors are other existing heuristics, randomized or approximate algorithms for that problem, and/or general, domain-independent heuristics like randomized hill-climbing.

Apart from the claim that it beats competitors, it's a really interesting post, though; light-weight heuristics based on clever connections are interesting in their own right. (So I sort of agree on the tone.)


This is extremely common in compression algorithms "research" as well. At least in video compression, the vast majority of papers are complete tripe, comparing only to algorithms known to be bad (i.e. an exhaustive search). You can easily spot the few good papers simply by looking for the ones that compare themselves to good algorithms.


To answer some of the questions and address some of the issues raised ...

I'm not the author, not connected with the author, and don't know the author. I found the "paper" and thought "That's neat and elegant."

Yes, we could do with a complete analysis of all existing algorithms and heuristics, finding the strengths and weaknesses, and for each one, finding and characterising cases where they do well and cases where they do badly.

But personally, I like to see papers like this, papers that can be considered to be a "work in progress." I don't want only to read about studies that are exhaustive, complete, and answer all the questions. I think there is value in seeing descriptions of algorithms, along with a simple comparison to known algorithms largely for the purpose of understanding the detail and the potential.

This is an invitation to explore, not a closed and completed "job done." This is an invitation to Google for other heuristics to enhance my understanding of the field. This is an invitation to wonder if my unique range of experience and skills might find a better heuristic.

Full analysis is the work of a Ph.D. and more. Don't expect it in a paper on the web.


Like pascal_cuoq, I was bothered by the apparent lack of rigour here. The problem is a mismatch of expectations. The post is presented like a paper (references, novel algorithm, comparisons, future work), but it isn't a paper. Viewed as a blog post or work in progress, it's very cool. Viewed as a paper (and real papers are readily available on the web nowadays), it has some misleading statements in it. There is nothing anti-wonder or anti-knowledge about pointing that out.


I didn't want to, but I have to agree with pascal_cuoq. It isn't useful to view this space-filling curve as a revolutionary alternative to computing the provably optimal solution. When the author decided to include comparisons to other solutions, they should have included comparable heuristics and evaluated them on both runtime and tour quality.


If I reviewed any paper, for any conference, which failed to compare a new TSP algorithm to any of the existing heuristics, I would reject it out of hand.

While this is cute, I strongly suspect that many simple algorithms would compare favourably. They also seem to hint that there is a lack of parallelisable algorithms, but many people have already done work on parallelising the TSP by space partitioning.


Are you the author, or connected with the author?

If so, remember that HN readers tend to be a lot more critical of the article than of the comments.

Anyway, the article didn't really claim to be the fastest heuristic, just an elegant one with some nice properties. It's O(n log n) - what more do people want? O(n)? O(1)? n log n is almost always fast enough.


remember that HN readers tend to be

user: RiderOfGiraffes created: 822 days ago karma: 18098


Indeed, the comparison with the 110-processor, 6-month effort to solve the 15112-city problem is a bit misleading, because most of that time was spent proving that a solution with length 1573084 was optimal.

The team behind that effort has made its software available (Concorde - http://www.tsp.gatech.edu//concorde/index.html), which includes a few standard heuristics that are already quite fast:

- for the record, the space-filling curve approach takes less than a second and produces a tour 33% larger than the optimal, which is roughly 1573084*4/3=2097445

- the "greedy" heuristic takes less than a second and computes a solution with length 1803479, which is already better than the space-filling curve

- the "quick boruva" heuristic is even faster and produces a solution with length 1789519 (less than 14% larger than optimal)

- the "Lin Kernighan" heuristic takes a few minutes and produces a solution with length 1577339 (less than 0.3% larger than optimal)

So, while this is an interesting application of space-filling curves, it cannot really be considered competitive with the state of the art in TSP heuristics.


Hello All! I am one of the authors. Thanks for the interesting comments. The web page is not intended to be a study but rather an enticement to read the technical papers referenced therein. The JACM paper originally included detailed comparisons with alternative heuristics, but the referees thought it made the paper too long. Quick summary: few of the alternatives had been analyzed for expected performance, and most had worse worst-case performance. But this was all 25 years ago and could be updated. Rgds, JJB


You can also compactly visualize an extremely long vector by mapping it onto a space-filling curve: http://www.ebi.ac.uk/huber-srv/hilbert/

I'm using this right now to visualize sequence coverage depth across the human genome (an integer vector of length 3 billion).
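
If anyone wants to try this, the index-to-coordinate mapping is short. Here is a sketch of the standard iterative conversion (variants of it circulate widely; n is the grid edge, a power of two, and 0 <= d < n*n):

    /* Rotate/flip a quadrant so the sub-curve has the right orientation. */
    static void rot(int n, int *x, int *y, int rx, int ry)
    {
        if (ry == 0) {
            if (rx == 1) {               /* reflect */
                *x = n - 1 - *x;
                *y = n - 1 - *y;
            }
            int t = *x; *x = *y; *y = t; /* swap x and y */
        }
    }

    /* Map position d along the Hilbert curve to cell (x, y) of an n x n grid. */
    static void d2xy(int n, int d, int *x, int *y)
    {
        int t = d;
        *x = *y = 0;
        for (int s = 1; s < n; s *= 2) {
            int rx = 1 & (t / 2);
            int ry = 1 & (t ^ rx);
            rot(s, x, y, rx, ry);
            *x += s * rx;
            *y += s * ry;
            t /= 4;
        }
    }

Plot element d of your vector at pixel (x, y), and nearby entries stay in nearby pixels.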


>that are about 25% longer than optimum

Is this a property or a mere probability? What are the random point sets used for TSP testing? Given n points in a plane, the number of Hamiltonian tours that can be drawn is (n-1)! (counting the two directions separately). One of these is minimal, and that is our optimum solution to the TSP. If we plot the frequencies of the lengths of these tours (is there a particular name for this distribution?), we find the probability of a particular length range occurring. This distribution should be different for different complete graphs. So an 'improvement' might simply mean that, for the random point sets given as input, the 'high ridges' of this distribution lie towards the left of the peak.

>A useful property of a spacefilling curve is that it tends to visit all the points in a region once it has entered that region. Thus points that are close together in the plane will tend to be close together in appearance along the curve.

Or very far apart! For example, points that sit on either side of the 'middle lines' that a space-filling curve generates: they are close together in the plane but can be far apart along the curve. E.g. the yellow points around the blue middle lines here: http://i.imgur.com/Dl7Vo.gif

NB: I do not mean to criticize; I'm just asking questions.


Almost certainly a probability. You could probably prove that it's not worse than a certain amount, because it won't jump the farthest distance every time in anything but a truly trivial graph, but a value like that is unlikely to be an actual limit.

The reason for my reasoning: the curve visits points in a fixed order, so it's possible to design an input whose points fall exactly where that order is non-optimal; I'd guess you could get a tour multiple times the shortest length, instead of just 25% longer. But you can get a new tour by simply increasing or decreasing the order of the curve, so running through a few iterations could probably get you an exceptionally good result for the sort of speed it provides.


Sorry, I don't think I've been clear. My question is: what random point sets are given as input? Because I feel that if we write ANY algorithm, however bad it may be, as long as it gives us a tour, then saying that the tour is only x% longer than optimal is a property of the input we gave it. I am saying that x could turn out to be 25 in most cases, who knows.


Space-filling curves are also good for building multi-dimensional indexes for databases.

http://www.dcs.bbk.ac.uk/TriStarp/pubs/bncod17.pdf


If your dimensionality is low then I totally agree, and there is a library that can help you achieve this: http://code.google.com/p/uzaygezen/

If you are dealing with objects of intrinsic dimension >= 20, then you should use other techniques.


I was just thinking the other day that you could use a space-filling curve to order a table of geographical locations in a relational database. I decided that I was unlikely to have a problem that would ever justify such an approach. It's nice to know that there's some literature out there already if I do.


Yet another example is organizing color libraries / color swatches:

http://people.csail.mit.edu/jaffer/Color/CSDR


Another example of Hilbert curves used for trawling the RGB cube: http://corte.si/posts/code/sortvis-fruitsalad/index.html

(And, of course, IP addresses: http://xkcd.com/195/)


This seems similar to the pyramid model of Graham, Joshi, Pizlo (2000).

Modeling how (they think) human vision works, they made a heuristic based on clustering and top-down refinement. The map is clustered into a small handful of groups, and these groups are placed in a shortest tour. Each cluster is then clustered in turn into another handful of sub-groups, and these sub-groups are inserted into the tour at the position of their parent.

The pyramid algorithm is highly parallel with, IIRC, the same time characteristics as the space-filling curve algorithm.

What made the connection in my mind was that the simplest implementation of the pyramid algorithm is to cluster by simply dividing the map into quadrants; when you do so, you end up with the same space-filling curve they use at the top of the page.
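
To make the connection concrete, here is a rough sketch of the 'visit the points in curve order' idea that both approaches boil down to in their simplest form (illustrative only; all names are mine, coordinates are assumed to be pre-scaled to [0,1), and I use a Hilbert index on a 1024x1024 grid, whereas the article uses its own curve):

    #include <stdint.h>
    #include <stdlib.h>

    #define GRID 1024   /* grid edge, a power of two */

    typedef struct { double x, y; int key; } City;

    /* Rotate/flip a quadrant (the usual helper for Hilbert-curve conversions). */
    static void rot(int n, int *x, int *y, int rx, int ry)
    {
        if (ry == 0) {
            if (rx == 1) {
                *x = n - 1 - *x;
                *y = n - 1 - *y;
            }
            int t = *x; *x = *y; *y = t;
        }
    }

    /* Position of cell (x, y) along the Hilbert curve of an n x n grid. */
    static int xy2d(int n, int x, int y)
    {
        int d = 0;
        for (int s = n / 2; s > 0; s /= 2) {
            int rx = (x & s) > 0;
            int ry = (y & s) > 0;
            d += s * s * ((3 * rx) ^ ry);
            rot(n, &x, &y, rx, ry);
        }
        return d;
    }

    static int by_key(const void *a, const void *b)
    {
        int ka = ((const City *)a)->key, kb = ((const City *)b)->key;
        return (ka > kb) - (ka < kb);
    }

    /* Reorder the cities into tour order: quantise, index, sort - O(n log n). */
    void curve_tour(City *cities, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            int gx = (int)(cities[i].x * GRID);
            int gy = (int)(cities[i].y * GRID);
            if (gx >= GRID) gx = GRID - 1;   /* guard against x == 1.0 */
            if (gy >= GRID) gy = GRID - 1;
            cities[i].key = xy2d(GRID, gx, gy);
        }
        qsort(cities, n, sizeof(City), by_key);
    }

The quadrant recursion of the pyramid scheme is hidden inside xy2d: each loop iteration decides which quarter of the current square the city falls in.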


That's genius. So simple and elegant.


Space-filling curves can also be used for spatial indexing. Nick Johnson wrote a nice introduction to the technique on his blog: http://blog.notdot.net/2009/11/Damn-Cool-Algorithms-Spatial-...



