"here is the tradeoff: Use our heuristic and you get a reasonable route immediately. Alternatively, configure a network of 110 processors, then spend two months computing the shortest route to save a month of driving."
Or, of course, use one of the thousands of other heuristics, most of which will likely provide as good an answer in as short a time as this heuristic. I mean, why provide comparisons to comparable heuristics? That would be like doing science.
But let me just say that I found your reply to be unnecessarily nasty in its tone - snippy and snarky. You seem to be criticising the authors simply because they produced a brief paper describing a particular heuristic, and demanding that they go further than they have in this paper. Perhaps they already have; perhaps it's work in progress.
I find it frustrating when people criticise a link largely because it is what it is and isn't something else. If you want something on comparisons, go find it. Or alternatively, do the work yourself. Then regardless, come and share your results.
It's a bit annoying to me mainly because this sort of thing is really common among heuristics developers (even in papers submitted to conferences/journals). They develop a heuristic for a problem, and then show that it beats the exact, guaranteed optimal algorithmic solution. But that's only the right comparison if it's the first-ever heuristic for a problem! Otherwise it's a bit of a straw man, because its real competitors are other existing heuristics and randomized or approximate algorithms for that problem, and/or general, domain-independent heuristics like randomized hill-climbing.
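To make the comparison concrete: the kind of general, domain-independent competitor mentioned above is only a few lines of code. Here is a minimal randomized hill-climbing sketch for TSP using 2-opt moves (all function names and the random test instance are my own illustration, not from the article):

```python
import math
import random

def tour_length(points, tour):
    """Total length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt_hill_climb(points, iterations=20000, seed=0):
    """Randomized hill-climbing: repeatedly reverse a random segment
    (a 2-opt move) and keep the change only if it shortens the tour."""
    rng = random.Random(seed)
    tour = list(range(len(points)))
    rng.shuffle(tour)
    best = tour_length(points, tour)
    for _ in range(iterations):
        i, j = sorted(rng.sample(range(len(points)), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        length = tour_length(points, candidate)
        if length < best:
            tour, best = candidate, length
    return tour, best

if __name__ == "__main__":
    rng = random.Random(42)
    pts = [(rng.random(), rng.random()) for _ in range(50)]
    tour, final = two_opt_hill_climb(pts)
    print(f"tour length after hill-climbing: {final:.3f}")
```

This is deliberately naive (per-move cost is O(n) because the tour length is recomputed from scratch), but even a baseline this crude is a fairer yardstick than the exact solver.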
Apart from the claim that it beats competitors, it's a really interesting post, though; light-weight heuristics based on clever connections are interesting in their own right. (So I sort of agree on the tone.)
This is extremely common in compression algorithms "research" as well. At least in video compression, the vast majority of papers are complete tripe, comparing only to algorithms known to be bad (i.e. an exhaustive search). You can easily spot the few good papers simply by looking for the ones that compare themselves to good algorithms.
To answer some of the questions and address some of the issues raised ...
I'm not the author, not connected with the author, and don't know the author. I found the "paper" and thought "That's neat and elegant."
Yes, we could do with a complete analysis of all existing algorithms and heuristics, finding the strengths and weaknesses, and for each one, finding and characterising cases where they do well and cases where they do badly.
But personally, I like to see papers like this, papers that can be considered to be a "work in progress." I don't want only to read about studies that are exhaustive, complete, and answer all the questions. I think there is value in seeing descriptions of algorithms, along with a simple comparison to known algorithms largely for the purpose of understanding the detail and the potential.
This is an invitation to explore, not a closed and completed "job done." This is an invitation to Google for other heuristics to enhance my understanding of the field. This is an invitation to wonder if my unique range of experience and skills might find a better heuristic.
Full analysis is the work of a Ph.D. and more. Don't expect it in a paper on the web.
Like pascal_cuoq, I was bothered by the apparent lack of rigour here. The problem is a mismatch of expectations. The post is presented like a paper (references, novel algorithm, comparisons, future work), but it isn't a paper. Viewed as a blog post or work-in-progress, it's very cool. Viewed as a paper (and plenty of real papers are available on the Web nowadays), it has some misleading statements in it. There is nothing anti-wonder or anti-knowledge about pointing that out.
I didn't want to, but I have to agree with pascal_cuoq. It isn't useful to view this space-filling curve as a revolutionary alternative to computing the provably optimal solution. Once the author decided to include a comparison to other solutions, they should have included comparable heuristics and evaluated them on runtime and path efficiency.
If I reviewed any paper, for any conference, which failed to compare a new TSP algorithm to any of the existing heuristics, I would reject it out of hand.
While this is cute, I strongly suspect that many simple algorithms will compare favourably. The authors also seem to hint that there is a lack of parallelisable algorithms, but many people have already done work on parallelising TSP by space partitioning.
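A toy sketch of that space-partitioning idea (not taken from any particular paper - the bucketing scheme, the per-cell heuristic, and all names are my own illustration): bucket the cities into a grid, tour each cell independently in parallel, then stitch the cells together in snake order.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def nearest_neighbour(points, idxs):
    """Greedy nearest-neighbour tour over the subset of city indices idxs."""
    if not idxs:
        return []
    remaining = set(idxs)
    cur = idxs[0]
    remaining.remove(cur)
    tour = [cur]
    while remaining:
        nxt = min(remaining, key=lambda j: math.dist(points[cur], points[j]))
        remaining.remove(nxt)
        tour.append(nxt)
        cur = nxt
    return tour

def grid_partition_tour(points, cells=4, workers=4):
    """Parallel TSP heuristic by space partitioning: bucket cities into a
    cells-by-cells grid, tour each bucket independently, then concatenate
    buckets in boustrophedon (snake) order so adjacent cells stay close."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    lox, hix, loy, hiy = min(xs), max(xs), min(ys), max(ys)

    def bucket(p):
        cx = min(cells - 1, int((p[0] - lox) / ((hix - lox) or 1.0) * cells))
        cy = min(cells - 1, int((p[1] - loy) / ((hiy - loy) or 1.0) * cells))
        return cx, cy

    grid = {}
    for i, p in enumerate(points):
        grid.setdefault(bucket(p), []).append(i)

    # Each cell's subtour is independent, so they can run concurrently.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        subtours = {c: pool.submit(nearest_neighbour, points, idxs)
                    for c, idxs in grid.items()}

    tour = []
    for cy in range(cells):
        row = range(cells) if cy % 2 == 0 else reversed(range(cells))
        for cx in row:
            if (cx, cy) in subtours:
                tour.extend(subtours[(cx, cy)].result())
    return tour
```

The stitching at cell boundaries is where such schemes lose quality, which is exactly why they need to be benchmarked against the single-threaded heuristics rather than against an exact solver.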
If so, remember that HN readers tend to be a lot more critical of the article than towards other comments.
Anyway, the article didn't really claim to present the fastest heuristic, just an elegant one with some nice properties. It's O(n log n) - what more do people want? O(n)? O(1)? n log n is almost always fast enough.
Indeed, the comparison with the 110-processor, 6-month effort to solve the 15,112-city problem is a bit misleading, because most of that time was spent proving that a solution with length 1573084 was optimal.
The team behind that effort has made their software available (CONCORDE - http://www.tsp.gatech.edu//concorde/index.html), which includes a few standard heuristics that are already quite fast:
- for the record, the space-filling-curve approach takes less than a second and produces a tour about 33% larger than the optimal, i.e. roughly 1573084 × 4/3 ≈ 2097445
- the "greedy" heuristic takes less than a second and computes a solution of length 1803479, which is already better than the space-filling curve
- the "quick Borůvka" heuristic is even faster and produces a solution of length 1789519 (less than 14% larger than optimal)
- the "Lin-Kernighan" heuristic takes a few minutes and produces a solution of length 1577339 (less than 0.3% larger than optimal)
So, while this is an interesting application of space-filling curves, it cannot really be considered competitive with the state of the art in TSP heuristics.