I question the inclusion of bubble sort, especially two versions of it. I worry that people are more prone to use bubble sort because it's conceptually simple.
This always strikes me as odd. Personally, had I not already heard of it, I would never have come up with bubble sort myself, and neither would any of the friends I asked about it. The most intuitive sorting algorithm is obviously selection sort. I'd even say that merge sort is conceptually clearer than bubble sort. Nor is bubble sort much easier to implement than selection sort. It has no interesting properties that make it worth considering (unlike, for instance, insertion sort, which is efficient on nearly sorted input). The only way one comes to know bubble sort is by hearing about it in an algorithms class, and by that point one already knows heapsort and quicksort, so there is no need for bubble sort.
To sum up, I fear bubble sort in production much, much less than, say, selection sort.
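For reference, here are minimal sketches of the two algorithms being contrasted. These are my own illustrative versions, not code from the repo under discussion:

```javascript
// Selection sort: repeatedly find the minimum of the unsorted
// suffix and swap it into position. Arguably the "intuitive" one.
function selectionSort(items) {
  for (var i = 0; i < items.length - 1; i++) {
    var min = i;
    for (var j = i + 1; j < items.length; j++) {
      if (items[j] < items[min]) {
        min = j;
      }
    }
    var tmp = items[i];
    items[i] = items[min];
    items[min] = tmp;
  }
  return items;
}

// Bubble sort: repeatedly swap adjacent out-of-order pairs
// until a full pass makes no swaps.
function bubbleSort(items) {
  var swapped;
  do {
    swapped = false;
    for (var i = 0; i < items.length - 1; i++) {
      if (items[i] > items[i + 1]) {
        var tmp = items[i];
        items[i] = items[i + 1];
        items[i + 1] = tmp;
        swapped = true;
      }
    }
  } while (swapped);
  return items;
}
```

Both are O(n^2) in the worst case and within a few lines of each other in length, which is the point: bubble sort buys no simplicity over selection sort.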
> The only possibility of one knowing bubble sort is by hearing about it on algorithms classes,
That's simply not true. I know for a fact that, for me personally, bubble sort was the most "intuitive" algorithm: when I was in elementary school we had a computer club, and we were given the task of figuring out how to sort a list. My implementation was a bubble sort, and at the time I had absolutely no knowledge of sorting algorithms at all.
Obviously different people will find different things "intuitive".
Try asking 10 colleagues to write bubble sort on the whiteboard right now. I bet at least half will write some sorting algorithm that is not bubble sort. In contrast, if you ask the same 10 to do quicksort, you'll end up with 10 quicksort implementations.
I think it's nice to have it included: it shows that merge sort is not significantly more difficult (once the idea of divide and conquer is clear) while being a lot faster.
I actually don't like the particular iterative version of merge sort; to me it's lacking in clarity. Then again, I like to use the ternary operator ?: on the left side of an assignment in JavaScript (as an array index or function selector), so I'm probably not the right judge when it comes to clarity ;)
I was thinking of including some sorting visualizations in my HTML5 canvas experiments, and this looks like a neat start. Thanks for posting it!
The Google Closure library also has quite a few data structure and algorithm implementations across different domains (2D affine transformations, tree implementations, etc.); I highly recommend checking it out.
As the other commenters have noted, this is not nearly complete (its completeness was rather disappointing, in fact), but that doesn't mean you shouldn't do it. It'd be a great learning exercise, and if you could add a twist (visualization on a webpage, etc.), it would be even more helpful to others.
Not that I don't appreciate these examples, but they don't have to be in JavaScript for somebody to learn the algorithm. I've always learned more when I've read pseudocode (or another language) and implemented the algorithm myself.
Also, to anyone planning to use these in code, please note the copyright. Even though it's MIT, you still have to add the credit to your code. *sigh* Gotta love copyright law ;D
function mergeSort(items) {
    // Guard with <= 1 so an empty array doesn't recurse forever
    if (items.length <= 1) {
        return items;
    }
    var middle = Math.floor(items.length / 2),
        left = items.slice(0, middle),
        right = items.slice(middle);
    return merge(mergeSort(left), mergeSort(right));
}
You don't need to run merge on the left and right partitions every call. You only need to do that if the left partition's last element is greater than the right partition's first element. Otherwise, you can just append the right partition to the left, since they are already in sorted order.
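In JavaScript terms, the suggestion amounts to something like the following sketch (the two-way `merge` helper is my own, written to match how the standard algorithm is usually presented):

```javascript
// Standard two-way merge of two sorted arrays.
function merge(left, right) {
  var result = [], i = 0, j = 0;
  while (i < left.length && j < right.length) {
    result.push(left[i] < right[j] ? left[i++] : right[j++]);
  }
  return result.concat(left.slice(i)).concat(right.slice(j));
}

function mergeSort(items) {
  if (items.length <= 1) {
    return items;
  }
  var middle = Math.floor(items.length / 2),
      left = mergeSort(items.slice(0, middle)),
      right = mergeSort(items.slice(middle));
  // The proposed shortcut: if the sorted halves are already in order
  // relative to each other, concatenation suffices.
  if (left[left.length - 1] <= right[0]) {
    return left.concat(right);
  }
  return merge(left, right);
}
```

The shortcut is a single extra comparison per call, so it can't hurt asymptotically; whether it helps depends on how often it fires, which the rest of the thread debates.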
Why does that make the algorithm incorrect? At most your method might be faster, although I doubt it, because the probability that the left partition's last element is no greater than the right's first (the case where the merge can be skipped) is small, and it gets exponentially smaller as the length of the sequences you're merging increases.
It /might/ be faster? What? Do you fully understand the invariant of the algorithm? The left should always be less than the right partition. There's no need to re-merge them each call. As it is, you are always comparing all elements in the left and right partitions with the merge when you don't have to!
Nope. What I said is still true. Here is my code in Java:
public List<Integer> mergeSort(List<Integer> list) {
    // Base case: if list has one (or fewer) elements, return it as is
    int size = list.size();
    if (size <= 1) {
        return list;
    }

    // Partition list in half
    int m = size / 2;
    List<Integer> left = new ArrayList<Integer>();
    List<Integer> right = new ArrayList<Integer>();
    for (int i = 0; i < m; i++) {
        left.add(list.get(i));
    }
    for (int i = m; i < size; i++) {
        right.add(list.get(i));
    }

    // Recursively merge sort each partition
    left = mergeSort(left);
    right = mergeSort(right);

    // If the last element of the left partition is greater than the
    // first element of the right partition, the partitions need to be
    // rearranged
    if (left.get(left.size() - 1) > right.get(0)) {
        return merge(left, right);
    } else {
        // Otherwise the left and right partitions are already in the
        // correct order
        left.addAll(right);
        return left;
    }
}

protected List<Integer> merge(List<Integer> left, List<Integer> right) {
    List<Integer> result = new ArrayList<Integer>();
    // While both containers are non-empty, move the lesser front element
    // to the result and remove it from its container
    while (!left.isEmpty() && !right.isEmpty()) {
        if (left.get(0) < right.get(0)) {
            result.add(left.remove(0));
        } else {
            result.add(right.remove(0));
        }
    }
    // The container that still has elements contains only elements
    // greater than those in the other container, already sorted, so
    // append them to the end of the result
    if (!left.isEmpty()) {
        result.addAll(left);
    } else {
        result.addAll(right);
    }
    return result;
}
That's what I'm saying. It's a minor optimization of the mergesort algorithm, and it's not the usual way that mergesort is presented. And I'm skeptical that it's really an optimization. Have you benchmarked it?
This condition is going to be true most of the time:
left.get(left.size() - 1) > right.get(0)
As the lists get longer (where this optimization could pay off), the condition is more and more likely to be true.
For the case of nearly sorted lists, it could be an important optimization. But it's not accurate to say that the original mergesort is incorrect.
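One quick, unscientific way to settle the probability question is to instrument a merge sort and count how often the condition fires. This is my own sketch, not code from the repo; the counter names are invented:

```javascript
// Counters: "needed" = left's last element exceeds right's first,
// so a real merge is required; "skippable" = a concat would suffice.
var needed = 0, skippable = 0;

function countingMergeSort(items) {
  if (items.length <= 1) return items;
  var middle = Math.floor(items.length / 2),
      left = countingMergeSort(items.slice(0, middle)),
      right = countingMergeSort(items.slice(middle)),
      result = [], i = 0, j = 0;
  if (left[left.length - 1] > right[0]) {
    needed++;
  } else {
    skippable++;
  }
  // Always perform the full merge; we only count, not skip.
  while (i < left.length && j < right.length) {
    result.push(left[i] < right[j] ? left[i++] : right[j++]);
  }
  return result.concat(left.slice(i)).concat(right.slice(j));
}

// Tally merges for a random array.
var random = [];
for (var k = 0; k < 1024; k++) {
  random.push(Math.random());
}
countingMergeSort(random);
console.log("merge needed:", needed, "skippable:", skippable);
```

On random input most merges, and essentially all of the large ones, need the full merge; on an already-sorted input every single merge is skippable, which is exactly the nearly-sorted case where the shortcut pays off.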
https://github.com/jashkenas/coffee-script/tree/master/examp...