4 Sorting, part 2: Divide-and-conquer algorithms

In the previous chapter we presented three simple and relatively slow sorting algorithms with quadratic running time. Now we will introduce two algorithms with considerably better performance: Mergesort and Quicksort, whose running times are linearithmic in the worst or average case. In addition, we will briefly discuss some special-purpose sorting algorithms, such as Radix sort.

Both of these algorithms make use of a fundamental strategy for algorithm design called divide and conquer. The basic idea is to divide a big problem (how to sort a big array) into subproblems that can be solved independently (how to sort two smaller arrays). The main difference between the algorithms is how they do the dividing: Mergesort divides the array into two equal-sized halves, while Quicksort divides the array into the big values and the small values. Radix sort, in turn, divides the problem by working on one digit of the key at a time.
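To make the divide-and-conquer pattern concrete, here is a minimal sketch of Mergesort in Python (the names `merge_sort` and `merge` are illustrative, not taken from the chapter's code): split the array into two halves, sort each half independently with a recursive call, and combine the two sorted halves.

```python
def merge_sort(a):
    """Sort a list by divide and conquer: split, recurse, merge."""
    if len(a) <= 1:                 # base case: nothing left to divide
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])      # solve the two subproblems independently
    right = merge_sort(a[mid:])
    return merge(left, right)       # combine the solved subproblems

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])         # at most one of these is non-empty
    result.extend(right[j:])
    return result
```

A Quicksort sketch would have the same recursive shape, but the work happens before the recursive calls (partitioning into small and large values) rather than after them (merging).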

We will also encounter some new algorithm analysis techniques. Quicksort illustrates that an algorithm can have an average case whose growth rate is significantly smaller than that of its worst case. Finally, both algorithms can be improved by code tuning: falling back to a simpler algorithm (such as Insertion sort) when the subarray to be sorted is small enough.
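As a sketch of that tuning idea (the cutoff value 16, the helper names, and the simple last-element pivot below are arbitrary illustrative choices, not prescribed by the chapter), a recursive Quicksort can hand small subarrays over to Insertion sort:

```python
CUTOFF = 16  # illustrative threshold; a good value is found by experiment

def insertion_sort(a, lo, hi):
    """Sort a[lo..hi] in place; fast when the range is short."""
    for i in range(lo + 1, hi + 1):
        x, j = a[i], i - 1
        while j >= lo and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x

def hybrid_quicksort(a, lo=0, hi=None):
    """Quicksort that falls back to Insertion sort on small subarrays."""
    if hi is None:
        hi = len(a) - 1
    if hi - lo + 1 <= CUTOFF:
        insertion_sort(a, lo, hi)   # small subarray: the simpler algorithm wins
        return
    pivot = a[hi]                   # simple last-element pivot (not tuned)
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]       # put the pivot between small and large values
    hybrid_quicksort(a, lo, i - 1)
    hybrid_quicksort(a, i + 1, hi)
```

The recursion stops dividing as soon as the subproblem is small, because for tiny arrays the constant-factor overhead of recursion outweighs Insertion sort's quadratic growth.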