## Lecture 17: Quicksort

The third fast sorting algorithm is quicksort. Like mergesort, quicksort is a divide and conquer algorithm.

### High-level code

```
smallValue = 7;

int[] quicksort(int[] A) {
    if (A.length < smallValue)
        return insertionSort(A);   // Base case of the recursion
    p = a pivot for A, a number in A that is more or less in the middle
        of the values of A;
    B1 = an array with the numbers less than p;
    B2 = an array with the numbers greater than p;
    C1 = quicksort(B1);            // Recursively solve the subproblems
    C2 = quicksort(B2);
    D = C1 followed by p followed by C2;   // Combine
    return D;
}
```
We'll come back later to the question of how the pivot is chosen. In the example below, I'll just pick it "magically" to be some number more or less in the middle.

As with mergesort, when you get down to arrays of size 7 or so, you just turn to an insertion sort for efficiency, rather than recursing down to size 1.
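The high-level routine can be sketched in runnable Python. This is an illustration, not the in-place version developed later: `SMALL_VALUE` and the middle-element pivot rule are choices made here, and a third list collects values equal to the pivot so duplicates are handled.

```python
SMALL_VALUE = 7

def quicksort(a):
    """Out-of-place quicksort, following the high-level pseudocode."""
    if len(a) < SMALL_VALUE:
        return sorted(a)                        # stand-in for insertionSort
    pivot = a[len(a) // 2]                      # hope this is near the median
    b1 = [x for x in a if x < pivot]            # the small numbers
    b2 = [x for x in a if x > pivot]            # the large numbers
    equal = [x for x in a if x == pivot]        # copies of the pivot itself
    return quicksort(b1) + equal + quicksort(b2)
```

For example, `quicksort([31,41,59,26,53])` returns `[26,31,41,53,59]`.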

### Example:

Using smallValue = 4.
```
A = [31,41,59,26,53,58,27,18,28,45,9,14,42,12,17]

quicksort(A)
pivot = 31;
B1 = [26,27,18,28,9,14,12,17]
B2 = [41,59,53,58,45,42]

C1 = quicksort([26,27,18,28,9,14,12,17])
pivot = 17;
B1 = [9,14,12]
B2 = [26,27,18,28]

C1=quicksort([9,14,12]) = [9,12,14]
C2=quicksort([26,27,18,28]) = [18,26,27,28]
D = append(C1,[pivot],C2) = [9,12,14,17,18,26,27,28]

C1 = [9,12,14,17,18,26,27,28]

C2 = quicksort([41,59,53,58,45,42])
pivot = 53;
B1 = [41,45,42]
B2 = [59,58]

C1 = quicksort([41,45,42]) = [41,42,45]
C2 = quicksort([59,58]) = [58,59]
D = append(C1,[pivot],C2) = [41,42,45,53,58,59]

C2 = [41,42,45,53,58,59]

D = append(C1,[31],C2) = [9,12,14,17,18,26,27,28,31,41,42,45,53,58,59]
```

### In-place version

Unlike mergesort, quicksort can be written as an in-place algorithm, just using the space in the original array. In dividing the array into the small numbers and the big numbers, it puts the small numbers at the front of the array and the big numbers at the end of the array, and the pivot at the center.

To do this, it uses a two-fingered method. First, move the pivot itself to the front of the array, to get it out of the way. Then run the left finger up from the left, until you find a number larger than the pivot. Run the right finger from the right, until you find a number smaller than the pivot. Swap these. Continue on, until the two fingers meet. Then put the pivot into its right place. This is called the "partition" function. (There are many different ways of writing the partition function, none of them quite as elegant as one would like.)

```
// Partition the elements of array a between indices l and u inclusive,
// using the value initially at a[ipivot].
// Returns the new position of the pivot.

partition(a, l, u, ipivot) {
    pivot = a[ipivot];
    swap(a[ipivot], a[l]);   // Move the pivot to the front, out of the way.
    i = l+1;                 // left finger
    j = u;                   // right finger
    while (i < j) {
        while (i < j && a[i] < pivot) i++;
        while (j > i && a[j] > pivot) j--;
        if (i < j) {
            swap(a[i], a[j]);
            i++;
            j--;
        }
    }  // the partitioning is complete.
    if (a[i] < pivot)
        ipivot = i;          // i is the last small number
    else
        ipivot = i-1;        // i-1 is the last small number
    swap(a[l], a[ipivot]);   // swap the pivot into the middle position
    return ipivot;
}
```

#### Example of partition

```
A = [31,41,59,26,53,58,27,18,28,45,9,14,42,12,17]
Choose ipivot = 1 (pivot = 41).

partition(A,0,14,1)

swap(A[ipivot],A[0]);
A = [41,31,59,26,53,58,27,18,28,45,9,14,42,12,17]
i = 1;
j = 14;
Move i up to 2 (A[i] = 59 > pivot)
Keep j at 14 (A[j] = 17 < pivot)
swap(A[i],A[j]); i++; j--;
A = [41,31,17,26,53,58,27,18,28,45,9,14,42,12,59]
(now i = 3, j = 13)

Move i up to 4 (A[i] = 53 > pivot)
Keep j at 13 (A[j] = 12 < pivot)
swap(A[i],A[j]); i++; j--;
A = [41,31,17,26,12,58,27,18,28,45,9,14,42,53,59]
(now i = 5, j = 12)

Keep i at 5 (A[i] = 58 > pivot)
Move j back to 11 (A[j] = 14 < pivot)
swap(A[i],A[j]); i++; j--;
A = [41,31,17,26,12,14,27,18,28,45,9,58,42,53,59]
(now i = 6, j = 10)

Move i up to 9 (A[i] = 45 > pivot)
Keep j at 10 (A[j] = 9 < pivot)
swap(A[i],A[j]); i++; j--;
A = [41,31,17,26,12,14,27,18,28,9,45,58,42,53,59]
(now i = 10, j = 9, so the outer loop exits)

A[i] = 45 > pivot, so ipivot = i-1 = 9;
swap(A[l],A[ipivot]);
A = [9,31,17,26,12,14,27,18,28,41,45,58,42,53,59]
return ipivot = 9, the index of the pivot.
Note that everything before index 9 is less than the pivot (41) and
everything after it is greater.
```
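The pseudocode translates almost line for line into runnable Python. This is a sketch assuming distinct values, as in the lecture's examples; running it on the example array reproduces the trace above.

```python
def partition(a, l, u, ipivot):
    """Partition a[l..u] (inclusive) around the value initially at a[ipivot].

    Smaller values end up to the pivot's left, larger values to its right.
    Returns the pivot's final index.
    """
    pivot = a[ipivot]
    a[ipivot], a[l] = a[l], a[ipivot]   # move the pivot out of the way
    i, j = l + 1, u                     # left and right "fingers"
    while i < j:
        while i < j and a[i] < pivot:   # walk i up to a large number
            i += 1
        while j > i and a[j] > pivot:   # walk j down to a small number
            j -= 1
        if i < j:
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    # i is either the last small number or the first large one
    ipivot = i if a[i] < pivot else i - 1
    a[l], a[ipivot] = a[ipivot], a[l]   # drop the pivot into place
    return ipivot
```

On the example array, `partition(A, 0, 14, 1)` returns 9 and leaves the pivot 41 at index 9.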

#### Revised main routine

```
smallValue = 7;

// quicksort array A between indices l and u inclusive

void quicksort(int[] A, int l, int u) {
    if (u - l < smallValue) {
        insertionSort(A, l, u);    // Base case of the recursion
        return;
    }
    ipivot = choose an index between l and u for a pivot;
    m = partition(A, l, u, ipivot);  // Divide into small and large numbers
    quicksort(A, l, m-1);            // Recursively sort the small numbers
    quicksort(A, m+1, u);            // Recursively sort the large numbers
}  // the combination step is nothing, because the numbers are
   // already where they should be.
```
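Here is a runnable Python sketch of the whole in-place routine. The cutoff value, the function names, and the pivot rule (always index l, as in Example 2 below) are illustrative choices; partition is the two-finger version from the pseudocode.

```python
SMALL_VALUE = 7

def insertion_sort(a, l, u):
    """Sort a[l..u] (inclusive) in place by insertion sort."""
    for k in range(l + 1, u + 1):
        x = a[k]
        m = k - 1
        while m >= l and a[m] > x:
            a[m + 1] = a[m]
            m -= 1
        a[m + 1] = x

def partition(a, l, u, ipivot):
    """Two-finger partition of a[l..u] around the value at a[ipivot]."""
    pivot = a[ipivot]
    a[ipivot], a[l] = a[l], a[ipivot]
    i, j = l + 1, u
    while i < j:
        while i < j and a[i] < pivot:
            i += 1
        while j > i and a[j] > pivot:
            j -= 1
        if i < j:
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    ipivot = i if a[i] < pivot else i - 1
    a[l], a[ipivot] = a[ipivot], a[l]
    return ipivot

def quicksort(a, l, u):
    """Sort a[l..u] (inclusive) in place."""
    if u - l < SMALL_VALUE:
        insertion_sort(a, l, u)     # base case (also handles empty ranges)
        return
    m = partition(a, l, u, l)       # pivot index chosen as l here
    quicksort(a, l, m - 1)          # recursively sort the small numbers
    quicksort(a, m + 1, u)          # recursively sort the large numbers
```

Call it as `quicksort(a, 0, len(a) - 1)`; there is no combine step, since everything is already in place.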

#### Example 2

Take smallValue = 3, and choose ipivot to be l.
```
A = [31,41,59,26,53,58,27,18,28,45,9,14,42,12,17]

qs(0,14)
pivot = 31
partition(0,14,0)
A = [[28,17,12,26,14,9,27,18]  31  [45,58,53,42,59,41]]
partition returns 8

qs(0,7)
pivot = 28
partition(0,7,0)
A = [[[18,17,12,26,14,9,27], 28] 31 [45,58,53,42,59,41]]
partition returns 7

qs(0,6)
pivot = 18
partition(0,6,0)
A = [[[[14,17,12,9], 18, [26,27]],28],31,[45,58,53,42,59,41]]
partition returns 4

qs(0,3)
pivot = 14
partition(0,3,0)
A = [[[[[12,9],14,[17]], 18, [26,27]],28],31,[45,58,53,42,59,41]]
partition returns 2

qs(0,1) calls insertion sort
A = [[[[[9,12],14,[17]], 18, [26,27]],28],31,[45,58,53,42,59,41]]
qs(0,1) returns
qs(3,3) calls insertion sort (one element, nothing to do)
qs(0,3) returns

qs(5,6) calls insertion sort
A = [[9,12,14,17,18,26,27,28],31,[45,58,53,42,59,41]]
qs(5,6) returns
qs(0,6) returns
qs(0,7) returns

qs(9,14)
pivot = 45
partition(9,14,9)
A = [[9,12,14,17,18,26,27,28],31,[[42,41],45,[53,59,58]]]
partition returns 11

qs(9,10) calls insertion sort
A = [[9,12,14,17,18,26,27,28],31,[[41,42],45,[53,59,58]]]
qs(9,10) returns

qs(12,14) calls insertion sort
A = [[9,12,14,17,18,26,27,28],31,[[41,42],45,[53,58,59]]]
qs(12,14) returns
qs(9,14) returns
qs(0,14) returns
A = [[9,12,14,17,18,26,27,28],31,[[41,42],45,[53,58,59]]]
```

### Running time analysis

The running time for one execution of partition is always O(N).

The running time of quicksort depends on whether the pivot actually divides the data more or less in half.

#### Best case

The best case is where you always succeed in picking the actual middle element as the pivot. In that case, the two recursive calls are each over an array half the size of the original. The analysis is the same as mergesort.
So the overall running time is O(N*log(N)).
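The even-split analysis can be written as the same recurrence that governs mergesort (a sketch; here c stands for the constant hidden in the O(N) cost of partition):

```latex
T(N) \le 2\,T(N/2) + cN
\quad\Longrightarrow\quad
T(N) = O(N \log N)
```

since the recursion has about \(\log_2 N\) levels and each level does at most \(cN\) work in total.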

You can show by a similar argument that if there is a fixed maximum "lopsidedness" of the partition --- for instance, if the division is never worse than 90% on one side and 10% on the other --- then the overall running time is still O(N*log(N)), though the constant factor is larger.

#### Worst case

The worst case is where you always pick the pivot to be the largest or smallest element. In that case, the recursive calls are over an array of size 0 and an array of size N-1. The overall time is then proportional to N + (N-1) + (N-2) + ... + 3 + 2 + 1 = N(N+1)/2 = O(N^2); that is, no better than the "easy" sorting routines, and in fact almost certainly worse in the constant factor and lower-order terms, because of the greater overhead of the more complex routine.
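The quadratic blow-up is easy to observe. The sketch below is an instrumented quicksort (first-element pivot, no insertion-sort cutoff, every element comparison counted through a helper); run on already-sorted input, the comparison count roughly quadruples each time N doubles, as N^2/2 predicts.

```python
def count_comparisons(data):
    """Quicksort a copy of `data` with first-element pivot, counting comparisons."""
    a = list(data)
    count = 0

    def less(x, y):
        nonlocal count
        count += 1
        return x < y

    def partition(l, u):
        pivot = a[l]                     # worst-case pivot rule on sorted input
        i, j = l + 1, u
        while i < j:
            while i < j and less(a[i], pivot):
                i += 1
            while j > i and less(pivot, a[j]):
                j -= 1
            if i < j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        ipivot = i if less(a[i], pivot) else i - 1
        a[l], a[ipivot] = a[ipivot], a[l]
        return ipivot

    def qs(l, u):
        if u <= l:                       # size 0 or 1: already sorted
            return
        m = partition(l, u)
        qs(l, m - 1)
        qs(m + 1, u)

    qs(0, len(a) - 1)
    assert a == sorted(data)             # sanity check: it really sorts
    return count
```

On sorted input, `count_comparisons(range(2 * n))` comes out close to four times `count_comparisons(range(n))`; on shuffled input the same routine does far fewer comparisons.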

#### Average case

One can prove that in the average case the running time is O(N*log(N)). "Average" can be in either of two senses:
• You pick the pivot at random.
• You pick the pivot systematically (e.g. A[0]) but the initial order of the data is random.
Moreover, in practice, on average quicksort runs faster than either heapsort or mergesort (hence the name).

#### How do you pick the pivot?

So how can you pick the pivot to guarantee an O(N * log(N)) running time?

For practical purposes, you can't.

In theory, there is an O(N) algorithm to find the median element of an array, and if you use that to choose the pivot, you get an O(N * log(N)) worst case version of quicksort. However, the median algorithm is so complicated that this version of quicksort will run way slower than heapsort or mergesort. So there's no point to it.

Here are a few strategies that people use in practice.

• Take the first, the last, and the middle element and take the median of those as the pivot. This is likely to be reasonably close to the median if the data is random, and likely to be the median if the data is originally sorted or nearly sorted.
• Take a random element.
• Take K random elements for some small K and use the median of those.
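The first strategy can be sketched in a few lines of Python; the helper name `median_of_three_index` is my own, and the returned index would be passed to partition as ipivot.

```python
def median_of_three_index(a, l, u):
    """Index of the median of a[l], a[middle], and a[u] -- a practical pivot choice."""
    mid = (l + u) // 2
    # sort the three (value, index) pairs and keep the middle one's index
    candidates = sorted([(a[l], l), (a[mid], mid), (a[u], u)])
    return candidates[1][1]
```

On already-sorted data this picks the true median of the range, avoiding the worst case that a plain first-element pivot would hit.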