Merge sort: divide, conquer, stitch
A divide-and-conquer sort with a guarantee
Merge sort splits the array in half, recursively sorts each half, and then merges the two sorted halves into a single sorted whole. The merging step is the heart of the algorithm and the reason merge sort runs in Θ(n log n) every single time — no adversarial input can degrade it.
// Public entry point: allocate the auxiliary buffer once and reuse it.
public static void mergeSort(int[] a) {
    if (a.length < 2) return;
    int[] buf = new int[a.length];
    mergeSort(a, buf, 0, a.length - 1);
}

private static void mergeSort(int[] a, int[] buf, int lo, int hi) {
    if (lo >= hi) return;
    int mid = lo + (hi - lo) / 2; // overflow-safe midpoint
    mergeSort(a, buf, lo, mid);
    mergeSort(a, buf, mid + 1, hi);
    merge(a, buf, lo, mid, hi);
}

// Merge the sorted runs a[lo..mid] and a[mid+1..hi] back into a[lo..hi].
private static void merge(int[] a, int[] buf, int lo, int mid, int hi) {
    for (int k = lo; k <= hi; k++) buf[k] = a[k];
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi) {
        // <= keeps the sort stable: ties are taken from the left run.
        a[k++] = (buf[i] <= buf[j]) ? buf[i++] : buf[j++];
    }
    while (i <= mid) a[k++] = buf[i++];
    // any remaining buf[j..hi] elements are already in place in a
}
Why Θ(n log n) — draw the tree
The recursion tree has depth log₂ n. At each level, the total work across all merges is Θ(n). Therefore total work is Θ(n log n) — best, average, and worst case. Memory is Θ(n) for the auxiliary buffer.
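That level-by-level accounting is captured by the recurrence T(n) = 2T(n/2) + n: two half-size sorts plus a linear merge. A small sketch that evaluates the recurrence directly (the helper name t is my own); for powers of two it comes out to exactly n · log₂ n:

```java
public class MergeRecurrence {
    // T(n) = 2*T(n/2) + n: conquer both halves, then a linear-time merge.
    static long t(int n) {
        if (n < 2) return 0;       // a one-element array needs no work
        return 2L * t(n / 2) + n;  // two subproblems plus the merge pass
    }

    public static void main(String[] args) {
        // For n = 1024 = 2^10: total work is 1024 * 10 = 10240.
        System.out.println(t(1024));
        // prints: 10240
    }
}
```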
Where merge sort wins in the real world
- External sorting of files larger than RAM — merges align perfectly with sequential disk reads.
- Linked lists — merging is natural and requires no random access.
- Stable sort requirements — the algorithm is stable if merge picks from the left on equality.
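To see why linked lists and stability fall out of the same merge step, here is a sketch of merging two sorted singly linked lists (the Node and ListMerge types are hypothetical, not from the code above). No random access is needed, and taking from the left list on ties preserves stability:

```java
// Hypothetical minimal node type; merging sorted lists is pure pointer rewiring.
class Node {
    int val;
    Node next;
    Node(int val, Node next) { this.val = val; this.next = next; }
}

public class ListMerge {
    // Merge two already-sorted lists. Using <= takes ties from `a`, keeping the merge stable.
    static Node merge(Node a, Node b) {
        Node dummy = new Node(0, null); // sentinel to avoid a special case for the head
        Node tail = dummy;
        while (a != null && b != null) {
            if (a.val <= b.val) { tail.next = a; a = a.next; }
            else                { tail.next = b; b = b.next; }
            tail = tail.next;
        }
        tail.next = (a != null) ? a : b; // splice in whichever list remains
        return dummy.next;
    }

    public static void main(String[] args) {
        Node a = new Node(1, new Node(3, new Node(5, null)));
        Node b = new Node(2, new Node(4, null));
        for (Node n = merge(a, b); n != null; n = n.next) System.out.print(n.val + " ");
        // prints: 1 2 3 4 5
    }
}
```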
Takeaways
- Merge sort has guaranteed Θ(n log n) runtime; no adversarial input hurts it.
- Memory cost is linear — fine for most problems, a drawback for memory-tight ones.
- It is stable and plays beautifully with linked lists and external storage.
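To close the loop, a runnable sketch that wraps the lesson's code in a class with a small demo in main (the class name MergeSortDemo is my own):

```java
import java.util.Arrays;

public class MergeSortDemo {
    public static void mergeSort(int[] a) {
        if (a.length < 2) return;
        int[] buf = new int[a.length];
        mergeSort(a, buf, 0, a.length - 1);
    }

    private static void mergeSort(int[] a, int[] buf, int lo, int hi) {
        if (lo >= hi) return;
        int mid = lo + (hi - lo) / 2;
        mergeSort(a, buf, lo, mid);
        mergeSort(a, buf, mid + 1, hi);
        merge(a, buf, lo, mid, hi);
    }

    private static void merge(int[] a, int[] buf, int lo, int mid, int hi) {
        for (int k = lo; k <= hi; k++) buf[k] = a[k];
        int i = lo, j = mid + 1, k = lo;
        while (i <= mid && j <= hi) {
            a[k++] = (buf[i] <= buf[j]) ? buf[i++] : buf[j++];
        }
        while (i <= mid) a[k++] = buf[i++];
        // remaining buf[j..hi] is already in place
    }

    public static void main(String[] args) {
        int[] data = {5, 2, 9, 1, 5, 6};
        mergeSort(data);
        System.out.println(Arrays.toString(data));
        // prints: [1, 2, 5, 5, 6, 9]
    }
}
```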