Short lesson below, then interactive pieces you can run. Same idea everywhere: sorted data, compare the middle, throw away half the search space.
Most people explain why software feels fast by pointing at hardware: faster processors, more RAM, better networks. Those ingredients matter, but they are not the whole story—and they are not what makes huge problems feel small.
When you search the web, page through a large dataset, or query a big system, the underlying data can be enormous. If the program had to touch every record, every time, the math would break long before the silicon got “fast enough.” Useful answers still arrive quickly because the system avoids doing unnecessary work.
The real advantage is not raw speed alone; it is shrinking how much work has to happen—the algorithms and structures that skip most of the haystack instead of walking it end to end.
Pick binary or linear, edit the list and target, then Prepare steps. Use Step or Play / Pause.
If you change the list, target, or mode, press Prepare steps again.
Swipe sideways if the row is wider than the screen.
low = 0, high = n - 1
while low <= high:
    mid = low + (high - low) / 2    (whole-number division)
    if a[mid] == target: return mid
    else if a[mid] < target: low = mid + 1
    else: high = mid - 1
return -1
for i from 0 to n - 1:
    if a[i] == target: return i
return -1
Picture a phone book with one million names in random order. Someone asks you to find Rahul Sharma.
Where do you start?
Select an option and click "Check answer".
Let us look at worst-case checks:
If data becomes 10x bigger, what happens?
Select an option and click "Check answer".
What does O(n) mean?
In computer science, Big O notation describes how the amount of work grows as the input size grows. The letter O is read as “order of”; n is the size of your data (for example, number of names in the phonebook). So O(n) means: in the worst case, work grows in proportion to n.
That is why linear search is said to have linear growth: in Big O notation we write it as O(n).
Bar height shows worst-case comparisons: for linear search, that equals n. When n grows, the bar grows in the same proportion.
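As a sketch of that linear growth, here is a small Python version of linear search instrumented to count comparisons (the function name and the returned comparison count are just for illustration):

```python
def linear_search_comparisons(arr, target):
    """Linear search that also reports how many comparisons it made."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

# Worst case: the target is absent, so every one of the 1000 elements is checked.
index, comparisons = linear_search_comparisons(list(range(1000)), -1)
# index == -1, comparisons == 1000
```

Double the list and the worst-case comparison count doubles with it: that is exactly what the bar in the panel shows.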
Keep the same million names, but list them from A to Z, like a printed phonebook. Once the names stay in order, you do not need to start at page one every time: you can open near the middle, compare one name, and tell which half of the book you can skip next.
If you jump to the middle of a sorted phonebook, what should you do next?
Select an option and click "Check answer".
Binary search is a short loop you run on sorted data: look at the middle of the range that could still contain the answer, compare it to the value you want, and use the ordering to decide which side of the list can be ruled out. Then do the same thing again on the smaller range, until you find the value or nothing is left to search.
You are not walking the list from front to back; each comparison removes a large slice of remaining options. The gain comes from fewer steps overall, not from buying a faster chip so each step feels a little quicker.
Fifteen sorted slots (values 1 … 15). Target value 10. Active range is kept; the rest fades out after each decision.
What is a logarithm, and how do you read log₂(128)?
A logarithm answers: “To what exponent must I raise the base to get this number?” The little 2 in log₂ is the base (here, powers of two).
In symbols: log₂(128) = x means the same thing as 2ˣ = 128. So you are looking for the exponent x that turns 2 into 128.
Solve log₂(128). The walkthrough starts on its own; use Pause if you want to read a step, then Play to continue. It loops when finished.
Each ×2 along the chain is one doubling. Seven doublings turn 1 into 128, so the exponent is 7.
Binary search keeps halving the remaining range (divide by two, over and over). The number of halvings you need, for a list of size n, is in the same family as log₂(n)—that is why people call it logarithmic and write O(log n) for the growth rate.
That is why binary search is said to have logarithmic growth: you only need about as many middle checks as it takes to halve the list down to one candidate, which scales like log₂(n). In Big O notation we write that as O(log n)—the same logarithm as above.
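A tiny Python check of the same idea: count the halvings directly and compare with `math.log2` (the `halvings` helper is just for illustration):

```python
import math

def halvings(n):
    """How many times can n be halved (integer division) before reaching 1?"""
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

# Seven doublings turn 1 into 128, so seven halvings turn 128 into 1.
assert halvings(128) == 7
assert math.log2(128) == 7.0
```

For sizes that are not exact powers of two the two numbers differ by at most one, which is why the step counts later on are written with a "~".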
Now here is the most important idea:
Binary search works like this:
Every step cuts the problem in half.
If you have 32 items, how many times can you divide by 2 until only 1 remains?
Try the chain
Select an option and click “Check answer”.
Now think: what if we double the list to 64 items? How many divide-by-two steps until 1 remains?
Hint: 64 → 32 → 16 → 8 → 4 → 2 → 1 — count the arrows (each arrow is one halving).
Select an option and click “Check answer”.
Let’s compare at real-world sizes:
| Method | 1,000,000 items |
|---|---|
| Linear search | up to 1,000,000 steps |
| Binary search | ~20 steps |
About 20 steps matches log₂(1,000,000) rounded up. Exact counts vary a little by code, but binary search stays tiny next to linear.
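A rough way to check the ~20 figure: instrument binary search to count its middle probes and run it on a million sorted integers (the function name and probe counter are illustrative):

```python
def binary_search_counting(arr, target):
    """Binary search that also reports how many middle probes it used."""
    low, high = 0, len(arr) - 1
    probes = 0
    while low <= high:
        mid = low + (high - low) // 2
        probes += 1
        if arr[mid] == target:
            return mid, probes
        if target > arr[mid]:
            low = mid + 1
        else:
            high = mid - 1
    return -1, probes

million = list(range(1_000_000))
index, probes = binary_search_counting(million, 999_999)
# Even hunting for the very last element takes at most ~20 probes.
```

The exact probe count depends on where the target sits, but it can never exceed about log₂(1,000,000) ≈ 20.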
Longer comparison table — from 100 items to 1 billion.
If the data becomes 1 billion items, how many binary-search steps (same “halve the range” idea)?
Select an option and click “Check answer”.
After you check: the work does not explode when the list gets huge — it grows slowly, like a logarithm.
Binary search works only when the data is sorted, because the order is what makes each step a reliable rule for which side of the list can still contain the answer.
Once sorted, the list becomes a decision space: you compare the target against the middle element of the current range, and that single comparison rules out about half of the remaining data, so each probe divides the work still ahead by two.
Read the story on the left, then practice on the right. The playground at the top uses the same low, high, and middle index.
Picture a row of numbers in increasing order. You are looking for 23—not by scanning every cell, but by asking a few smart questions.
You do not yet know where 23 sits. So you look at the middle of the part you still trust. In the full list, the middle value is 16.
Ask: is 23 smaller than, equal to, or larger than 16? It is larger. Because the list is sorted, everything to the left of that middle is too small to be 23—you can mentally drop that whole left block.
What remains is the right side: 23, 38, 56, 72. You take the middle again. This time it lands on 23—so you are done.
Every time: peek at the middle of the current window, compare to the target, and throw away about half of what is still in play. That is the same rhythm the algorithm follows.
This only works when values are sorted along the row. If the same numbers were placed in random slots, you could not trust “drop the left half” after one comparison—use See the problem: jumbled list in the panel to watch that failure mode.
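A quick Python sketch of that failure mode: run a standard binary search over the same nine numbers in a jumbled order, and it can report a value as missing even though it is right there (the particular jumbled order below is just one illustrative arrangement):

```python
def binary_search(arr, target):
    """Standard binary search; only correct when arr is sorted."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = low + (high - low) // 2
        if arr[mid] == target:
            return mid
        if target > arr[mid]:
            low = mid + 1
        else:
            high = mid - 1
    return -1

jumbled = [38, 5, 23, 72, 2, 56, 8, 16, 12]  # same numbers, random slots

binary_search(jumbled, 23)          # -> -1: reported missing, yet 23 is at index 2
binary_search(sorted(jumbled), 23)  # -> 5: correct once the data is sorted
```

The comparisons are not wrong; the *conclusion* "drop the left half" is, because the unsorted layout breaks the guarantee each comparison relies on.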
Use the nine cells below. Choose Practice: sorted list and press Begin—you will answer twice, and the gray cells show what got ruled out. Then switch to See the problem: jumbled list for the same numbers in a random order.
Pick Practice: sorted list or See the problem: jumbled list, then press Begin.
Why does binary search insist on sorted data?
Select an option and click “Check answer”.
Same job—find a value—but different rules and different growth as data gets large.
Keep a window of indices [low, high], always look at the middle of that window, then shrink the window.
Middle index uses mid = low + (high - low) / 2 with whole-number division.
[2, 5, 8, 12, 16, 23, 38, 56, 72]
23
Window indices 0–8 → middle index 4, value 16
23 is greater than 16, so discard the left side; continue with indices 5–8.
Window indices 5–8 → middle index 6, value 23
Equal to the target → found at index 6.
Same algorithm everywhere: two bounds, one middle index, compare, shrink the window. Pick a language—behavior matches the interactive panel at the top of this page.
Language
public static int binarySearch(int[] arr, int target) {
    int low = 0, high = arr.length - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (arr[mid] == target) return mid;
        else if (target > arr[mid]) low = mid + 1;
        else high = mid - 1;
    }
    return -1;
}
def binary_search(arr: list[int], target: int) -> int:
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = low + (high - low) // 2
        if arr[mid] == target:
            return mid
        if target > arr[mid]:
            low = mid + 1
        else:
            high = mid - 1
    return -1
function binarySearch(arr, target) {
    let low = 0;
    let high = arr.length - 1;
    while (low <= high) {
        const mid = low + Math.floor((high - low) / 2);
        if (arr[mid] === target) return mid;
        if (target > arr[mid]) low = mid + 1;
        else high = mid - 1;
    }
    return -1;
}
function binarySearch(arr: number[], target: number): number {
    let low = 0;
    let high = arr.length - 1;
    while (low <= high) {
        const mid = low + Math.floor((high - low) / 2);
        if (arr[mid] === target) return mid;
        if (target > arr[mid]) low = mid + 1;
        else high = mid - 1;
    }
    return -1;
}
#include <vector>

int binarySearch(const std::vector<int>& arr, int target) {
    int low = 0, high = static_cast<int>(arr.size()) - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (arr[mid] == target) return mid;
        if (target > arr[mid]) low = mid + 1;
        else high = mid - 1;
    }
    return -1;
}
public static int BinarySearch(int[] arr, int target)
{
    int low = 0, high = arr.Length - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2;
        if (arr[mid] == target) return mid;
        if (target > arr[mid]) low = mid + 1;
        else high = mid - 1;
    }
    return -1;
}
func binarySearch(arr []int, target int) int {
    low, high := 0, len(arr)-1
    for low <= high {
        mid := low + (high-low)/2
        if arr[mid] == target {
            return mid
        }
        if target > arr[mid] {
            low = mid + 1
        } else {
            high = mid - 1
        }
    }
    return -1
}
pub fn binary_search(arr: &[i32], target: i32) -> Option<usize> {
    use std::cmp::Ordering::*;
    let mut low: isize = 0;
    let mut high: isize = arr.len() as isize - 1;
    while low <= high {
        let mid = low + (high - low) / 2;
        let u = mid as usize;
        match arr[u].cmp(&target) {
            Equal => return Some(u),
            Less => low = mid + 1,
            Greater => high = mid - 1,
        }
    }
    None
}
fun binarySearch(arr: IntArray, target: Int): Int {
    var low = 0
    var high = arr.lastIndex
    while (low <= high) {
        val mid = low + (high - low) / 2
        when {
            arr[mid] == target -> return mid
            target > arr[mid] -> low = mid + 1
            else -> high = mid - 1
        }
    }
    return -1
}
function binary_search(array $arr, int $target): int {
    $low = 0;
    $high = count($arr) - 1;
    while ($low <= $high) {
        $mid = $low + intdiv($high - $low, 2);
        if ($arr[$mid] === $target) {
            return $mid;
        }
        if ($target > $arr[$mid]) {
            $low = $mid + 1;
        } else {
            $high = $mid - 1;
        }
    }
    return -1;
}
def binary_search(arr, target)
  low = 0
  high = arr.length - 1
  while low <= high
    mid = low + (high - low) / 2
    return mid if arr[mid] == target
    if target > arr[mid]
      low = mid + 1
    else
      high = mid - 1
    end
  end
  -1
end
Simple code. Massive impact.
Linear work grows in proportion to how much data you have. The binary search algorithm grows with how many times you can cut the remaining range in half—that is a logarithm, so the step count stays small even when N explodes.
Try it
Move the slider: pretend the list has this many items
100 items
Linear search (worst case)
100 comparisons (worst case, scan the whole list)
Binary search algorithm
~7 middle probes (halve the range each time)
These numbers are rough worst-case steps, good for intuition.
| Data size | Linear (worst) | Binary (~halvings) |
|---|---|---|
| 100 | 100 steps | ~7 steps |
| 10,000 | 10,000 steps | ~14 steps |
| 1,000,000 | 1,000,000 steps | ~20 steps |
| 1,000,000,000 | 1,000,000,000 steps | ~30 steps |
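The table above can be reproduced in a few lines of Python; `worst_case_steps` is an illustrative helper, with `math.ceil(math.log2(n))` standing in for the "~halvings" column:

```python
import math

def worst_case_steps(n):
    """Worst case: linear scans all n items; binary needs ~ceil(log2(n)) probes."""
    return n, math.ceil(math.log2(n))

for n in (100, 10_000, 1_000_000, 1_000_000_000):
    linear, binary = worst_case_steps(n)
    print(f"{n:>13,} | {linear:>13,} steps | ~{binary} steps")
```

One column grows with n; the other grows with the number of doublings hidden inside n, which is why it barely moves.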
Check your read · linear
You double the dataset from N to 2N. Linear search worst case roughly…
Choose an option, then Check answer.
Check your read · binary
You double the dataset from N to 2N. Binary-search-style probes roughly…
Choose an option, then Check answer.
Each card is one sentence you can reuse when explaining performance.
About 20 binary-style probes versus up to a million linear checks in the worst case.
Roughly 30 halvings versus up to a billion one-by-one comparisons.
When N doubles, linear worst-case work about doubles; the binary search algorithm usually needs only about one more comparison round.
Same pattern: sorted or indexed data + discard half the remaining space of possibilities.
Databases
Indexes and storage engines keep keys ordered so lookups can jump in logarithmic time instead of scanning every row.
Networks
Routing tables and search structures often rely on ordering so systems can narrow where a packet or rule might live.
Apps & platforms
Autocomplete, sorted feeds, and “jump to” features reuse the same halving logic under the hood.
Developer tools
Version control, bisecting bugs, and searching sorted logs all benefit from cutting the candidate set in half.
Up-front cost
The binary search algorithm only “trusts” halves if data is sorted (or indexed so you can treat it like sorted). Sorting or building an index costs time and space before the first search.
What you buy
After that prep, each lookup stays cheap. In practice systems often pay once to sort or index, then answer millions of queries with tiny probe counts.
Binary search is a small loop, but it trains a bigger habit.
Avoid
Checking everything “just to be sure.”
Prefer
Eliminating impossible work as early as the rules allow.
Final thought
Modern systems feel fast when algorithms refuse to do unnecessary work.
Same hardware. Better method. Better result.