
Big-O Notation ✅

We express complexity (time, space, etc.) using big-O notation.

For a problem of size N:

  • a constant-time algorithm is "order 1": \(O(1)\)
  • a linear-time algorithm is "order N": \(O(N)\)
  • a quadratic-time algorithm is "order N squared": \(O(N^2)\)

Big O, big omega, and big theta describe the upper, lower, and tight bounds for the runtime.
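
Stated precisely (the standard textbook definitions, with \(c\) a positive constant and \(n_0\) a threshold input size):

\[ f(N) = O(g(N)) \iff f(N) \le c \cdot g(N) \text{ for all } N \ge n_0 \\ f(N) = \Omega(g(N)) \iff f(N) \ge c \cdot g(N) \text{ for all } N \ge n_0 \\ f(N) = \Theta(g(N)) \iff f(N) \text{ is both } O(g(N)) \text{ and } \Omega(g(N)) \]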

Space Complexity

Time is not the only thing that matters in an algorithm. We might also care about the amount of memory, or space, required by an algorithm.

Space complexity is a parallel concept to time complexity. If we need to create an array of size \(n\), this will require \(O(n)\) space. If we need a two-dimensional array of size \(n \times n\), this will require \(O(n^2)\) space.
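
A minimal Swift sketch of both cases (the zero fill is just a placeholder):

func makeBuffer(n: Int) -> [Int] {
  // allocates n integers: O(n) space
  return [Int](repeating: 0, count: n)
}

func makeGrid(n: Int) -> [[Int]] {
  // allocates n rows of n integers each: O(n^2) space
  return [[Int]](repeating: [Int](repeating: 0, count: n), count: n)
}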

Drop the Constants

  • \(O(2N)\) becomes \(O(N)\)
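
For example, a single pass that does two constant-time operations per element does \(2N\) work but is still \(O(N)\); a minimal sketch, assuming `array` is an array of integers:

var minValue = Int.max
var maxValue = Int.min
for x in array {
  // two constant-time checks per element: O(2N), which we call O(N)
  if x < minValue { minValue = x }
  if x > maxValue { maxValue = x }
}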

Drop the Non-Dominant Terms

  • \(O(N^2 + N)\) becomes \(O(N^2)\)
  • \(O(N + log N)\) becomes \(O(N)\)
  • \(O(5 \cdot 2^N + 1000N^{100})\) becomes \(O(2^N)\)


We might still have a sum in a runtime. For example, the expression \(O(B^2 + A)\) cannot be reduced (without some special knowledge of A and B).

Multi-Part Algorithms

Add

for (int a : arrA) {
  print(a);
}

for (int b : arrB) {
  print(b);
}

\(O(A + B)\)

Multiply

for (int a : arrA) {
  for (int b : arrB) {
    print(a + "," + b);
  }
}

\(O(A * B)\)

Log N Runtimes

Deriving the \(O(\log N)\) runtime of binary search:

  • Let's say binary search terminates after k iterations (if it terminates after 3 iterations, then k = 3).
  • At each iteration, the array is divided in half. Let's say the length of the array at a given iteration is n.
  • At iteration 1, length of array = \(n\)
  • At iteration 2, length of array = \(n/2\)
  • At iteration 3, length of array = \((n/2)/2 = n/2^2\)
  • Therefore, after iteration k, length of array = \(n/2^k\)
  • We also know that after k divisions, the length of the array becomes 1.
  • Therefore \(n/2^k = 1\), which gives \(n = 2^k\).
  • Applying the log function to both sides: \(\log_2(n) = \log_2(2^k)\), so \(\log_2(n) = k \log_2(2)\).
  • Since \(\log_a(a) = 1\), this gives \(k = \log_2(n)\).

Hence, the time complexity of binary search is \(O(\log_2 n)\).

Alternate Derivation

\[ 1 = N / 2^x \\ 2^x = N \\ \log_2(2^x) = \log_2 N \\ x \cdot \log_2(2) = \log_2 N \\ x \cdot 1 = \log_2 N \]

Below are some examples for each category of performance.

\(O(1)\)

The most common example with \(O(1)\) complexity is accessing an array index.

let value = array[5]

Another example of \(O(1)\) is pushing to and popping from a stack.
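
A minimal sketch using a Swift array as a stack; `append` is amortized \(O(1)\) and `removeLast` is \(O(1)\):

var stack = [Int]()
stack.append(10)              // push: amortized O(1)
stack.append(20)
let top = stack.removeLast()  // pop: O(1); top == 20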

\(O(log N)\)

var j = 1;
while (j < n) {
  // do constant time stuff
  j *= 2;
}

Instead of incrementing by one, `j` doubles on each pass, so the remaining range halves with every iteration; the loop runs about \(\log_2 n\) times.

Binary Search Algorithm is an example of \(O(log N)\) complexity.
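
As a sketch, a standard iterative binary search, assuming the input array is sorted in ascending order:

func binarySearch(_ array: [Int], for target: Int) -> Int? {
  var low = 0
  var high = array.count - 1
  while low <= high {
    let mid = low + (high - low) / 2   // midpoint, written to avoid overflow
    if array[mid] == target {
      return mid
    } else if array[mid] < target {
      low = mid + 1                    // discard the lower half
    } else {
      high = mid - 1                   // discard the upper half
    }
  }
  return nil                           // target not found
}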

\(O(n)\)

for (let index = 0; index < array.length; index++) {
  const element = array[index];
}

Array Traversal and Linear Search are examples of \(O(n)\) complexity.
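
A linear search sketch: in the worst case (target absent or in the last slot) it inspects all \(n\) elements.

func linearSearch(_ array: [Int], for target: Int) -> Int? {
  for (index, element) in array.enumerated() {
    if element == target { return index }  // at most n comparisons
  }
  return nil
}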

\(O(N log N)\)

for (let index = 0; index < array.length; index++) {
  let j = 1;
  while (j < array.length) {
    j *= 2;
    // do constant time stuff
  }
}

OR

for i in stride(from: 0, to: n, by: 1) {
  func index(after i: Int) -> Int? { // multiplies `i` by 2 until `i` >= `n`
    return i < n ? i * 2 : nil
  }
  for j in sequence(first: 1, next: index(after:)) {
    // do constant time stuff
  }
}

Quick Sort, Merge Sort and Heap Sort are examples of \(O(N log N)\) complexity.
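
As an illustration, a top-down merge sort sketch: the array is halved \(\log n\) times, and each level does \(O(n)\) merge work.

func mergeSort(_ array: [Int]) -> [Int] {
  guard array.count > 1 else { return array }
  let mid = array.count / 2
  let left = mergeSort(Array(array[..<mid]))   // log n levels of splitting
  let right = mergeSort(Array(array[mid...]))
  return merge(left, right)                    // O(n) work per level
}

func merge(_ left: [Int], _ right: [Int]) -> [Int] {
  var result: [Int] = []
  var i = 0, j = 0
  while i < left.count && j < right.count {
    if left[i] <= right[j] {
      result.append(left[i]); i += 1
    } else {
      result.append(right[j]); j += 1
    }
  }
  result.append(contentsOf: left[i...])   // one side may have leftovers
  result.append(contentsOf: right[j...])
  return result
}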

\(O(n^2)\)

for (let i = 0; i < array.length; i++) {
  for (let j = 0; j < array.length; j++) {
    // do constant time stuff
  }
}

Traversing a simple 2-D array and Bubble Sort are examples of \(O(n^2)\) complexity.
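
A bubble sort sketch; the nested passes over the array give the \(O(n^2)\) behaviour:

func bubbleSort(_ array: inout [Int]) {
  for pass in 0..<array.count {
    // each pass bubbles the largest remaining element to the end
    for i in 0..<array.count - pass - 1 {
      if array[i] > array[i + 1] {
        array.swapAt(i, i + 1)
      }
    }
  }
}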

\(O(n^3)\)

for (let i = 0; i < array.length; i++) {
  for (let j = 0; j < array.length; j++) {
    for (let k = 0; k < array.length; k++) {
      const element = array[k];
    }
  }
}
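
Naive multiplication of two square \(n \times n\) matrices is a classic \(O(n^3)\) example; a sketch, with the matrices stored as nested arrays:

func multiply(_ a: [[Double]], _ b: [[Double]]) -> [[Double]] {
  let n = a.count
  var result = [[Double]](repeating: [Double](repeating: 0, count: n), count: n)
  for i in 0..<n {
    for j in 0..<n {
      for k in 0..<n {
        result[i][j] += a[i][k] * b[k][j]   // three nested loops: n^3 multiplications
      }
    }
  }
  return result
}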

\(O(2^N)\)

Algorithms with running time \(O(2^N)\) are often recursive algorithms that solve a problem of size N by recursively solving two smaller problems of size N-1. The following example prints all the moves necessary to solve the famous "Towers of Hanoi" problem for N disks.

func solveHanoi(n: Int, from: String, to: String, spare: String) {
  guard n >= 1 else { return }
  // move the top n-1 disks out of the way, move disk n, then stack them back on top
  if n > 1 {
    solveHanoi(n: n - 1, from: from, to: spare, spare: to)
  }
  print("Move disk \(n) from \(from) to \(to)")
  if n > 1 {
    solveHanoi(n: n - 1, from: spare, to: to, spare: from)
  }
}
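
To see why this is \(O(2^N)\): each call for \(n\) disks makes two recursive calls for \(n - 1\) disks plus one move, so the move count satisfies:

\[ T(n) = 2T(n-1) + 1, \quad T(1) = 1 \implies T(n) = 2^n - 1 \]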

\(O(n!)\)

The most trivial example of a function that takes \(O(n!)\) time is given below.

const factorial = n => {
  let num = n;

  if (n === 0) return 1;
  // n recursive calls at each level: n * (n-1) * ... * 1 = n! calls in total
  for (let i = 0; i < n; i++) {
    num = n * factorial(n - 1);
  }

  return num;
};
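
Another standard \(O(n!)\) pattern is generating every permutation of a list; a recursive Swift sketch (the helper name `permutations` is just for illustration):

func permutations(_ items: [Int]) -> [[Int]] {
  guard items.count > 1 else { return [items] }
  var result: [[Int]] = []
  for (index, item) in items.enumerated() {
    var rest = items
    rest.remove(at: index)
    // n choices here times (n-1)! arrangements of the rest: n! results in total
    for perm in permutations(rest) {
      result.append([item] + perm)
    }
  }
  return result
}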

Often you don't need math to figure out what the Big-O of an algorithm is but you can simply use your intuition. If your code uses a single loop that looks at all n elements of your input, the algorithm is \(O(n)\). If the code has two nested loops, it is \(O(n^2)\). Three nested loops gives \(O(n^3)\), and so on.

Note that Big-O notation is an estimate and is only really useful for large values of n. For example, the worst-case running time for the insertion sort algorithm is \(O(n^2)\). In theory that is worse than the running time for merge sort, which is \(O(n log n)\). But for small amounts of data, insertion sort is actually faster, especially if the array is partially sorted already!

If you find this confusing, don't let this Big-O stuff bother you too much. It's mostly useful when comparing two algorithms to figure out which one is better. But in the end you still want to test in practice which one really is the best. And if the amount of data is relatively small, then even a slow algorithm will be fast enough for practical use.

Big-O comparison Table

| Big-O | Name | Description |
| --- | --- | --- |
| O(1) | constant | This is the best. The algorithm always takes the same amount of time, regardless of how much data there is. Example: looking up an element of an array by its index. |
| O(log n) | logarithmic | Pretty great. These kinds of algorithms halve the amount of data with each iteration. If you have 100 items, it takes about 7 steps to find the answer. With 1,000 items, it takes 10 steps. And 1,000,000 items only take 20 steps. This is super fast even for large amounts of data. Example: binary search. |
| O(n) | linear | Good performance. If you have 100 items, this does 100 units of work. Doubling the number of items makes the algorithm take exactly twice as long (200 units of work). Example: sequential search. |
| O(n log n) | "linearithmic" | Decent performance. This is slightly worse than linear but not too bad. Example: the fastest general-purpose sorting algorithms. |
| O(n^2) | quadratic | Kinda slow. If you have 100 items, this does 100^2 = 10,000 units of work. Doubling the number of items makes it four times slower (because 2 squared equals 4). Example: algorithms using nested loops, such as insertion sort. |
| O(n^3) | cubic | Poor performance. If you have 100 items, this does 100^3 = 1,000,000 units of work. Doubling the input size makes it eight times slower. Example: matrix multiplication. |
| O(2^n) | exponential | Very poor performance. You want to avoid these kinds of algorithms, but sometimes you have no choice. Adding just one bit to the input doubles the running time. Example: traveling salesperson problem. |
| O(n!) | factorial | Intolerably slow. It literally takes a million years to do anything. |

Comparison of Big O computations