
Understanding and calculating Big O notation

Algorithm complexity is a key concept in computer science that allows us to evaluate code performance and scalability. In this article, we will discuss how to calculate algorithm complexity in Big O notation and how to find faster, more optimal solutions. Code examples are provided in TypeScript.

What is algorithm complexity?

Algorithm complexity is a measure of the resources required to execute an algorithm, most often in terms of time (execution time) or space (memory). The most popular notation used to express algorithm complexity is Big O notation.


Big O Notation

Big O notation describes how the number of operations performed by an algorithm grows with the size of the input (n). Some basic Big O notations are:

  • O(1) - constant complexity, the same number of operations regardless of the input size
  • O(n) - linear complexity, grows linearly with the size of the input
  • O(n^2) - quadratic complexity, grows quadratically with the size of the input
  • O(log n) - logarithmic complexity, grows logarithmically with the size of the input

Calculating Big O: examples

O(1) - constant complexity

An algorithm with constant complexity always performs the same number of operations, regardless of the input size.

function getFirstElement(arr: number[]): number {
  // This function always takes the first element of the array,
  // regardless of the array's size. Hence, it performs a single operation.
  // Time complexity: O(1)
  return arr[0]
}

O(n) - linear complexity

An algorithm with linear complexity performs operations directly proportional to the size of the input.

function sumArray(arr: number[]): number {
  let sum = 0
  for (let i = 0; i < arr.length; i++) {
    // This loop iterates over each element of the array exactly once.
    // Therefore, the number of operations grows linearly with the array size.
    // Time complexity: O(n)
    sum += arr[i]
  }
  return sum
}

O(n^2) - quadratic complexity

An algorithm with quadratic complexity performs operations directly proportional to the square of the input size.

function bubbleSort(arr: number[]): number[] {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length - i - 1; j++) {
      // The outer loop runs n times, and for each iteration of the outer loop,
      // the inner loop runs approximately n times as well.
      // Therefore, the total number of operations is roughly n * n.
      // Time complexity: O(n^2)
      if (arr[j] > arr[j + 1]) {
        ;[arr[j], arr[j + 1]] = [arr[j + 1], arr[j]]
      }
    }
  }
  return arr
}
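As a quick sanity check, here is the function above run on a small, made-up array (note that bubble sort mutates its input in place):

```typescript
function bubbleSort(arr: number[]): number[] {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length - i - 1; j++) {
      if (arr[j] > arr[j + 1]) {
        // Swap adjacent elements that are out of order
        ;[arr[j], arr[j + 1]] = [arr[j + 1], arr[j]]
      }
    }
  }
  return arr
}

console.log(bubbleSort([5, 3, 8, 1, 2])) // [1, 2, 3, 5, 8]
```

For an array of 1,000 elements this already means on the order of a million comparisons, which is why quadratic algorithms become impractical quickly as n grows.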

O(log n) - logarithmic complexity

An algorithm with logarithmic complexity performs operations that grow logarithmically with the size of the input.

function binarySearch(arr: number[], target: number): number {
  let left = 0
  let right = arr.length - 1

  while (left <= right) {
    const mid = Math.floor((left + right) / 2)
    // Each iteration of the loop halves the size of the search space.
    // Hence, the number of iterations required is proportional to log(n).
    // Time complexity: O(log n)
    if (arr[mid] === target) {
      return mid
    } else if (arr[mid] < target) {
      left = mid + 1
    } else {
      right = mid - 1
    }
  }
  return -1
}
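One precondition worth highlighting: binary search only gives correct results on an array that is already sorted. A short usage sketch with a made-up sorted array:

```typescript
function binarySearch(arr: number[], target: number): number {
  let left = 0
  let right = arr.length - 1

  while (left <= right) {
    // Each iteration halves the search space: O(log n)
    const mid = Math.floor((left + right) / 2)
    if (arr[mid] === target) {
      return mid
    } else if (arr[mid] < target) {
      left = mid + 1
    } else {
      right = mid - 1
    }
  }
  return -1
}

// The input must already be sorted in ascending order
const sorted = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]

console.log(binarySearch(sorted, 23)) // 5 (index of 23)
console.log(binarySearch(sorted, 7)) // -1 (not found)
```

If the data is unsorted, sorting it first costs O(n log n), which only pays off when you search the same array many times.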

Real-world example: calculating the total cost of a shopping cart

Let's consider a function that calculates the total cost of items in a shopping cart. We will also determine its complexity.

interface CartItem {
  name: string
  price: number
  quantity: number
}

function calculateTotalCost(cart: CartItem[]): number {
  // Using the reduce function to iterate over the cart items and calculate the total cost.
  // This method is more concise and functional.
  // Time complexity: O(n)
  return cart.reduce((totalCost, item) => totalCost + item.price * item.quantity, 0)
}
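A quick usage sketch with sample data (the item names and prices below are made up for illustration):

```typescript
interface CartItem {
  name: string
  price: number
  quantity: number
}

function calculateTotalCost(cart: CartItem[]): number {
  // reduce visits each item exactly once: O(n)
  return cart.reduce((totalCost, item) => totalCost + item.price * item.quantity, 0)
}

const cart: CartItem[] = [
  { name: 'Book', price: 12.5, quantity: 2 }, // 25
  { name: 'Pen', price: 1.25, quantity: 4 }, // 5
]

console.log(calculateTotalCost(cart)) // 30
```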

Optimization

Now, let's explore some real-world examples where we can apply common techniques to reduce the Big O notation of our algorithms.

Using hash maps for faster lookup

Let's say that you need to find duplicates in an array:

function findDuplicates(array: number[]): number[] {
  const duplicates: number[] = []

  for (let i = 0; i < array.length; i++) {
    for (let j = i + 1; j < array.length; j++) {
      // This nested loop checks each pair of elements in the array.
      // If a duplicate is found, it is added to the duplicates array.
      // Time complexity: O(n^2)
      if (array[i] === array[j] && !duplicates.includes(array[i])) {
        duplicates.push(array[i])
        break // Break to avoid adding the same duplicate multiple times
      }
    }
  }

  return duplicates
}

Explanation:

  • Outer loop: Iterates through each element of the array.
  • Inner loop: For each element i, iterates through the remaining elements j in the array.
  • Duplicate check: If element array[i] is equal to element array[j] and is not already in the duplicates array, it adds array[i] to the duplicates array.
  • Break statement: Breaks the inner loop to avoid adding the same duplicate multiple times.

Time Complexity:

  • Outer loop: O(n)
  • Inner loop: O(n)
  • Total: O(n) * O(n) = O(n^2)

The above function has a time complexity of O(n^2) due to the nested loops, making it less efficient for large arrays.

Instead of using a nested loop to find duplicates (O(n^2)), we can use a hash-based set for O(n) complexity:

function findDuplicates(array: number[]): number[] {
  const seen = new Set<number>()
  const duplicates = new Set<number>()

  for (const num of array) {
    // Using a set for lookups takes constant time, O(1).
    // Therefore, iterating through the array and performing set operations takes O(n).
    if (seen.has(num)) {
      duplicates.add(num)
    } else {
      seen.add(num)
    }
  }

  return Array.from(duplicates)
}
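Run on sample data, the optimized version finds the same duplicates, reported in the order each value is first seen a second time:

```typescript
function findDuplicates(array: number[]): number[] {
  const seen = new Set<number>()
  const duplicates = new Set<number>()

  for (const num of array) {
    // Set lookups and inserts take constant time on average
    if (seen.has(num)) {
      duplicates.add(num)
    } else {
      seen.add(num)
    }
  }

  return Array.from(duplicates)
}

console.log(findDuplicates([1, 2, 3, 2, 4, 1, 1])) // [2, 1]
```

The trade-off is extra memory: the sets use O(n) space, whereas the nested-loop version used only the output array.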

Optimizing recursion with memoization

We can use memoization to optimize recursive algorithms, such as calculating Fibonacci numbers:

function fibonacci(n: number, memo: { [key: number]: number } = {}): number {
  if (n <= 1) return n
  if (memo[n]) return memo[n]
  // Memoization stores previously computed values to avoid redundant calculations.
  // Hence, each value is computed only once, reducing the time complexity to O(n).
  return (memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo))
}
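A quick check of the memoized function (the values below are the standard Fibonacci numbers):

```typescript
function fibonacci(n: number, memo: { [key: number]: number } = {}): number {
  if (n <= 1) return n
  if (memo[n]) return memo[n]
  // Each n is computed once and cached, so the total work is O(n)
  return (memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo))
}

console.log(fibonacci(10)) // 55
console.log(fibonacci(50)) // 12586269025
```

Without memoization, the same call tree branches into roughly 2^n recursive calls, which is already impractical around n = 50.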

Conclusion

Understanding algorithm complexity and being able to analyze it is crucial for writing efficient code. Using Big O notation helps developers assess how algorithms scale with the size of input data. This enables identifying and optimizing algorithms, leading to more efficient and scalable solutions.

Remember, optimization is not always necessary at an early stage of software development, but understanding its basics allows for more informed design decisions and better performance of the final product.

