Algorithm complexity is a key concept in computer science that allows for the evaluation of code performance and scalability. In this article, we will discuss how to calculate algorithm complexity in Big O notation and how to find faster and more optimal solutions. Code examples are provided in TypeScript, Java, and Elixir.
What is algorithm complexity?
Algorithm complexity is a measure of the resources required to execute an algorithm, most often in terms of time (execution time) or space (memory). The most popular notation used to express algorithm complexity is Big O notation.
Big O Notation
Big O notation describes how the number of operations performed by an algorithm grows with the size of the input (n). Some basic Big O notations are:
O(1) - constant complexity, regardless of the input size
O(n) - linear complexity, grows linearly with the size of the input
O(n²) - quadratic complexity, grows quadratically with the size of the input
O(log n) - logarithmic complexity, grows logarithmically with the size of the input
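To get a feel for these growth rates, here is a small TypeScript sketch; the `linearOps` and `quadraticOps` counters are illustrative helpers (not real algorithms) that simply count how many operations a linear and a quadratic loop perform as n grows:

```typescript
// Count the "operations" a linear and a quadratic loop perform
// for a given input size n (illustrative counters, not real work).
function linearOps(n: number): number {
  let ops = 0;
  for (let i = 0; i < n; i++) ops++; // O(n)
  return ops;
}

function quadraticOps(n: number): number {
  let ops = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) ops++; // O(n^2)
  }
  return ops;
}

for (const n of [10, 100, 1000]) {
  // The quadratic count grows 100x for every 10x increase in n.
  console.log(n, linearOps(n), quadraticOps(n));
}
```

Notice how a 10x larger input multiplies the quadratic count by 100x, while the linear count only grows 10x; that gap is exactly what Big O notation captures.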
Calculating Big O: examples
O(1) - constant complexity
An algorithm with constant complexity always performs the same number of operations, regardless of the input size.
```typescript
function getFirstElement(arr: number[]): number {
  // This function always takes the first element of the array,
  // regardless of the array's size. Hence, it performs a single operation.
  // Time complexity: O(1)
  return arr[0];
}
```
```java
public class Main {
    public static int getFirstElement(int[] arr) {
        // This function always takes the first element of the array,
        // regardless of the array's size. Hence, it performs a single operation.
        // Time complexity: O(1)
        return arr[0];
    }
}
```
```elixir
defmodule Main do
  def get_first_element([head | _]) do
    # This function always takes the first element of the list,
    # regardless of the list's size. Hence, it performs a single operation.
    # Time complexity: O(1)
    head
  end
end
```
O(n) - linear complexity
An algorithm with linear complexity performs operations directly proportional to the size of the input.
```typescript
function sumArray(arr: number[]): number {
  let sum = 0;
  for (let i = 0; i < arr.length; i++) {
    // This loop iterates over each element of the array exactly once.
    // Therefore, the number of operations grows linearly with the array size.
    // Time complexity: O(n)
    sum += arr[i];
  }
  return sum;
}
```
```java
public class Main {
    public static int sumArray(int[] arr) {
        int sum = 0;
        for (int i = 0; i < arr.length; i++) {
            // This loop iterates over each element of the array exactly once.
            // Therefore, the number of operations grows linearly with the array size.
            // Time complexity: O(n)
            sum += arr[i];
        }
        return sum;
    }
}
```
```elixir
defmodule Main do
  def sum_array(arr) do
    Enum.reduce(arr, 0, fn x, acc ->
      # This reduce visits each element of the list exactly once.
      # Therefore, the number of operations grows linearly with the list size.
      # Time complexity: O(n)
      acc + x
    end)
  end
end
```
O(n²) - quadratic complexity
An algorithm with quadratic complexity performs operations directly proportional to the square of the input size.
```typescript
function bubbleSort(arr: number[]): number[] {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length - i - 1; j++) {
      // The outer loop runs n times, and for each iteration of the outer loop,
      // the inner loop runs approximately n times as well.
      // Therefore, the total number of operations is roughly n * n.
      // Time complexity: O(n^2)
      if (arr[j] > arr[j + 1]) {
        [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]];
      }
    }
  }
  return arr;
}
```
```java
public class Main {
    public static int[] bubbleSort(int[] arr) {
        int n = arr.length;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n - i - 1; j++) {
                // The outer loop runs n times, and for each iteration of the outer loop,
                // the inner loop runs approximately n times as well.
                // Therefore, the total number of operations is roughly n * n.
                // Time complexity: O(n^2)
                if (arr[j] > arr[j + 1]) {
                    int temp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = temp;
                }
            }
        }
        return arr;
    }
}
```
```elixir
defmodule Main do
  def bubble_sort(list) do
    do_sort(list, length(list))
  end

  # One pass per element: each pass bubbles the largest remaining
  # element toward the end of the list. n passes over n elements
  # means the total number of operations is roughly n * n.
  # Time complexity: O(n^2)
  defp do_sort(list, 0), do: list
  defp do_sort(list, n), do: do_sort(pass(list), n - 1)

  # A single pass: swap each adjacent out-of-order pair.
  defp pass([x, y | rest]) when x > y, do: [y | pass([x | rest])]
  defp pass([x | rest]), do: [x | pass(rest)]
  defp pass([]), do: []
end
```
O(log n) - logarithmic complexity
An algorithm with logarithmic complexity performs operations that grow logarithmically with the size of the input.
```typescript
function binarySearch(arr: number[], target: number): number {
  let left = 0;
  let right = arr.length - 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    // Each iteration of the loop halves the size of the search space.
    // Hence, the number of iterations required is proportional to log(n).
    // Time complexity: O(log n)
    if (arr[mid] === target) {
      return mid;
    } else if (arr[mid] < target) {
      left = mid + 1;
    } else {
      right = mid - 1;
    }
  }
  return -1;
}
```
```java
public class Main {
    public static int binarySearch(int[] arr, int target) {
        int left = 0;
        int right = arr.length - 1;
        while (left <= right) {
            // Written as left + (right - left) / 2 rather than
            // (left + right) / 2 to avoid int overflow on huge arrays.
            int mid = left + (right - left) / 2;
            // Each iteration of the loop halves the size of the search space.
            // Hence, the number of iterations required is proportional to log(n).
            // Time complexity: O(log n)
            if (arr[mid] == target) {
                return mid;
            } else if (arr[mid] < target) {
                left = mid + 1;
            } else {
                right = mid - 1;
            }
        }
        return -1;
    }
}
```
```elixir
defmodule Main do
  def binary_search(arr, target) do
    binary_search(arr, target, 0, length(arr) - 1)
  end

  defp binary_search(_arr, _target, left, right) when left > right do
    -1
  end

  defp binary_search(arr, target, left, right) do
    mid = div(left + right, 2)

    # Each recursive call halves the size of the search space,
    # so the number of calls is proportional to log(n).
    # Note: Enum.at/2 on a linked list is itself O(n); for a true
    # O(log n) lookup, store the data in a tuple and use elem/2.
    cond do
      Enum.at(arr, mid) == target -> mid
      Enum.at(arr, mid) < target -> binary_search(arr, target, mid + 1, right)
      true -> binary_search(arr, target, left, mid - 1)
    end
  end
end
```
Real-world example: calculating the total cost of a shopping cart
Let’s consider a function that calculates the total cost of items in a shopping cart. We will also determine its complexity.
```typescript
interface CartItem {
  name: string;
  price: number;
  quantity: number;
}

function calculateTotalCost(cart: CartItem[]): number {
  // Using the reduce function to iterate over the cart items and calculate the total cost.
  // This method is more concise and functional.
  // Time complexity: O(n)
  return cart.reduce(
    (totalCost, item) => totalCost + item.price * item.quantity,
    0
  );
}
```
```java
public class Main {
    static class CartItem {
        String name;
        double price;
        int quantity;

        CartItem(String name, double price, int quantity) {
            this.name = name;
            this.price = price;
            this.quantity = quantity;
        }
    }

    public static double calculateTotalCost(CartItem[] cart) {
        double totalCost = 0;
        for (CartItem item : cart) {
            // This loop visits each cart item exactly once to accumulate the total.
            // Time complexity: O(n)
            totalCost += item.price * item.quantity;
        }
        return totalCost;
    }
}
```
```elixir
defmodule Main do
  defmodule CartItem do
    defstruct name: "", price: 0.0, quantity: 0
  end

  def calculate_total_cost(cart) do
    Enum.reduce(cart, 0, fn item, total_cost ->
      # Using reduce to iterate over the cart items and calculate the total cost.
      # This method is concise and functional.
      # Time complexity: O(n)
      total_cost + item.price * item.quantity
    end)
  end
end
```
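For completeness, here is a quick usage sketch of the TypeScript version; the cart contents are made up for illustration:

```typescript
interface CartItem {
  name: string;
  price: number;
  quantity: number;
}

function calculateTotalCost(cart: CartItem[]): number {
  // Single pass over the cart items. Time complexity: O(n)
  return cart.reduce(
    (totalCost, item) => totalCost + item.price * item.quantity,
    0
  );
}

const cart: CartItem[] = [
  { name: "Keyboard", price: 49.99, quantity: 1 },
  { name: "Mouse", price: 19.99, quantity: 2 },
];

// 49.99 + 2 * 19.99, i.e. approximately 89.97
console.log(calculateTotalCost(cart).toFixed(2));
```

Because the function touches each item exactly once, doubling the number of items in the cart roughly doubles the work, which is what O(n) means in practice.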
Optimization
Now, let’s explore some real-world examples where we can apply common techniques to reduce the Big O notation of our algorithms.
Using hash maps for faster lookup
Let’s say you need to find duplicates in an array:
```typescript
function findDuplicates(array: number[]): number[] {
  const duplicates: number[] = [];
  for (let i = 0; i < array.length; i++) {
    for (let j = i + 1; j < array.length; j++) {
      // This nested loop checks each pair of elements in the array.
      // If a duplicate is found, it is added to the duplicates array.
      // Time complexity: O(n^2)
      if (array[i] === array[j] && !duplicates.includes(array[i])) {
        duplicates.push(array[i]);
        break; // Break to avoid adding the same duplicate multiple times
      }
    }
  }
  return duplicates;
}
```
```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    public static List<Integer> findDuplicates(int[] array) {
        List<Integer> duplicates = new ArrayList<>();
        for (int i = 0; i < array.length; i++) {
            for (int j = i + 1; j < array.length; j++) {
                // This nested loop checks each pair of elements in the array.
                // If a duplicate is found, it is added to the duplicates list.
                // Time complexity: O(n^2)
                if (array[i] == array[j] && !duplicates.contains(array[i])) {
                    duplicates.add(array[i]);
                    break; // Break to avoid adding the same duplicate multiple times
                }
            }
        }
        return duplicates;
    }

    public static void main(String[] args) {
        int[] array = {1, 2, 3, 2, 4, 3, 5, 1};
        List<Integer> duplicates = findDuplicates(array);
        System.out.println("Duplicates: " + duplicates);
    }
}
```
```elixir
defmodule Main do
  def find_duplicates(array) do
    find_duplicates(array, [], [])
  end

  defp find_duplicates([], duplicates, _seen) do
    Enum.reverse(duplicates)
  end

  defp find_duplicates([head | tail], duplicates, seen) do
    # Enum.member?/2 scans the whole list, so each step is O(n).
    # Time complexity: O(n^2)
    if Enum.member?(seen, head) and not Enum.member?(duplicates, head) do
      find_duplicates(tail, [head | duplicates], seen)
    else
      find_duplicates(tail, duplicates, [head | seen])
    end
  end
end
```
Explanation:
Outer loop: Iterates through each element of the input array.
Inner loop: For each element i, iterates through the remaining elements j in the array.
Duplicate check: If element array[i] is equal to element array[j] and is not already in the duplicates array, it adds array[i] to the duplicates array.
Break statement: Breaks the inner loop to avoid adding the same duplicate multiple times.
Time Complexity:
Outer loop: O(n)
Inner loop: O(n)
Total: O(n) * O(n) = O(n²)
The above function has a time complexity of O(n²) due to the nested loops, making it less efficient for large arrays.
Instead of using a nested loop to find duplicates (O(n²)), use a hash-based set for O(n) complexity:
```typescript
function findDuplicates(array: number[]): number[] {
  const seen = new Set<number>();
  const duplicates = new Set<number>();
  for (const num of array) {
    // Using a set for lookups takes constant time, O(1).
    // Therefore, iterating through the array and performing set operations takes O(n).
    if (seen.has(num)) {
      duplicates.add(num);
    } else {
      seen.add(num);
    }
  }
  return Array.from(duplicates);
}
```
```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class Main {
    public static List<Integer> findDuplicates(int[] array) {
        Set<Integer> seen = new HashSet<>();
        Set<Integer> duplicates = new HashSet<>();
        for (int num : array) {
            // Using a set for lookups takes constant time, O(1).
            // Therefore, iterating through the array and performing set operations takes O(n).
            if (seen.contains(num)) {
                duplicates.add(num);
            } else {
                seen.add(num);
            }
        }
        return new ArrayList<>(duplicates);
    }

    public static void main(String[] args) {
        int[] array = {1, 2, 3, 2, 4, 3, 5, 1};
        List<Integer> duplicates = findDuplicates(array);
        System.out.println("Duplicates: " + duplicates);
    }
}
```
```elixir
defmodule Main do
  def find_duplicates(array) do
    find_duplicates(array, MapSet.new(), MapSet.new())
    |> MapSet.to_list()
  end

  defp find_duplicates([], _seen, duplicates), do: duplicates

  defp find_duplicates([head | tail], seen, duplicates) do
    # MapSet membership checks are effectively constant time,
    # so the whole traversal is O(n).
    if MapSet.member?(seen, head) do
      find_duplicates(tail, seen, MapSet.put(duplicates, head))
    else
      find_duplicates(tail, MapSet.put(seen, head), duplicates)
    end
  end
end
```
Optimizing recursion with memoization
Using memoization to optimize recursive algorithms, such as calculating Fibonacci numbers:
```typescript
function fibonacci(n: number, memo: { [key: number]: number } = {}): number {
  if (n <= 1) return n;
  if (memo[n]) return memo[n];
  // Memoization stores previously computed values to avoid redundant calculations.
  // Hence, each value is computed only once, reducing the time complexity to O(n).
  return (memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo));
}
```
```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    public static int fibonacci(int n, Map<Integer, Integer> memo) {
        if (n <= 1) return n;
        if (memo.containsKey(n)) return memo.get(n);
        // Memoization stores previously computed values to avoid redundant calculations.
        // Hence, each value is computed only once, reducing the time complexity to O(n).
        int result = fibonacci(n - 1, memo) + fibonacci(n - 2, memo);
        memo.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        Map<Integer, Integer> memo = new HashMap<>();
        int result = fibonacci(10, memo);
        System.out.println("Fibonacci of 10: " + result);
    }
}
```
```elixir
defmodule Main do
  def fibonacci(n) do
    {result, _memo} = fib(n, %{0 => 0, 1 => 1})
    result
  end

  # Because Elixir data is immutable, the memo map must be threaded
  # through the recursion and returned alongside each result.
  # Memoization stores previously computed values to avoid redundant
  # calculations, reducing the time complexity to O(n).
  defp fib(n, memo) do
    case Map.fetch(memo, n) do
      {:ok, value} ->
        {value, memo}

      :error ->
        {a, memo} = fib(n - 1, memo)
        {b, memo} = fib(n - 2, memo)
        result = a + b
        {result, Map.put(memo, n, result)}
    end
  end
end
```
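To see what memoization buys, here is a TypeScript sketch contrasting a naive recursive version (written here only for comparison; it is not part of the listings above) with the memoized one:

```typescript
// Naive version: recomputes the same subproblems over and over,
// leading to an exponential O(2^n) number of calls.
function naiveFib(n: number): number {
  if (n <= 1) return n;
  return naiveFib(n - 1) + naiveFib(n - 2);
}

// Memoized version: each value is computed once, so O(n) calls.
function fibonacci(n: number, memo: { [key: number]: number } = {}): number {
  if (n <= 1) return n;
  if (memo[n]) return memo[n];
  return (memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo));
}

// Both agree on the result, but the memoized version stays fast
// even for inputs where the naive one becomes unusably slow.
console.log(naiveFib(25) === fibonacci(25)); // true
console.log(fibonacci(40)); // instant; naiveFib(40) would take seconds
```

The results are identical; only the amount of repeated work differs, which is why memoization changes the complexity class rather than just the constant factor.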
Conclusion
Understanding algorithm complexity and being able to analyze it is crucial for writing efficient code. Using Big O notation helps developers assess how algorithms scale with the size of input data. This enables identifying and optimizing algorithms, leading to more efficient and scalable solutions.
Remember, optimization is not always necessary at an early stage of software development, but understanding its basics allows for more informed design decisions and better performance of the final product.