Big O Notation and Time Complexity - Easily Explained (2024)

The big O notation¹ is used to describe the complexity of algorithms.

On Google and YouTube, you can find numerous articles and videos explaining the big O notation. But to understand most of them (like this Wikipedia article), you need some background in mathematics. ;-)

That's why, in this article, I will explain the big O notation (and the time and space complexity described with it) using only examples and diagrams – and entirely without mathematical formulas, proofs, and symbols like θ, Ω, ω, ∈, ∀, ∃ and ε.

You can find all the source code from this article in this GitHub repository.

¹ also known as "Bachmann-Landau notation" or "asymptotic notation"

Contents

1 Types of Complexity
  1.1 Computational Time Complexity
  1.2 Space Complexity
2 Complexity Classes
  2.1 O(1) – Constant Time
  2.2 O(n) – Linear Time
  2.3 O(n²) – Quadratic Time
  2.4 O(log n) – Logarithmic Time
  2.5 O(n log n) – Quasilinear Time
  2.6 Big O Notation Order
  2.7 Other Complexity Classes
3 Summary

Types of Complexity

Computational Time Complexity

Computational time complexity describes the change in the runtime of an algorithm, depending on the change in the input data's size.

In other words: "How much slower does an algorithm become when the amount of input data increases?"

Examples:

  • How much longer does it take to find an element within an unsorted array when the size of the array doubles? (Answer: twice as long)
  • How much longer does it take to find an element within a sorted array when the size of the array doubles? (Answer: one more step)

Space Complexity

Space complexity describes how much additional memory an algorithm needs depending on the size of the input data.

This does not refer to the memory required for the input data itself (i.e., that twice as much space is naturally needed for an input array twice as large), but the additional memory needed by the algorithm for loop and helper variables, temporary data structures, and the call stack (e.g., due to recursion).
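To make this distinction concrete, here is a small sketch (the class and method names are made up for illustration and are not from the article's repository): both methods reverse an array, but one needs only O(1) additional memory while the other needs O(n).

```java
import java.util.Arrays;

public class SpaceComplexityDemo {

    // O(1) additional space: only two index variables and one temporary
    // value, regardless of how large the array is.
    static void reverseInPlace(int[] a) {
        for (int i = 0, j = a.length - 1; i < j; i++, j--) {
            int tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    }

    // O(n) additional space: allocates a second array as large as the input.
    static int[] reversedCopy(int[] a) {
        int[] result = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            result[i] = a[a.length - 1 - i];
        }
        return result;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4};
        System.out.println(Arrays.toString(reversedCopy(a)));
        reverseInPlace(a);
        System.out.println(Arrays.toString(a));
    }
}
```

Both variants do the same work in O(n) time; they differ only in their space complexity.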

Complexity Classes

We divide algorithms into so-called complexity classes. A complexity class is identified by the Landau symbol O ("big O").

In the following section, I will explain the most common complexity classes, starting with the easy-to-understand classes and moving on to the more complex ones. Accordingly, the classes are not sorted by complexity.

O(1) – Constant Time

Pronounced: "Order 1", "O of 1", "big O of 1"

The runtime is constant, i.e., independent of the number of input elements n.

In the following graph, the horizontal axis represents the number of input elements n (or more generally: the size of the input problem), and the vertical axis represents the time required.

Since complexity classes can only be used to classify algorithms, but not to calculate their exact running time, the axes are not labeled.

[Diagram: O(1) – constant time]

O(1) Examples

The following two problems are examples of constant time:

  • Accessing a specific element of an array of size n: No matter how large the array is, accessing it via array[index] always takes the same time².
  • Inserting an element at the beginning of a linked list: This always requires setting one or two (for a doubly linked list) pointers (or references), regardless of the list's size. (In an array, on the other hand, this would require moving all values one field to the right, which takes longer with a larger array than with a smaller one).

² This statement is not one hundred percent correct. Effects from CPU caches also come into play here: If the data block containing the element to be read is already (or still) in the CPU cache (which is more likely the smaller the array is), then access is faster than if it first has to be read from RAM.

O(1) Example Source Code

The following source code (class ConstantTimeSimpleDemo in the GitHub repository) shows a simple example to measure the time required to insert an element at the beginning of a linked list:

public static void main(String[] args) {
  for (int n = 32; n <= 8_388_608; n *= 2) {
    LinkedList<Integer> list = createLinkedListOfSize(n);

    long time = System.nanoTime();
    list.add(0, 1);
    time = System.nanoTime() - time;

    System.out.printf("n = %d -> time = %d ns%n", n, time);
  }
}

private static LinkedList<Integer> createLinkedListOfSize(int n) {
  LinkedList<Integer> list = new LinkedList<>();
  for (int i = 0; i < n; i++) {
    list.add(i);
  }
  return list;
}

On my system, the times are between 1,200 ns and 19,000 ns, unevenly distributed over the various measurements. This is sufficient for a quick test. But we don't get particularly good measurement results here, as both the HotSpot compiler and the garbage collector can kick in at any time.

The test program TimeComplexityDemo with the ConstantTime class provides better measurement results. The test program first runs several warmup rounds to allow the HotSpot compiler to optimize the code. Only after that are measurements performed five times, and the median of the measured values is displayed.

Here is an extract of the results:

--- ConstantTime (results 5 of 5) ---
ConstantTime, n = 32        -> fastest: 31,700 ns, median: 44,900 ns
ConstantTime, n = 16,384    -> fastest: 14,400 ns, median: 40,200 ns
ConstantTime, n = 8,388,608 -> fastest: 34,000 ns, median: 51,100 ns

The effort remains about the same, regardless of the size of the list. The complete test results can be found in the file test-results.txt.

O(n) – Linear Time

Pronounced: "Order n", "O of n", "big O of n"

The time grows linearly with the number of input elements n: If n doubles, then the time approximately doubles, too.

"Approximately" because the effort may also include components with lower complexity classes. These become insignificant if n is sufficiently large, so they are omitted in the notation.

In the following diagram, I have demonstrated this by starting the graph slightly above zero (meaning that the effort also contains a constant component):

[Diagram: O(n) – linear time, with a constant component shifting the line slightly above zero]

O(n) Examples

The following problems are examples of linear time:

  • Finding a specific element in an array: All elements of the array have to be examined – if there are twice as many elements, it takes twice as long.
  • Summing up all elements of an array: Again, all elements must be looked at once – if the array is twice as large, it takes twice as long.

It is essential to understand that the complexity class makes no statement about the absolute time required, but only about the change in the time required depending on the change in the input size. The two examples above would take much longer with a linked list than with an array – but that is irrelevant for the complexity class.

O(n) Example Source Code

The following source code (class LinearTimeSimpleDemo) measures the time for summing up all elements of an array:

public static void main(String[] args) {
  for (int n = 32; n <= 536_870_912; n *= 2) {
    int[] array = createArrayOfSize(n);
    long sum = 0;

    long time = System.nanoTime();
    for (int i = 0; i < n; i++) {
      sum += array[i];
    }
    time = System.nanoTime() - time;

    System.out.printf("n = %d -> time = %d ns%n", n, time);
  }
}

private static int[] createArrayOfSize(int n) {
  int[] array = new int[n];
  for (int i = 0; i < n; i++) {
    array[i] = i;
  }
  return array;
}

On my system, the time grows approximately linearly from 1,100 ns to 155,911,900 ns. Better measurement results are again provided by the test program TimeComplexityDemo and the LinearTime algorithm class. Here is an extract of the results:

--- LinearTime (results 5 of 5) ---
LinearTime, n = 512         -> fastest: 300 ns, median: 300 ns
LinearTime, n = 524,288     -> fastest: 159,300 ns, median: 189,400 ns
LinearTime, n = 536,870,912 -> fastest: 164,322,600 ns, median: 168,681,700 ns

You can find the complete test results again in test-results.txt.

What is the Difference Between "Linear" and "Proportional"?

A function is linear if it can be represented by a straight line, e.g. f(x) = 5x + 3.

Proportional is a particular case of linear, where the line passes through the point (0,0) of the coordinate system, for example, f(x) = 3x.

As O(n) may contain a constant component, its time is linear but not necessarily proportional.

O(n²) – Quadratic Time

Pronounced: "Order n squared", "O of n squared", "big O of n squared"

The time grows proportionally to the square of the number of input elements: If the number of input elements n doubles, then the time roughly quadruples. (And if the number of elements increases tenfold, the effort increases by a factor of one hundred!)

[Diagram: O(n²) – quadratic time]

O(n²) Examples

Examples of quadratic time are simple sorting algorithms like Insertion Sort, Selection Sort, and Bubble Sort.

O(n²) Example Source Code

The following example (QuadraticTimeSimpleDemo) shows how the time for sorting an array using Insertion Sort changes depending on the size of the array:

public static void main(String[] args) {
  for (int n = 32; n <= 262_144; n *= 2) {
    int[] array = createRandomArrayOfSize(n);

    long time = System.nanoTime();
    insertionSort(array);
    time = System.nanoTime() - time;

    System.out.printf("n = %d -> time = %d ns%n", n, time);
  }
}

private static int[] createRandomArrayOfSize(int n) {
  ThreadLocalRandom random = ThreadLocalRandom.current();
  int[] array = new int[n];
  for (int i = 0; i < n; i++) {
    array[i] = random.nextInt();
  }
  return array;
}

private static void insertionSort(int[] elements) {
  for (int i = 1; i < elements.length; i++) {
    int elementToSort = elements[i];
    int j = i;
    while (j > 0 && elementToSort < elements[j - 1]) {
      elements[j] = elements[j - 1];
      j--;
    }
    elements[j] = elementToSort;
  }
}

We can obtain better results with the test program TimeComplexityDemo and the QuadraticTime class. Here is an excerpt of the results, where you can see the approximate quadrupling of the effort each time the problem size doubles:

QuadraticTime, n = 8,192   -> fastest: 4,648,400 ns, median: 4,720,200 ns
QuadraticTime, n = 16,384  -> fastest: 19,189,100 ns, median: 19,440,400 ns
QuadraticTime, n = 32,768  -> fastest: 78,416,700 ns, median: 79,896,000 ns
QuadraticTime, n = 65,536  -> fastest: 319,905,300 ns, median: 330,530,600 ns
QuadraticTime, n = 131,072 -> fastest: 1,310,702,600 ns, median: 1,323,919,500 ns

You can find the complete test results in test-results.txt.

O(n) vs. O(n²)

At this point, I would like to point out again that the effort can contain components of lower complexity classes and constant factors. Both are irrelevant for the big O notation since they are no longer of importance if n is sufficiently large.

It is therefore possible that, for example, O(n²) is faster than O(n) – at least up to a certain size of n.

The following diagram compares three fictitious algorithms: one with complexity class O(n²) and two with O(n), one of which is faster than the other. You can see clearly that up to n = 4, the orange O(n²) algorithm takes less time than the yellow O(n) algorithm. And even up to n = 8, it takes less time than the cyan O(n) algorithm.

Above a sufficiently large n (here, n = 9), O(n²) is and remains the slowest algorithm.

[Diagram: O(n) vs. O(n²) – three fictitious algorithms crossing over as n grows]
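The crossover described above can be reproduced numerically. The cost functions in the following sketch are invented for illustration (they only roughly match the article's diagram): one quadratic algorithm and two linear ones with different constant factors.

```java
public class CrossoverDemo {

    // Fictitious cost functions; the coefficients are made up for
    // illustration only.
    static long quadratic(long n)  { return n * n; }       // O(n²)
    static long fastLinear(long n) { return 4 * n + 16; }  // O(n), small constant factor
    static long slowLinear(long n) { return 8 * n + 8; }   // O(n), larger constant factor

    public static void main(String[] args) {
        // Find the smallest n from which the quadratic algorithm is
        // slower than both linear ones.
        long n = 1;
        while (quadratic(n) <= fastLinear(n) || quadratic(n) <= slowLinear(n)) {
            n++;
        }
        System.out.println("O(n^2) becomes the slowest from n = " + n);
    }
}
```

With these (arbitrary) coefficients, the quadratic algorithm overtakes both linear ones at n = 9; with other constant factors, the crossover point shifts, but it always exists.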

Let's move on to two not-so-intuitive complexity classes.

O(log n) – Logarithmic Time

Pronounced: "Order log n", "O of log n", "big O of log n"

The effort increases approximately by a constant amount when the number of input elements doubles.

For example, if the time increases by one second when the number of input elements grows from 1,000 to 2,000, it increases by only another second when the input grows to 4,000. And by one more second when it grows to 8,000.

[Diagram: O(log n) – logarithmic time]

O(log n) Example

An example of logarithmic growth is the binary search for a specific element in a sorted array of size n.

Since we halve the area to be searched with each search step, we can, in turn, search an array twice as large with only one more search step.

(The older ones among us may remember searching the telephone book or an encyclopedia.)
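The halving logic can be sketched as follows (a hand-written binary search in a hypothetical helper class, not part of the article's repository; the JDK's Arrays.binarySearch works essentially the same way). The step counter shows that doubling the array size adds only a single step:

```java
public class BinarySearchSteps {

    // Returns the number of halving steps needed to search for `key`
    // in the sorted array `a`.
    static int countSteps(int[] a, int key) {
        int steps = 0;
        int low = 0, high = a.length - 1;
        while (low <= high) {
            steps++;
            int mid = (low + high) >>> 1;  // unsigned shift avoids int overflow
            if (a[mid] < key) {
                low = mid + 1;             // continue in the right half
            } else if (a[mid] > key) {
                high = mid - 1;            // continue in the left half
            } else {
                break;                     // found
            }
        }
        return steps;
    }

    public static void main(String[] args) {
        for (int n = 1_000; n <= 8_000; n *= 2) {
            int[] a = new int[n];
            for (int i = 0; i < n; i++) a[i] = i;
            // Worst case: search for a value larger than all elements.
            System.out.println("n = " + n + " -> steps = " + countSteps(a, n));
        }
    }
}
```

The output shows steps of 10, 11, 12, and 13 for n = 1,000, 2,000, 4,000, and 8,000 – one additional step per doubling, exactly the logarithmic behavior described above.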

O(log n) Example Source Code

The following example (LogarithmicTimeSimpleDemo) measures how the time for binary search changes in relation to the array size.

public static void main(String[] args) {
  for (int n = 32; n <= 536_870_912; n *= 2) {
    int[] array = createArrayOfSize(n);

    long time = System.nanoTime();
    Arrays.binarySearch(array, 0);
    time = System.nanoTime() - time;

    System.out.printf("n = %d -> time = %d ns%n", n, time);
  }
}

private static int[] createArrayOfSize(int n) {
  int[] array = new int[n];
  for (int i = 0; i < n; i++) {
    array[i] = i;
  }
  return array;
}

We get better measurement results with the test program TimeComplexityDemo and the class LogarithmicTime. Here are the results:

LogarithmicTime, n = 32          -> fastest: 77,800 ns, median: 107,200 ns
LogarithmicTime, n = 2,048       -> fastest: 173,500 ns, median: 257,400 ns
LogarithmicTime, n = 131,072     -> fastest: 363,400 ns, median: 413,100 ns
LogarithmicTime, n = 8,388,608   -> fastest: 661,100 ns, median: 670,800 ns
LogarithmicTime, n = 536,870,912 -> fastest: 770,500 ns, median: 875,700 ns

In each step, the problem size n increases by a factor of 64. The time does not always increase by exactly the same amount, but it does so closely enough to demonstrate that logarithmic time is significantly cheaper than linear time (for which the time required would also increase by a factor of 64 in each step).

As before, you can find the complete test results in the file test-results.txt.

O(n log n) – Quasilinear Time

Pronounced: "Order n log n", "O of n log n", "big O of n log n"

The effort grows slightly faster than linear because the linear component is multiplied by a logarithmic one. For clarification, you can also insert a multiplication sign: O(n × log n).

This is best illustrated by the following graph. We see a curve whose gradient is visibly growing at the beginning, but soon approaches a straight line as n increases:

[Diagram: O(n log n) – quasilinear time]

O(n log n) Example

Efficient sorting algorithms like Quicksort, Merge Sort, and Heapsort are examples of quasilinear time.

O(n log n) Example Source Code

The following sample code (class QuasiLinearTimeSimpleDemo) shows how the time for sorting an array with Quicksort³ grows in relation to the array size:

public static void main(String[] args) {
  for (int n = 64; n <= 536_870_912; n *= 2) {
    int[] array = createRandomArrayOfSize(n);

    long time = System.nanoTime();
    Arrays.sort(array);  // Dual-Pivot Quicksort for primitive arrays
    time = System.nanoTime() - time;

    System.out.printf("n = %d -> time = %d ns%n", n, time);
  }
}

private static int[] createRandomArrayOfSize(int n) {
  ThreadLocalRandom random = ThreadLocalRandom.current();
  int[] array = new int[n];
  for (int i = 0; i < n; i++) {
    array[i] = random.nextInt();
  }
  return array;
}

The test program TimeComplexityDemo with the class QuasiLinearTime delivers more precise results. Here is an extract:

QuasiLinearTime, n = 256        -> fastest: 12,200 ns, med.: 12,500 ns
QuasiLinearTime, n = 4,096      -> fastest: 228,600 ns, med.: 234,200 ns
QuasiLinearTime, n = 65,536     -> fastest: 4,606,500 ns, med.: 4,679,800 ns
QuasiLinearTime, n = 1,048,576  -> fastest: 93,933,500 ns, med.: 95,216,300 ns
QuasiLinearTime, n = 16,777,216 -> fastest: 1,714,541,900 ns, med.: 1,755,715,000 ns

The problem size increases each time by a factor of 16, and the time required by a factor of 18.5 to 20.3. You can find the complete test result, as always, in test-results.txt.

³ More precisely: Dual-Pivot Quicksort, which switches to Insertion Sort for arrays with fewer than 44 elements. For this reason, this test starts at 64 elements, not at 32 like the others.

Big O Notation Order

Here are, once again, the complexity classes, sorted in ascending order of complexity:

  • O(1) – constant time
  • O(log n) – logarithmic time
  • O(n) – linear time
  • O(n log n) – quasilinear time
  • O(n²) – quadratic time

And here the comparison graphically:

[Diagram: all five complexity classes compared, with curves shifted along the time axis]

I intentionally shifted the curves along the time axis so that the worst complexity class, O(n²), is fastest for low values of n, and the best complexity class, O(1), is slowest. This shows how, for sufficiently large values of n, the efforts shift as expected.

Other Complexity Classes

Further complexity classes are, for example:

  • O(nᵐ) – polynomial time
  • O(2ⁿ) – exponential time
  • O(n!) – factorial time

However, these are so bad that we should avoid algorithms with these complexities, if possible.
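A classic example of exponential time is the naive recursive computation of Fibonacci numbers, where each call spawns two further calls. The following sketch (a hypothetical demo class, not from the article's repository) counts the calls to show how quickly the effort explodes:

```java
public class ExponentialTimeDemo {

    static long calls = 0;

    // Naive recursion: each call triggers two more calls, so the total
    // number of calls grows exponentially, on the order of O(2ⁿ).
    static long fib(int n) {
        calls++;
        if (n < 2) return n;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        for (int n = 10; n <= 30; n += 10) {
            calls = 0;
            long result = fib(n);
            System.out.println("fib(" + n + ") = " + result + ", calls = " + calls);
        }
    }
}
```

Going from n = 10 to n = 30 multiplies the number of calls by more than 15,000 – which is why such algorithms become unusable even for moderate input sizes.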

I have included these classes in the following diagram (O(nᵐ) with m = 3):

[Diagram: complexity classes including polynomial, exponential, and factorial time, with the y-axis compressed by a factor of 10]

I had to compress the y-axis by a factor of 10 compared to the previous diagram to display the three new curves.

Summary

Time complexity describes how the runtime of an algorithm changes depending on the amount of input data. The most common complexity classes are (in ascending order of complexity): O(1), O(log n), O(n), O(n log n), O(n²).

Algorithms with constant, logarithmic, linear, and quasilinear time usually finish in a reasonable time for input sizes up to several billion elements. Algorithms with quadratic time, on the other hand, can quickly reach theoretical execution times of several years for the same problem sizes⁴. You should, therefore, avoid them as far as possible.

⁴ Quicksort, for example, sorts a billion items in 90 seconds on my laptop; Insertion Sort, on the other hand, needs 85 seconds for a million items; that would be 85 million seconds for a billion items – or in other words: two years and eight months!
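The extrapolation in the footnote follows directly from quadratic growth: 1,000 times the input means 1,000² = 1,000,000 times the time. A quick back-of-the-envelope check (the numbers are the measurements quoted in the footnote):

```java
public class QuadraticExtrapolation {
    public static void main(String[] args) {
        long secondsForMillion = 85;  // measured: Insertion Sort, one million items
        long factor = 1_000;          // input grows from 10⁶ to 10⁹ items
        // Quadratic time: the time grows with the square of the input factor.
        long secondsForBillion = secondsForMillion * factor * factor;
        double years = Math.round(secondsForBillion / 86_400.0 / 365.0 * 10) / 10.0;
        System.out.println("Estimated: " + secondsForBillion + " seconds, about " + years + " years");
    }
}
```

That yields 85,000,000 seconds, roughly 2.7 years – matching the "two years and eight months" in the footnote.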

If you liked the article, please share it using one of the share buttons at the end or leave me a comment.

Do you want to be informed when new articles are published on HappyCoders.eu? Then click here to sign up for the HappyCoders.eu newsletter.
