The big O notation¹ is used to describe the complexity of algorithms. On Google and YouTube, you can find numerous articles and videos explaining the big O notation. But to understand most of them (like this Wikipedia article), you should have studied mathematics as a preparation. ;-)

That's why, in this article, I will explain the big O notation (and the time and space complexity described with it) using only examples and diagrams – and entirely without mathematical formulas, proofs, and symbols like θ, Ω, ω, ∈, ∀, ∃ and ε.

You can find all source code from this article in this GitHub repository.

¹ also known as "Bachmann-Landau notation" or "asymptotic notation"

Contents

1. Types of Complexity
   1.1 Computational Time Complexity
   1.2 Space Complexity
2. Complexity Classes
   2.1 O(1) – Constant Time
   2.2 O(n) – Linear Time
   2.3 O(n²) – Quadratic Time
   2.4 O(log n) – Logarithmic Time
   2.5 O(n log n) – Quasilinear Time
   2.6 Big O Notation Order
   2.7 Other Complexity Classes
3. Summary
Types of Complexity
Computational Time Complexity

Computational time complexity describes the change in the runtime of an algorithm, depending on the change in the size of the input data. In other words: how much does the runtime of an algorithm degrade when the amount of input data increases? You will find concrete examples in the section on complexity classes below.
Space Complexity

Space complexity describes how much additional memory an algorithm needs depending on the size of the input data. This does not refer to the memory required for the input data itself (twice as much space is naturally needed for an input array twice as large), but to the additional memory needed by the algorithm for loop and helper variables, temporary data structures, and the call stack (e.g., due to recursion).
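The measurements in this article focus on time; as a small illustration of space complexity (a sketch of mine, not from the GitHub repository – the method names are made up), both of the following methods compute the same sum of an array, but they differ in the additional memory they need:

// O(1) space: only a fixed number of helper variables (sum, i),
// no matter how large the array is.
static long sumIterative(int[] array) {
  long sum = 0;
  for (int i = 0; i < array.length; i++) {
    sum += array[i];
  }
  return sum;
}

// O(n) space: each recursive call adds a frame to the call stack, so an
// array with n elements needs n stack frames (for very large arrays,
// this even overflows the stack).
static long sumRecursive(int[] array, int index) {
  if (index == array.length) {
    return 0;
  }
  return array[index] + sumRecursive(array, index + 1);
}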
Complexity Classes

We divide algorithms into so-called complexity classes. A complexity class is identified by the Landau symbol O ("big O"). In the following sections, I will explain the most common complexity classes, starting with the classes that are easy to understand and moving on to the more complex ones. Accordingly, the classes are not sorted by complexity.
O(1) – Constant Time

Pronounced: "Order 1", "O of 1", "big O of 1"

The runtime is constant, i.e., independent of the number of input elements n. In the following graph, the horizontal axis represents the number of input elements n (or, more generally, the size of the input problem), and the vertical axis represents the time required. Since complexity classes can only be used to classify algorithms, but not to calculate their exact running times, the axes are not labeled.
O(1) Examples

The following two problems are examples of constant time:

- Accessing a specific element of an array of size n: reading array[index] always takes the same time².
- Inserting an element at the beginning of a linked list: this, too, always takes the same time, no matter how long the list is.

² This statement is not one hundred percent correct. Effects from CPU caches also come into play here: if the data block containing the element to be read is already (or still) in the CPU cache (which is more likely the smaller the array is), then access is faster than if it first has to be read from RAM.

O(1) Example Source Code

The following source code (class ConstantTimeSimpleDemo in the GitHub repository) shows a simple example that measures the time required to insert an element at the beginning of a linked list:
public static void main(String[] args) {
  for (int n = 32; n <= 8_388_608; n *= 2) {
    LinkedList<Integer> list = createLinkedListOfSize(n);

    long time = System.nanoTime();
    list.add(0, 1);
    time = System.nanoTime() - time;

    System.out.printf("n = %d -> time = %d ns%n", n, time);
  }
}

private static LinkedList<Integer> createLinkedListOfSize(int n) {
  LinkedList<Integer> list = new LinkedList<>();
  for (int i = 0; i < n; i++) {
    list.add(i);
  }
  return list;
}
On my system, the times are between 1,200 ns and 19,000 ns, unevenly distributed over the various measurements. This is sufficient for a quick test, but we don't get particularly good measurement results here, as both the HotSpot compiler and the garbage collector can kick in at any time.

The test program TimeComplexityDemo with the ConstantTime class provides better measurement results. It first runs several warmup rounds to allow the HotSpot compiler to optimize the code. Only after that are measurements performed five times, and the median of the measured values is displayed. Here is an extract of the results:

--- ConstantTime (results 5 of 5) ---
ConstantTime, n = 32 -> fastest: 31,700 ns, median: 44,900 ns
ConstantTime, n = 16,384 -> fastest: 14,400 ns, median: 40,200 ns
ConstantTime, n = 8,388,608 -> fastest: 34,000 ns, median: 51,100 ns

The effort remains about the same, regardless of the size of the list. The complete test results can be found in the file test-results.txt.
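The actual TimeComplexityDemo program is in the GitHub repository; the following is only a rough sketch of the measuring approach just described (warmup rounds, then five measured rounds, median of the results) – the method name and parameters are made up for illustration:

// Hypothetical sketch of the measurement approach; the real
// TimeComplexityDemo in the GitHub repository may differ in detail.
static long measureMedianNanos(Runnable task, int warmupRounds, int measuredRounds) {
  for (int i = 0; i < warmupRounds; i++) {
    task.run();                      // let the HotSpot compiler optimize the code
  }

  long[] times = new long[measuredRounds];
  for (int i = 0; i < measuredRounds; i++) {
    long start = System.nanoTime();
    task.run();
    times[i] = System.nanoTime() - start;
  }

  java.util.Arrays.sort(times);
  return times[measuredRounds / 2];  // median of the measured values
}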
O(n) – Linear Time

Pronounced: "Order n", "O of n", "big O of n"

The time grows linearly with the number of input elements n: if n doubles, the time approximately doubles, too. "Approximately" because the effort may also include components with lower complexity classes. These become insignificant if n is sufficiently large, so they are omitted in the notation. In the following diagram, I have demonstrated this by starting the graph slightly above zero (meaning that the effort also contains a constant component):
O(n) Examples

The following problems are examples of linear time:

- Finding a specific element in an array: all elements of the array may have to be examined – if there are twice as many elements, it takes about twice as long.
- Summing up all elements of an array: again, every element has to be looked at once – for an array twice as large, this takes about twice as long.

It is essential to understand that the complexity class makes no statement about the absolute time required, but only about the change in the time required depending on the change in the input size. The two examples above would take much longer with a linked list than with an array – but that is irrelevant for the complexity class.
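For the first example, a linear search could look like this (a hypothetical sketch, not one of the demo classes from the repository) – in the worst case, the loop runs over all n elements:

// Finding a specific element in an unsorted array: in the worst case,
// every one of the n elements has to be examined -> O(n).
static int indexOf(int[] array, int key) {
  for (int i = 0; i < array.length; i++) {
    if (array[i] == key) {
      return i;
    }
  }
  return -1; // not found
}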
O(n) Example Source Code

The following source code (class LinearTimeSimpleDemo) measures the time for summing up all elements of an array:
public static void main(String[] args) {
  for (int n = 32; n <= 536_870_912; n *= 2) {
    int[] array = createArrayOfSize(n);

    long sum = 0;
    long time = System.nanoTime();
    for (int i = 0; i < n; i++) {
      sum += array[i];
    }
    time = System.nanoTime() - time;

    System.out.printf("n = %d -> time = %d ns%n", n, time);
  }
}

private static int[] createArrayOfSize(int n) {
  int[] array = new int[n];
  for (int i = 0; i < n; i++) {
    array[i] = i;
  }
  return array;
}
On my system, the time increases approximately linearly from 1,100 ns to 155,911,900 ns. Better measurement results are again provided by the test program TimeComplexityDemo and the LinearTime algorithm class. Here is an extract of the results:

--- LinearTime (results 5 of 5) ---
LinearTime, n = 512 -> fastest: 300 ns, median: 300 ns
LinearTime, n = 524,288 -> fastest: 159,300 ns, median: 189,400 ns
LinearTime, n = 536,870,912 -> fastest: 164,322,600 ns, median: 168,681,700 ns

You can find the complete test results again in test-results.txt.
What is the Difference Between "Linear" and "Proportional"?

A function is linear if it can be represented by a straight line, e.g., f(x) = 5x + 3. Proportional is a particular case of linear, where the line passes through the origin (0, 0) of the coordinate system, for example, f(x) = 3x. As the effort in O(n) may also contain a constant component, we call its time linear, not necessarily proportional. For example, an algorithm that needs 1 ms of fixed setup time plus 1 µs per element is linear, but not proportional, in the number of elements.
O(n²) – Quadratic Time

Pronounced: "Order n squared", "O of n squared", "big O of n squared"

The time grows proportionally to the square of the number of input elements: if the number of input elements n doubles, the time roughly quadruples. (And if the number of elements increases tenfold, the effort increases by a factor of one hundred!)
O(n²) Examples

Examples of quadratic time are simple sorting algorithms like Insertion Sort, Selection Sort, and Bubble Sort.
O(n²) Example Source Code

The following example (class QuadraticTimeSimpleDemo) shows how the time for sorting an array using Insertion Sort changes depending on the size of the array:
public static void main(String[] args) {
  for (int n = 32; n <= 262_144; n *= 2) {
    int[] array = createRandomArrayOfSize(n);

    long time = System.nanoTime();
    insertionSort(array);
    time = System.nanoTime() - time;

    System.out.printf("n = %d -> time = %d ns%n", n, time);
  }
}

private static int[] createRandomArrayOfSize(int n) {
  ThreadLocalRandom random = ThreadLocalRandom.current();
  int[] array = new int[n];
  for (int i = 0; i < n; i++) {
    array[i] = random.nextInt();
  }
  return array;
}

private static void insertionSort(int[] elements) {
  for (int i = 1; i < elements.length; i++) {
    int elementToSort = elements[i];
    int j = i;
    while (j > 0 && elementToSort < elements[j - 1]) {
      elements[j] = elements[j - 1];
      j--;
    }
    elements[j] = elementToSort;
  }
}
We can obtain better results with the test program TimeComplexityDemo and the QuadraticTime class. Here is an excerpt of the results, where you can see the approximate quadrupling of the effort each time the problem size doubles:

QuadraticTime, n = 8,192 -> fastest: 4,648,400 ns, median: 4,720,200 ns
QuadraticTime, n = 16,384 -> fastest: 19,189,100 ns, median: 19,440,400 ns
QuadraticTime, n = 32,768 -> fastest: 78,416,700 ns, median: 79,896,000 ns
QuadraticTime, n = 65,536 -> fastest: 319,905,300 ns, median: 330,530,600 ns
QuadraticTime, n = 131,072 -> fastest: 1,310,702,600 ns, median: 1,323,919,500 ns

You can find the complete test results in test-results.txt.
O(n) vs. O(n²)

At this point, I would like to point out again that the effort can contain components of lower complexity classes and constant factors. Both are irrelevant for the big O notation since they are no longer of importance once n is sufficiently large. It is therefore possible that, for example, O(n²) is faster than O(n) – at least up to a certain size of n.

The following diagram compares three fictitious algorithms: one with complexity class O(n²) and two with O(n), one of which is faster than the other. Up to n = 4, the orange O(n²) algorithm takes less time than the yellow O(n) algorithm – and up to n = 8, even less time than the cyan O(n) algorithm. From a sufficiently large n onwards (here, from n = 9), O(n²) is and remains the slowest algorithm.
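To put numbers to this effect, here is a small sketch with two made-up cost functions, analogous to the fictitious algorithms in the diagram (the constants 100 and 2 are chosen purely for illustration):

public static void main(String[] args) {
  // A linear algorithm with a large constant factor (100 steps per element)
  // vs. a quadratic algorithm with a small one (2 steps per pair).
  for (int n = 1; n <= 64; n *= 2) {
    long linear = 100L * n;
    long quadratic = 2L * n * n;
    System.out.printf("n = %2d -> linear: %,6d  quadratic: %,6d%n", n, linear, quadratic);
  }
}

Up to n = 32, the quadratic cost function is still the smaller one; from n = 50 onwards (n = 64 in the printed output), the linear algorithm wins and keeps winning.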
Let's move on to two not-so-intuitive complexity classes.

O(log n) – Logarithmic Time

Pronounced: "Order log n", "O of log n", "big O of log n"

The effort increases by approximately a constant amount when the number of input elements doubles. For example, if the time increases by one second when the number of input elements grows from 1,000 to 2,000, it increases by only one more second when the input grows to 4,000 – and by yet another second when it grows to 8,000.
O(log n) Example

An example of logarithmic growth is the binary search for a specific element in a sorted array of size n. Since we halve the area to be searched with each search step, we can, in turn, search an array twice as large with only one more search step. (The older ones among us may remember searching the telephone book or an encyclopedia this way.)
O(log n) Example Source Code

The following example (class LogarithmicTimeSimpleDemo) measures how the time for binary search changes in relation to the array size:
public static void main(String[] args) {
  for (int n = 32; n <= 536_870_912; n *= 2) {
    int[] array = createArrayOfSize(n);

    long time = System.nanoTime();
    Arrays.binarySearch(array, 0);
    time = System.nanoTime() - time;

    System.out.printf("n = %d -> time = %d ns%n", n, time);
  }
}

private static int[] createArrayOfSize(int n) {
  int[] array = new int[n];
  for (int i = 0; i < n; i++) {
    array[i] = i;
  }
  return array;
}
We get better measurement results with the test program TimeComplexityDemo and the class LogarithmicTime. Here are the results:

LogarithmicTime, n = 32 -> fastest: 77,800 ns, median: 107,200 ns
LogarithmicTime, n = 2,048 -> fastest: 173,500 ns, median: 257,400 ns
LogarithmicTime, n = 131,072 -> fastest: 363,400 ns, median: 413,100 ns
LogarithmicTime, n = 8,388,608 -> fastest: 661,100 ns, median: 670,800 ns
LogarithmicTime, n = 536,870,912 -> fastest: 770,500 ns, median: 875,700 ns

In each step, the problem size n increases by a factor of 64. The time does not always increase by exactly the same amount, but it does so sufficiently precisely to demonstrate that logarithmic time is significantly cheaper than linear time (for which the time required would also increase by a factor of 64 in each step). As before, you can find the complete test results in the file test-results.txt.
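Arrays.binarySearch() hides the halving described above. The following minimal sketch (my own, not the JDK implementation) makes it explicit why at most about log₂(n) steps are needed:

// Binary search in a sorted array: each iteration halves the remaining
// search range, so at most about log2(n) iterations are needed.
static int binarySearch(int[] sortedArray, int key) {
  int left = 0;
  int right = sortedArray.length - 1;
  while (left <= right) {
    int middle = (left + right) >>> 1;   // halve the search range
    if (sortedArray[middle] == key) {
      return middle;
    } else if (sortedArray[middle] < key) {
      left = middle + 1;                 // continue in the right half
    } else {
      right = middle - 1;                // continue in the left half
    }
  }
  return -1; // not found
}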
O(n log n) – Quasilinear Time

Pronounced: "Order n log n", "O of n log n", "big O of n log n"

The effort grows slightly faster than linearly because the linear component is multiplied by a logarithmic one. For clarification, you can also insert a multiplication sign: O(n × log n). This is best illustrated by the following graph: we see a curve whose gradient is visibly growing at the beginning, but which soon approaches a straight line as n increases:
O(n log n) Example

Efficient sorting algorithms like Quicksort, Merge Sort, and Heapsort are examples of quasilinear time.
O(n log n) Example Source Code

The following sample code (class QuasiLinearTimeSimpleDemo) shows how the time for sorting an array with Quicksort³ grows in relation to the array size:
public static void main(String[] args) {
  // Reconstructed sketch: sorts a random array with Arrays.sort(), which uses
  // Dual-Pivot Quicksort for int arrays (see footnote ³); the exact
  // QuasiLinearTimeSimpleDemo code is in the GitHub repository.
  for (int n = 64; n <= 536_870_912; n *= 2) {
    int[] array = createRandomArrayOfSize(n);

    long time = System.nanoTime();
    Arrays.sort(array);
    time = System.nanoTime() - time;

    System.out.printf("n = %d -> time = %d ns%n", n, time);
  }
}

private static int[] createRandomArrayOfSize(int n) {
  ThreadLocalRandom random = ThreadLocalRandom.current();
  int[] array = new int[n];
  for (int i = 0; i < n; i++) {
    array[i] = random.nextInt();
  }
  return array;
}
The test program TimeComplexityDemo with the class QuasiLinearTime delivers more precise results. Here is an extract:

QuasiLinearTime, n = 256 -> fastest: 12,200 ns, med.: 12,500 ns
QuasiLinearTime, n = 4,096 -> fastest: 228,600 ns, med.: 234,200 ns
QuasiLinearTime, n = 65,536 -> fastest: 4,606,500 ns, med.: 4,679,800 ns
QuasiLinearTime, n = 1,048,576 -> fastest: 93,933,500 ns, med.: 95,216,300 ns
QuasiLinearTime, n = 16,777,216 -> fastest: 1,714,541,900 ns, med.: 1,755,715,000 ns

The problem size increases each time by a factor of 16, and the time required by a factor of 18.5 to 20.3. You can find the complete test results, as always, in test-results.txt.

³ More precisely: Dual-Pivot Quicksort, which switches to Insertion Sort for arrays with less than 44 elements. For this reason, this test starts at 64 elements, not at 32 like the others.
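That the factor is somewhat above 16 is just what n log n predicts; a small calculation of mine shows the theoretical growth factor when n increases 16-fold (the measured factors also contain constant overhead and cache effects, so they do not match these values exactly):

public static void main(String[] args) {
  // Theoretical growth of n log n for a 16-fold increase of n:
  // (16n * log2(16n)) / (n * log2(n)) = 16 * (log2(n) + 4) / log2(n),
  // which slowly approaches 16 as n grows.
  for (long n = 256; n <= 16_777_216; n *= 16) {
    double before = n * (Math.log(n) / Math.log(2));
    double after = (16 * n) * (Math.log(16 * n) / Math.log(2));
    System.out.printf("n = %,10d -> n log n grows by a factor of %.1f%n", n, after / before);
  }
}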
Big O Notation Order

Here are, once again, the complexity classes, sorted in ascending order of complexity:

- O(1) – constant time
- O(log n) – logarithmic time
- O(n) – linear time
- O(n log n) – quasilinear time
- O(n²) – quadratic time

And here is the comparison graphically: I intentionally shifted the curves along the time axis so that the worst complexity class, O(n²), is the fastest for low values of n, and the best complexity class, O(1), is the slowest – to then show how, for sufficiently large values of n, the efforts shift as expected.
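The same ranking can also be expressed in numbers instead of curves. The following snippet (my own illustration, not one of the demo classes from the repository) prints, for a few values of n, how strongly the effort of each class grows – the values are dimensionless, as only the growth matters, not absolute times:

public static void main(String[] args) {
  System.out.printf("%12s %6s %10s %14s %16s %18s%n",
      "n", "O(1)", "O(log n)", "O(n)", "O(n log n)", "O(n²)");
  for (long n = 10; n <= 1_000_000; n *= 10) {
    double logN = Math.log(n) / Math.log(2);
    System.out.printf("%,12d %6d %10.1f %,14d %,16.0f %,18d%n",
        n, 1, logN, n, n * logN, n * n);
  }
}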
Other Complexity Classes

Further complexity classes are, for example, polynomial time O(nᵐ), exponential time O(2ⁿ), and factorial time O(n!). However, these are so bad that we should avoid algorithms with these complexities, if possible. I have included these classes in the following diagram (O(nᵐ) with m = 3): I had to compress the y-axis by a factor of 10 compared to the previous diagram to display the three new curves.
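As a small, hypothetical illustration of how quickly exponential time gets out of hand (this example is not from the article's repository): the naive recursive computation of Fibonacci numbers needs an exponentially growing number of calls.

// Naive recursive Fibonacci: each call spawns two further calls, so the
// number of calls grows exponentially with n -> exponential time.
// Even n around 50 already takes a very long time.
static long fibonacci(int n) {
  if (n <= 1) {
    return n;
  }
  return fibonacci(n - 1) + fibonacci(n - 2);
}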
Summary

Time complexity describes how the runtime of an algorithm changes depending on the amount of input data. The most common complexity classes are (in ascending order of complexity): O(1), O(log n), O(n), O(n log n), O(n²).

Algorithms with constant, logarithmic, linear, and quasilinear time usually finish in a reasonable time for input sizes up to several billion elements. Algorithms with quadratic time, on the other hand, can quickly reach theoretical execution times of several years for the same problem sizes⁴. You should, therefore, avoid them as far as possible.

⁴ Quicksort, for example, sorts a billion items in 90 seconds on my laptop; Insertion Sort, on the other hand, needs 85 seconds for a million items; that would be 85 million seconds for a billion items – or, in other words: two years and eight months!

If you liked the article, please share it using one of the share buttons at the end or leave me a comment. Do you want to be informed when new articles are published on HappyCoders.eu? Then click here to sign up for the HappyCoders.eu newsletter.