Big-O Notation: A Simple Explanation with Examples

It can be intimidating — it doesn’t need to be


As an instructor of front-end engineering, I’m often asked the same question by junior developers when the topic of big O notation comes up:

“Can’t you get along as an engineer without knowing these things? Especially a frontend engineer?”

Sure — there’s no stopping anyone from brute-forcing their way to a solution.

“But isn’t making it work the most important thing?”

Fair enough. Working is the first (and sometimes only) step, depending on the size of the data set or your deadline for shipping a product.

But what happens when you go from working with a collection of 100,000 elements to 100 million? Or 10 billion? When it comes to scalability, these things matter.

The fact is you can write code that works without knowing the ins and outs of big O notation. But it’s difficult to tell whether big O is something you should even be considering when you’re creating an algorithm if you have no idea what it is. In my opinion, dismissing computer science fundamentals that can help you make decisions and write better code is foolish at best and disastrous at worst.

Big O notation is the language we use for talking about how long an algorithm takes to run (time complexity) or how much memory is used by an algorithm (space complexity). Big O notation can express the best, worst, and average-case running time of an algorithm. For our purposes, we’re going to focus primarily on Big-O as it relates to time complexity.

As a software engineer, you’ll find that most discussions of big O focus on the upper-bound run time of an algorithm, which is often termed the worst case. An important thing to note is that the running time in big O notation does not directly equate to time as we know it (e.g., seconds, milliseconds, microseconds). Analysis of running times does not take certain things into account, such as the processor, the language, or the run-time environment. Instead, we can think of time as the number of operations or steps it takes to complete a problem of size n. In other words, big O notation is a way to track how quickly the run-time grows relative to the size of the input.

When we’re thinking about the worst-case, the question becomes this: What are the most operations/steps that could happen for an input of size n?

O(1) means that it takes a constant time to run an algorithm, regardless of the size of the input.

Bookmarks are a great example of how constant time plays out in the real world. A bookmark lets you find the last page you read in a quick, efficient manner. It doesn’t matter whether the book has 30 pages or 1,000 pages. As long as you’re using a bookmark, you will find that last page in a single step.

In programming, a lot of operations are constant. Here are some examples:

  • math operations
  • accessing an array via the index
  • accessing a hash via the key
  • pushing and popping on a stack
  • insertion and removal from a queue
  • returning a value from a function

Take a look at findFirstIndex, listed below. Passing smallCollection or giganticCollection will produce the same run-time of O(1) when accessing the 0 index. The return of firstIndex is also a constant time operation. Regardless of the size of n, both of these operations will take a constant amount of time.

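Here’s a minimal sketch of what that looks like (the collection contents here are illustrative):

    const smallCollection = [7, 3, 9, 1];
    // A stand-in: imagine a billion elements here instead of a million.
    const giganticCollection = new Array(1000000).fill(0);

    function findFirstIndex(collection) {
      const firstIndex = collection[0]; // accessing an index: one step, O(1)
      return firstIndex;                // returning a value: one step, O(1)
    }

    findFirstIndex(smallCollection);    // same number of steps...
    findFirstIndex(giganticCollection); // ...regardless of the size of n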

O(n) means that the run-time increases at the same pace as the input.

The act of reading a book is an example of how linear time plays out in the real world. Let’s assume that it takes me exactly one minute to read a single page of a large-print book. Given that, a book that has 30 pages will take me 30 minutes to read. Likewise, a book that has 1,000 pages will take me 1,000 minutes of reading time. Now, I don’t force myself to finish books that aren’t great, so there is always a chance that I won’t get through that 1,000-page book. However, before I start reading it, I know that my worst-case reading time is 1,000 minutes for a 1,000-page book.

In programming, one of the most common linear-time operations is traversing an array. In JavaScript, methods like forEach, map, and reduce run through the entire collection of data, from start to finish.

Take a look at our printAllValues function below. The number of operations that it will take to loop through n is directly related to the size of n. Generally speaking (but not always), seeing a loop is a good indicator that the particular chunk of code you’re examining has a run time of O(n).

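Here’s a sketch of printAllValues:

    function printAllValues(collection) {
      // One console.log per element: n operations for an input of size n.
      for (let i = 0; i < collection.length; i++) {
        console.log(collection[i]);
      }
    }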

But what about the method find? Since it doesn’t always run through the entire collection, is it actually linear? In the example below, the first value that is less than three is at index 0 of the collection. Why wouldn’t this be constant time?

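A sketch of that scenario (the array contents and variable names are illustrative):

    const collection = [2, 5, 8, 9];

    // 2 lives at index 0, so find stops after checking a single element.
    const firstValueUnderThree = collection.find((value) => value < 3);
    console.log(firstValueUnderThree); // 2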

Remember, since we’re looking for the worst-case scenario, we must assume that the input will not be ideal and that the element or value that we seek could be the last value in the array. In the second scenario (below), you’ll see just that. Finding the number that is less than three, in the least ideal situation, would require iterating through the entire array.

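Here’s the same lookup with a worst-case input:

    const collection = [9, 8, 5, 2];

    // The only value under three sits at the last index, so find must
    // check all n elements before it can return.
    const firstValueUnderThree = collection.find((value) => value < 3);
    console.log(firstValueUnderThree); // 2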

O(n²) means that the calculation runs in quadratic time: the number of operations grows with the square of the input size.

In programming, many of the more basic sorting algorithms have a worst-case run time of O(n²):

  • Bubble Sort
  • Insertion Sort
  • Selection Sort

Let’s look at countOperations below. Here we have two nested loops that increment the operations variable on each iteration. If n is our smallCollection, we end up with a count of 16 operations. Not horrible. But what if n is our giganticCollection? A billion times a billion is a quintillion — or 1,000,000,000,000,000,000. Yikes. That’s a LOT of operations. Even an array with as few as 1,000 elements would create a million operations.

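A sketch consistent with those counts:

    function countOperations(collection) {
      let operations = 0;
      for (let i = 0; i < collection.length; i++) {
        for (let j = 0; j < collection.length; j++) {
          operations++; // runs n * n times
        }
      }
      return operations;
    }

    countOperations(smallCollection); // 4 elements -> 4 * 4 = 16 operations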

Generally speaking (but not always), seeing two nested loops is typically a good indicator that the piece of code you’re looking at has a run time of O(n²). Likewise — three nested loops would indicate a run time of O(n³)!

O(log n) means that the running time grows in proportion to the logarithm of the input size. This means that the run time barely increases even as you exponentially increase the input.

Finding a word in a physical dictionary by halving my sample size is an excellent example of how logarithmic time works in the real world. For instance, when looking for the word “senior,” I could open the dictionary precisely in the middle — at which point I could determine whether the words that begin with “s” come before or after the words that I’m currently viewing. Once I determine that “s” is in the second half of the book, I can dismiss all of the pages in the first half. I then repeat the same process. By following this algorithm to the end, I cut the number of pages I must search through in half every time until I find the word.

In programming, this act of searching through a physical dictionary is an example of a binary search operation — the most typical example used when discussing logarithmic run times.
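Here’s a minimal binary search over a sorted array (the word list is illustrative):

    function binarySearch(sortedWords, target) {
      let low = 0;
      let high = sortedWords.length - 1;

      while (low <= high) {
        const mid = Math.floor((low + high) / 2); // open to the middle "page"
        if (sortedWords[mid] === target) return mid;
        if (sortedWords[mid] < target) {
          low = mid + 1;   // dismiss the first half
        } else {
          high = mid - 1;  // dismiss the second half
        }
      }
      return -1; // not found
    }

    binarySearch(["apple", "banana", "junior", "senior", "zebra"], "senior"); // 3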

Let’s take a look at a modified version of our countOperations function. Note that n is now a number: n could be an input (number) or the size of an input (the length of an array).

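A sketch that matches those numbers, where each pass discards half of what remains:

    function countOperations(n) {
      let operations = 0;
      while (n > 1) {
        n = n / 2;    // cut the remaining input in half
        operations++; // one step per halving: about log2(n) steps total
      }
      return operations;
    }

    countOperations(2000); // 11
    countOperations(4000); // 12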

In the example above, we end up with 11 operations if n = 2000. If n = 4000, we end up with 12 operations. Every time we double n, the number of operations increases by only one. Algorithms that run in logarithmic time have big implications when it comes to larger inputs. Using our example above, O(log(7)) would return three operations, and O(log(1000000)) would return only 20 operations!

Note: O(n log n), which is often confused with O(log n), means that the running time of an algorithm is linearithmic, which is a combination of linear and logarithmic complexity. Sorting algorithms that utilize a divide and conquer strategy are linearithmic, such as the following:

  • merge sort
  • timsort
  • heapsort

When looking at time complexity, O(n log n) lands between O(n²) and O(n).
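To see where the n and the log n come from, here is a minimal merge sort sketch: splitting in half repeatedly gives about log n levels, and merging touches all n elements at each level.

    function mergeSort(arr) {
      if (arr.length <= 1) return arr;

      // Halving the input repeatedly produces about log n levels...
      const mid = Math.floor(arr.length / 2);
      const left = mergeSort(arr.slice(0, mid));
      const right = mergeSort(arr.slice(mid));

      // ...and merging touches all n elements at each level.
      const merged = [];
      let i = 0;
      let j = 0;
      while (i < left.length && j < right.length) {
        merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
      }
      return merged.concat(left.slice(i), right.slice(j));
    }

    mergeSort([8, 3, 5, 1, 9, 2]); // [1, 2, 3, 5, 8, 9]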

[Chart: how the common run times compare as the input size grows]

One of my favorite sites to reference for big O is Big O cheat sheet. As you can see from the chart, other run times have pretty horrible time complexity, like O(2^n) and O(n!).

Algorithms that run in O(2^n) and O(n!) don’t fare well with anything but tiny data sets. O(2^n), or exponential time, doubles in time with each addition to the input. O(n!), or factorial time, is even worse. Any time n increases by 1, the running time increases by a factor of n.
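As an illustration, the classic exponential-time example is the naive recursive Fibonacci, which branches into two calls at every step:

    // Roughly O(2^n): each call spawns two more until it bottoms out.
    function fib(n) {
      if (n <= 1) return n;
      return fib(n - 1) + fib(n - 2);
    }

    fib(10); // returns quickly
    fib(50); // don't wait around for this one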

It’s important to note that there is no fixed list of run times. However, the ones that we’ve covered in this post are some of the most common.

Thus far, we’ve only focused on talking about big O by isolating a few lines of code at a time. How do we calculate big O if we are dealing with an algorithm that has several parts?

Drop the constants

Let’s start with logEverythingTwice.

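A sketch of logEverythingTwice:

    function logEverythingTwice(items) {
      for (let i = 0; i < items.length; i++) {
        console.log(items[i]); // first pass: n operations
      }
      for (let i = 0; i < items.length; i++) {
        console.log(items[i]); // second pass: another n operations
      }
    }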

Since big O is concerned with how quickly our run-time grows, the first rule that you want to remember is to drop any constants when you analyze an algorithm.

In the example above, we have two separate loops that are iterating through the length of an array (linear). Each loop logs an item in the collection (constant). Since we don’t concern ourselves with operations that run in constant time (they make little difference in the overall approximation), we’re only going to take the two loops into account and add them together, which gives us O(2n). Since the number two is also a constant in this case, we throw it out and call this O(n).

But what if these loops were nested?

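A sketch of the nested version (the function name is illustrative):

    const items = [1, 2, 3, 4, 5];

    function logEverythingNested(items) {
      for (let i = 0; i < items.length; i++) {
        for (let j = 0; j < items.length; j++) {
          console.log(items[i]); // each item is logged n times
        }
      }
    }

    logEverythingNested(items);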

By nesting these loops, we are no longer logging items[i] twice but five times! In this case, we multiply O(n) * O(n) instead of adding the run times together. We do this because the execution of our log depends on iterating through the entirety of the second loop (logging items[i] five times) before we can increment i and move to the next index via our first loop. Here’s what our log ends up returning:

1
1
1
1
1
2
2
2
2
2
3
3
....

Just like the first example, we still want to drop the constants from the logs. In the end, O(n * n) gives us O(n²).

Your takeaway should not be that you should always avoid nested loops due to performance implications. You will undoubtedly come across situations that warrant such a solution (tiny datasets, manipulating multi-dimensional arrays, etc.). However, you should be aware that there may be performance implications, depending on what you are doing.

Drop the non-dominant terms

Let’s take a look at another example.

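A sketch consistent with that breakdown, with a quadratic part that prints multiples followed by a linear sum (the exact body is illustrative):

    function printMultiplesThenSum(numbers) {
      // Nested loops: O(n^2)
      for (let i = 0; i < numbers.length; i++) {
        for (let j = 0; j < numbers.length; j++) {
          console.log(numbers[i] * numbers[j]);
        }
      }

      // A single loop: O(n)
      let sum = 0;
      for (let i = 0; i < numbers.length; i++) {
        sum += numbers[i];
      }
      console.log(sum);
    }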

After dropping the constants in printMultiplesThenSum, we can see that the big O notation for this function would be O(n² + n). Since big O is also not concerned with non-dominant terms, we drop the n (quadratic wins since it is worse than linear time). Throwing out non-dominant terms is the second rule to follow when analyzing the run time of an algorithm. In the end, O(n² + n) gives us O(n²).

As mentioned earlier, you can write code without understanding the ins and outs of big O. You should also be aware that it’s an approximation: the run time you calculate by hand may not play out in production, where performance is affected by things like the environment, processor, and language.

However, this shouldn’t stop you from learning big O. Having a good understanding of big O notation gives you more context about what does and does not matter when designing algorithms. Who wouldn’t want that?


FAQs

What is Big O Notation?

Big O Notation is the language we use for talking about how long an algorithm takes to run. It is how we compare the efficiency of different approaches to a problem. With Big O Notation, we express the runtime in terms of how quickly it grows relative to the input as the input gets larger.

What is the Big O Notation explained easily?

In simpler terms, Big O notation helps us understand how an algorithm's efficiency changes as the amount of data it processes increases. It focuses on the dominant factor influencing an algorithm's runtime, ignoring constant factors and lower-order terms.

How do you find the Big O Notation of a function example?

When we are trying to figure out the Big O for a particular function g(n), we only care about the dominant term of the function. The dominant term is the term that grows the fastest. For example, n² grows faster than n, so if we have something like g(n) = n² + 5n + 6, it will be big O(n²).

How do you answer Big O Notation?

Answer: Big O Notation is determined by identifying the dominant operation in an algorithm and expressing its time complexity in terms of n, where n represents the input size.

Is Big O notation hard to understand?

It is extremely common for programmers of all levels to have a weak understanding of performance analysis with big O notation. Beginners often refuse to engage altogether, believing the topic to be impractical or too difficult.

What is the most efficient Big O notation?

Common Big O Notation Complexities

O(1): This complexity indicates that the algorithm's performance is constant, regardless of the input size. This is the best possible complexity, as the algorithm takes the same amount of time to complete regardless of the input size.

Is Big O the worst case?

The Big-O notation describes the worst-case running time of a program. We compute the Big-O of an algorithm by counting how many iterations an algorithm will take in the worst-case scenario with an input of N. We typically consult the Big-O because we must always plan for the worst case.

What is an example of a Big O proof?

In a formal big-O proof, you first choose values for k and c, then show that 0 ≤ f(x) ≤ c·g(x) for every x ≥ k. So the example from the previous section would look like: Claim: 3x² + 7x + 2 is O(x²). Proof: Consider c = 4 and k = 100.

What is the mathematical expression of Big-O notation?

Informally, f(x) = o(g(x)) means that f grows much slower than g and is insignificant in comparison. Formally, we write f(x) = o(g(x)) (for x → ∞) if and only if for every C > 0 there exists a real number N such that for all x > N we have |f(x)| < C·|g(x)|; if g(x) ≠ 0, this is equivalent to lim(x→∞) f(x)/g(x) = 0.

What is a Big O notation calculator?

This is where Big O Notation enters the picture. Big O Notation is a metric for determining the efficiency of an algorithm. It allows you to estimate how long your code will run on different sets of inputs and measure how effectively your code scales as the size of your input increases.

Why is it called Big O?

Big O notation is named after the term "order of the function", which refers to the growth of functions. Big O notation is used to find the upper bound (the highest possible amount) of the function's growth rate, meaning it works out the longest time it will take to turn the input into the output.

How to calculate the time complexity?

To calculate linear time complexity, you typically look at algorithms with loops that iterate through the input once. The time taken for each iteration remains constant, so the overall time complexity grows linearly with the size of the input.

What is the Big O explained simply?

Big O notation tells you how much slower a piece of a program gets as the input gets larger. For example: Returning the first element of an array takes a constant amount of time. If you double the length of an array, it still takes the same amount of time.

Why do we drop constants in Big O notation?

Recall that when we use big-O notation, we drop constants and low-order terms. This is because when the problem size gets sufficiently large, those terms don't matter. However, this means that two algorithms can have the same big-O time complexity, even though one is always faster than the other.

What is the asymptotic notation in simple terms?

An asymptotic notation essentially describes the running time of an algorithm. This means that it shows how much time the algorithm takes to run with a given input, n. There are three different notations: big O, big Theta (Θ), and big Omega (Ω).

What is the big O and small O notation?

Big-O means “is of the same order as”. The corresponding little-o means “is ultimately smaller than”: f(n) = o(g(n)) means that f(n)/g(n) → 0 as n → ∞.
