Big O Notation: The Speedometer for Algorithm Efficiency

Discovering the Upper Bound of Algorithm Complexity

Gene Zeiniss
8 min read · Jul 21, 2024
Image by un-perfekt from Pixabay

One of the significant projects I worked on was a tool that scans text for sensitive data. Skipping the intricate logic details, the gist is that the tool scans texts using various data-type recognizers. The code was completed, tested, and pushed to production. Both alpha and beta versions were tested by volunteer customers and received positive feedback. Excitement was in the air as the tool was officially released to all customers. But then, disaster struck. The monitoring dashboards began showing periodic timeouts from a specific client. After some digging, I found that this customer’s inputs were extremely long and filled with lengthy character sequences, not the plain text I had expected. This led to a “performance improvement” project that spanned a few weeks, because the solution was far from straightforward.

Don’t let unexpected challenges catch you off guard! Grasping function complexity and assessing performance in worst-case scenarios before production can spare you from embarrassment and hefty company costs. It’s like knowing how fast your code can go when faced with different challenges. In places where time is money — think of real-time apps, stock trading platforms, or even your favorite multiplayer game — choosing an algorithm with lower complexity keeps our apps quick and snappy. Now, let’s dive into Big O notation to understand this better.

Evaluating Algorithm Efficiency with Big O Notation

Think of Big O as a speedometer for algorithms in the wild world of computer science. This notation evaluates algorithms' efficiency in a worst-case scenario and describes the upper bound of an algorithm’s complexity.

To determine the Big O for a particular function, we focus on the dominant term of the function. The dominant term is like the superstar of the function — it’s the one that grows the fastest and steals the show as the input size increases. For example, if a function performs 4n² + 3n + 10 operations, the 4n² term dominates as n grows, so we drop the constants and lower-order terms and call it O(n²). So, when we’re talking Big O, it’s all about that superstar! But how do we figure out which function outshines the rest? There are a few rules to guide us.

Algorithm Complexity Growth

Let’s chat about the notations shown in the graph above, starting from the least complex to the most complex.

O(1): Constant Time Complexity

An O(1) algorithm gets the job done in constant time, no matter the input size n. Here are a few examples of operations on an array that have O(1) time complexity, sketched here in Java:
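
// Illustrative sketch: typical constant-time array operations
int[] pages = {42, 7, 13, 99, 5};

int first = pages[0];                // access the first element by index: O(1)
int last = pages[pages.length - 1];  // access the last element: O(1)
int size = pages.length;             // read the length property: O(1)
pages[2] = 21;                       // overwrite an element in place: O(1)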

These functions are super straightforward because they only access the specific elements or properties of the array directly. Whether the array has one element or a million, the operation’s duration stays rock steady. It’s the best-case scenario in the world of algorithms, so whenever you can, go for the O(1) to keep things running at top speed!

O(log n): Logarithmic Time Complexity

Imagine you have a huge book and you need to find a specific page. Instead of flipping through each page one by one (which would take forever), you use a clever strategy called binary search. Here’s how it works:

  • You open the book to the middle.
  • If the page you need is earlier in the book, you close the second half and open the middle of the first half.
  • If the page you need is later in the book, you ignore the first half and open the middle of the second half.
  • You keep halving the number of pages you need to check until you find the one you’re looking for.

This binary search has O(log n) time complexity! In each step, you cut the number of pages you need to check in half, making your search super efficient.

So, if your book has a thousand pages, with O(log n) efficiency, you’d only need about 10 checks to find your page. It’s like having a magical ability to zero in on your target page in no time! But hey, with digital books taking over, who needs old-school page-flipping skills anyway? Pages change at the touch of a button — try keeping up with that, paper lovers!
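
Here’s a minimal Java sketch of that page search, assuming pages are simply numbered from 1 to totalPages:

int totalPages = 1_000;
int targetPage = 777;

// Binary search: halve the remaining range on every check
int low = 1, high = totalPages, checks = 0;
while (low <= high) {
    checks++;
    int middle = (low + high) / 2;   // open the book in the middle
    if (middle == targetPage) {
        System.out.println("Found page " + targetPage + " in " + checks + " checks");
        break;
    } else if (targetPage < middle) {
        high = middle - 1;           // the page is earlier: close the second half
    } else {
        low = middle + 1;            // the page is later: ignore the first half
    }
}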

Go ahead and bump up the totalPages to 1,000,000 and see what happens when you run the program. Spoiler: you’ll find the target page in about 20 tries! 😜

O(n): Linear Time Complexity

Let’s keep looking through the pages. This time, imagine you are trying to locate a specific page in a thick book with pages numbered in Chinese characters. If you don’t know Chinese, the binary search can’t help you. To find the relevant page, you will flip through the pages one by one from the beginning until you reach the page you’re looking for.

Image by Jeremy Zhu from Pixabay

As I mentioned earlier, Big O represents the worst-case scenario. In the given example, the target page will either be the last one or nonexistent. In terms of time complexity, this is O(n), where n is the total number of pages: if there are 100 pages, you may have to check all 100 to find the one you want; and if there are 1,000 pages, you will check all of them — no shortcuts, just a straightforward, linear search.

Accordingly, O(n) represents linear time complexity, where the time required to find the page increases proportionally with the number of pages you have to search through. In other words, the runtime of an algorithm grows linearly with the input size.

Additional widely used examples of algorithms with linear complexity are finding the max or min value in an array, summing elements, copying an array, printing all elements, counting occurrences, and other operations requiring examining each element within an array.
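
For instance, finding the max value has to look at every element exactly once; a minimal sketch:

// Linear scan for the maximum value: O(n)
static int max(int[] values) {
    int best = values[0];            // start with the first element
    for (int value : values) {
        if (value > best) {
            best = value;            // remember the largest value seen so far
        }
    }
    return best;
}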

O(n log n): Linearithmic Time Complexity

It doesn’t take a genius to understand that linearithmic time complexity combines linear and logarithmic complexities.

The classic example of an O(n log n) algorithm is merge sort.
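
Merge sort recursively splits the array in half until only single elements remain (about log n levels of splitting), then merges the sorted halves back together (O(n) merging work at each level). A minimal sketch:

// Recursively split the array, then merge the sorted halves
static void mergeSort(int[] arr, int left, int right) {
    if (left >= right) {
        return;                      // a single element is already sorted
    }
    int mid = (left + right) / 2;
    mergeSort(arr, left, mid);       // sort the left half
    mergeSort(arr, mid + 1, right);  // sort the right half
    merge(arr, left, mid, right);    // merge the sorted halves: O(n) per level
}

static void merge(int[] arr, int left, int mid, int right) {
    int[] merged = new int[right - left + 1];
    int i = left, j = mid + 1, k = 0;
    while (i <= mid && j <= right) { // take the smaller head of the two halves
        merged[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    }
    while (i <= mid) merged[k++] = arr[i++];   // copy any leftovers
    while (j <= right) merged[k++] = arr[j++];
    System.arraycopy(merged, 0, arr, left, merged.length);
}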

Thus, the total complexity of merge sort is O(n) of merging work per level times log n levels, resulting in O(n log n).

You may wonder why the algorithm’s complexity above is evaluated as O(n log n). Don’t we focus on the dominant term, our superstar that outshines the rest? I like the way you think!

The O(n log n) notation accurately describes the complexity where the dominant term is a product of n and log n. It’s relevant when both parts are crucial in determining the algorithm’s overall efficiency. Linearithmic time complexity is often seen in more efficient sorting algorithms and other operations involving recursive division or merging processes.

O(n²): Quadratic Time Complexity

Now imagine you’re organizing a book club gathering where each of the 10 participants reads one chapter aloud to the other book lovers. After each reading, everyone in the group (including the reader) has to comment on and discuss the chapter. Hope I have the right impression of what’s going on during these get-togethers. Anyway, each reader will spark 10 comments, leading to a total of 10 times 10 = 100 opinion expressions. That’s O(n²) complexity in action!

Put differently, O(n²) complexity occurs when an algorithm’s performance is proportional to the square of the size of the input data set. If n represents the number of elements in the input data, this means that for each element, the algorithm performs an operation that itself takes n steps, leading to n × n operations in total.

A well-known example of an O(n²) algorithm is bubble sort. You repeatedly pass through the list, comparing and swapping items if they’re out of order.
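
A minimal sketch of those repeated passes:

// Bubble sort: up to n passes, each scanning up to n elements -> O(n^2)
static void bubbleSort(int[] arr) {
    for (int i = 0; i < arr.length - 1; i++) {
        for (int j = 0; j < arr.length - 1 - i; j++) {
            if (arr[j] > arr[j + 1]) {  // neighbors out of order? swap them
                int tmp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = tmp;
            }
        }
    }
}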

Quadratic complexity can get quite large as n increases, which is why O(n²) algorithms are often less efficient for large datasets.

O(2ⁿ): Exponential Time Complexity

Exponential time complexity describes algorithms that grow at an alarming rate, doubling with each additional element in the input data set. It’s like a book club discussion where each comment sparks two more, turning a simple conversation into an endless debate!

This type of algorithm usually pops up in brute-force solutions for combinatorial searches and certain kinds of recursion. A classic example is the recursive solution to the Fibonacci sequence. Here, each function call leads to two more calls, which then leads to four more, and so on, creating an exponential explosion of calls.
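
A minimal sketch of that explosion:

// Naive recursive Fibonacci: each call spawns two more calls -> O(2^n)
static long fibonacci(int n) {
    if (n <= 1) {
        return n;                    // base cases: fib(0) = 0, fib(1) = 1
    }
    return fibonacci(n - 1) + fibonacci(n - 2);
}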

O(n!): Factorial Time Complexity

Last but not least — let’s talk about O(n!) notation. Imagine you’re a librarian organizing a special book exhibition. You have a collection of unique books, and you want to try every possible arrangement to find the perfect display.

As the number of books increases, the number of possible arrangements skyrockets. With just 5 books, you’d have 120 different ways to arrange them. With 10 books, there are a staggering 3,628,800 arrangements to check!

In other words, O(n!) describes algorithms whose performance grows factorially with the input size; it usually shows up in problems involving permutations. The simplest example, generating every possible arrangement, is sketched below:
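
import java.util.ArrayList;
import java.util.List;

// Try every possible arrangement of the books: n! recursive branches
static void arrange(List<String> remaining, List<String> display) {
    if (remaining.isEmpty()) {
        System.out.println(display);           // one complete arrangement
        return;
    }
    for (int i = 0; i < remaining.size(); i++) {
        List<String> rest = new ArrayList<>(remaining);
        String book = rest.remove(i);          // pick the next book for the display
        display.add(book);
        arrange(rest, display);                // arrange the remaining books
        display.remove(display.size() - 1);    // put the book back and try another
    }
}

Calling arrange with 5 book titles and an empty display list prints all 120 arrangements.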

Now that we’ve covered the basics of Big O notation, let’s take a brief detour to examine the toChineseNumber method from an earlier example. This will help clarify how Big O is calculated and applied in practice.
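
As a quick refresher, here’s a simplified sketch of that method (real Chinese numerals need extra zero-handling rules, which we can skip here because they don’t change the complexity):

import java.util.List;

// Convert a number to (simplified) Chinese numerals, digit by digit
static String toChineseNumber(int number) {
    List<String> units = List.of("", "十", "百", "千", "万");    // place-value units
    List<String> digits = List.of("零", "一", "二", "三", "四",
                                  "五", "六", "七", "八", "九"); // digit characters
    if (number == 0) {
        return digits.get(0);        // zero is a special case
    }
    StringBuilder chineseNumber = new StringBuilder();
    int unitIndex = 0;
    while (number > 0) {
        int digit = number % 10;     // extract the rightmost digit
        chineseNumber.insert(0, digits.get(digit) + units.get(unitIndex)); // prepend digit + unit
        number /= 10;                // remove the rightmost digit
        unitIndex++;                 // move to the next unit
    }
    return chineseNumber.toString();
}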

To determine the Big O notation for the toChineseNumber function, let’s analyze the steps and operations involved in the function:

  1. Initializing the units and digits lists. These are constant-time operations because they initialize fixed-size lists. The time complexity for this part is O(1).
  2. Checking if the number is zero. This check is also a constant-time operation, O(1).
  3. Building the Chinese numeral string. The while loop runs once for each digit in the input number. If the number has d digits, the loop runs d times.
  4. Operations inside the loop:
    — Extracting the rightmost digit: int digit = number % 10; is O(1).
    — Accessing list elements: digits.get(digit) and units.get(unitIndex) are O(1).
    — Prepending to the StringBuilder: chineseNumber.insert(0, ...) can be more complex. In Java, inserting at the beginning of a StringBuilder is not O(1), because it requires shifting the existing characters. The time complexity of this operation is O(n), where n is the current length of the StringBuilder.
    — Removing the rightmost digit: number /= 10; is O(1).
    — Incrementing the unit index: unitIndex++; is O(1).

The dominant term in the loop is chineseNumber.insert(0, …), which has a time complexity of O(n) for each insertion. Since this operation occurs d times (once for each digit), and the length of the StringBuilder grows linearly with the number of digits processed, the overall complexity of the loop is O(d²).

Since the number of digits d in a number n is proportional to log(n), the complexity of the loop can be rewritten in terms of n as O((log n)²). Therefore, the overall time complexity of the toChineseNumber function is O((log n)²). 🤓 You nailed it!

To wrap up, even though I mostly used array examples, Big O notation applies far more broadly. It’s a versatile tool for analyzing the efficiency of algorithms across various data structures and problems. Whether you’re sorting, navigating graphs, solving dynamic programming challenges, or handling strings, Big O helps describe and compare algorithm performance and scalability. That makes it super important, since it guides us to pick and fine-tune the optimal algorithms for different tasks.
