2.6 Asymptotic analysis
Figure 2.1 gives two views of a graph illustrating the growth rates for six equations. We repeat the graph here, where the right view shows in detail the lower-left portion of the left one. The horizontal axis represents input size, and the vertical axis can represent time, space, or any other measure of cost.

Despite the larger constant for the curve labeled $10n$ in the figure above, $2n^2$ crosses it at the relatively small value of $n = 5$. What if we double the value of the constant in front of the linear equation? As shown in the graph, $20n$ is surpassed by $2n^2$ once $n = 10$. The additional factor of two for the linear growth rate does not much matter. It only doubles the $x$-coordinate for the intersection point. In general, changes to a constant factor in either equation only shift where the two curves cross, not whether the two curves cross.
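We can check the intersection points directly by setting the equations equal (using the curves $10n$, $20n$, and $2n^2$ as read off the figure above):

$$10n = 2n^2 \implies n = 5, \qquad 20n = 2n^2 \implies n = 10.$$

Doubling the constant in the linear equation doubled the intersection point, exactly as claimed.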
When you buy a faster computer or a faster compiler, the new problem size that can be run in a given amount of time for a given growth rate is larger by the same factor, regardless of the constant on the running-time equation. The time curves for two algorithms with different growth rates still cross, regardless of their running-time equation constants. For these reasons, we usually ignore the constants when we want an estimate of the growth rate for the running time or other resource requirements of an algorithm. This simplifies the analysis and keeps us thinking about the most important aspect: the growth rate. This is called asymptotic algorithm analysis. To be precise, asymptotic analysis refers to the study of an algorithm as the input size “gets big” or reaches a limit (in the calculus sense). However, it has proved to be so useful to ignore all constant factors that asymptotic analysis is used for most algorithm comparisons.
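To make this concrete, suppose a new machine is ten times faster than the old one (an illustrative factor of our own choosing). If an algorithm with running time $T(n) = cn$ could solve problems of size $n_1$ in an hour on the old machine, then in the same hour the new machine solves the size $n'$ satisfying

$$c n' = 10\,c n_1 \implies n' = 10\, n_1,$$

while for a quadratic algorithm $T(n) = cn^2$,

$$c (n')^2 = 10\,c n_1^2 \implies n' = \sqrt{10}\; n_1 \approx 3.16\, n_1.$$

In both cases the constant $c$ cancels: the new problem size depends only on the growth rate.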
In rare situations, it is not reasonable to ignore the constants. When comparing algorithms meant to run on small values of $n$, the constant can have a large effect. For example, if the problem requires you to sort many collections of exactly five records, then a sorting algorithm designed for sorting thousands of records is probably not appropriate, even if its asymptotic analysis indicates good performance. There are rare cases where the constants for two algorithms under comparison can differ by a factor of 1000 or more, making the one with the lower growth rate impractical for typical problem sizes due to its large constant. Asymptotic analysis is a form of “back of the envelope” estimation for algorithm resource consumption. It provides a simplified model of the running time or other resource needs of an algorithm. This simplification usually helps you understand the behaviour of your algorithms. Just be aware of the limitations of asymptotic analysis in the rare situation where the constant is important.
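As an illustration (the running times here are invented for the example), compare $T_1(n) = 1000n$ and $T_2(n) = n^2$, whose constants differ by a factor of 1000. The algorithm with the lower growth rate is faster only when

$$1000n < n^2 \iff n > 1000,$$

so for every input size up to 1000 the “asymptotically worse” quadratic algorithm wins.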
2.6.1 Orders of growth
To be able to discuss orders of growth for algorithms we need to make some abstractions. The most important abstraction is to describe the runtime of an algorithm as a mathematical function from the input size to a number. We actually don’t care how we encode the input size, as long as it is proportional to the actual size of the input. So we can, e.g., use the number of cells in an array as input, instead of trying to figure out the exact memory usage of the array. And in the same way, we don’t care about the unit of measure for the result – it can be actual runtime, in seconds, minutes or hours, but it’s more common to think about the number of “basic operations”. Already here we have abstracted away lots of things that relate to hardware, which is vital because we want to analyse algorithms, not implementations. So we will say things like “the runtime of the algorithm is proportional to $n^2$”.
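To illustrate what “number of basic operations” means in practice, here is a minimal sketch in Python (the function and the choice of what counts as one operation are our own, for illustration only):

```python
def array_sum(arr):
    """Sum the elements of an array.

    The loop body runs once per element, so the number of basic
    operations is proportional to n = len(arr): the abstract
    runtime of this algorithm grows linearly with the input size.
    """
    total = 0          # one basic operation
    for x in arr:      # the loop iterates n times
        total += x     # one basic operation per iteration
    return total       # one basic operation
```

Whether we count `total += x` as one operation or three (read, add, store) only changes the constant factor, which is exactly what the abstraction lets us ignore.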
Now, the easiest way to view order of growth is not as an absolute property of an algorithm, but instead as a relation between functions. When we say that an algorithm is quadratic, we actually mean that the mathematical function that describes the abstract runtime of the algorithm is related to the quadratic function $n^2$ in some way.
So, how can we relate functions using orders of growth? We do this by saying that one function is a bound of another function. E.g., when we say that an algorithm is quadratic, we actually mean that the function $n^2$ is an upper bound of the function describing its runtime. The following are informal definitions of upper, lower and tight bounds, followed by a small worked example; we will define them more rigorously in Chapter 5.
- Upper bound: $f$ is an upper bound of $g$ iff $f$ grows at least as fast as $g$, and we write this $g \in O(f)$.
- Lower bound: $f$ is a lower bound of $g$ iff $f$ grows at most as fast as $g$, and we write this $g \in \Omega(f)$.
- Tight bound: $f$ is a tight bound of $g$ iff both functions grow at the same rate, and we write this $g \in \Theta(f)$.
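As a small worked example (our own, using the notation just introduced), take $g(n) = 3n^2 + 10n$. The function $n^2$ grows both at least as fast and at most as fast as $g$, because the lower-order term $10n$ does not affect the growth rate, so

$$g \in O(n^2), \qquad g \in \Omega(n^2), \qquad \text{and hence } g \in \Theta(n^2).$$

By contrast, $n^3$ is also an upper bound, $g \in O(n^3)$, but it is not tight, since $n^3$ grows strictly faster than $g$.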
2.6.2 Should we use $O$, $\Omega$, or $\Theta$?
If an algorithm has a lower bound of $\Omega(f)$, we know that it will never run asymptotically faster than $f$. But this is usually not very useful knowledge, because we are more interested in knowing how the algorithm behaves on bad inputs. Therefore the upper bound $O$ is by far the most common complexity measure that people use, and this is what we will be using in this book.
One could argue that $\Theta$ would be an even better measure, because it gives more information about an algorithm. But it is much more difficult to reason about $\Theta$, and therefore we will almost exclusively use the upper bound notation $O$.
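To see why $\Theta$ is harder to use, consider insertion sort (a standard example, not taken from this section): on an already sorted array it performs about $n$ comparisons, while on a reversed array it performs about $n^2/2$, so its runtime $T$ satisfies

$$T \in O(n^2) \quad \text{and} \quad T \in \Omega(n),$$

and since these bounds do not match, no single function is a tight bound for the algorithm over all inputs. A $\Theta$ bound only appears once we restrict attention to, say, worst-case inputs, where insertion sort is $\Theta(n^2)$.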
So, is the lower bound $\Omega$ useless? No, definitely not. The main use case for $\Omega$ is when we want to classify problems, not algorithms. One example is when proving that the lower bound for sorting is $\Omega(n \log n)$, which we do in Section 5.3.1. But classifying problems is out of scope for this book, so we will not use $\Omega$ much.