2.4 Growth rates

Imagine that you have a problem to solve, and you know of an algorithm whose running time is proportional to $n^2$ where $n$ is a measure of the input size. Unfortunately, the resulting program takes ten times too long to run. If you replace your current computer with a new one that is ten times faster, will the $n^2$ algorithm become acceptable? If the problem size remains the same, then perhaps the faster computer will allow you to get your work done quickly enough even with an algorithm having a high growth rate. But a funny thing happens to most people who get a faster computer. They don’t run the same problem faster. They run a bigger problem! Say that on your old computer you were content to sort 10,000 records because that could be done by the computer during your lunch break. On your new computer you might hope to sort 100,000 records in the same time. You won’t be back from lunch any sooner, so you are better off solving a larger problem. And because the new machine is ten times faster, you would like to sort ten times as many records.

If your algorithm’s growth rate is linear (i.e., if the equation that describes the running time on input size $n$ is $T(n) = cn$ for some constant $c$), then 100,000 records on the new machine will be sorted in the same time as 10,000 records on the old machine. If the algorithm’s growth rate is greater than $cn$, such as $c_1 n^2$, then you will not be able to do a problem ten times the size in the same amount of time on a machine that is ten times faster.
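
To see why, suppose the quadratic algorithm can perform $t$ basic operations in an hour on the old machine, so that the largest solvable problem size $n$ satisfies $c_1 n^2 = t$. The new machine performs $10t$ operations in an hour, so its largest problem size $n'$ satisfies $c_1 (n')^2 = 10\,c_1 n^2$, giving $n' = \sqrt{10}\,n \approx 3.16\,n$ rather than $10\,n$.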

2.4.1 Getting a faster computer

How much larger a problem can a faster computer solve in the same amount of time? Say that the old machine could solve a problem of size $n$ in an hour, and that the new machine is ten times faster than the old. What is the largest problem that the new machine can solve in one hour? Table 2.1 shows just that for five running-time functions.

Table 2.1: The increase in problem size that can be run in the same time on a computer that is ten times faster
Growth rate | Max $n$ (old) | Max $n'$ (new) | Relation between $n$ and $n'$ | $n'/n$
$10n$ | 1,000 | 10,000 | $n' = 10\,n$ | 10
$20n$ | 500 | 5,000 | $n' = 10\,n$ | 10
$5n\log n$ | 250 | 1,842 | $n' \approx \frac{10\,n\ln n}{W(10\,n\ln n)}$ | 7.4
$2n^2$ | 70 | 223 | $n' = \sqrt{10}\,n$ | 3.2
$2^n$ | 13 | 16 | $n' = n + 3$ | $\approx 1$
(In the $5n\log n$ row, $W$ denotes the Lambert W function; the relation is the closed-form solution of $n'\log n' = 10\,n\log n$.)

Explanations:

  • The first column lists the right-hand sides of five growth rate equations.
  • The second column shows the maximum value of $n$ that can be run in 10,000 basic operations on the old machine.
  • The third column shows the value of $n'$, the new maximum problem size that can be run in the same time on the new machine that is ten times faster. Variable $n'$ is the greatest problem size that can be run in 100,000 basic operations.
  • The fourth column shows how the size $n$ changed to become $n'$ on the new machine.
  • The fifth column shows the increase in problem size as the ratio of $n'$ to $n$.
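
The values in Table 2.1 can be reproduced (up to rounding) by brute force: for each growth-rate function, search for the largest $n$ whose cost fits within the operation budget. The short Python sketch below does exactly that; the function names, the linear-scan search, and the budgets of 10,000 and 100,000 operations are illustrative choices, not part of the original table’s derivation.

```python
import math

# Growth-rate functions from Table 2.1 (cost measured in basic operations).
growth_rates = {
    "10n":      lambda n: 10 * n,
    "20n":      lambda n: 20 * n,
    "5n log n": lambda n: 5 * n * math.log2(n),
    "2n^2":     lambda n: 2 * n * n,
    "2^n":      lambda n: 2 ** n,
}

def max_problem_size(cost, budget):
    """Largest n whose cost does not exceed the budget (simple linear scan)."""
    n = 1
    while cost(n + 1) <= budget:
        n += 1
    return n

old_budget, new_budget = 10_000, 100_000   # the new machine is ten times faster

for name, cost in growth_rates.items():
    n_old = max_problem_size(cost, old_budget)
    n_new = max_problem_size(cost, new_budget)
    print(f"{name:>9}: old n = {n_old:>5}, new n' = {n_new:>5}, n'/n = {n_new / n_old:.2f}")
```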

This table illustrates many important points. The first two equations are both linear; only the value of the constant factor has changed. In both cases, the machine that is ten times faster gives an increase in problem size by a factor of ten. In other words, while the value of the constant does affect the absolute size of the problem that can be solved in a fixed amount of time, it does not affect the improvement in problem size (as a proportion of the original size) gained by a faster computer. This relationship holds true regardless of the algorithm’s growth rate: Constant factors never affect the relative improvement gained by a faster computer.
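
To make that explicit: if $T(n) = c\,f(n)$ for some constant $c$, then the problem size $n'$ that the ten-times-faster machine can handle satisfies $c\,f(n') = 10\,c\,f(n)$, so $f(n') = 10\,f(n)$ and the constant $c$ cancels out of the relationship between $n$ and $n'$.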

An algorithm with time equation $T(n) = 2n^2$ does not receive nearly as great an improvement from the faster machine as an algorithm with linear growth rate. Instead of an improvement by a factor of ten, the improvement is only the square root of that: $\sqrt{10} \approx 3.16$. Thus, the algorithm with higher growth rate not only solves a smaller problem in a given time in the first place, it also receives less of a speedup from a faster computer. As computers get ever faster, the disparity in problem sizes becomes ever greater.

The algorithm with growth rate $T(n) = 5n\log n$ improves by a greater amount than the one with quadratic growth rate, but not by as great an amount as the algorithms with linear growth rates.

Note that something special happens in the case of the algorithm whose running time grows exponentially. If you look at its plot on a graph, the curve for the algorithm whose time is proportional to $2^n$ goes up very quickly as $n$ grows. The increase in problem size on the machine ten times as fast is about $n + 3$ (to be precise, it is $n + \log_2 10$). The increase in problem size for an algorithm with exponential growth rate is by a constant addition, not by a multiplicative factor. Because the old value of $n$ was 13, the new problem size is 16. If next year you buy another computer ten times faster yet, then the new computer (100 times faster than the original computer) will only run a problem of size 19. If you had a second program whose growth rate is $2^n$ and for which the original computer could run a problem of size 1000 in an hour, then a machine ten times faster can run a problem of size only 1003 in an hour! Thus, an exponential growth rate is radically different from the other growth rates shown in the table. The significance of this difference is an important topic in computational complexity theory.
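
The additive behavior can be read off directly from the equation: the new problem size $n'$ satisfies $2^{n'} = 10\cdot 2^n$, so $n' = n + \log_2 10 \approx n + 3.3$; each further factor-of-ten speedup adds only another $\log_2 10 \approx 3.3$ to the solvable problem size.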

Instead of buying a faster computer, consider what happens if you replace an algorithm whose running time is proportional to $n^2$ with a new algorithm whose running time is proportional to $n\log n$. In a graph relating growth rate functions to input size, a fixed amount of time would appear as a horizontal line. If the line for the amount of time available to solve your problem is above the point at which the curves for the two growth rates in question meet, then the algorithm whose running time grows less quickly is faster. An algorithm with running time $T(n) = n^2$ requires $1000 \times 1000 = 1{,}000{,}000$ time steps for an input of size $n = 1000$. An algorithm with running time $T(n) = n\log n$ requires $1000 \times 10 = 10{,}000$ time steps for an input of size $n = 1000$, which is an improvement of much more than a factor of ten when compared to the algorithm with running time $T(n) = n^2$. Because $n^2 > 10\,n\log n$ whenever $n > 58$, if the typical problem size is larger than 58 for this example, then you would be much better off changing algorithms instead of buying a computer ten times faster. Furthermore, when you do buy a faster computer, an algorithm with a slower growth rate provides a greater benefit in terms of the larger problem size that can run in a certain time on the new computer.
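
The crossover claimed above is easy to check numerically. The following Python sketch (an illustration only; the helper names and the base-2 logarithm are assumptions consistent with this section, not code from the text) prints the two costs at $n = 1000$ and finds the smallest $n$ beyond which switching algorithms gains more than buying a ten-times-faster machine.

```python
import math

def t_quadratic(n):
    """Cost of the algorithm with running time n^2."""
    return n * n

def t_nlogn(n):
    """Cost of the algorithm with running time n * log2(n)."""
    return n * math.log2(n)

# Costs at n = 1000: roughly 1,000,000 versus 10,000 time steps.
print(t_quadratic(1000), round(t_nlogn(1000)))

# Smallest n at which n^2 exceeds 10 * n * log2(n), i.e. where changing the
# algorithm beats buying a machine that is ten times faster.
n = 2
while t_quadratic(n) <= 10 * t_nlogn(n):
    n += 1
print("n^2 > 10 n log n for all n >", n - 1)   # prints 58
```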

2.4.2 Comparing different growth rates

The growth rate for an algorithm is the rate at which the cost of the algorithm grows as the size of its input grows. Figure 2.1 shows a graph for six equations, each meant to describe the running time for a particular program or algorithm. A variety of growth rates that are representative of typical algorithms are shown.

Figure 2.1: Illustration of the growth rates for six equations. The bottom view shows in detail the lower-left portion of the top view. The horizontal axis represents input size. The vertical axis can represent time, space, or any other measure of cost.

The two equations labeled $10n$ and $20n$ are graphed by straight lines. A growth rate of $cn$ (for $c$ any positive constant) is often referred to as a linear growth rate or running time. This means that as the value of $n$ grows, the running time of the algorithm grows in the same proportion. Doubling the value of $n$ roughly doubles the running time. An algorithm whose running-time equation has a highest-order term containing a factor of $n^2$ is said to have a quadratic growth rate. In the figure, the line labeled $2n^2$ represents a quadratic growth rate. The line labeled $2^n$ represents an exponential growth rate. This name comes from the fact that $n$ appears in the exponent. The line labeled $n!$ grows faster still.

As you can see from the figure, the difference between an algorithm whose running time has cost $T(n) = 10n$ and another with cost $T(n) = 2n^2$ becomes tremendous as $n$ grows. For $n > 5$, the algorithm with running time $T(n) = 2n^2$ is already much slower. This is despite the fact that $10n$ has a greater constant factor than $2n^2$. Comparing the two curves marked $20n$ and $2n^2$ shows that changing the constant factor for one of the equations only shifts the point at which the two curves cross. For $n > 10$, the algorithm with cost $T(n) = 2n^2$ is slower than the algorithm with cost $T(n) = 20n$. This graph also shows that the equation $T(n) = 5n\log n$ grows somewhat more quickly than both $T(n) = 10n$ and $T(n) = 20n$, but not nearly so quickly as the equation $T(n) = 2n^2$. For constants $a, b > 1$, $n^a$ grows faster than either $\log^b n$ or $\log n^b$. Finally, algorithms with cost $T(n) = 2^n$ or $T(n) = n!$ are prohibitively expensive for even modest values of $n$. Note that for constants $a > 1$ and $b \geq 1$, $a^n$ grows faster than $n^b$.

We can get some further insight into relative growth rates for various algorithms from Table 2.2 below. Most of the growth rates that appear in typical algorithms are shown, along with some representative input sizes. Once again, we see that the growth rate has a tremendous effect on the resources consumed by an algorithm.

Table 2.2: Costs for representative growth rates (logarithms are base 2)
$n$ | $\log n$ | $n$ | $n\log n$ | $n^2$ | $n^3$ | $2^n$ | $n!$
10 | 3.3 | 10 | 33 | 100 | $10^3$ | $10^3$ | $10^6$
100 | 6.6 | 100 | 664 | $10^4$ | $10^6$ | $10^{30}$ | $10^{158}$
1K = 1,000 | 10 | 1,000 | $10^4$ | $10^6$ | $10^9$ | $10^{300}$ | $10^{2567}$
10K = $10^4$ | 13.3 | $10^4$ | $1.3\cdot 10^5$ | $10^8$ | $10^{12}$ | $10^{3000}$ | $10^{35{,}659}$
100K = $10^5$ | 16.6 | $10^5$ | $1.6\cdot 10^6$ | $10^{10}$ | $10^{15}$ | $10^{30{,}000}$ | $10^{456{,}573}$
1M = $10^6$ | 20 | $10^6$ | $2\cdot 10^7$ | $10^{12}$ | $10^{18}$ | $10^{300{,}000}$ | $\cdots$
1G = $10^9$ | 30 | $10^9$ | $3\cdot 10^{10}$ | $10^{18}$ | $10^{27}$ | $\cdots$ | $\cdots$
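
The larger entries in Table 2.2 are far too big to compute directly, but their orders of magnitude can be recovered by working with base-10 logarithms. The Python sketch below (an illustration, not part of the text) prints $\log_{10}$ of each cost, using math.lgamma for the factorial.

```python
import math

def log10_costs(n):
    """Return log10 of each cost column in Table 2.2 for input size n."""
    return {
        "log n":   math.log10(math.log2(n)),
        "n":       math.log10(n),
        "n log n": math.log10(n * math.log2(n)),
        "n^2":     2 * math.log10(n),
        "n^3":     3 * math.log10(n),
        "2^n":     n * math.log10(2),
        "n!":      math.lgamma(n + 1) / math.log(10),  # log10(n!) via the log-gamma function
    }

for n in (10, 100, 1_000, 10_000, 100_000):
    # For n = 100 this reports about 30.1 for 2^n and 158.0 for n!,
    # matching the 10^30 and 10^158 entries in the table.
    print(n, {name: round(value, 1) for name, value in log10_costs(n).items()})
```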

2.4.3 Growth rates exercises