
## In the long run: Asymptotic analysis and Orders of growth

You may have noticed that there was something unsatisfactory about our way of doing things - the calculation was tuned too closely to our machine. The figure we obtained, in seconds, is a bit arbitrary - it is the time to execute on one particular laptop - and has no absolute significance for an analysis on a different machine. We would like to remedy this situation so as to have a mode of analysis that applies to any machine. It should tell us precisely how the problem scales - how the resource requirement grows as the size of the input increases - independently of the machine.

We now introduce one such machine-independent measure of the resources required by a computational process - the order of growth. If $n$ is a parameter that measures the size of a problem then we can measure the resources required by an algorithm as a function $R(n)$. We say that the function $R(n)$ has an order of growth $\Theta(f(n))$ (it is of order $f(n)$), if there exist positive constants $k_1$ and $k_2$ such that $k_1 f(n) \le R(n) \le k_2 f(n)$ whenever $n$ is sufficiently large.
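As a quick sanity check of the definition (using a made-up $R(n)$, not one from the text), the function $R(n) = 3n^2 + 10n + 5$ has order of growth $\Theta(n^2)$: the constants $k_1 = 3$ and $k_2 = 4$ work for every $n \ge 11$, since $4n^2 - R(n) = n^2 - 10n - 5 \ge 0$ once $n \ge 11$.

```python
# Numerically check the Theta(n^2) bounds k1*n^2 <= R(n) <= k2*n^2
# for a sample resource function R.
def R(n):
    return 3 * n * n + 10 * n + 5

k1, k2, n0 = 3, 4, 11  # the bounds hold from n0 = 11 onward

for n in range(n0, 10_000):
    assert k1 * n * n <= R(n) <= k2 * n * n
print("bounds hold for all tested n >=", n0)
```

Note that the upper bound fails below the threshold (e.g. $R(10) = 405 > 400$), which is why the definition only demands the inequality for sufficiently large $n$.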

In our earlier example, we found that the space required grows proportionally to $n$, and that the number of multiplications and the number of function calls required are each proportional to $n$ as well. According to our definition of order of growth, each of these is $\Theta(n)$. Thus, we can say that the space complexity and the time complexity of the algorithm are both $\Theta(n)$. In the example of determinant computation, regardless of the particular machine and the corresponding constants, the algorithm based on Gaussian elimination has time complexity $\Theta(n^3)$.
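As a sketch of how such counts arise (assuming, for illustration, a linear recursive exponentiation procedure - the section's own earlier example may differ), instrumenting the recursion makes the $\Theta(n)$ claims concrete: computing $b^n$ by repeated multiplication performs $n$ multiplications, makes $n + 1$ calls, and keeps a chain of $n$ pending calls on the stack.

```python
# Linear recursive exponentiation: b^n via n multiplications.
# The recursion depth (pending calls) is also proportional to n.
def power(b, n, counts):
    counts["calls"] += 1
    if n == 0:
        return 1
    counts["mults"] += 1
    return b * power(b, n - 1, counts)

counts = {"calls": 0, "mults": 0}
result = power(2, 10, counts)
print(result, counts)  # 1024, with 10 multiplications and 11 calls
```

Both counters, and the recursion depth, grow linearly in $n$ - hence both the time and the space are $\Theta(n)$.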

Order of growth is only a crude measure of the resources required. A process which requires $n$ steps and another which requires $1000n$ steps both have the same order of growth, $\Theta(n)$. On the other hand, the notation has the following advantages:

• It hides machine-dependent constants, and is therefore robust across different machines.
• It gives a fairly precise indication of how the algorithm scales as we increase the size of the input. For example, if an algorithm has an order of growth $\Theta(n)$, then doubling the size of the input will very nearly double the amount of resources required, whereas with a $\Theta(2^n)$ algorithm doubling the size of the input will square the amount of resources required.
• It tells us which of two competing algorithms will win out in the long run: however large the constant $k$ may be, it is always possible to find a break point above which $kn$ will always be smaller than $n^2$ or $2^n$, giving us an indication of when an algorithm with the former complexity will start working better than algorithms with the latter complexities.
• Finally, the very fact that it is a crude analysis means that it is frequently much easier to perform than an exact analysis - and we still get all the advantages listed above.
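The hidden-constant and break-point remarks above can be illustrated numerically (with arbitrary illustrative constants, not figures from the text): a $\Theta(n)$ algorithm taking $1000n$ steps looks expensive for small inputs, but it overtakes an $n^2$-step algorithm permanently once $n$ passes the crossover.

```python
# Find the first input size from which the "linear with a huge
# constant" cost function stays below the quadratic one.
def crossover(f, g, limit=10**6):
    """First n >= 1 with f(n) < g(n); naive scan (f, g monotone here)."""
    for n in range(1, limit):
        if f(n) < g(n):
            return n
    return None

n_star = crossover(lambda n: 1000 * n, lambda n: n * n)
print("1000*n drops below n^2 from n =", n_star)  # n = 1001
```

At $n = 1000$ the two costs are equal ($10^6$ steps each); from $n = 1001$ onward the linear algorithm is strictly cheaper, however unflattering its constant.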

In what follows we will give examples of algorithms which have different orders of growth.

Exercise 3.4   What does $\Theta(1)$ mean? Are $\Theta(n)$ and $\Theta(2n)$ different?

In Figure 3.1 we show the relative scaling of some order-of-growth functions with respect to $n$. In Figure 3.2 we plot the $n^3$ and the $2^n$ curves with an increased $x$-axis range. Clearly any algorithm with a time complexity of $\Theta(2^n)$ is computationally infeasible. In order to solve a problem of size 100, roughly $2^{100}$ steps will be required.
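A small numeric table (a sketch - the values are computed directly, not read off the figures) shows how quickly $2^n$ outruns $n^3$; $2^{100}$ is about $1.3 \times 10^{30}$.

```python
# Compare polynomial and exponential growth at a few problem sizes.
for n in (10, 20, 40, 100):
    print(f"n = {n:3d}   n^3 = {n**3:<9d}   2^n = {2**n}")
```

At $n = 10$ the exponential term is the same size as the cube; by $n = 100$ it is larger by more than twenty orders of magnitude.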

Exercise 3.5   Assuming that a single step may be executed in, say, $10^{-9}$ seconds, obtain a rough estimate of the time required to solve a problem of size 100 using an algorithm with a time complexity of $\Theta(2^n)$.

Subhashis Banerjee 2003-08-02