# 7.5 Summary

Because the speed of computers varies and the exact time required for a particular application depends on many details, the most important property to understand is how the work required scales with the size of the input. The asymptotic operators provide a convenient way of understanding the cost involved in evaluating procedure applications.
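As a concrete reminder of what the $\Theta$ operator means, the sketch below (in Python for concreteness; the cost function `f` and the constants `c1`, `c2` are illustrative assumptions, not from the text) checks that $f(n) = 2n^2 + 3n$ stays bounded between $c_1 n^2$ and $c_2 n^2$ for all tested $n \geq 1$, which is what it means for $f$ to be in $\Theta(n^2)$:

```python
def f(n):
    # An illustrative cost function: f(n) = 2*n^2 + 3*n "steps".
    return 2 * n**2 + 3 * n

# f is in Theta(n^2) if constants c1, c2 > 0 exist such that
# c1*n^2 <= f(n) <= c2*n^2 for all sufficiently large n.
c1, c2 = 2, 5  # candidate constants (assumptions for this sketch)

# 2n^2 <= 2n^2 + 3n holds for all n >= 0, and
# 2n^2 + 3n <= 5n^2 holds whenever 3n <= 3n^2, i.e., n >= 1.
assert all(c1 * n**2 <= f(n) <= c2 * n**2 for n in range(1, 10000))
```

Note that the same reasoning would fail for, say, $c_2 n$: no constant multiple of $n$ eventually bounds $f$ from above, so $f$ is not in $O(n)$.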

Procedures that can produce an output by examining only a fixed amount of the input have constant (in $\Theta(1)$) running times. Procedures whose running times increase by a fixed amount when the input size increases by one have linear (in $\Theta(n)$) running times. Procedures whose running times quadruple when the input size doubles have quadratic (in $\Theta(n^2)$) running times. Procedures whose running times double when the input size increases by one have exponential (in $\Theta(2^n)$) running times. Procedures with exponential running times can be evaluated only for small inputs.
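The doubling behaviors described above can be checked directly by counting steps. This sketch (in Python for concreteness; the three step-counting functions are illustrative stand-ins for procedures in each class, not from the text) confirms each property:

```python
def linear_steps(n):
    # e.g., summing a list of n elements: one step per element
    return n

def quadratic_steps(n):
    # e.g., comparing every pair of elements: about n*n steps
    return n * n

def exponential_steps(n):
    # e.g., trying every subset of n elements: 2^n steps
    return 2 ** n

# Linear: increasing the input size by one adds a fixed amount of work.
assert linear_steps(101) - linear_steps(100) == 1
# Quadratic: doubling the input size quadruples the work.
assert quadratic_steps(200) == 4 * quadratic_steps(100)
# Exponential: increasing the input size by one doubles the work.
assert exponential_steps(21) == 2 * exponential_steps(20)
```

At $n = 50$ the exponential count already exceeds $10^{15}$ steps, which is why exponential procedures are only practical for small inputs.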

Asymptotic analysis, however, must be interpreted cautiously. For large enough inputs, a procedure with running time in $\Theta(n)$ is always faster than a procedure with running time in $\Theta(n^2)$. But for an input of a particular size, the $\Theta(n^2)$ procedure may be faster. Without knowing the constants hidden by the asymptotic operators, there is no way to accurately predict the actual running time of a procedure on a given input.
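To see how hidden constants matter, consider two hypothetical procedures whose exact step counts we happen to know (the constants below are invented for illustration): a $\Theta(n)$ procedure costing $1000n$ steps and a $\Theta(n^2)$ procedure costing $n^2$ steps. For inputs smaller than the crossover point, the asymptotically worse procedure wins:

```python
def t_linear(n):
    # Theta(n) running time, but with a large hidden constant (assumed: 1000)
    return 1000 * n

def t_quadratic(n):
    # Theta(n^2) running time, with a small hidden constant (assumed: 1)
    return n * n

# For a small input, the Theta(n^2) procedure is faster:
assert t_quadratic(100) < t_linear(100)      # 10,000 < 100,000 steps

# The two costs cross exactly where n^2 = 1000n, i.e., at n = 1000:
assert t_quadratic(1000) == t_linear(1000)

# For large enough inputs, the Theta(n) procedure always wins:
assert t_linear(10000) < t_quadratic(10000)  # 10 million < 100 million steps
```

Asymptotic analysis tells us only that such a crossover point exists; finding where it is requires knowing the constants.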

**Exercise 7.18.**
Analyze the asymptotic running time of the `list-sum` procedure (from Example 5.2):

```scheme
(define (list-sum p)
  (if (null? p)
      0
      (+ (car p) (list-sum (cdr p)))))
```

You may assume all of the elements in the list have values below some constant (but explain why this assumption is useful in your analysis).

**Exercise 7.19.**
Analyze the asymptotic running time of the `factorial` procedure (from Example 4.1):

```scheme
(define (factorial n)
  (if (= n 0) 1 (* n (factorial (- n 1)))))
```

Be careful to describe the running time in terms of the *size* (not the magnitude) of the input.

**Exercise 7.20.**
Consider the `intsto` problem (from Example 5.8).

**a.** $\left[\star \right]$ Analyze the asymptotic running time of this `intsto` procedure:

```scheme
(define (revintsto n)
  (if (= n 0)
      null
      (cons n (revintsto (- n 1)))))

(define (intsto n) (list-reverse (revintsto n)))
```

**b.** $\left[\star \right]$ Analyze the asymptotic running time of this `intsto` procedure:

```scheme
(define (intsto n)
  (if (= n 0) null (list-append (intsto (- n 1)) (list n))))
```

**c.** Which version is better?

**d.** $\left[\star \star \right]$ Is there an asymptotically faster `intsto` procedure?

**Exercise 7.21.**
Analyze the running time of the `board-replace-peg` procedure (from Exploration 5.2):

```scheme
(define (row-replace-peg pegs col val)
  (if (= col 1) (cons val (cdr pegs))
      (cons (car pegs) (row-replace-peg (cdr pegs) (- col 1) val))))

(define (board-replace-peg board row col val)
  (if (= row 1) (cons (row-replace-peg (car board) col val) (cdr board))
      (cons (car board) (board-replace-peg (cdr board) (- row 1) col val))))
```

**Exercise 7.22.**
Analyze the running time of the `deep-list-flatten` procedure from Section 5.5:

```scheme
(define (deep-list-flatten p)
  (if (null? p) null
      (list-append (if (list? (car p))
                       (deep-list-flatten (car p))
                       (list (car p)))
                   (deep-list-flatten (cdr p)))))
```

**Exercise 7.23.** $\left[\star \right]$
Find and correct at least one error in the *Orders of Growth* section of the Wikipedia page on *Analysis of Algorithms* (`http://en.wikipedia.org/wiki/Analysis_of_algorithms`). This is rated as $\left[\star \right]$ now (July 2011), since the current entry contains many fairly obvious errors. Hopefully it will soon become a $\left[\star \star \star \right]$ challenge, and perhaps eventually will become impossible!