7.4.4 Exponential Growth
If the running time of a procedure grows as a constant raised to the power of the size of the input, the procedure's running time grows exponentially. When the size of the input increases by one, the running time is multiplied by some constant factor. The growth rate of a function whose output is multiplied by $w$ when the input size, $n$, increases by one is $w^n$. Exponential growth is very fast: it is not feasible to evaluate applications of an exponential time procedure on large inputs.
For a surprisingly large number of interesting problems, the best known algorithm has exponential running time. Examples of problems like this include finding the best route between two locations on a map (the problem mentioned at the beginning of Chapter 4), the pegboard puzzle (Exploration 5.2), solving generalized versions of most other games such as Sudoku and Minesweeper, and finding the factors of a number. Whether or not it is possible to design faster algorithms that solve these problems is the most important open problem in computer science.
A simple way to find a factor of a given input number is to exhaustively try all possible numbers below the input number to find the first one that divides the number evenly.
The find-factor procedure takes one number as input and outputs the lowest factor of that number (other than 1):
(define (find-factor n)
  (define (find-factor-helper v)
    (if (= (modulo n v) 0) v (find-factor-helper (+ 1 v))))
  (find-factor-helper 2))
The find-factor-helper procedure takes one input, the current guess $v$; the number to factor, $n$, is available from the enclosing find-factor definition. Since all numbers are divisible by themselves, the modulo test will eventually be true for any positive input number, so the maximum number of recursive calls is $n$, the magnitude of the input to find-factor. The magnitude of $n$ is exponential in its size, so the number of recursive calls is in $\Theta(2^b)$ where $b$ is the number of bits in the input. This means even if the amount of work required for each recursive call were constant, the running time of the find-factor procedure would still be exponential in the size of its input.
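To make the cost concrete, here is a Python translation of the procedure (an illustrative sketch, not the book's code; the function name `find_factor` is my naming). For a prime input the loop runs $n - 1$ times, and $n$ is roughly $2^b$ for a $b$-bit input, so the iteration count is exponential in the input's size.

```python
def find_factor(n):
    """Return the smallest factor of n other than 1, by trial division.

    Assumes n >= 2. In the worst case (n prime), the loop runs n - 1
    times: the running time grows with the magnitude of n, which is
    exponential in the number of bits needed to write n down.
    """
    v = 2
    while n % v != 0:
        v += 1
    return v
```
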
The actual work for each recursive call is not constant, though, since it involves an application of modulo. The modulo built-in procedure takes two inputs and outputs the remainder when the first input is divided by the second input. Hence, its output is 0 if $n$ is divisible by $v$. Computing a remainder, in the worst case, involves examining every bit in the input number, so it scales at least linearly in the size of its input.^{1} This means the running time of find-factor is in $\Omega(2^b)$: it grows at least as fast as $2^b$.
There are lots of ways we could produce a faster procedure for finding factors: stopping once the square root of the input number is reached, since we know there is no need to check larger numbers; skipping even numbers after 2, since if a number is divisible by any even number it is also divisible by 2; or using advanced sieve methods. These techniques can improve the running time by constant factors, but there is no known factoring algorithm that runs in faster than exponential time. The security of the widely used RSA encryption algorithm depends on factoring being hard. If someone finds a fast factoring algorithm, it would put the codes used to secure Internet commerce at risk.^{2}
The power set of a set $S$ is the set of all subsets of $S$. For example, the power set of $\{1, 2, 3\}$ is $\{\{\}, \{1\}, \{2\}, \{3\}, \{1, 2\}, \{1, 3\}, \{2, 3\}, \{1, 2, 3\}\}$.
The number of elements in the power set of $S$ is $2^{|S|}$ (where $|S|$ is the number of elements in the set $S$).
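The size $2^{|S|}$ follows because each element is independently either in or out of a subset: two choices per element. A Python sketch (illustrative, not from the text) that enumerates subsets by bitmask makes this counting argument directly visible:

```python
def powerset(s):
    """Enumerate all subsets of s: bit i of each mask decides whether
    the i-th element is included, so the 2**len(s) masks produce
    exactly the 2**len(s) subsets."""
    elems = list(s)
    return [{elems[i] for i in range(len(elems)) if mask & (1 << i)}
            for mask in range(2 ** len(elems))]
```
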
Here is a procedure that takes a list as input, and produces as output the power set of the elements of the list:
(define (list-powerset s)
  (if (null? s) (list null)
      (list-append (list-map (lambda (t) (cons (car s) t))
                             (list-powerset (cdr s)))
                   (list-powerset (cdr s)))))
The list-powerset procedure produces a List of Lists. Hence, for the base case, instead of just producing null, it produces a list containing a single element, null. In the recursive case, we can produce the power set by appending the list of all the subsets that include the first element to the list of all the subsets that do not include the first element. For example, the power set of $\{1, 2, 3\}$ is found by finding the power set of $\{2, 3\}$, which is $\{\{\}, \{2\}, \{3\}, \{2, 3\}\}$, and taking the union of that set with the set produced by inserting 1 into each of its members.
An application of list-powerset involves an application of list-append and two recursive applications of (list-powerset (cdr s)). Increasing the size of the input list by one doubles the total number of applications of list-powerset, since we need to evaluate (list-powerset (cdr s)) twice. The number of applications of list-powerset is in $\Theta(2^n)$ where $n$ is the length of the input list.^{3}
The body of list-powerset is an if expression. The predicate applies the constant-time procedure null?. The consequent expression, (list null), is also constant time. The alternate expression is an application of list-append. From Example 7.4, we know the running time of list-append is $\Theta(n_p)$ where $n_p$ is the number of elements in its first input. The first input is the result of applying list-map to a procedure and the List produced by (list-powerset (cdr s)). The length of the list output by list-map is the same as the length of its input, so we need to determine the length of (list-powerset (cdr s)).
We use $n_s$ to represent the number of elements in s. The length of the input list to list-map is the number of elements in the power set of a set of size $n_s - 1$: $2^{n_s - 1}$. But, for each application, the value of $n_s$ is different. Since we are trying to determine the total running time, we can do this by thinking about the total length of all the input lists to list-map over all of the list-powerset applications. If the input is a list of length $n$, the total list length is $2^{n-1} + 2^{n-2} + \cdots + 2^1 + 2^0$, which is equal to $2^n - 1$. So, the running time for all the list-map applications is in $\Theta(2^n)$.
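The geometric sum used here is easy to confirm numerically (a quick check, not part of the original analysis):

```python
# 2^(n-1) + 2^(n-2) + ... + 2^1 + 2^0 == 2^n - 1 for any positive n.
for n in range(1, 20):
    assert sum(2 ** k for k in range(n)) == 2 ** n - 1
```
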
The analysis of the list-append applications is similar. The length of the first input to list-append is the length of the result of the list-powerset application, so the total length of all the inputs to list-append is $2^n$.
Other than the applications of list-map and list-append, the rest of each list-powerset application requires constant time. So, the running time required for the $\Theta(2^n)$ applications is in $\Theta(2^n)$. The total running time for list-powerset is the sum of the running times for the list-powerset applications, in $\Theta(2^n)$; the list-map applications, in $\Theta(2^n)$; and the list-append applications, in $\Theta(2^n)$. Hence, the total running time is in $\Theta(2^n)$.
In this case, we know there can be no faster than exponential procedure that solves the same problem, since the size of the output is exponential in the size of the input. Since the most work a Turing Machine can do in one step is write one square, the size of the output provides a lower bound on the running time of the Turing Machine. The size of the power set is $2^n$ where $n$ is the size of the input set. Hence, the fastest possible procedure for this problem has at least exponential running time.

In fact, computing the remainder requires performing division, which is quadratic in the size of the input. ↩

The movie Sneakers is a fictional account of what would happen if someone found a faster than exponential time factoring algorithm. ↩

Observant readers will note that it is not really necessary to perform this evaluation twice since we could do it once and reuse the result. Even with this change, though, the running time would still be in $\Theta(2^n)$. ↩
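The reuse this note describes can be sketched in Python (illustrative naming; even with the result computed once, the output still has $2^n$ elements, so the running time remains exponential):

```python
def list_powerset(s):
    if not s:
        return [[]]
    # Evaluate the recursive application once and reuse the result,
    # instead of computing (list-powerset (cdr s)) twice.
    rest = list_powerset(s[1:])
    return [[s[0]] + t for t in rest] + rest
```
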