# 7.4.4 Exponential Growth

If the running time of a procedure scales as a constant raised to the power of the size of the input, the procedure's running time grows exponentially. When the size of the input increases by one, the running time is multiplied by some constant factor. The growth rate of a function whose output is multiplied by $w$ when the input size, $n$, increases by one is $w^n$. Exponential growth is very fast---it is not feasible to evaluate applications of an exponential time procedure on large inputs.
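To see how fast this grows, here is a quick numeric sketch in Python (the cost model and the name `cost` are illustrative, not from the text): each one-unit increase in $n$ multiplies the modeled work by $w$.

```python
def cost(n, w=2):
    """Model of an exponential running time: multiplied by w
    each time the input size n increases by one."""
    return w ** n

# increasing n by one doubles the work when w = 2
assert cost(11) == 2 * cost(10)

# the same doubling pattern holds for any constant factor w
assert cost(5, w=3) == 3 * cost(4, w=3)

# growth is very fast: by n = 30 the modeled work exceeds a billion steps
assert cost(30) > 10 ** 9
```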

For a surprisingly large number of interesting problems, the best known algorithm has exponential running time. Examples of problems like this include finding the best route between two locations on a map (the problem mentioned at the beginning of Chapter 4), the pegboard puzzle (Exploration 5.2), solving generalized versions of most other games such as Sudoku and Minesweeper, and finding the factors of a number. Whether or not it is possible to design faster algorithms that solve these problems is the most important open problem in computer science.

Example 7.9: Factoring

A simple way to find a factor of a given input number is to exhaustively try all possible numbers below the input number to find the first one that divides the number evenly.
The find-factor procedure takes one number as input and outputs the lowest factor of that number (other than 1):

```scheme
(define (find-factor n)
  (define (find-factor-helper v)
    (if (= (modulo n v) 0)
        v
        (find-factor-helper (+ 1 v))))
  (find-factor-helper 2))
```
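A direct transcription of the same procedure in Python (the function names are mine; the recursion mirrors the definition above):

```python
def find_factor(n):
    """Return the smallest factor of n other than 1
    (Python sketch of the find-factor procedure above)."""
    def find_factor_helper(v):
        # the modulo test: v divides n evenly when the remainder is 0
        if n % v == 0:
            return v
        return find_factor_helper(v + 1)
    return find_factor_helper(2)

assert find_factor(91) == 7    # 91 = 7 * 13
assert find_factor(13) == 13   # a prime's smallest factor is itself
```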

The find-factor-helper procedure takes one input, the current guess v; the number to factor, n, is available from the enclosing find-factor definition. Since all numbers are divisible by themselves, the modulo test will eventually be true for any positive input number, so the maximum number of recursive calls is n, the magnitude of the input to find-factor. The magnitude of n is exponential in its size, so the number of recursive calls is in $\Theta(2^b)$ where $b$ is the number of bits in the input. This means even if the amount of work required for each recursive call were constant, the running time of the find-factor procedure would still be exponential in the size of its input.

The actual work for each recursive call is not constant, though, since it involves an application of modulo. The modulo built-in procedure takes two inputs and outputs the remainder when the first input is divided by the second input. Hence, its output is 0 if n is divisible by v. Computing a remainder, in the worst case, involves examining at least every bit in the input number, so it scales at least linearly in the size of its input.[1] This means the running time of find-factor is in $\Omega(2^b)$: it grows at least as fast as $2^b$.

There are several ways we could produce a faster procedure for finding factors: stopping once the square root of the input number is reached, since a composite number must have a factor no greater than its square root; skipping even numbers after 2, since if a number is divisible by any even number it is also divisible by 2; or using advanced sieve methods. These techniques can improve the running time by constant factors, but no known factoring algorithm runs in polynomial time. The security of the widely used RSA encryption algorithm depends on factoring being hard. If someone finds a fast factoring algorithm it would put the codes used to secure Internet commerce at risk.[2]
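The first two improvements can be sketched in Python (a constant-factor speedup only; the name and structure are my own):

```python
def find_factor_faster(n):
    """Smallest factor of n other than 1, with the improvements
    described above: check 2 first, then only odd candidates,
    and stop once the candidate exceeds the square root of n."""
    if n % 2 == 0:
        return 2
    v = 3
    while v * v <= n:      # no need to check past the square root
        if n % v == 0:
            return v
        v += 2             # skip even candidates after 2
    return n               # no smaller factor found, so n is prime

assert find_factor_faster(91) == 7    # 91 = 7 * 13
assert find_factor_faster(97) == 97   # 97 is prime
```

Even with the square-root cutoff the candidate count is $\sqrt{n} = \sqrt{2^b} = 2^{b/2}$, still exponential in the number of bits $b$.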

Example 7.10: Power Set

The power set of a set $S$ is the set of all subsets of $S$. For example, the power set of $\{1, 2, 3\}$ is $\{\{\}, \{1\}, \{2\}, \{3\}, \{1, 2\}, \{1, 3\}, \{2, 3\}, \{1, 2, 3\}\}$.

The number of elements in the power set of $S$ is $2^{|S|}$ (where $|S|$ is the number of elements in the set $S$).
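The size claim is easy to check directly; here is a quick Python check that counts subsets of every possible size (the helper name and use of itertools are my own choices):

```python
from itertools import combinations

def powerset_size(s):
    """Count the subsets of s by counting the combinations
    of every length from 0 up to |s|."""
    return sum(1 for r in range(len(s) + 1) for _ in combinations(s, r))

assert powerset_size({1, 2, 3}) == 2 ** 3   # 8 subsets
assert powerset_size(set()) == 1            # only the empty set
```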

Here is a procedure that takes a list as input, and produces as output the power set of the elements of the list:

```scheme
(define (list-powerset s)
  (if (null? s)
      (list null)
      (list-append (list-map (lambda (t) (cons (car s) t))
                             (list-powerset (cdr s)))
                   (list-powerset (cdr s)))))
```
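The same recursion can be sketched in Python, deliberately keeping the two separate recursive evaluations of the rest of the list, as in the definition above (names are mine):

```python
def list_powerset(s):
    """Python sketch of list-powerset: s is a list, and the result
    is a list of lists, one per subset of the elements of s."""
    if not s:
        return [[]]   # base case: the list containing only the empty list
    # subsets that include the first element...
    with_first = [[s[0]] + t for t in list_powerset(s[1:])]
    # ...appended to the subsets that do not (a second, separate evaluation,
    # mirroring the two applications of (list-powerset (cdr s)) above)
    return with_first + list_powerset(s[1:])

ps = list_powerset([1, 2, 3])
assert len(ps) == 2 ** 3
assert [] in ps and [1, 2, 3] in ps
```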

The list-powerset procedure produces a List of Lists. Hence, for the base case, instead of just producing null, it produces a list containing a single element, null. In the recursive case, we can produce the power set by appending the list of all the subsets that include the first element to the list of all the subsets that do not include the first element. For example, the power set of $\{1, 2, 3\}$ is found by finding the power set of $\{2, 3\}$, which is $\{\{\}, \{2\}, \{3\}, \{2, 3\}\}$, and appending the result of adding 1 to each of those subsets, $\{\{1\}, \{1, 2\}, \{1, 3\}, \{1, 2, 3\}\}$, to the power set of $\{2, 3\}$ itself.

An application of list-powerset involves an application of list-append and two recursive applications of (list-powerset (cdr s)). Increasing the size of the input list by one doubles the total number of applications of list-powerset, since we need to evaluate (list-powerset (cdr s)) twice. The number of applications of list-powerset is in $\Theta(2^n)$ where $n$ is the length of the input list.[3]
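The doubling can be checked by counting the applications directly. Counting every application, including the base cases, gives exactly $2^{n+1}-1$, which is in $\Theta(2^n)$ (a small Python check; the counting function is my own):

```python
def count_applications(s):
    """Count the applications of the list-powerset recursion
    on an input list s, including base-case applications."""
    if not s:
        return 1
    # one application here, plus two recursive evaluations of the rest
    return 1 + 2 * count_applications(s[1:])

# increasing the list length by one doubles the application count
assert count_applications([1, 2, 3, 4]) == 2 * count_applications([1, 2, 3]) + 1

# for a length-n input the exact count is 2^(n+1) - 1
assert count_applications(list(range(10))) == 2 ** 11 - 1
```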

The body of list-powerset is an if expression. The predicate applies the constant-time procedure null?. The consequent expression, (list null), is also constant time. The alternate expression is an application of list-append. From Example 7.4, we know the running time of list-append is $\Theta(n_p)$ where $n_p$ is the number of elements in its first input. The first input is the result of applying list-map to a procedure and the List produced by (list-powerset (cdr s)). The length of the list output by list-map is the same as the length of its input, so we need to determine the length of (list-powerset (cdr s)).

We use $n_s$ to represent the number of elements in s. The length of the input list to list-map is the number of elements in the power set of a set of size $n_s - 1$: $2^{n_s-1}$. But, for each application, the value of $n_s$ is different. Since we are trying to determine the total running time, we can do this by thinking about the total length of all the input lists to list-map over all of the list-powerset applications. If the input is a list of length $n$, the total list length is $2^{n-1} + 2^{n-2} + \ldots + 2^1 + 2^0$, which is equal to $2^{n}-1$. So, the running time for all the list-map applications is in $\Theta(2^n)$.
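The geometric sum used in this step is easy to verify numerically:

```python
def geometric_total(n):
    """Total length of all inputs to list-map for a length-n input:
    2^(n-1) + 2^(n-2) + ... + 2^1 + 2^0."""
    return sum(2 ** k for k in range(n))   # k runs from 0 to n - 1

# the sum collapses to 2^n - 1
assert geometric_total(10) == 2 ** 10 - 1
```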

The analysis of the list-append applications is similar. The length of the first input to list-append is the length of the result of the list-powerset application, so the total length of all the first inputs to list-append over all the applications is $2^n$.

Other than the applications of list-map and list-append, the rest of each list-powerset application requires constant time. So, the running time required for $2^n$ applications is in $\Theta(2^n)$. The total running time for list-powerset is the sum of the running times for the list-powerset applications, in $\Theta(2^n)$; the list-map applications, in $\Theta(2^n)$; and the list-append applications, in $\Theta(2^n)$. Hence, the total running time is in $\Theta(2^n)$.

In this case, we know there can be no faster than exponential procedure that solves the same problem, since the size of the output is exponential in the size of the input. Since the most work a Turing Machine can do in one step is write one square, the size of the output provides a lower bound on the running time of the Turing Machine. The size of the power set is $2^n$ where $n$ is the size of the input set. Hence, the fastest possible procedure for this problem has at least exponential running time.

1. In fact, computing the remainder requires performing division, which is quadratic in the size of the input.

2. The movie Sneakers is a fictional account of what would happen if someone found a faster than exponential time factoring algorithm.

3. Observant readers will note that it is not really necessary to perform this evaluation twice since we could do it once and reuse the result. Even with this change, though, the running time would still be in $\Theta(2^n)$.
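The improvement this footnote describes can be sketched in Python by binding the recursive result once and reusing it (names are mine):

```python
def list_powerset_once(s):
    """Power set with the recursive call evaluated once and reused.
    Still in Theta(2^n): the output alone contains 2^n subsets."""
    if not s:
        return [[]]
    rest = list_powerset_once(s[1:])   # computed once, used twice
    return [[s[0]] + t for t in rest] + rest

assert len(list_powerset_once([1, 2, 3, 4])) == 2 ** 4
```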