If the running time of a procedure scales as the square of the size of its input, the procedure's running time grows quadratically.
Doubling the size of the input approximately quadruples the running time. The running time is in $\Theta(n^2)$ where $n$ is the size of the input.

A procedure that takes a list as input has running time that grows quadratically if it goes through all elements in the list once for every element in the list. For example, we can compare every element in a list of length $n$ with every other element using $n(n-1)$ comparisons. This simplifies to $n^2 - n$, but $\Theta(n^2 - n)$ is equivalent to $\Theta(n^2)$ since as $n$ increases only the highest power term matters (see Exercise 7.7).
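This pattern can be sketched as follows (in Python rather than Scheme, with a hypothetical procedure name): the nested loops compare every element with every other element and count the comparisons performed.

```python
def count_pairwise_comparisons(items):
    """Compare every element with every other element,
    returning the number of comparisons performed."""
    n = len(items)
    comparisons = 0
    for i in range(n):
        for j in range(n):
            if i != j:
                # compare items[i] with items[j]
                comparisons += 1
    return comparisons
```

For a list of length $n$ this performs exactly $n(n-1)$ comparisons, so the count roughly quadruples when the input length doubles.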

Example 7.7: Reverse

Consider the list-reverse procedure defined in Section 5.4.2:

(define (list-reverse p)
  (if (null? p) null
      (list-append (list-reverse (cdr p)) (list (car p)))))

To determine the running time of list-reverse, we need to know how many recursive calls there are and how much work is involved in each recursive call. Each recursive application passes in (cdr p) as the input, so reduces the length of the input list by one. Hence, applying list-reverse to an input list with $n$ elements involves $n$ recursive calls.

The work for each recursive application, excluding the recursive call itself, is applying list-append. The first input to list-append is the output of the recursive call. As we argued in Example 7.4, the running time of list-append is in $\Theta(n_p)$ where $n_p$ is the number of elements in its first input. So, to determine the running time we need to know the length of the first input list to list-append. For the first call, (cdr p) is the parameter, with length $n-1$; for the second call, there will be $n-2$ elements; and so forth, until the final call where (cdr p) has $0$ elements. The total number of elements in all of these calls is:

$$(n-1) + (n-2) + \ldots + 1 + 0.$$

This sum is $\frac{n(n-1)}{2}$, so the average number of elements in each call is $\frac{n-1}{2}$, approximately $\frac{n}{2}$. Within the asymptotic operators the constant factor of $\frac{1}{2}$ does not matter, so the average running time for each recursive application is in $\Theta(n)$.

There are $n$ recursive applications, so the total running time of list-reverse is $n$ times the average running time of each recursive application: $n \cdot \Theta(n) = \Theta(n^2).$ Thus, the running time is quadratic in the size of the input list.
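We can check this analysis empirically with a Python sketch (an illustrative translation, not the book's Scheme code) that counts how many elements the append step walks through across all recursive applications:

```python
def instrumented_reverse(p, steps):
    """Reverse list p, adding to steps[0] the number of elements
    the append step walks through (the length of its first input)."""
    if not p:
        return []
    rest = instrumented_reverse(p[1:], steps)
    steps[0] += len(rest)          # list-append walks its first input
    return rest + [p[0]]

def reverse_steps(n):
    """Total append work for reversing a list of length n."""
    steps = [0]
    instrumented_reverse(list(range(n)), steps)
    return steps[0]
```

The count is $(n-1) + (n-2) + \ldots + 0 = \frac{n(n-1)}{2}$, so doubling $n$ approximately quadruples the total work, as the quadratic analysis predicts.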

Example 7.8: Multiplication

Consider the problem of multiplying two numbers, $a$ and $b$. The elementary school long multiplication algorithm works by multiplying each digit in $b$ by each digit in $a$, aligning the intermediate results in the right places, and summing the results.
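For concreteness, here is a worked instance (digits chosen arbitrarily) with $a = 123$ and $b = 456$:

$$\begin{array}{r} 123 \\ \times \; 456 \\ \hline 738 \\ 6150 \\ 49200 \\ \hline 56088 \end{array}$$

Each intermediate row is $a$ multiplied by one digit of $b$, shifted into its place.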

If both input numbers have $n$ digits, there are $n^2$ digit multiplications, each of which can be done in constant time. The intermediate results will be $n$ rows, each containing $n$ digits. So, the total number of digits to add is $n^2$: 1 digit in the ones place, 2 digits in the tens place, $\ldots$, $n$ digits in the $10^{n-1}$s place, $\ldots$, 2 digits in the $10^{2n-3}$s place, and 1 digit in the $10^{2n-2}$s place. Each digit addition requires constant work, so the total work for all the digit additions is in $\Theta(n^2)$. Adding the work for both the digit multiplications and the digit additions, the total running time for the elementary school multiplication algorithm is quadratic in the number of input digits, $\Theta(n^2)$ where $n$ is the number of digits in the inputs.
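The digit-by-digit algorithm can be sketched in Python (a hypothetical helper, with digits stored least-significant first) in a way that also counts the single-digit multiplications performed:

```python
def school_multiply(a_digits, b_digits):
    """Long multiplication on digit lists (least-significant digit first).
    Returns (product_digits, number_of_single_digit_multiplications)."""
    result = [0] * (len(a_digits) + len(b_digits))
    mults = 0
    for i, da in enumerate(a_digits):
        for j, db in enumerate(b_digits):
            result[i + j] += da * db   # one single-digit multiplication
            mults += 1
    for k in range(len(result) - 1):   # propagate carries
        result[k + 1] += result[k] // 10
        result[k] %= 10
    return result, mults
```

Multiplying two $n$-digit numbers this way performs exactly $n^2$ single-digit multiplications, matching the analysis above.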

This is not the fastest known algorithm for multiplying two numbers, although it was the best algorithm known until 1960. In 1960, Anatolii Karatsuba discovered a multiplication algorithm with running time in $\Theta(n^{\log_2 3})$. Since $\log_2 3 < 1.585$, this is an improvement over the $\Theta(n^2)$ elementary school algorithm. In 2007, Martin Furer discovered an even faster algorithm for multiplication.$^1$ It is not yet known if this is the fastest possible multiplication algorithm, or if faster ones exist.
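Karatsuba's insight is that the product of two numbers can be computed with three recursive multiplications of half-size numbers instead of four. A minimal Python sketch (an illustration, not an optimized implementation):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers with Karatsuba's algorithm:
    three half-size recursive multiplications instead of four."""
    if x < 10 or y < 10:
        return x * y               # single-digit base case
    m = max(len(str(x)), len(str(y))) // 2
    base = 10 ** m
    x_hi, x_lo = divmod(x, base)
    y_hi, y_lo = divmod(y, base)
    z2 = karatsuba(x_hi, y_hi)
    z0 = karatsuba(x_lo, y_lo)
    # (x_hi + x_lo)(y_hi + y_lo) - z2 - z0 = x_hi*y_lo + x_lo*y_hi
    z1 = karatsuba(x_hi + x_lo, y_hi + y_lo) - z2 - z0
    return z2 * base * base + z1 * base + z0
```

Because each level does three subproblems of half the size, the recurrence solves to $\Theta(n^{\log_2 3})$ rather than the $\Theta(n^2)$ of the four-subproblem approach.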

Exercise 7.15. $\left[\star \right]$ Analyze the running time of the elementary school long division algorithm.

Exercise 7.16. $\left[\star \right]$ Define a Scheme procedure that multiplies two multi-digit numbers (without using the built-in * procedure except to multiply single-digit numbers). Strive for your procedure to have running time in $\Theta(n^2)$ where $n$ is the total number of digits in the input numbers.

Exercise 7.17. $\left[\star \star \star \star \right]$ Devise an asymptotically faster general multiplication algorithm than Furer's, or prove that no faster algorithm exists.

1. Martin Furer, Faster Integer Multiplication, ACM Symposium on Theory of Computing, 2007.