# 11.1 Generalizing the linear model

The generalized linear model (GLM[^1]) will allow us to model our response variable more appropriately. GLMs, represented as

$$y_{i}=f(\beta _{0}+\beta _{1}X_{1} + \beta_{2}X_{2} + \cdots + \beta _{p}X_{p})+\epsilon _{i}$$

extend the general linear model in two ways. First, the distribution of the errors (the $\epsilon_{i}$) is allowed to take on a variety of forms other than the normal distribution. Second, given a specific distribution of errors, a function is applied to the linear component on the right-hand side of the equation. Remember, the linear component of the equation (i.e., $\beta_{0}+\beta_{1}X_{1} + \beta_{2}X_{2} + \cdots + \beta_{p}X_{p}$) describes the expected $y_{i}$. In the case of only one predictor, these expected values form the regression line. In a GLM, instead of applying the function directly to the linear component, we usually apply its inverse to the expected $y_{i}$:

$$f^{-1}(E(y_{i}))=\beta _{0}+\beta_{1}X_{1} + \beta _{2}X_{2} + \cdots + \beta _{p}X_{p} \label{glmeqn} \tag{1.2}$$

This inverse function is referred to as the “link” function, and each error distribution has several possible link functions. Also notice that in equation \eqref{glmeqn}, because the link function is applied to the expected $y_{i}$, the $\epsilon_{i}$ drop out of the equation.
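The link-function idea can be sketched in R with a Poisson GLM, whose default link is the log (the data below are simulated purely for illustration):

```r
# Simulate counts whose true mean is exp(0.5 + 1.2x), i.e., log(E(y)) is linear in x
set.seed(1)
x <- runif(100, 0, 2)
y <- rpois(100, lambda = exp(0.5 + 1.2 * x))

fit <- glm(y ~ x, family = poisson)  # log is the default link for poisson

# Predictions on the link (log) scale are the linear predictor...
lin_pred <- predict(fit, type = "link")
# ...and applying the inverse of the link (exp) recovers the expected y:
all.equal(exp(lin_pred), as.numeric(predict(fit, type = "response")))  # TRUE
```

In other words, `exp()` plays the role of $f$ in the first equation and `log()` the role of $f^{-1}$ in equation \eqref{glmeqn}.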

Estimating the $\beta _{i}$ for GLMs is a little more complicated than we have seen previously. For the general linear model, the method of ordinary least squares (OLS) was used to find parameter values that gave the best fit to the data. However, the analytical formulas for OLS parameter estimates do not hold for GLMs. Instead, GLMs rely on maximum likelihood estimation (MLE). The concept of the likelihood was introduced in Chapter , and, simply put, MLE seeks to find the parameter estimates that most likely could have given rise to the data in hand, assuming the proposed model structure. Conceptually, this is very simple. In practice, however, MLE involving non-normal distributions requires fairly complex iterative algorithms, and results are merely approximations of true maximum likelihood estimates.
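As a rough sketch of what MLE is doing (not the actual algorithm glm() uses internally), we can maximize a Poisson log-likelihood directly with the general-purpose optimizer optim() and compare the result to glm(); the data are simulated for illustration:

```r
# Simulated Poisson counts with true coefficients (1, 2)
set.seed(42)
x <- runif(50)
y <- rpois(50, lambda = exp(1 + 2 * x))

# Negative log-likelihood for candidate coefficients beta = (b0, b1)
negloglik <- function(beta) {
  mu <- exp(beta[1] + beta[2] * x)         # inverse link: expected counts
  -sum(dpois(y, lambda = mu, log = TRUE))  # minimize the negative log-likelihood
}
mle <- optim(c(0, 0), negloglik)

fit <- glm(y ~ x, family = poisson)
rbind(optim = mle$par, glm = unname(coef(fit)))  # estimates agree closely
```

glm() itself uses iteratively reweighted least squares rather than a generic optimizer, but both approaches are climbing the same likelihood surface.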

The function for fitting a generalized linear model is `glm()`. Take a look at `?glm`. In practice, it is used very much like `lm()`, including the use of various functions (e.g., `summary()`, `plot()`, `coef()`, etc.) for summarizing and extracting information from the resulting model fit. A main difference from `lm()` is the `family` argument that tells R what error distribution to use for the response variable. The available error distributions, and their associated default link functions, can be seen using `?family`. For biologists, the binomial (e.g., dead or alive, proportions) and Poisson (e.g., counts) error distributions are especially useful. For a more thorough description of generalized models, readers are referred to [. Chapter 13 of [ provides a less technical, but very good introduction.
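As an illustrative sketch, here is `glm()` fitting a logistic (binomial) regression to simulated dead-or-alive data, using the same workflow as `lm()` (the variable names and simulated effect are assumptions for this example):

```r
# Simulated survival data: probability of being alive declines with dose
set.seed(7)
dose  <- rep(1:5, each = 20)
alive <- rbinom(100, size = 1, prob = plogis(3 - 1.1 * dose))

fit <- glm(alive ~ dose, family = binomial)  # default link for binomial is logit

summary(fit)     # coefficients are reported on the logit (link) scale
coef(fit)
# Inverse link (plogis) converts a prediction back to a probability:
plogis(coef(fit)[1] + coef(fit)[2] * 3)  # estimated survival probability at dose 3
```

Note that `summary()`, `coef()`, `predict()`, and friends all work here exactly as they do for an `lm()` fit; only the interpretation of the coefficients changes, because they live on the link scale.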

[^1]: Unfortunately, both the general linear model and generalized linear models are frequently referred to as “GLM”. As we will see, R uses the function `glm()` to fit generalized linear models, and we will use GLM to refer to them.