8.9 Seasonal ARIMA models

So far, we have restricted our attention to non-seasonal data and non-seasonal ARIMA models. However, ARIMA models are also capable of modelling a wide range of seasonal data. A seasonal ARIMA model is formed by including additional seasonal terms in the ARIMA models we have seen so far. It is written as follows:

$$\text{ARIMA}\quad(p,d,q)\quad(P,D,Q)_m$$

where $m =$ number of periods per season, $(p,d,q)$ is the non-seasonal part of the model, and $(P,D,Q)_m$ is the seasonal part. We use uppercase notation for the seasonal parts of the model, and lowercase notation for the non-seasonal parts of the model.

The seasonal part of the model consists of terms that are very similar to the non-seasonal components of the model, but they involve backshifts of the seasonal period. For example, an ARIMA(1,1,1)(1,1,1)$_4$ model (without a constant) is for quarterly data ($m=4$) and can be written as

$$(1-\phi_1 B)(1-\Phi_1 B^4)(1-B)(1-B^4)y_t = (1+\theta_1 B)(1+\Theta_1 B^4)e_t,$$

where $\Phi_1$ and $\Theta_1$ are the seasonal AR and MA parameters and $B^4$ shifts an observation back by one seasonal period. The additional seasonal terms are simply multiplied with the non-seasonal terms.
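
A model of this form can be specified directly with the Arima() function from the forecast package, giving the non-seasonal and seasonal orders separately. The following minimal sketch uses a placeholder quarterly series purely to illustrate the call; it is not part of the examples below.

R code
library(forecast)
y <- ts(rnorm(100), frequency=4)   # placeholder quarterly series for illustration only
# ARIMA(1,1,1)(1,1,1)[4] without a constant
fit <- Arima(y, order=c(1,1,1),
             seasonal=list(order=c(1,1,1), period=4),
             include.constant=FALSE)
fit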

ACF/PACF

The seasonal part of an AR or MA model will be seen in the seasonal lags of the PACF and ACF. For example, an ARIMA(0,0,0)(0,0,1)$_{12}$ model will show:

  • a spike at lag 12 in the ACF but no other significant spikes;
  • exponential decay in the seasonal lags of the PACF (that is, at lags 12, 24, 36, …).

Similarly, an ARIMA(0,0,0)(1,0,0)$_{12}$ model will show:

  • exponential decay in the seasonal lags of the ACF
  • a single significant spike at lag 12 in the PACF.
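
These signatures can be checked by simulation. The sketch below (with an arbitrarily chosen seasonal MA coefficient $\Theta_1=0.8$) generates a pure seasonal MA(1) series, $y_t = e_t + \Theta_1 e_{t-12}$, and displays its ACF and PACF with tsdisplay(); the seasonal AR(1) case can be explored in the same way.

R code
set.seed(1)
n <- 600
e <- rnorm(n + 12)
y <- ts(e[13:(n+12)] + 0.8*e[1:n], frequency=12)   # y[t] = e[t] + 0.8*e[t-12]
tsdisplay(y)   # expect an ACF spike at lag 12 and PACF decay at lags 12, 24, 36, ...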

In considering the appropriate seasonal orders for an ARIMA model, restrict attention to the seasonal lags. The modelling procedure is almost the same as for non-seasonal data, except that we need to select seasonal AR and MA terms as well as the non-seasonal components of the model. The process is best illustrated via examples.

Example 8.3 European quarterly retail trade

We will describe the seasonal ARIMA modelling procedure using quarterly European retail trade data from 1996 to 2011. The data are plotted in Figure 8.14.

Figure 8.14: Quarterly retail trade index in the Euro area (17 countries), 1996–2011, covering wholesale and retail trade, and repair of motor vehicles and motorcycles. (Index: 2005 = 100).

R code
plot(euretail, ylab="Retail index", xlab="Year")

The data are clearly non-stationary, with some seasonality, so we will first take a seasonal difference. The seasonally differenced data are shown in Figure 8.15. These also appear to be non-stationary, and so we take an additional first difference, shown in Figure 8.16.

Figure 8.15: Seasonally differenced European retail trade index.

R code
tsdisplay(diff(euretail,4))

Figure 8.16: Double differenced European retail trade index.

R code
tsdisplay(diff(diff(euretail,4)))

Our aim now is to find an appropriate ARIMA model based on the ACF and PACF shown in Figure 8.16. The significant spike at lag 1 in the ACF suggests a non-seasonal MA(1) component, and the significant spike at lag 4 in the ACF suggests a seasonal MA(1) component. Consequently, we begin with an ARIMA(0,1,1)(0,1,1)$_4$ model, indicating a first and seasonal difference, and non-seasonal and seasonal MA(1) components. The residuals for the fitted model are shown in Figure 8.17. (By analogous logic, we could also have started with an ARIMA(1,1,0)(1,1,0)$_4$ model.)

Figure 8.17: Residuals from the fitted ARIMA(0,1,1)(0,1,1)$_4$ model for the European retail trade index data.

R code
fit <- Arima(euretail, order=c(0,1,1), seasonal=c(0,1,1))
tsdisplay(residuals(fit))

Both the ACF and PACF show significant spikes at lag 2, and almost significant spikes at lag 3, indicating that some additional non-seasonal terms need to be included in the model. The AICc of the ARIMA(0,1,2)(0,1,1)$_4$ model is 74.36, while that for the ARIMA(0,1,3)(0,1,1)$_4$ model is 68.53. We also tried other models with AR terms, but none gave a smaller AICc value. Consequently, we choose the ARIMA(0,1,3)(0,1,1)$_4$ model. Its residuals are plotted in Figure 8.18. All the spikes are now within the significance limits, and so the residuals appear to be white noise. A Ljung-Box test also shows that the residuals have no remaining autocorrelations.
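
The AICc values quoted above can be read directly from the fitted model objects; a minimal sketch:

R code
fit2 <- Arima(euretail, order=c(0,1,2), seasonal=c(0,1,1))
fit3 <- Arima(euretail, order=c(0,1,3), seasonal=c(0,1,1))
c(fit2$aicc, fit3$aicc)   # compare the two candidate models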

Figure 8.18: Residuals from the fitted ARIMA(0,1,3)(0,1,1)$_4$ model for the European retail trade index data.

R code
fit3 <- Arima(euretail, order=c(0,1,3), seasonal=c(0,1,1))
res <- residuals(fit3)
tsdisplay(res)
Box.test(res, lag=16, fitdf=4, type="Ljung")

So we now have a seasonal ARIMA model that passes the required checks and is ready for forecasting. Forecasts from the model for the next three years are shown in Figure 8.19. Notice how the forecasts follow the recent trend in the data (this occurs because of the double differencing). The large and rapidly increasing prediction intervals show that the retail trade index could start increasing or decreasing at any time — while the point forecasts trend downwards, the prediction intervals allow for the data to trend upwards during the forecast period.

Figure 8.19: Forecasts of the European retail trade index data using the ARIMA(0,1,3)(0,1,1)$_4$ model. 80% and 95% prediction intervals are shown.

R code
plot(forecast(fit3, h=12))
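
The plotted intervals can also be inspected numerically from the forecast object; a minimal sketch:

R code
fc <- forecast(fit3, h=12, level=c(80,95))
fc$mean    # point forecasts
fc$lower   # lower limits of the 80% and 95% intervals
fc$upper   # upper limits of the 80% and 95% intervals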

We could have used auto.arima() to do most of this work for us. It would have given the following result.

R output
> auto.arima(euretail)
ARIMA(1,1,1)(0,1,1)[4]                    

Coefficients:
         ar1      ma1     sma1
      0.8828  -0.5208  -0.9704
s.e.  0.1424   0.1755   0.6792

sigma^2 estimated as 0.1411:  log likelihood=-30.19
AIC=68.37   AICc=69.11   BIC=76.68

Notice that it has selected a different model (with a larger AICc value). auto.arima() takes some short-cuts in order to speed up the computation and will not always give the best model. You can turn the short-cuts off and then it will sometimes return a different model.

R output
> auto.arima(euretail, stepwise=FALSE, approximation=FALSE)
ARIMA(0,1,3)(0,1,1)[4]                    

Coefficients:
         ma1     ma2     ma3     sma1
      0.2625  0.3697  0.4194  -0.6615
s.e.  0.1239  0.1260  0.1296   0.1555

sigma^2 estimated as 0.1451:  log likelihood=-28.7
AIC=67.4   AICc=68.53   BIC=77.78

This time it returned the same model we had identified.

Example 8.4 Corticosteroid drug sales in Australia

Our second example is more difficult. We will try to forecast monthly corticosteroid drug sales in Australia. These are known as H02 drugs under the Anatomical Therapeutic Chemical classification scheme.

Figure 8.20: Corticosteroid drug sales in Australia (in millions of scripts per month). Logged data shown in bottom panel.

R code
lh02 <- log(h02)
par(mfrow=c(2,1))
plot(h02, ylab="H02 sales (million scripts)", xlab="Year")
plot(lh02, ylab="Log H02 sales", xlab="Year")

Data from July 1991 to June 2008 are plotted in Figure 8.20. There is a small increase in the variance with the level, and so we take logarithms to stabilize the variance.
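
A data-driven check of this choice is available via BoxCox.lambda() in the forecast package; an estimated Box-Cox parameter close to zero supports the log transformation. (This is a sketch only; the estimated value is not reproduced here.)

R code
BoxCox.lambda(h02)   # a value near 0 corresponds to a log transformation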

The data are strongly seasonal and obviously non-stationary, and so seasonal differencing will be used. The seasonally differenced data are shown in Figure 8.21. It is not clear at this point whether we should do another difference or not. We decide not to, but the choice is not obvious.
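
Unit-root based suggestions for the amount of differencing can be obtained with nsdiffs() and ndiffs() from the forecast package; these are indicative only and do not settle the choice.

R code
nsdiffs(lh02)            # suggested number of seasonal differences
ndiffs(diff(lh02, 12))   # suggested further first differences after seasonal differencing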

The last few observations appear to be different (more variable) from the earlier data. This may be due to the fact that data are sometimes revised as earlier sales are reported late.

Figure 8.21: Seasonally differenced corticosteroid drug sales in Australia (in millions of scripts per month).

R code
tsdisplay(diff(lh02,12),
  main="Seasonally differenced H02 scripts", xlab="Year")

In the plots of the seasonally differenced data, there are spikes in the PACF at lags 12 and 24, but nothing at seasonal lags in the ACF. This may be suggestive of a seasonal AR(2) term. In the non-seasonal lags, there are three significant spikes in the PACF suggesting a possible AR(3) term. The pattern in the ACF is not indicative of any simple model.

Consequently, this initial analysis suggests that a possible model for these data is an ARIMA(3,0,0)(2,1,0)$_{12}$. We fit this model, along with some variations on it, and compute their AICc values, which are shown in the following table.

Model                        AICc
ARIMA(3,0,0)(2,1,0)$_{12}$   -475.12
ARIMA(3,0,1)(2,1,0)$_{12}$   -476.31
ARIMA(3,0,2)(2,1,0)$_{12}$   -474.88
ARIMA(3,0,1)(1,1,0)$_{12}$   -463.40
ARIMA(3,0,1)(0,1,1)$_{12}$   -483.67
ARIMA(3,0,1)(0,1,2)$_{12}$   -485.48
ARIMA(3,0,1)(1,1,1)$_{12}$   -484.25
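
Each entry in this table can be reproduced by fitting the corresponding model on the log scale (via lambda=0, as elsewhere in this example) and reading off its AICc; a sketch for the first row:

R code
fit0 <- Arima(h02, order=c(3,0,0), seasonal=c(2,1,0), lambda=0)
fit0$aicc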

Of these models, the best is the ARIMA(3,0,1)(0,1,2)$_{12}$ model (i.e., it has the smallest AICc value).

R output
> fit <- Arima(h02, order=c(3,0,1), seasonal=c(0,1,2), lambda=0)

ARIMA(3,0,1)(0,1,2)[12]                    
Box Cox transformation: lambda= 0

Coefficients:
          ar1     ar2     ar3     ma1     sma1     sma2
      -0.1603  0.5481  0.5678  0.3827  -0.5222  -0.1768
s.e.   0.1636  0.0878  0.0942  0.1895   0.0861   0.0872

sigma^2 estimated as 0.004145:  log likelihood=250.04
AIC=-486.08   AICc=-485.48   BIC=-463.28

The residuals from this model are shown in Figure 8.22. There are significant spikes in both the ACF and PACF, and the model fails a Ljung-Box test. The model can still be used for forecasting, but the prediction intervals may not be accurate due to the correlated residuals.

Figure 8.22: Residuals from the ARIMA(3,0,1)(0,1,2)$_{12}$ model applied to the H02 monthly script sales data.

R code
tsdisplay(residuals(fit))
Box.test(residuals(fit), lag=36, fitdf=6, type="Ljung")

Next we will try using the automatic ARIMA algorithm. Running auto.arima() with arguments left at their default values led to an ARIMA(2,1,3)(0,1,1)$_{12}$ model. However, the model still fails a Ljung-Box test. Sometimes it is just not possible to find a model that passes all the tests.
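
A sketch of that step, assuming the same log transformation (lambda=0) is supplied as in the rest of this example:

R code
fit <- auto.arima(h02, lambda=0)
fit
Box.test(residuals(fit), lag=36, fitdf=length(coef(fit)), type="Ljung")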

Finally, we tried running auto.arima() with differencing specified to be $d=0$ and $D=1$, and allowing larger models than usual. This led to an ARIMA(4,0,3)(0,1,1)$_{12}$ model, which did pass all the tests.

R code
fit <- auto.arima(h02, lambda=0, d=0, D=1, max.order=9,
                stepwise=FALSE, approximation=FALSE)
tsdisplay(residuals(fit))
Box.test(residuals(fit), lag=36, fitdf=8, type="Ljung")

Test set evaluation:

We will compare some of the models fitted so far using a test set consisting of the last two years of data. Thus, we fit the models using data from July 1991 to June 2006, and forecast the script sales for July 2006 – June 2008. The results are summarised in the following table.

Model                        RMSE
ARIMA(3,0,0)(2,1,0)$_{12}$   0.0661
ARIMA(3,0,1)(2,1,0)$_{12}$   0.0646
ARIMA(3,0,2)(2,1,0)$_{12}$   0.0645
ARIMA(3,0,1)(1,1,0)$_{12}$   0.0679
ARIMA(3,0,1)(0,1,1)$_{12}$   0.0644
ARIMA(3,0,1)(0,1,2)$_{12}$   0.0622
ARIMA(3,0,1)(1,1,1)$_{12}$   0.0630
ARIMA(4,0,3)(0,1,1)$_{12}$   0.0648
ARIMA(3,0,3)(0,1,1)$_{12}$   0.0640
ARIMA(4,0,2)(0,1,1)$_{12}$   0.0648
ARIMA(3,0,2)(0,1,1)$_{12}$   0.0644
ARIMA(2,1,3)(0,1,1)$_{12}$   0.0634
ARIMA(2,1,4)(0,1,1)$_{12}$   0.0632
ARIMA(2,1,5)(0,1,1)$_{12}$   0.0640

R code
# Fit an ARIMA model (specified via ...) to all but the last h observations
# of x, and return the RMSE of its forecasts over those h observations.
getrmse <- function(x,h,...)
{
  train.end <- time(x)[length(x)-h]      # end of training period
  test.start <- time(x)[length(x)-h+1]   # start of test period
  train <- window(x,end=train.end)
  test <- window(x,start=test.start)
  fit <- Arima(train,...)
  fc <- forecast(fit,h=h)
  return(accuracy(fc,test)[2,"RMSE"])    # row 2 contains the test set measures
}

getrmse(h02,h=24,order=c(3,0,0),seasonal=c(2,1,0),lambda=0)
getrmse(h02,h=24,order=c(3,0,1),seasonal=c(2,1,0),lambda=0)
getrmse(h02,h=24,order=c(3,0,2),seasonal=c(2,1,0),lambda=0)
getrmse(h02,h=24,order=c(3,0,1),seasonal=c(1,1,0),lambda=0)
getrmse(h02,h=24,order=c(3,0,1),seasonal=c(0,1,1),lambda=0)
getrmse(h02,h=24,order=c(3,0,1),seasonal=c(0,1,2),lambda=0)
getrmse(h02,h=24,order=c(3,0,1),seasonal=c(1,1,1),lambda=0)
getrmse(h02,h=24,order=c(4,0,3),seasonal=c(0,1,1),lambda=0)
getrmse(h02,h=24,order=c(3,0,3),seasonal=c(0,1,1),lambda=0)
getrmse(h02,h=24,order=c(4,0,2),seasonal=c(0,1,1),lambda=0)
getrmse(h02,h=24,order=c(3,0,2),seasonal=c(0,1,1),lambda=0)
getrmse(h02,h=24,order=c(2,1,3),seasonal=c(0,1,1),lambda=0)
getrmse(h02,h=24,order=c(2,1,4),seasonal=c(0,1,1),lambda=0)
getrmse(h02,h=24,order=c(2,1,5),seasonal=c(0,1,1),lambda=0)

The models that have the lowest AICc values tend to give slightly better results than the other models, but there is not a large difference. Also, the only model that passed the residual tests did not give the best out-of-sample RMSE values.

When models are compared using AICc values, it is important that all models have the same orders of differencing. However, when comparing models using a test set, it does not matter how the forecasts were produced; the comparisons are always valid. Consequently, in the table above, we can include some models with only seasonal differencing and some models with both first and seasonal differencing. But in the earlier table containing AICc values, we compared models with only seasonal differencing.

Only one of the models considered here (the ARIMA(4,0,3)(0,1,1)$_{12}$ model) passes all the residual tests. In practice, we would normally use the best model we could find, even if it did not pass all tests.

Forecasts from the ARIMA(3,0,1)(0,1,2)$_{12}$ model (which has the lowest RMSE value on the test set, and the best AICc value amongst models with only seasonal differencing and no more than six parameters) are shown in the figure below.

Figure 8.23: Forecasts from the ARIMA(3,0,1)(0,1,2)$_{12}$ model applied to the H02 monthly script sales data.

R code
fit <- Arima(h02, order=c(3,0,1), seasonal=c(0,1,2), lambda=0)
plot(forecast(fit), ylab="H02 sales (million scripts)", xlab="Year")