# 3.3 Key principles

Using a systematic and well-structured approach to judgmental forecasting helps to reduce the adverse effects of its limitations, some of which were listed in the previous section. Whether the approach involves one individual or many, the following principles should be followed.

## Set the forecasting task clearly and concisely

Care is needed when setting the forecasting challenges and expressing the forecasting tasks. Everyone involved should be clear about what the task is. All definitions should be clear and comprehensive, avoiding ambiguous and vague expressions. It is also important to avoid emotive terms and irrelevant information that may distract the forecaster. In the Delphi method that follows (see Section 3/4), it may sometimes be useful to conduct a preliminary round of information gathering before setting the forecasting task.

## Implement a systematic approach

Forecast accuracy and consistency can be improved by using a systematic approach to judgmental forecasting involving checklists of categories of information relevant to the forecasting task. For example, it is helpful to identify what information is important and how this information is to be weighted. When forecasting the demand for a new product, which factors should we account for and how should we account for them? Should it be the price, the quality and/or quantity of the competition, the economic environment at the time, or the target population of the product? It is worthwhile devoting significant effort and resources to putting together decision rules that lead to the best possible systematic approach.
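The idea of a checklist with pre-agreed weights can be sketched as a simple scoring rule. The factors, weights and scores below are entirely hypothetical; they are there only to illustrate how such decision rules might be written down in advance:

```python
# Hypothetical weighted checklist for judging demand for a new product.
# Factors and weights are illustrative only; in practice they would be
# agreed on beforehand and documented as part of the systematic approach.

weights = {
    "price_competitiveness": 0.30,
    "strength_of_competition": 0.25,
    "economic_environment": 0.20,
    "target_population_fit": 0.25,
}

# Each factor is scored by the forecaster on a 0-10 scale.
scores = {
    "price_competitiveness": 7,
    "strength_of_competition": 4,
    "economic_environment": 6,
    "target_population_fit": 8,
}

# Combine the judgmental scores using the pre-agreed weights.
weighted_score = sum(weights[f] * scores[f] for f in weights)
print(round(weighted_score, 2))  # 6.3
```

Fixing the weights before any scoring takes place is what makes the approach systematic: the same rule can be applied to every candidate product, and disagreements are confined to the individual scores.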

## Document and justify

Formalising and documenting the decision rules and assumptions implemented in the systematic approach can promote consistency, as the same rules can be implemented repeatedly. Also, requesting a forecaster to document and justify their forecasts leads to accountability, which can reduce bias. Furthermore, formal documentation significantly aids the systematic evaluation process that is suggested next.

## Systematically evaluate forecasts

Systematically monitoring the forecasting process can identify unforeseen irregularities. In particular, keep records of forecasts and use them to obtain feedback as the forecast period becomes observed. Although you can do your best as a forecaster, the environment you operate in is dynamic. Changes occur, and you need to monitor these in order to evaluate the decision rules and assumptions. Feedback and evaluation help forecasters learn and improve forecast accuracy.
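Keeping records and computing simple error measures once outcomes are observed is one way to make this feedback loop concrete. A minimal sketch, using made-up forecasts and observations:

```python
# Hypothetical record of judgmental forecasts and the values later observed.
records = [
    {"period": "2020-Q1", "forecast": 120, "actual": 131},
    {"period": "2020-Q2", "forecast": 125, "actual": 118},
    {"period": "2020-Q3", "forecast": 130, "actual": 142},
]

# Mean absolute error measures overall accuracy; the mean error (bias)
# reveals systematic over- or under-forecasting.
errors = [r["actual"] - r["forecast"] for r in records]
mae = sum(abs(e) for e in errors) / len(errors)
bias = sum(errors) / len(errors)
print(f"MAE = {mae:.1f}, bias = {bias:.1f}")
```

Here the positive bias would suggest a tendency to under-forecast, which is exactly the kind of feedback a forecaster can act on when revising their decision rules.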

## Segregate forecasters and users

Forecast accuracy may be impeded if the forecasting task is carried out by users of the forecasts, such as those responsible for implementing plans of action about which the forecast is concerned. We should clarify again here (as in Section 1/2) that forecasting is about predicting the future as accurately as possible, given all the information available, including historical data and knowledge of any future events that might impact the forecasts. Forecasters and users should be clearly segregated. A classic case is that of a new product being launched. The forecast should be a reasonable estimate of the sales volume of the new product, which may be very different to what management expects or hopes the sales will be in order to meet company financial objectives. A forecaster in this case may be delivering a reality check to the user.

It is important that forecasters thoroughly communicate forecasts to potential users. As we will see in Section 3/8, users may feel distant and disconnected from forecasts and may not have full confidence in them. Explaining and clarifying the process and justifying basic assumptions that led to forecasts will provide some assurance to users.

How forecasts may then be used and implemented will clearly depend on managerial decision making. For example, management may decide to adjust a forecast upwards (be over-optimistic) as the forecast may be used to guide purchasing and stock keeping levels. Such a decision may have been taken after cost-benefit analysis reveals that the cost of holding excess stock is much lower than that of lost sales. This type of adjustment should be part of setting goals or planning supply rather than part of the forecasting process. In contrast, if forecasts are used as targets, they may be set low so that these can be exceeded more easily. Again, setting targets is different from producing forecasts, and the two should not be confused.

The example that follows comes from our experience in industry. It exemplifies two contrasting styles of judgmental forecasting — one that adheres to the principles we have just presented and one that does not.

## Example 3.1 Pharmaceutical Benefits Scheme (PBS)

The Australian government subsidises the cost of a wide range of prescription medicines as part of the PBS. Each subsidised medicine falls into one of four categories: concession copayments, concession safety net, general copayments, and general safety net. Each person with a concession card makes a concession copayment per PBS medicine (\$5.80)^1 until they reach a set threshold amount labelled the concession safety net (\$348). For the rest of the financial year, all PBS listed medicines are free. Each general patient makes a general copayment per PBS medicine (\$35.40) until the general safety net amount is reached (\$1,363.30). For the rest of the financial year, they contribute a small amount per PBS listed medicine (\$5.80). The PBS forecasting process uses 84 groups of PBS listed medicines, and produces forecasts of the medicine volume and the total expenditure for each group and for each of the four PBS categories, a total of 672 series. This forecasting process aids in setting the government budget allocated to the PBS, which is over \$7 billion per year, or approximately 1% of GDP.
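The copayment and safety-net rules above can be expressed as a small function. The dollar amounts follow the 2012 figures quoted in the text; the function itself is our illustrative sketch, not part of the actual PBS forecasting system:

```python
# Sketch of the 2012 PBS copayment rules described in the text.
CONCESSION_COPAY = 5.80         # per medicine, concession card holders
CONCESSION_SAFETY_NET = 348.00  # threshold after which medicines are free
GENERAL_COPAY = 35.40           # per medicine, general patients
GENERAL_SAFETY_NET = 1363.30    # threshold after which a reduced copay applies
REDUCED_COPAY = 5.80            # general patients past the safety net

def copayment(patient_type, spent_so_far):
    """Copayment for the next PBS medicine in the financial year,
    given what the patient has already contributed."""
    if patient_type == "concession":
        return 0.0 if spent_so_far >= CONCESSION_SAFETY_NET else CONCESSION_COPAY
    if patient_type == "general":
        return REDUCED_COPAY if spent_so_far >= GENERAL_SAFETY_NET else GENERAL_COPAY
    raise ValueError(f"unknown patient type: {patient_type}")

print(copayment("concession", 100.0))  # 5.8
print(copayment("concession", 350.0))  # 0.0
print(copayment("general", 1400.0))    # 5.8
```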

Figure 3.1: Process for producing PBS forecasts.

Figure 3.1 summarises the forecasting process. Judgmental forecasts are generated for new listings of medicines and for estimating the impact of new policies. These are shown by the green items. The pink items indicate the data obtained from various government departments and associated authorities. The blue items show things that are calculated from the data provided. Judgmental adjustments were made to the data to take account of new listings and new policies, and also to the forecasts themselves. Because of the changing size of the concession population and the total population, forecasts are produced on a per capita basis, and then multiplied by the forecast population to obtain forecasts of total volume and expenditure per month.
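The per-capita step can be illustrated with made-up numbers: forecast the rate per person first, then scale by the forecast population. All figures below are hypothetical, chosen only to show the arithmetic:

```python
# Illustrative only: forecasting on a per capita basis and then scaling
# by the forecast population, as described in the text.
scripts_per_capita = 0.45        # hypothetical forecast: scripts per person per month
cost_per_script = 42.00          # hypothetical average cost per script
forecast_population = 1_250_000  # hypothetical forecast concession population

total_volume = scripts_per_capita * forecast_population
total_expenditure = total_volume * cost_per_script
print(f"volume = {total_volume:,.0f} scripts, "
      f"expenditure = ${total_expenditure:,.0f}")
```

Working per capita separates changes in individual behaviour from changes in population size, so that each can be forecast (and judgmentally adjusted) on its own terms.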

One of us (Hyndman) was asked to evaluate the forecasting process a few years ago. We found that using judgement for new listings and new policy impacts gave better forecasts than using a statistical model alone. However, we also found that forecasting accuracy and consistency could be improved through a more structured and systematic process, especially for policy impacts.

Forecasting new listings: Companies applying for their medicine to be added to the PBS are asked to submit detailed forecasts for various aspects of the medicine, such as projected patient numbers, market share of the new medicine, substitution effects, etc. The Pharmaceutical Benefits Advisory Committee provides guidelines for a highly structured and systematic approach to generating these forecasts, and requires careful documentation of each step of the process. This structured process helps reduce the likelihood and effect of deliberate self-serving biases. Two detailed evaluation rounds of the company forecasts are implemented by a sub-committee, one before the medicine is added to the PBS and one after it is added. Finally, forecasts are compared with observed outcomes for selected new listings 12 and 24 months after listing, and the results are sent back to the companies for comment.

Policy impact forecasts: In contrast to the highly structured process used for new listings, there were no systematic procedures for policy impact forecasts. On many occasions, forecasts of policy impacts were calculated by a small team, often heavily reliant on the work of one person. The forecasts were not usually subject to a formal review process. There were no guidelines for how to construct judgmental forecasts for policy impacts, and there was often a lack of adequate documentation about how these forecasts were obtained, what assumptions underlie them, and so on.

Consequently, we recommended several changes:

• that guidelines for forecasting new policy impacts be developed to encourage a more systematic and structured forecasting approach;
• that the forecast methodology be documented in each case including all assumptions made in forming the forecasts;
• that new policy forecasts be made by at least two people from different areas of the organisation;
• that a review of forecasts be conducted one year after the implementation of each new policy by a review committee, especially for new policies that have a significant annual projected cost or saving. The review committee should include those involved in generating the forecasts but also others.

These recommendations reflect the principles outlined in Section 3/3.

1. These are Australian dollar amounts published by the Australian government for 2012.