Tuesday, December 29, 2015

Simple versus complex forecasting - Simple wins


While it has been argued that simple rules for forecasting do better than complex rules, many remain skeptical of this conclusion. A common view is that, in competitive markets, your edge should come from complexity. The data, however, do not support that view. It is better to follow the advice of William of Ockham.

My confidence in simplicity is based not on belief but on data. A recent paper by Kesten C. Green and J. Scott Armstrong, "Simple versus Complex Forecasting: The Evidence," shows that, when direct comparisons are made, simple wins. Their abstract states:

"Simplicity in forecasting requires that (1) method, (2) representation of cumulative knowledge, (3) relationships in models, and (4) relationships among models, forecasts, and decisions are all sufficiently uncomplicated as to be easily understood by decision-makers. Our review of studies comparing simple and complex methods—including those in this special issue—found 97 comparisons in 32 papers. None of the papers provide a balance of evidence that complexity improves forecast accuracy. Complexity increases forecast error by 27 percent on average in the 25 papers with quantitative comparisons. The finding is consistent with prior research to identify valid forecasting methods: all 22 previously identified evidence-based forecasting procedures are simple. Nevertheless, complexity remains popular among researchers, forecasters, and clients. Some evidence suggests that the popularity of complexity may be due to incentives: (1) researchers are rewarded for publishing in highly ranked journals, which favor complexity; (2) forecasters can use complex methods to provide forecasts that support decision-makers’ plans; and (3) forecasters’ clients may be reassured by incomprehensibility. Clients who prefer accuracy should accept forecasts only from simple evidence-based procedures."

Older work reached the same conclusion: forecasting errors increase with complexity. Complex models tend to overfit, and the models that fit best in-sample do worse out-of-sample.
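
To see the out-of-sample failure concretely, here is a minimal sketch (my own illustration, not from the Green and Armstrong paper) that fits a simple linear trend and a high-order polynomial to the same simulated noisy series. The complex model fits the training window better but extrapolates far worse:

import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# Simulated series for illustration: a linear trend plus noise.
t = np.arange(60, dtype=float)
y = 2.0 + 0.5 * t + rng.normal(scale=4.0, size=t.size)

train, test = slice(0, 40), slice(40, 60)  # fit on the first 40 points only

for degree in (1, 8):  # simple trend line vs. complex polynomial
    model = Polynomial.fit(t[train], y[train], deg=degree)
    in_rmse = np.sqrt(np.mean((y[train] - model(t[train])) ** 2))
    out_rmse = np.sqrt(np.mean((y[test] - model(t[test])) ** 2))
    print(f"degree {degree}: in-sample RMSE {in_rmse:.2f}, "
          f"out-of-sample RMSE {out_rmse:.2f}")

The in-sample error always shrinks as the degree grows; the out-of-sample error is what the forecaster's client actually pays for.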


There are a number of complications in testing this simplicity hypothesis, but it provides an important benchmark for model building. If you don't have a good reason for adding complexity, keep it simple. Even when you do have a good reason, bias yourself in the direction of simplicity.
