No single model is perfect. It will have flaws. It will not capture all market behavior. It may fail at turning points. The parameters may be wrong, or the inputs may have been measured incorrectly.
Consequently, it makes sense to employ a number of models whose forecasts can be averaged. The averaged forecast will generally do better than the forecast from any one model, and an ensemble of models can offset many of the potential flaws of relying on a single model. Additionally, it may make sense to adjust the parameters of each model as new information is gathered.
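The idea can be sketched in a few lines. This is a minimal, illustrative combination scheme (inverse-MSE weighting of two hypothetical models' forecasts), not the averaging rule used in the paper discussed below:

```python
# Sketch of forecast combination: weight each model's forecast by the
# inverse of its recent mean squared error. Models and data are toy
# examples, purely for illustration.
import numpy as np

def combine_forecasts(forecasts, errors):
    """Return the inverse-MSE weighted average of the forecasts."""
    mse = np.array([np.mean(np.square(e)) for e in errors])
    weights = (1.0 / mse) / np.sum(1.0 / mse)
    return float(np.dot(weights, forecasts)), weights

# Toy example: model A has been more accurate lately, so it gets
# more weight in the combined forecast.
past_errors = [np.array([0.1, -0.2, 0.1]),   # model A's recent errors
               np.array([0.5, -0.4, 0.6])]   # model B's recent errors
forecast, w = combine_forecasts(np.array([2.0, 3.0]), past_errors)
```

Here the combined forecast lands much closer to model A's number, since A's recent track record dominates the weights.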
The literature on this issue is huge, yet a recent paper has me thinking about the impact of model averaging on the price process. See "Gresham's Law of Model Averaging" by In-Koo Cho and Kenneth Kasa in the American Economic Review. What is good for individual forecasting may not be good for the market as a whole.
The authors show that when forecasters average two models, one with stationary parameters and another with time-varying parameters, the fear of parameter instability can become self-fulfilling. Feedback from the time-varying parameter model onto market prices generates instability and drives out the stationary model, even though the stationary model may still be correct in the long run. Learning or adapting can itself cause instability.
The use of time-varying parameter models can have the unintended consequence of generating excessive price volatility in the market being forecast. Call it chasing the tail of parameter estimation error.
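The flavor of the mechanism can be seen in a toy contrast (my own illustration, not the paper's model) between decreasing-gain learning, which is consistent with a world of constant parameters, and constant-gain learning, which implicitly assumes the parameters drift. On the same stationary data, the constant-gain estimator never settles down, so any price rule built on it inherits that perpetual fluctuation:

```python
# Estimate the mean of a stationary series two ways: a decreasing-gain
# learner (the recursive sample mean, correct for constant parameters)
# and a constant-gain learner (which discounts old data forever, as a
# time-varying parameter model would). The true mean never moves, yet
# the constant-gain estimate keeps bouncing around it.
import numpy as np

rng = np.random.default_rng(0)
data = 1.0 + 0.5 * rng.standard_normal(5000)  # true mean fixed at 1.0

dec, con = 0.0, 0.0
dec_path, con_path = [], []
for t, x in enumerate(data, start=1):
    dec += (x - dec) / t       # decreasing gain: converges to the mean
    con += 0.05 * (x - con)    # constant gain: chases recent noise
    dec_path.append(dec)
    con_path.append(con)

# Late-sample variability: the constant-gain estimate stays noisy,
# the decreasing-gain estimate has essentially converged.
late_dec = float(np.std(dec_path[-1000:]))
late_con = float(np.std(con_path[-1000:]))
```

Both estimators see identical data, but the one built for parameter drift manufactures volatility out of sampling noise, which is the self-fulfilling part of the story once its output feeds back into prices.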
Even though model averaging makes sense for any individual market participant, the result, when many forecasters follow this strategy, is that good models are driven from use and prices bounce around with parameter changes. Trying to correct for estimation uncertainty may harm markets. There is no good solution to this problem other than to estimate the right model, but that is always easier said than done. Interesting food for thought as many quants recalibrate their models at the end of the year.