How do we know whether a model is right if we are running a systematic managed futures program? This is not an easy question, because a significant amount of data is necessary to distinguish between models. On top of that, structural changes, regime changes, and parameter variability ensure that the best model yesterday will not be the best model today or tomorrow.
There are tools that can help with this process. One important direction that has not been effectively explored is robust control. Robust control starts from an "approximating model" and then perturbs it, penalizing parameter choices that fail under those perturbations. For example, if we have a simple moving average model with stops, a robust control method will search for the parameters that reduce the risk of loss when the true market dynamics are uncertain. This idea is not foreign to most modelers. While many managers have not explicitly used these techniques, they apply them intuitively whenever they explore alternative parameter choices or run multiple models within a program.
You can think of robust control as another method for dealing with market unknowns. Your model is supposed to make predictions, and the quality of those predictions is judged by performance: a higher-return model system is more predictive than a low-return one. However, given the level of uncertainty in markets, it is hard to say which set of parameters or which model will do best in the future. Hence, there is value in testing variations on a single model to find the environments in which it will do poorly. Under a min-max utility strategy, the parameter choice is not the one that maximizes performance through optimization, but the one that minimizes the maximum loss. Since the future is uncertain, don't pick the best-fitted model; pick the one that will not generate a severe loss in any environment.
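The min-max idea for a single model can be sketched as follows. This is a minimal illustration, not a production method: the scenario price paths are simulated with made-up drift and volatility numbers, and the moving-average rule and candidate lookbacks are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical price paths for three market environments:
# trending up, choppy, and trending down (illustrative drifts/vols).
scenarios = {
    "trend_up":   100 + rng.normal(0.05, 1.0, 750).cumsum(),
    "choppy":     100 + rng.normal(0.00, 1.0, 750).cumsum(),
    "trend_down": 100 + rng.normal(-0.05, 1.0, 750).cumsum(),
}

def ma_strategy_pnl(prices, lookback):
    """Total P&L of a simple long/short moving-average rule."""
    prices = np.asarray(prices)
    ma = np.convolve(prices, np.ones(lookback) / lookback, mode="valid")
    # Long when price is above its moving average, short otherwise.
    signal = np.sign(prices[lookback - 1:] - ma)
    daily = np.diff(prices[lookback - 1:])
    return float(np.sum(signal[:-1] * daily))

lookbacks = [10, 20, 50, 100]

# Classic optimization: pick the lookback with the best average P&L
# across environments.
best_avg = max(lookbacks, key=lambda lb: np.mean(
    [ma_strategy_pnl(p, lb) for p in scenarios.values()]))

# Robust (min-max) choice: pick the lookback whose WORST scenario
# P&L is the least bad.
robust = max(lookbacks, key=lambda lb: min(
    ma_strategy_pnl(p, lb) for p in scenarios.values()))
```

The two criteria can select different lookbacks: the average-return optimizer rewards strong performance in favorable regimes, while the min-max rule protects against the worst regime.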
The same approach can be applied when employing more than one model. By mixing the weights across models, the controller can minimize the worst-case outcome regardless of the future environment. The objective is not to find the combination of models that maximizes returns, but the combination that will not generate losses in unknown environments. The form of the robust control problem can be fit to the utility function of the controller-manager based on a set of criteria. The idea is to move beyond simple optimization and account for the fact that the future is uncertain, so you have to plan for worst-case scenarios. Researchers often do this implicitly, but there are explicit tools for solving the problem.
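The model-mixing version can be sketched the same way. Here the per-environment P&L numbers for two hypothetical models (a trend follower and a mean-reverter) are made up purely for illustration, and a simple grid search stands in for a formal min-max solver.

```python
import numpy as np

# Rows: environments; columns: model A (trend), model B (mean-reversion).
# These P&L figures are hypothetical, chosen only to illustrate the idea.
pnl = np.array([
    [ 5.0, -2.0],   # trending market: trend model wins
    [-3.0,  4.0],   # choppy market: mean-reversion model wins
    [ 2.0,  1.0],   # mixed market
])

# Grid-search the weight w on model A (1 - w on model B) and keep the
# weight whose worst environment outcome is the least bad.
weights = np.linspace(0.0, 1.0, 101)
worst_case = [min(pnl @ np.array([w, 1.0 - w])) for w in weights]
w_robust = float(weights[int(np.argmax(worst_case))])
```

Neither pure model is chosen: putting all weight on either one leaves a losing environment, while a blend keeps the worst-case outcome positive. That is the min-max logic applied at the portfolio-of-models level.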