Monday, March 27, 2017

What kind of model to choose?

"For people who like that kind of thing, that is the kind of thing they like" 
"History does not repeat itself. The historians repeat one another."
- Max Beerbohm

Approaches to modeling go through fads and fashions. What was learned yesterday by MBAs will be the model of choice tomorrow. Certain approaches are employed because that is the approach the modeler wants or likes. The same applies to strategies. A value investor is not likely to turn into a growth investor. He likes that sort of thing. A quant will not become a discretionary storyteller. He likes the precision of the model.

If you believe the world can be described by factors, those are the type of models you will use. If you believe in trend-following, then that is the approach that will be employed in your portfolio. Sometimes an approach will be used regardless of its efficacy, as measured by actual performance results. Damn the data, I like the elegance of my model. If one specification does not work, another will be tried in an effort to find the right factors, without ever looking at alternative approaches. For example, if some modelers use a Fama-French three-factor model, then others will repeat that approach. Everyone will start to use the Fama-French approach as a baseline. There is nothing wrong with this in concept, but it can be taken to extremes.

We are not arguing that there is anything inherently wrong with being a specific model follower or being biased toward a specific framework. We are not arguing for an atheoretical approach. However, focus on one approach can create a myopic view of the world at the expense of performance. The simple question should always be: does the model work? Whether trend-following versus factor modeling, systematic versus discretionary, longer-term versus short-term, the question is not acceptance of the approach by peers but whether it generates the results expected.

Sunday, March 26, 2017

Drawdowns - worth a closer look as a risk measure

While there is strong interest in the short-term return performance and volatility of hedge funds, drawdown is still the risk measure on which most investors place their focus. Maximum drawdown, as a risk measure, can be formalized as conditional expected drawdown, the tail mean of the maximum drawdown distribution. The figure below shows what that distribution will look like. What makes this risk measure especially useful is that it can be employed in an optimization and has a linear attribution to factors. Maximum drawdown can be traded off against return or against specific risk factors. It can be compared or related to the marginal-contribution-to-risk measure, which has gained popularity with many investors.

Perhaps more important, drawdowns are serially correlated with the return pattern of a manager. This means that if the returns of a manager show serial correlation, it will show up in the drawdown data as more significant drawdowns. The drawdown of a portfolio will also be related to the correlation across managers. See "Drawdown: From Practice to Theory and Back Again" by Lisa Goldberg and Ola Mahmoud.

Drawdowns are path dependent. How the returns of the manager evolve over time is relevant. It is notable that volatility and expected shortfall do not capture the impact of small cumulative losses the way a drawdown measure does. In this case, the path dependency within drawdowns provides useful information on risk.

Drawdown has been used as a descriptive measure of risk, but more formal analysis suggests that it would be a good measure to optimize against other factors. It may be more useful than expected shortfall or volatility in helping to minimize the worst-case scenarios faced by investors.
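As a rough illustration of the definitions above (this is a sketch, not code from the Goldberg-Mahmoud paper), maximum drawdown and conditional expected drawdown can be computed from a return series as follows; the window length and tail level are illustrative choices:

```python
import numpy as np

def max_drawdown(returns):
    """Maximum peak-to-trough decline of the cumulative return path."""
    wealth = np.cumprod(1.0 + np.asarray(returns, dtype=float))
    peaks = np.maximum.accumulate(wealth)
    return float(np.max(1.0 - wealth / peaks))

def conditional_expected_drawdown(returns, window=250, alpha=0.90):
    """Tail mean of the distribution of maximum drawdowns
    taken over rolling windows (the CED measure)."""
    returns = np.asarray(returns, dtype=float)
    mdds = np.array([max_drawdown(returns[i:i + window])
                     for i in range(len(returns) - window + 1)])
    threshold = np.quantile(mdds, alpha)   # the alpha-quantile (drawdown-at-risk)
    return float(mdds[mdds >= threshold].mean())  # mean of the worst (1-alpha) tail
```

Because CED is a tail mean of a distribution, it can sit in an optimization in the same way expected shortfall does, which is what makes the linear factor attribution possible.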

Dollar variations - the two main levels of uncertainty

“There is no sphere of human thought in which it is easier to show superficial cleverness and the appearance of superior wisdom than in discussing questions of currency and exchange.” 

- Winston Churchill, Speech to the House of Commons, Sept 29, 1949

What makes currency forecasting so difficult are two levels of uncertainty. This uncertainty is playing out today, with the dollar declining even as the Fed raises interest rates.

First, there is the uncertainty associated with relative policies and behavior. Since the exchange rate is a relative price, the forecaster always has to get the macroeconomics of two countries right. The policies of the Fed have to be contrasted with the policies of the ECB. The growth of the US has to be compared with the growth of Canada for CAD. It is always a problem of forecasting two economies at once.

Second, there are the changing dynamics of any regression results. Parameter uncertainty is greater because the weights on what is important are constantly changing. This has been shown to be an empirical reality. Today, the important variable may be monetary policy. Tomorrow, the most important weight may be on growth. The shifting weights cause differences in rational beliefs that may prove false for forecasting. Monetary policy is important, but its importance may be less today than last month. Two analysts may both be right about the impact of monetary policy, but their levels of emphasis may be wrong.

This is why heuristics and simple rules may be helpful. As a first pass, the price trend may be the most important indicator because it is the weighted value of all beliefs and opinions. It is the aggregation of all views on relative price and relative emphasis. If the weighted opinion is moving the dollar lower, it is sending us a signal on underlying economic variables.
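As an illustration of such a simple rule (not anyone's actual trading system), a basic moving-average comparison can stand in for the market's weighted opinion on direction; the lookback lengths are arbitrary:

```python
import numpy as np

def trend_signal(prices, fast=20, slow=100):
    """+1 (long) when the fast moving average sits above the slow one,
    -1 otherwise. A crude proxy for the aggregated view on direction."""
    prices = np.asarray(prices, dtype=float)
    def sma(n):
        # simple moving average over the most recent n prices
        return prices[-n:].mean()
    return 1 if sma(fast) > sma(slow) else -1
```

The point of such a rule is not that it forecasts the two economies; it summarizes what everyone else's forecasts have already done to the price.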

Thursday, March 16, 2017

Robust control and managed futures

How do we know whether a model is right if we are running a systematic managed futures program? This is not an easy question, because a significant amount of data is necessary to distinguish between models. On top of that, the uncertainty from structural change, regime change, and parameter variability ensures that the best model yesterday will not be the best today or tomorrow.

There are tools that can help with the process. One important direction that has not been effectively explored is robust control. Robust control starts from an "approximating model," which is then perturbed, with parameter choices penalized when they fail under the perturbations. In this case, if we have a simple moving average model with stops, the robust control method will find the parameters that reduce the risk of loss when there is uncertainty. This idea is not foreign to most modelers. While many managers have not explicitly used these techniques, the same thing is done intuitively whenever parameter choices are explored or multiple models are used within a program.

You can think of robust control as another method for dealing with market unknowns. Your model is supposed to make predictions, and the quality of those predictions is judged by performance. A higher-return model is more predictive than a low-return model. However, given the level of uncertainty in the market, it is hard to say which set of parameters or which model will do best in the future. Hence, there is value in testing variations on a single model in order to find the environments in which a model will do poorly. Under a min-max utility strategy, the parameter choice may not be the best-performing set from an optimization, but the one that minimizes the maximum loss. Since there is uncertainty, don't find the best-fitted model but one that will not generate a large loss in any environment.
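A minimal sketch of that min-max idea, using a toy long/short moving-average rule as the "approximating model"; the strategy, the stress scenarios passed in, and the candidate lookbacks are all illustrative:

```python
import numpy as np

def strategy_pnl(prices, lookback):
    """P&L of a toy long/short moving-average trend rule: hold +1 when
    yesterday's price was above its moving average, -1 when below."""
    prices = np.asarray(prices, dtype=float)
    ma = np.convolve(prices, np.ones(lookback) / lookback, mode="valid")
    pos = np.sign(prices[lookback - 1:-1] - ma[:-1])      # signal known yesterday
    return float(np.sum(pos * np.diff(prices[lookback - 1:])))

def minmax_lookback(scenarios, candidates=(10, 20, 50)):
    """Pick the lookback whose WORST P&L across all scenarios is largest,
    rather than the one with the best fitted performance."""
    worst = {n: min(strategy_pnl(p, n) for p in scenarios) for n in candidates}
    return max(worst, key=worst.get)
```

The contrast with ordinary optimization is only in the objective: instead of `max` over average performance, it is `max` over the minimum across environments.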

The same approach can be applied when employing more than one model. By mixing weights across models, the controller can minimize the worst case regardless of the future environment. The objective is not to find the combination of models that maximizes returns but the combination that will not generate losses in unknown environments. The form of the robust control problem can be fit to the utility function of the controller-manager based on a set of criteria. The idea is to move beyond simple optimization and account for the fact that the future is uncertain, so you have to plan for worst-case scenarios. Researchers often do this implicitly, but there are explicit tools to solve the problem.
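The model-mixing version can be sketched the same way: grid over the weight on one model and keep the weight whose worst scenario P&L is largest. The inputs are per-scenario P&Ls for each model, and everything here is illustrative:

```python
import numpy as np

def robust_weight(pnl_a, pnl_b, grid=np.linspace(0.0, 1.0, 21)):
    """Weight on model A (remainder on model B) that maximizes the
    worst-case blended P&L across scenarios."""
    pnl_a, pnl_b = np.asarray(pnl_a, float), np.asarray(pnl_b, float)
    worst = [np.min(w * pnl_a + (1.0 - w) * pnl_b) for w in grid]
    return float(grid[int(np.argmax(worst))])
```

With two models that lose in different environments, the min-max weight lands in between them rather than on whichever model looked best historically, which is exactly the diversification intuition made explicit.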