Monday, January 20, 2020

Ensemble modeling - A solution to ambiguity and improved forecasting

Models will fail. Models will miss variables. Models will change with the sample size used. We are imperfect modelers. It is hard to find the right models because it is hard to differentiate between theories. Data relationships change, so the importance of some variables changes through time. Markets see structural changes and regime changes. Hence, a good model today may not work tomorrow. These fundamental issues are not new and are discussed even in introductory econometrics classes, yet the real world of market forecasting has to find meaningful solutions.

Of course, good theory is paramount for driving an econometric model, but advancements in machine learning deemphasize theory formation and allow the data to speak for themselves. An atheoretical approach to modeling will focus on measured success without theory. This is problematic for theorists, but ultimately there needs to be a focus on success. A number of ensemble modeling approaches have been developed to help with prediction. Two major schools can be applied to forecasting problems. The first is the simple approach of model averaging, introduced by Clive Granger fifty years ago as a combination of forecasts. The second is a more formal set of processes for developing alternative models used in machine learning.

The first ensemble approach:

Averaging or weighting models

  • Take a set of model forecasts and average them or form some type of weighting scheme across the multiple models. This has been shown to be successful, especially when the models used are uncorrelated. However, this approach has not been formalized into a process for finding alternative models.
  • The averaging can be done through some weighting or voting scheme, with better models given greater weight. A large literature has developed on how combinations of forecasts can reduce estimation error. For example, a simple application to trend-following would be to combine the signals from different trend lengths.
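The trend-following example can be sketched in a few lines. This is a minimal illustration, not any manager's actual rule: the price series, the lookback windows, and the moving-average signal definition are all hypothetical choices made here for demonstration.

```python
import numpy as np

def trend_signal(prices, lookback):
    """+1 if the last price is above its lookback moving average, else -1."""
    ma = np.mean(prices[-lookback:])
    return 1.0 if prices[-1] > ma else -1.0

def combined_signal(prices, lookbacks, weights=None):
    """Combine several trend signals by averaging, or by a supplied
    weighting scheme where better models get greater weight."""
    signals = np.array([trend_signal(prices, lb) for lb in lookbacks])
    if weights is None:
        weights = np.full(len(lookbacks), 1.0 / len(lookbacks))  # simple average
    return float(np.dot(weights, signals))

prices = np.array([100, 101, 103, 102, 105, 107, 106, 108, 110, 109.0])
sig = combined_signal(prices, lookbacks=[3, 5, 8])  # combined vote in [-1, 1]
```

Disagreement across lookbacks pulls the combined signal toward zero, which is exactly the diversification-of-forecasts effect that model averaging relies on.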
The second set of ensemble approaches, which has come out of statistics and computer science, includes:

Bagging models
  • Bagging stands for bootstrap aggregating, a variation on the theme of model averaging. At a very high level, bagging takes a single dataset and tests over different subsamples, or "bags," of data to generate different model choices. The bags are created with resampling. A modeler who applies the same basic approach across different markets can be thought of as using a bagging method. The forecast results are bundled or averaged to create the bagged model results.
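A high-level sketch of the bagging mechanics, on an entirely hypothetical toy dataset: the same simple model is fit on many bootstrap "bags" drawn with replacement, and the fitted results are averaged.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data (hypothetical): a return series driven by one signal plus noise
n = 200
signal = rng.normal(size=n)
returns = 0.5 * signal + rng.normal(scale=0.5, size=n)

def ols_slope(x, y):
    """OLS slope through the origin."""
    return float(np.dot(x, y) / np.dot(x, x))

def bagged_slope(x, y, n_bags=50):
    """Bootstrap aggregating: fit the same simple model on resampled
    "bags" of the data and average the fitted parameters."""
    slopes = []
    for _ in range(n_bags):
        idx = rng.integers(0, len(x), size=len(x))  # resample with replacement
        slopes.append(ols_slope(x[idx], y[idx]))
    return float(np.mean(slopes))

slope_est = bagged_slope(signal, returns)  # averaged across the bags
```

The averaging across bags is what stabilizes the estimate; with a more flexible, high-variance learner in place of OLS, the variance-reduction benefit would be larger.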
Boosting models
  • A boosting approach uses the errors of past predictions to help build new models. The idea is to find what has not worked with one model and build a new model that will reduce or offset the failures of the prior model. It can be described as using a modeling method to turn a set of weak learners into a stronger learner. A model can be developed, but it will have some errors. A new model can then be formed that focuses on the errors of the prior model; a model adjusted to account for its errors can be thought of as boosted. Using the gradient boosting method, a model is formed which will have residuals. A new model is formed to explain the residuals (if they are not random), which is then added to the original model.
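The residual-fitting loop described above can be sketched with regression stumps as the weak learners. The data, the stump learner, and the learning rate are all illustrative assumptions, not the method of any particular paper.

```python
import numpy as np

def fit_stump(x, y):
    """Best single-split regression stump: a threshold and two leaf means."""
    best = None
    for t in np.unique(x)[:-1]:                   # candidate split points
        left, right = y[x <= t].mean(), y[x > t].mean()
        sse = np.sum((np.where(x <= t, left, right) - y) ** 2)
        if best is None or sse < best[0]:
            best = (sse, t, left, right)
    _, t, left, right = best
    return lambda z: np.where(z <= t, left, right)

def boost(x, y, n_rounds=20, lr=0.5):
    """Gradient boosting for squared error: each new weak learner
    is fit to the residuals of the ensemble built so far."""
    pred = np.zeros_like(y)
    for _ in range(n_rounds):
        resid = y - pred              # errors of the current ensemble
        stump = fit_stump(x, resid)   # weak learner fit to those errors
        pred += lr * stump(x)         # add it (shrunken) to the ensemble
    return pred                        # a full version would also store the stumps

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-1, 1, 100))
y = np.sin(3 * x)                      # toy nonlinear target
pred = boost(x, y)
```

Each stump alone is a weak learner, but the sum of residual-fitted stumps tracks the nonlinear target closely, which is the "weak learners into a stronger learner" idea in miniature.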
These ensemble approaches are important methods for improving prediction rather than simply concluding that a model has failed. While we have provided only a very simple overview, ensemble techniques have been used for decades to improve quant results.

Adding uncorrelated models to get better forecasts, training and testing on different data sets to find common parameters or different models, and building models that address failure and reduce estimation error have all been used to improve forecasts; however, there are now more structured approaches for generating more efficient ensembles.

Sunday, January 19, 2020

Principal Component Analysis of alternative risk premia shows unique groupings

Principal component analysis (PCA) is a useful tool for providing return groupings of different alternative risk premia strategies. PCA is a simple form of dimensionality reduction that is useful for factor extraction and data transformation. It can deepen our understanding of how alternative risk premia differ from traditional equity and bond benchmarks beyond correlation or beta measures.
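As a sketch of the mechanics only (not the authors' data or procedure), PCA can be run on a hypothetical panel of strategy returns via the eigendecomposition of the correlation matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical monthly returns for 6 ARP strategies driven by 2 common factors
n_obs, n_strats = 120, 6
factors = rng.normal(size=(n_obs, 2))
loadings = rng.normal(size=(2, n_strats))
returns = factors @ loadings + 0.3 * rng.normal(size=(n_obs, n_strats))

# standardize, then eigendecompose the correlation matrix
z = (returns - returns.mean(axis=0)) / returns.std(axis=0)
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)       # eigh returns ascending order
order = np.argsort(eigvals)[::-1]             # sort PCs by variance explained
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()           # share of variance per PC
pc_scores = z @ eigvecs[:, :2]                # each period's PC1 and PC2 scores
```

Plotting each strategy's loadings on the first two components is what produces the kind of 2-dimensional groupings discussed below.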

The following analysis is available in a thorough research paper, "A Framework for Risk Premia Investing: Anywhere to Hide" by Kari Vatanen and Antti Suhonen. We have looked at their work in previous posts on beta stability, "ARP strategies and market beta - Check the stability when constructing portfolios" and with cluster analysis in "Alternative risk premia and the advantage of cluster analysis".

Earlier in their paper, they show the differences in equity and bond betas across return quantiles for equity and bond benchmarks. In another section, they run principal components on the Sharpe ratios for each return quantile for equities and bonds. ARP strategies seem to be concave with respect to equity returns and convex with respect to bond returns. A deeper look suggests that, with respect to equity returns, the pattern for PC2 is like a long call option ratio spread. The PC2 for bonds looks like a strangle.

ARP strategies do well in the mid-range return quantiles for equities but tail off for high and low returns. Of course, this is in the context that equity betas are still low across all quantiles. For the bond benchmark, there is also lower PC1 and PC2 at the lower quantile. While this information is interesting, its value is limited since this quantile research looks at all strategies together.

There is more information when PC1 for both benchmarks is shown on a 2-dimensional plot of bond PC1 versus equity PC1. In this format, we can see clear "offensive" and "defensive" ARP strategies, where an investor will be more positively influenced by equity returns (offensive) or bond returns (defensive). This can be a helpful breakdown for ARP portfolio construction. Carry will get you more offense, and trend will get you more defense.

Additional information can be gained by looking at PC1 and PC2 for the equity and bond benchmarks. In the case of the equity benchmark, an investor can think about high PC1 and limited PC2, which is carry, or low PC1 and mixed PC2, which includes fixed income carry and value. There is a third group, with high PC2 and mixed PC1, which could be considered equity neutral strategies. This category seems to be dominated by trend and momentum.

The first and second PCs using the bond benchmark seem to break down the ARP world into positive and negative PC1. These clusters seem to be better defined than those seen using the equity benchmark.

The principal component analysis does a nice job of providing data transformations that offer insight on how to group different factor strategies. More specific analysis is required, but PCA provides a base framework.

Thursday, January 16, 2020

Centuries of interest rate data - A challenge for investors today

There are many explanations for the low interest rates we currently see around the globe. For market-focused analysts, it is the decline in bond market risk premia. For the macroeconomist, it is "secular stagnation" or some narrative around excess demand and supply. The general view is that low interest rates are an abnormal problem that has to be solved, and that rates will eventually move back to normal, with normal being higher.

The research by Paul Schmelzing of the Bank of England in Staff Working Paper #845, "Eight centuries of global real interest rates, R-G and "suprasecular" decline, 1311-2018", provides an alternative narrative on levels of interest rates that is challenging for all economists and analysts. A study of real interest rates over centuries of data suggests that rates have been declining for hundreds of years. Current rates may not be abnormal.
  • The global low interest rates of today are normal. 
  • The trend in real interest rates has been going down for centuries. 
  • Negative real interest rates are not abnormal. 
  • Extrapolating what rates may do from the last few decades is perilous and not informative.
  • The spike and decline of rates over the last 100 years is unusual.
  • The volatility of inflation and real rates has been abnormal.

There is power in economic history and in looking at very long-run data. The author of this monumental work does not oversell his results, although he makes some interesting references to the Piketty r - g debate on inequality by observing that the facts of high "r" relative to "g" are not on his side.

What centuries of data tell us is that any extrapolative forecasting of what is long-term normal or what to expect with interest rates is fraught with peril. You can make your forecasts on what long-rates should do, but tell us why this time is different. 

ARP strategies and market beta - Check the stability when constructing portfolios

Alternative risk premia or cross-sectional factor strategies generally have low correlations with equity and bond market betas, but that is not the same thing as no market beta exposure. Market betas will differ by strategy and will also change with the market environment. A portfolio of ARP strategies will have implicit beta exposure, and investors need to account for these risks when constructing a portfolio.

Using tables and figures from the paper "A Framework for Risk Premia Investing: Anywhere to Hide" by Kari Vatanen and Antti Suhonen, we can see that some ARP strategies are equity beta sensitive and others are bond sensitive. The simplest breakdown is that carry strategies have, on average, more equity beta exposure, while momentum, trend, and value are more bond sensitive. If you are looking for defensive strategies, focus on momentum and trend. Carry may have low equity beta but will still be more sensitive to a risky asset move. This is one of the clear reasons why carry and trend provide good diversification benefit when combined in a portfolio.

These alternative risk premia strategies will also be affected differently by "good" and "bad" times. The beta in up markets may not be the same as the beta in down markets as seen in the table below which compares betas in different quintiles.
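The idea of betas that differ across market environments can be sketched on simulated data. Everything here is hypothetical: a toy strategy is constructed so that its benchmark beta rises in the worst benchmark months, and the beta is then re-estimated quintile by quintile.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical data: a strategy whose true beta jumps in bad benchmark months
n = 1000
mkt = rng.normal(0, 0.04, n)                              # benchmark returns
bad = mkt < np.quantile(mkt, 0.2)                         # worst 20% of months
arp = np.where(bad, 0.5, 0.1) * mkt + rng.normal(0, 0.01, n)

def beta(x, y):
    """OLS beta of y on x."""
    x_c, y_c = x - x.mean(), y - y.mean()
    return float(np.dot(x_c, y_c) / np.dot(x_c, x_c))

# split the benchmark months into quintiles and estimate beta within each
edges = np.quantile(mkt, [0.2, 0.4, 0.6, 0.8])
bucket = np.digitize(mkt, edges)                          # 0 = worst quintile
betas = [beta(mkt[bucket == q], arp[bucket == q]) for q in range(5)]
```

A full-sample regression on this series would report a modest beta, while the quintile breakdown reveals the much larger exposure hiding in the worst benchmark months, which is exactly the beta-stability risk discussed here.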

The differences in betas can be visually displayed in a graph of betas for the lowest quintile versus all other quintiles. The distance from the straight line tells us the amount of beta variation risk. This dispersion is what will surprise investors who expect protection from low beta and do not get it when needed for some strategies, or who get more diversification than expected with other strategies. Data points below the line indicate that the estimated beta in the lowest quintile is lower than in the other quintiles. Those above the line suggest greater beta risk.

For a selected number of ARP strategies versus bond betas, the researchers found that bond beta exposure increases in low bond return periods. This is not true for all ARP strategies; the graph displays those with the biggest differences between quintiles. Notably, there is less dispersion across equity betas than across the sensitive bond beta strategies. The risk with carry is not that there will be a large deviation in equity beta during the worst equity quintile, but that carry will become more "bad bond" sensitive in the worst bond quintiles.

One of the core reasons for holding a portfolio of alternative risk premia is to provide diversification for core equity and bond holdings. All ARPs are not alike with respect to their equity and bond betas, and their stability changes with the market environment. Checking their betas and stressing them across different market environments is critical for understanding risk exposures.