Wednesday, April 29, 2015

Bernanke on Taylor Rule analysis

Ben Bernanke posted on his blog his views on the Taylor Rule. It is a good recap of the basics of the rule, and it provides his take on why it is not an effective tool for managing monetary policy. There is just too much flexibility in how to structure the rule.

First, there is the issue of determining the right inflation rate to use in the rule. The best choice is still core PCE, the Fed's preferred measure. Second, and more importantly, there is the issue of measuring the output gap. You have to know both actual and potential GDP to find the gap, and there is significant disagreement on how to measure potential output and how to form an effective real-time estimate. The Fed's internal numbers are not available to the public, and the CBO estimates may have significant forecast error. Third, there is disagreement on the weights that should be applied in the rule; this is the policy reaction function. Bernanke stated that he thinks the weight on the output gap should be higher than what Taylor used, and small changes in the weights make a big difference in judging whether monetary policy was wrong in the 2000s. Finally, there is the measure of the equilibrium real rate of interest. Taylor assumes a long-term real rate of 2%. If the natural real rate has fallen, you will get a different policy answer.
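These competing specifications can be made concrete in a short sketch. Below is a minimal Taylor Rule calculator with the contested inputs, the gap weights and the 2% equilibrium real rate, exposed as parameters; the function name and sample inputs are mine, chosen for illustration.

```python
def taylor_rate(inflation, output_gap, r_star=2.0, pi_target=2.0,
                w_pi=0.5, w_gap=0.5):
    """Prescribed nominal policy rate in percent.

    r_star:    equilibrium real rate (Taylor's assumption is 2%)
    w_pi:      weight on the inflation gap (Taylor used 0.5)
    w_gap:     weight on the output gap (Taylor used 0.5)
    """
    return (r_star + inflation
            + w_pi * (inflation - pi_target)
            + w_gap * output_gap)

# Original Taylor (1993) weights:
print(taylor_rate(inflation=1.5, output_gap=-1.0))              # 2.75

# A higher output-gap weight, as Bernanke prefers, with the same data:
print(taylor_rate(inflation=1.5, output_gap=-1.0, w_gap=1.0))   # 2.25
```

The half-point difference in prescribed rates from a single weight change is exactly why small specification choices can flip the verdict on past policy.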

Nevertheless, the Taylor Rule framework can be used to help focus any interest rate discussion. When new data comes out, the question is simple: does it affect inflation, the output gap, or the equilibrium rate of interest?



The graphs above show the original Taylor Rule versus an adjusted Taylor Rule with a different inflation measure and output gap weight. They tell different stories, but it is clear that rates should be higher and the Fed should be taking some action.

Moving from calendar-based to data-dependent Fed guidance - a step backwards

The Fed has been refining policy transparency in the Yellen era by moving from calendar-based forward guidance to the newer operating guidance of being "data dependent". Calendar-based guidance, which said action was more or less likely in the near term, produced clear time-dependent forecasts of Fed action in forward rates; expected action was pushed up or back based on the guidance given. Data-dependent guidance may be less informative, at least until we get a better idea of how it is going to work.

Central bank action should always be data dependent. How else should good decisions be made? The relevant issue is how the data are weighted by central bankers. This is the key to providing transparency to the markets, yet it is the missing link from the central bank. How much weight is placed on the output gap? How much weight is placed on inflation or macro stability? In macro language, what is the Fed's reaction function? The Fed is not disclosing this information. The Fed certainly does not want to constrain itself by fixing the reaction function, and the function could be evolving. The burden is on market participants to figure out the data weights of policy-makers.
FED ACTION PUSHED INTO FUTURE 
Life is bare, gloom and mis'ry everywhere
Stormy weather
Just can't get my poor self together,
- Billie Holiday, "Stormy Weather"

The first quarter US GDP number of 0.2% surprised on the downside. The "data dependent" Fed will now need supporting evidence to offset the "bad weather" quarter, so the cautious behavior of the Fed will continue. This was not just a weather outlier. The plunge in exports suggests that the dollar's gains are taking a bite out of US growth.

WATCH ATLANTA FED GDP NOW FORECAST 
We watch the Atlanta Fed's new GDPNow forecast closely to see how the GDP numbers evolve through the quarter as live data are disclosed. This forecast was showing the weak first quarter GDP numbers a month early and was below consensus at least two months earlier. If the Fed was watching this number closely, it would not have been surprised by the 0.2% estimate.

Monday, April 27, 2015

Circle of competence and circle of trust

“Everybody's got a different circle of competence. The important thing is not how big the circle is. The important thing is staying inside the circle.”

- Warren Buffett

The difference in performance between many macro and systematic managers is not about their skill but their circle of competence. This is a simple principle that can apply to all businesses. For the systematic manager, the circle of competence surrounds models and decision rules that have been back-tested and reviewed. The discretionary trader believes in his competency at processing information as it enters the market.

The circle is determined by the type of models or decision process employed. The type of models employed is related to competence with working across markets, data, and time-frames. The long-term trader does not feel comfortable making many short-term decisions. The short-term trader's comfort zone is in taking trades for quick profit grabs. Some traders have to stay within a small market sector while others need to be diversified. One key skill is knowing your competence and staying there.

The circle of trust with investors is that they are comfortable with the manager's competency and believe there will not be style drift. The trust is that each manager "knows thyself".

Tuesday, April 21, 2015

Over-diversification and hedge funds



The gains from diversification have been called by some the only "free lunch" in finance. It is almost like apple pie - no one should be against diversification - but it is possible to have too much diversification. (No different than having too much apple pie.)

Adding more stocks will diversify away risk until you reach the market portfolio, whose risk is non-diversifiable. The same applies to hedge fund strategies. As you add more hedge funds within a strategy, there will be diminishing diversification. In fact, there will be diminishing gains on just about any feature. Drawdowns will decline but will not be eliminated. Returns will smooth toward a strategy norm. At that point, paying fees for a fully diversified portfolio makes no sense. Adding another manager and paying their fees without getting any portfolio value is a loser's game; there will be a performance drag. The question is determining the optimal number of hedge funds, beyond which there is minimal value added. There is a law of diminishing marginal returns for hedge funds.

This is not a simple question. There has been significant controversy over the optimal number of stocks. Some have argued that it can be as low as ten, while others say the number is closer to 40. Most will say that a set of 30 is a good number.

With respect to hedge funds, there has been less research overall, but enough for us to make some generalizations. A number greater than 10 is pushing the limit on where diversification is maximized. The outer limit to diversification comes at around 20. This will vary by strategy and through time, but the limit is lower than what you will find with an individual stock portfolio. An example puts this in perspective. If you move from one to ten managers in managed futures, you will see a decline of about 35 percent in volatility. If you add another ten managers, you will see an additional reduction of only 5 to 10 percent in volatility. If you have to track twice the managers and only receive a marginal reduction in risk, you are not receiving a good return-to-risk trade-off.
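The arithmetic behind diminishing diversification can be sketched with the standard equal-weight portfolio formula. The 0.4 pairwise correlation below is an illustrative assumption, not a measured managed futures figure; the function name is mine.

```python
import math

def portfolio_vol_ratio(n, rho):
    """Volatility of n equally weighted managers (equal vol, common
    pairwise correlation rho) relative to holding a single manager:
    sqrt(1/n + rho * (1 - 1/n))."""
    return math.sqrt(1.0 / n + rho * (1.0 - 1.0 / n))

# With an assumed correlation of 0.4, most of the volatility reduction
# arrives by ten managers; the next ten add only a sliver.
for n in (1, 5, 10, 20):
    print(n, round(portfolio_vol_ratio(n, 0.4), 3))
```

As rho rises toward the correlation floor, the ratio flattens out quickly, which is the law of diminishing marginal returns the post describes.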

If you add managers across strategies, the diversification may be greater, but the principle is the same. Adding more managers will have a marginal benefit. If you are conservative and want the maximum diversification, the simple rule of an even dozen is a workable first pass on the problem.  

Managed futures diversification - the strategy dimension matters



All managed futures managers diversify. In fact, I would venture that there is more cross-asset diversification in managed futures than any other hedge fund strategy. The diversification is across commodities, fixed income, rates, foreign exchange, and stock indices. It is the key to their success and survival.  The premise is to find as many unique market opportunities as possible. This is only possible through examining and trading a broad set of markets across all market sectors.

However, the diversification discussion usually focuses only on the number of markets traded. In reality, the diversification dimension that may be more important for a manager's success and survival is strategy. It is strategy diversification which smooths out drawdowns and returns. It is strategy diversification which keeps managers in the game and often gives investors the lower volatility desired. Strategy diversification can include trends, patterns, and fundamentals. The ways signals can be generated are highly varied.

Trend-followers, for example, face drawdowns when trends are not present. These drawdowns can be severe. The reason for return failure is simple: there are no trends. A program that focuses on one phenomenon will fail if that phenomenon does not occur. There can be long periods when trends are limited in number or strength for the simple reason that the markets are in equilibrium. In this case, using more than one strategy will smooth return profiles.

Along with strategy diversification is time diversification: looking at price behavior over different holding periods. The behavior of prices over a two-week period may be very different from that over a two-month period. The drivers of performance over the long run will be more fundamental, while short-run moves will be driven by flows.

Diversification into more markets may not be helpful if the markets do not have trends. Those markets that do have trends will receive lower exposure or risk as more markets are added. The result is muted returns at a fixed volatility. Diversification through the number of markets can be a drag on performance. The solution is to add more strategies. Of course, adding more strategies will mute some of the positive value of trend-following; there will be less positive skew in a strategy-diversified portfolio. Diversification has to be balanced with return. It cannot be looked at in isolation.

Saturday, April 18, 2015

Black swans and white swans - managed futures



Our thoughts about tail risk have been profoundly changed by Nassim Taleb's concept of the black swan event; however, swans come in all colors, and these other swans can be just as problematic. The concept of the black swan focused on highly improbable events that could have a major market impact. The actual risk is a combination of the probability of the event times the cost. You may not be able to forecast improbable events, but you still have to protect against their potentially high negative consequences. The third part of a black swan, which has been given less focus, is the fact that we form stories to make these rare events seem more likely. We then become slaves of the past.
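The probability-times-cost point fits in a toy calculation. All numbers below are purely illustrative; the point is only that a very unlikely event can still dominate expected loss if its cost is large enough.

```python
def expected_loss(prob, cost):
    """Expected loss = probability of the event x cost if it occurs."""
    return prob * cost

# Illustrative numbers only: a likely-but-small event vs a rare-but-huge one.
white_swan = expected_loss(prob=0.50, cost=2)    # frequent, small cost
black_swan = expected_loss(prob=0.01, cost=500)  # rare, catastrophic cost

print(white_swan, black_swan, black_swan > white_swan)
```

The rare event carries five times the expected loss here despite its 1% probability, which is why the cost side of the product cannot be ignored just because the probability is small.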

You don't know when black swans will occur, but you realize that they are possible. The protective action required by an investor is to diversify in order to minimize the costs of the negative event. The diversification can take two forms: diversification across markets and across strategies. A combination of both is most effective when the form of the negative event is unclear. With black swan events, the key risk management principle is diversification, but since markets may correlate during an extreme, strategy differences are valuable.

There are also grey swan and white swan events. Grey swans are those that are probable, or at least measurable, and will have a strong impact. White swan events are those that are certain or likely to occur and whose impact is measurable. While black swans need diversification as protection, grey and white swans can be exploited. These are events that cause harm because of poor judgement and errors in thinking; the harm comes from not reacting to what is measurable or likely.

There is shame in being bitten by white or grey swan events because they could have been managed. We lose because of our own conceit. The 2008 market events may not be considered a black swan because there were enough signs of an impending problem.

For swans of other colors, there are strategies that can be effective at exploiting the opportunities. Following trends is an easy way to exploit non-black swans. Trading on historical risk premia is another simple approach. You don't have to forecast the event. Rather, a quick response before others may be a middle ground that can help take advantage of more likely events.

Where am I going to get negative duration?



Even with a lower natural rate of interest, most investors will agree that interest rates are biased upwards over time. The timing of this rise is one of the big macro theme trades of 2015. The consensus looks like we will have to wait longer for a rise, but it will be coming. Of course, the wait for rising rates was long in Japan, and many a trader was hurt with JGB trades, but it is hard to argue that negative yields will be the long-term norm for Europe.

Investors still cling to their fixed income allocations in the hope that their managers will be able to shorten duration when rates rise. There has also been the development of multi-strategy bond funds as a means of better managing duration. The key response of most fixed income managers to a rate rise will be to cut duration versus a benchmark. Most fixed income managers will take pride in the fact that they had lower duration and lost less than the Barclays Aggregate Index, but that does not change the fact that they will have a negative absolute return.

This approach is not much different than the equity manager who has a lower beta than the stock index. It is the driver for many long/short equity hedge funds. The safe strategy is learning to take less beta exposure.

The general fixed income bet of lower duration was still an effective strategy over the last few decades because bonds were in an extended rally. Bond investors made less money but still made money by holding bonds. The future environment will be different. First, with lower coupon rates, there is less cushion in holding a bond portfolio. Second, overall durations have extended with lower yields.
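The second point can be checked with basic bond math: duration lengthens as coupons and yields fall. A rough sketch for a plain annual-pay bullet bond follows; the function name and the sample bonds are mine, chosen only to illustrate the effect.

```python
def macaulay_duration(coupon, ytm, years, face=100.0):
    """Macaulay duration (years) of an annual-pay bullet bond.

    coupon: annual coupon rate (e.g. 0.06 for 6%)
    ytm:    yield to maturity (annual, e.g. 0.06)
    """
    cashflows = [coupon * face for _ in range(years)]
    cashflows[-1] += face  # principal repaid at maturity
    pv = [cf / (1 + ytm) ** t for t, cf in enumerate(cashflows, start=1)]
    price = sum(pv)
    return sum(t * v for t, v in enumerate(pv, start=1)) / price

# A 10-year par bond at 6% vs the same bond at 2%: lower coupons and
# yields push duration out by more than a year.
print(round(macaulay_duration(0.06, 0.06, 10), 2))  # about 7.80 years
print(round(macaulay_duration(0.02, 0.02, 10), 2))  # about 9.16 years
```

More duration per dollar of coupon is exactly the "less cushion" problem: the same yield rise now inflicts a larger price loss with less income to offset it.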

The question is where a bond investor can get negative duration. One of the few places is the managed futures and global macro space. Shorting bonds will provide the negative duration. There is no long bias, so a rising rate trend will create an opportunity for trades with negative duration exposure. The resurgence of managed futures may be linked to this fixed income hedge.

Thursday, April 16, 2015

REITs and interest rate increases


Real estate has been one of the key alternative investments for investors. It provides positive cash flows and should also do well if there is an increase in inflation, because the underlying real asset should increase in value and rents should also increase. Higher prices can be passed through as higher lease rates.

However, for many investors a more relevant question is what will happen to REITs if interest rates increase. Will REITs still be a good investment? The answer is mixed. During the biggest moves in interest rates over the last 20 years, REITs underperformed the S&P 500. There was a drag of 450 bps versus stocks during periods when the average increase in yields was around 150 bps. Some of the yield increases were caused by increases in real rates, others by higher expected inflation. The reasons have been mixed, but the results have been the same. There were rising rate periods in '94, '99, and '13 when REIT returns were negative.

It is unclear that REITs are an effective investment during a rising interest rate environment. Asset allocation should not be about what has worked in the past but what will be effective during future scenarios. If you expect higher rates, REITs as an alternative investment may be disappointing.


Sunday, April 12, 2015

True economic uncertainty is different from most risk measures



Research on economic uncertainty has been growing since the Financial Crisis. For many, the crisis was a surprise that has generated skepticism of conventional thinking. Models and predictions were wrong, and there has been growing skepticism about what can be predicted. There is an increased sense that conventional models don't work and that there is a greater degree of unknown about the economy.

Investors, forecasters, and businesses have all faced a new degree of uncertainty, which has generated a wave of new research on how to measure uncertainty as well as how to determine its impact on markets. This work started before the Crisis but has picked up a new sense of urgency. Wall Street firms have generated risk aversion indices to measure stress. There has also been the development of macroeconomic surprise indices to measure when forecasts differ from macro announcements; these measure the unanticipated part of an announcement. Some Fed banks have also developed stress indices to measure changes in risk. There has also been a focus on market volatility, like the VIX index, as a measure of fear. The meme of risk-on/risk-off behavior was the rage for a period after the 2008 decline.
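A surprise index of this kind can be sketched in a few lines: take the forecast miss and standardize it by the historical dispersion of past misses so announcements in different units are comparable. This is a generic sketch, not any vendor's actual methodology; the function name and history are illustrative assumptions.

```python
import statistics

def standardized_surprise(actual, consensus, past_surprises):
    """Forecast miss scaled by the historical volatility of misses.

    A reading near zero means the release matched expectations; large
    absolute values flag the unanticipated part of the announcement.
    """
    raw = actual - consensus
    scale = statistics.pstdev(past_surprises) or 1.0  # guard against zero
    return raw / scale

# Illustrative history of past forecast misses for one data series:
history = [0.3, -0.1, 0.2, -0.4, 0.0]
print(round(standardized_surprise(actual=0.2, consensus=1.0,
                                  past_surprises=history), 2))
```

An aggregate index would average such standardized readings across many releases, which is the basic construction behind the macro surprise indices the paragraph mentions.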

There was early confusion between uncertainty and risk aversion in some of this work; however, there has been growing clarity on these measures. Uncertainty measures have become more specifically focused than just measures of option volatility, such as the policy uncertainty indices based on news. More recently, there has been new research on quantifiable measures of uncertainty. What this recent research has done is separate market volatility from uncertainty measures to make an important distinction about the impact on the real economy. For example, see the work Measuring Uncertainty in the March 2015 American Economic Review.

This work focuses on uncertainty as measured by the forecast error across a broad set of economic series. The work makes a distinction between increased volatility, which may be forecastable, and forecast error, which measures what was missed by any forecast. The authors find three big periods of uncertainty: 1973-74, 1981-82, and 2007-09. These correspond to the deepest and longest recessions in the post-WWII period.

Aggregate macro uncertainty is strong but only in a few periods. This measure of uncertainty is different from stock market volatility or measures of news uncertainty. Macroeconomic uncertainty is strongly counter-cyclical and will have as strong an impact as a monetary shock, but these episodes may not come as frequently as other uncertainty measures suggest. The other measures, such as the VIX, are important but may not have the same macro ramifications as broad measures of forecast uncertainty.

  

Trend-following and De-risking



One of the key tactical asset allocation decisions is determining when to get out of markets. This could be called the decision to de-risk. Investors usually want to keep markets' upside volatility and avoid only the downside. Trend-following can provide a simple framework for those who want to cut risk in market exposures. Turned on its head, trend-following does not have to be about taking risk but about mitigating risk.

Trend-following can tell you both what to buy and what to sell, regardless of whether you are a long/short investor, a long-only investor, an ETF investor, or just a stock investor. It is just learning how to employ and interpret the signals generated. In a risky environment, the most important decision may be determining when to get out of trades. Trend-following can do that in an uncertain environment when the fundamentals are unclear.

If the market is moving lower, cut risk by cutting positions. This process does not have to include taking new risks. An exit from one market does not require increasing exposure in other markets; the action could be going to cash. If more markets are moving lower, less risk will be taken through delevering. The choice of taking more risk in those markets that are moving higher is separate from the risk-cutting decision.
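A minimal sketch of such a de-risking rule, assuming a simple moving-average signal; the lookback, function name, and sample prices are mine, not any specific manager's system. Note that an exit only zeroes the position, it never adds new risk.

```python
def derisk_signal(prices, lookback=5):
    """Return 1 (hold the market) if the latest price is at or above its
    moving average, else 0 (go to cash). No short is ever taken: the
    rule only removes risk, it never adds it."""
    if len(prices) < lookback:
        return 1  # not enough history: stay with the existing allocation
    ma = sum(prices[-lookback:]) / lookback
    return 1 if prices[-1] >= ma else 0

uptrend = [100, 101, 102, 103, 104, 105]
downtrend = [105, 104, 103, 102, 101, 100]
print(derisk_signal(uptrend), derisk_signal(downtrend))  # 1 0
```

Applied market by market, the rule delevers automatically as more markets roll over, which is exactly the behavior described above.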

Saturday, April 11, 2015

Uncertainty and unknowns - breaking down the process

Project analysis has advanced to include different levels of uncertainty when making decisions. The same approach used for projects can also be applied to asset management. There is a simple decision tree that can provide insights on how to classify and deal with uncertainty.

One of the first choices is associated with events that are foreseeable or known. If events are foreseeable, then a clear decision path can be formed, residual risk can be determined, and the level of complexity can be measured. Buffers and contingencies can be managed based on the level of uncertainty or risk faced. This can be very process driven. This is what we would like decision analysis to look like.

The alternative of unknown unknowns requires a different process. A key problem in finance is real-time management when there are unknowns: an unknown event occurs and the portfolio manager has to act. Project management offers two ways to address unknowns. One path is some form of learning through trial and error or experimentation: the project manager goes down an unknown path, "figures it out", and adjusts plans through time. The other approach is called selectionism: you try multiple paths at the same time and then choose what you think is best ex post as the uncertainty is resolved. Asset management does not have the luxury of trying multiple options through different portfolios and then picking the best, and there is no time for trial and error. If we could do that, the unknown could be measured and it would cease to be an unknown.

Still, there is hope in dealing with unknown unknowns. Selectionism can be applied by employing different strategies: instead of just one strategy, you employ multiple strategies. This form of diversification can be applied when there are different levels of complexity, but it may also be appropriate for dealing with unknowns. Diversification is not just across markets but across strategy responses.

Friday, April 10, 2015

Project management uncertainty and asset management


Uncertainty and complexity are the two biggest problems facing any project. The book Managing the Unknown focuses on how to deal with these issues for any development project, but its process for dealing with uncertainty and complexity is also applicable to asset management. Fitting this framework to the asset management field provides some different insights into the decision process of a money manager.

The authors define complexity by the amount of interaction necessary to successfully manage the task or project. Any project that needs little interaction is not complex; constant adjustment adds to complexity. The same simple framework can be applied to investment work. In asset management, the passive or index portfolio is not complex. It is easy to manage with few actions, and the only choice is finding a benchmark. Active management has a high degree of decision actions and complexity. It is complex to manage an active portfolio because so many decisions have to be made about what to hold and how to adjust. The choice of buying an active strategy is also complex because it requires more decisions or actions. If you undertake more complexity, or a strategy requires more action, then control systems have to be put in place to manage and monitor the process.

Variation in action and complexity needs planning and control. Low complexity with foreseeable events needs risk management that identifies and prioritizes issues but may not require a lot of market interaction. Unknown unknowns require managing the residual risks from the fallout of events that could not be foreseen. As complexity increases, the actions required by the manager increase and become more nuanced.

All project management requires learning about the complexity and uncertainty faced, selecting processes that fit the situation, and working to form combinations of actions that can meet the specific challenges or risks that will arise. There have to be buffers in place to ensure uncertainty can be addressed or mitigated.

In asset management, one simple choice is to form diversified passive portfolios. The other extreme is forming portfolios that require more interaction to deal with changing uncertainty. We could call these action-response portfolios. If a problem arises, action will be taken to mitigate the risk. Part of the asset management process is finding the "project management" that can deal with uncertainty and the level of complexity that matches the behavior of the investor.

Complexity and uncertainty within asset management



Uncertainty goes hand-in-hand with complexity, but they are not the same thing. Investors should understand the trade-off between the two and the impact on decision-making when there is increasing uncertainty or complexity. We define complexity as the number of factors needed to explain a given event. Simple events are easy to explain with simple stories and a limited number of factors. An event that needs a model with more explanatory variables is inherently more complex. Given that macroeconomics cannot be easily explained by one or two factors, we would call it complex. The drivers of currencies are inherently complex. An individual stock that can be explained by stable discounted cash flows would be considered a low-complexity system.

If there is low complexity and low uncertainty, we are living in a simple world of facts. We can describe the world as a simple place where A implies B. When the complexity gets larger and there is more uncertainty, we have to form predictions because we are not sure of the interaction between factors and results. There could be three factors, A, B, and C, which affect a market. We may not know the chance of one of the variables changing, and we may not know the interactions between A, B, and C. We have to make some predictions. There is more uncertainty and complexity.

As the world becomes more complex and uncertain, we move into a world of projections and scenarios. There is not a clear link between factors and results, and there is more story-telling. We have to start describing "what-if" possibilities with variables that may not be easily countable. If there is high uncertainty and complexity, we may not know the factors that could drive markets, and we may not know when they will occur. We are in the realm of speculation.

All this may seem obvious, but classifying the environment, and knowing when we move between speculations and predictions, is important. The systematic manager wants to keep complexity low by limiting the number of factors reviewed. High complexity will more likely require speculation. High uncertainty means key factors may not be knowable, again requiring speculation. Knowing the amount of uncertainty and complexity can help define the amount of risk that should be taken.

Career risk causes biased recommendations


Analysts are biased; most investors would agree. The bias is associated with fear and greed: the fear of being fired and the desire for kudos from clients. Of course, this is all about money. It is a story we have heard before, but it can be explained through a simple two-by-two graph. Barry Ritholtz presented this funny graph on Twitter that holds a significant amount of truth about how the bias works.

Any broad market prediction by an analyst is usually going to be a bullish or bearish bet. The result of the forecast is simply either being right or wrong. If you are bullish and you are right, you will be the darling of most investors. You did your job. If you are bearish and right, most investors will respect you, but it is unlikely that they will feel good about it. Most likely they did not fully invest with your recommendation, so they are still going to see losses. If you are bullish, follow most other analysts, and prove to be wrong, it is unlikely that any investor will fire you for your poor insight. There is likely to be a multitude of answers for why you did not see the downturn coming. Everyone can feel bad together; many will just say, "We didn't see that coming." In the last case, if you are bearish and wrong, investors may be out of the market and miss what is perceived as the big opportunity. This is a reason to be fired. The conclusion is clear: don't make bearish calls.

Given this bias, what can investors do about it? You cannot change analyst behavior, but you can change where you get your recommendations. Simply use models. Models don't have a bias against being fired. Models don't want to be heroes. Models just do what they were built to do and provide unbiased advice. Follow the data.

Thursday, April 9, 2015

Confidence and the model builder



There are models that are effective and can explain a meaningful portion of market behavior, and there are models that are very suspect with low explanatory power. Most models have low explanatory power; that is just the world we live in. The explanatory power of many models in macro finance will be less than twenty percent. Still, a modeler or user of a model can have a varying level of confidence in the model's quality.
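"Explanatory power" here is R-squared: the share of return variance a model explains. A toy calculation with made-up numbers shows what a reading under twenty percent looks like in practice; the data and function name are mine, purely for illustration.

```python
def r_squared(actual, predicted):
    """Coefficient of determination: 1 - (residual SS / total SS)."""
    mean = sum(actual) / len(actual)
    ss_tot = sum((a - mean) ** 2 for a in actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    return 1.0 - ss_res / ss_tot

# Made-up returns and a weak model's fitted values: the model catches
# the direction of moves but badly underestimates their size.
actual = [1.0, -2.0, 0.5, 3.0, -1.5]
predicted = [0.1, -0.2, 0.0, 0.3, -0.1]

print(round(r_squared(actual, predicted), 2))  # about 0.17
```

A model like this still leaves over eighty percent of the variation unexplained, which is why position sizing has to respect both model quality and the user's confidence in it.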

We can think about a trade-off between confidence and quality in model building that defines how portfolios are made. This trade-off is important in setting position sizes and forming a risk management strategy. Low confidence should be matched with smaller risk-taking. Low quality should also mean smaller positions. It is dangerous if an investor has high confidence in low quality models. Any investor should appreciate model limitations and understand their level of confidence in the models employed.

In the above graph, we show the trade-off between modeling quality and confidence in two dimensions: concern for model uncertainty and the legitimacy or quality of the model. High model quality and high confidence is the optimal situation for any investor. However, if there is low model quality and high model concern, there is a need for more robustness and checks to avoid uncertainty; you want to go out and build more models or search for improvements. High quality models with high concern for uncertainty make for a conscientious modeler. Low confidence will lead to constant model checking. A model in this situation will likely be built for a focused purpose and limited usage, with a constant concern that the model is wrong even if it is of high quality. The combination of low quality but high confidence is associated with investors who feel that a model gives good intuition even if the actual quality is mixed. These investors are more likely to be discretionary decision-makers who are confident in broad principles.

What the trade-off grid provides is a framework for a model/portfolio quality discussion. In simple terms: is the quality of the model high, and do you have confidence in its ability? Understanding this trade-off provides for better portfolio structure. Less confidence creates a perceived need for more diversification; high confidence and quality will lead to more concentrated portfolios. There is no right answer to the confidence-quality trade-off other than that it defines the type of portfolio that will be structured.

Breaking down surprises - the importance of learning


Financial markets are driven by changes in expectations. If markets are efficient and investors have rational expectations, then at the extreme, prices will only move when there is new information or an unanticipated surprise. Unanticipated shocks are the key driver of volatility. Yet there has been little discussion of the sources of surprise in markets: not what the surprises are, but the reasons for a surprise. We have uncovered an old typology of surprise classification which we think is very useful for any investor.

The figure takes a very general approach, but it provides some good insights into the difference between outcomes which are known and those which are not known, or driven by ignorance. If the possible outcomes are known, investors are faced with risk: the probabilities are known, but we may not know the actual result. The alternative is that events will occur and the possible outcomes are known, but the probabilities are not. A simple case is an economic announcement. The date of the announcement is known, and we may have a likelihood of what will happen when the new information enters the market. If those probabilities are known, we are facing risk. If we do not know the probabilities of the market's reaction to an announcement, we are facing uncertainty.

The alternative branch of surprise is that the outcomes are not known and we are faced with a form of ignorance. Ignorance can be reduced through learning on a personal level or through analysis on a communal level. If we are not willing to learn, we will have closed ignorance. While not shown on the chart, you would move back to the known-outcome section if there is effective learning. If we cannot reduce ignorance, an investor is faced with two problems: the view of the world may be wrong, or the solution is not knowable.

This breakdown provides structure for dealing with surprises and how an investor can potentially minimize both surprises and their impact. Learning is critical, but it must be focused.
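The branches of the typology can be written down as a small decision function. The category names follow the discussion above; the function itself is our own illustration rather than a reproduction of the original chart:

```python
def classify_surprise(outcomes_known, probabilities_known=False, willing_to_learn=True):
    """Classify a potential surprise: risk, uncertainty, or a form of ignorance."""
    if outcomes_known:
        # Known outcomes: risk if we also know the odds, uncertainty otherwise.
        return "risk" if probabilities_known else "uncertainty"
    # Unknown outcomes: ignorance, reducible only if we are willing to learn.
    return "open ignorance" if willing_to_learn else "closed ignorance"

print(classify_surprise(True, probabilities_known=True))   # risk
print(classify_surprise(False, willing_to_learn=False))    # closed ignorance
```

Effective learning corresponds to moving an event from the bottom branch back to the top one, where it can at least be priced as risk or uncertainty.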

Monday, April 6, 2015

Atlanta Fed "GDPNow" a good tool

The Atlanta Fed has developed a new forecasting tool called GDPNow. It is actually an old tool, or technique, that they are employing to help provide good estimates of quarterly GDP. What they do is very simple. They take the 13 sub-components that are used in the GDP estimate and track them in real time to provide a rolling update of the best GDP estimate as data come out. Many forecasters use this method, but it is not readily available to the investing public. Anything that makes forecasting easier and provides details on GDP components is a useful tool.

The Atlanta Fed provides source data, forecasts, and model parameters for the sub-components in an easy-to-read spreadsheet. This tool can be used in a number of ways. First, it can give an early heads-up on GDP estimates. Second, it can provide a comparison against the Blue Chip consensus forecasts. This can form a rolling estimate of the difference between expected forecasts and a best guess based on components, which is a good estimate of potential macro growth shocks. For example, the GDPNow forecast has shown consistent weakness versus the Blue Chip estimates for the last eight weeks, suggesting a slowing economy. There are no delays with GDPNow. The chart above will cause a disciplined investor to hold more bonds under the threat of a GDP estimate that comes in weaker than forecast.
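A stylized version of this bottom-up approach can be sketched as follows. The component names, contributions, and the data release in the example are made-up placeholders, not the Atlanta Fed's actual bridge-equation model over its 13 sub-components:

```python
# Hypothetical sketch of a bottom-up GDP nowcast: start with model-based
# estimates of each sub-component's contribution and replace them with
# data-implied values as releases arrive, re-summing each time.

# Contributions to annualized GDP growth, in percentage points (made-up numbers).
estimates = {
    "consumption": 1.8,
    "investment": 0.6,
    "government": 0.2,
    "net_exports": -0.3,
}

def nowcast(contributions):
    """Current best GDP growth estimate: the sum of component contributions."""
    return round(sum(contributions.values()), 2)

print(nowcast(estimates))  # 2.3

# A weak retail sales release lowers the implied consumption contribution.
estimates["consumption"] = 1.4
print(nowcast(estimates))  # 1.9
```

The usefulness comes from the rolling update: each release moves one contribution, and the gap between the running sum and the consensus forecast is the growth surprise in the making.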

This is a simple, quick tool with a rich foundation for helping with GDP forecasts. It is worth a close look.

Saturday, April 4, 2015

Charlie Munger on economics



From one of Charlie Munger's talks from 2003, we have a good set of reasons for the failure of economics. While it is over a decade old, the arguments still apply today. I would like to say economists have solved some of the problems, but the issues all still exist. Nevertheless, some progress has been made. I cannot match Munger's prose or humor, so I will simply list the key points. Any investor should take his comments seriously.

1. Fatal Unconnectedness, Leading to "Man with a Hammer Syndrome," Often Causing Overweighing What Can Be Counted
2. Failure to Follow the Fundamental Full Attribution Ethos of Hard Science
3. Physics Envy
4. Too Much Emphasis on Macroeconomics
5. Too Little Synthesis in Economics
6. Extreme and Counterproductive Psychological Ignorance
7. Too Little Attention to Second and Higher Order Effects
8. Not Enough Attention to the Concept of Febezzlement
9. Not Enough Attention to Virtue and Vice Effects

Friday, April 3, 2015

Improved decision-making, not financial knowledge, is the key



Better investment management is not always about knowing more finance. It is about making better decisions when there is limited information or a high degree of uncertainty. Good management should be focused on taking the right action at the right time. Of course, that seems obvious, but it is achieved through understanding your biases and the process of how we think.

Knowing the research is helpful, and a minimum threshold is required for success; nevertheless, research knowledge is necessary but not sufficient for success in investment management. A good analyst may not be a good trader, and a good trader does not have to be a good analyst.

A focus on the process of getting to a decision is critical. The process matters. It separates fast from slow thinking, and it highlights when thinking too fast is a hindrance as well as when slow thinking may impair your ability to take immediate action.

Sure, we would all like to have more time to gain an understanding of market dynamics, but if I had to pick one skill it would be the ability to make unbiased decisions using the right amount of information and the right amount of time for analysis.

What is the Fed thinking with their dot plots?

The Fed's summary of economic projections (SEP) has come out along with the dot plot of Fed funds forecasts. The majority think rates will be above 50 bps by the end of the year, and only three forecasts expect rates to be below 50 bps. At this point, if these forecasts are to be realized, the Fed will have to start raising rates in the summer.

Of course, we do not know the names associated with the forecasts. It would be nice to see who is on the extremes. I thought we were in an environment of transparency and "forward guidance"? We could then question each forecaster on the rationale for their rate estimate. It would also be nice to know if Chair Yellen is on the low end of the dot plot. The plots also provide information on economic uncertainty. It is high, with a 175 bps range with nine months to go until year-end, from forecasters who have the ability to control rates. The level of disagreement is impressive.
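The dispersion a dot plot reveals is simple arithmetic over the submitted forecasts. The dot values below are hypothetical stand-ins chosen for illustration, not the actual SEP submissions:

```python
import statistics

# Hypothetical year-end fed funds dots, in basis points (illustrative only).
dots = [25, 50, 62.5, 62.5, 87.5, 87.5, 112.5, 112.5, 137.5, 162.5, 187.5, 200]

spread = max(dots) - min(dots)  # range of disagreement across forecasters
print(spread)                   # 175
print(statistics.median(dots))  # 100.0
```

A 175 bps spread among the very people who set the rate is a direct, if crude, measure of policy uncertainty.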

Is volatility a part of normalization?


  “Higher financial market volatility is a natural consequence, an integral part of the economy’s equilibration process...”, says Bank of Canada Governor Stephen S. Poloz. 


While this comment was made about the Canadian asset market, it can be applied as a general statement to markets around the globe. The normalization of rates by the Fed, ECB, BOJ, BOE, or even the Bank of China will have a clear impact on asset markets. While we may not know the path asset markets will take, conventional wisdom holds that if central banks are less active in trying to control prices through intervention, there will be the potential for higher volatility.

However, a review of the actual data shows mixed evidence for this argument. Some of the biggest volatility spikes have come in the post-Financial Crisis period. The periods of lowest stock volatility were in 2005 and the mid-90's, during the Great Moderation. What seems clear is that spikes in volatility coincide with transitions in monetary policy, whether conventional or unconventional.


Monetary policy will always have a significant impact on rates, given that the central bank's activity in setting short rates will dampen moves. The risk comes when there is a transition in the rate-setting environment, and the current transition will be greater than normal. After a normalization of rates, the central bank will no longer be the largest player in the market. An extreme view is that central banks are price manipulators, especially when their focus is on purchasing supply.

If we assume a less active central bank, with rates determined by the equilibrium of savings and investment, volatility will increase. This increase is independent of business cycles or any new crisis. The rise in volatility when normalization comes is almost inevitable.



Changing risk profiles - we are getting closer to an inflection

"The received wisdom is that risk increases in the recessions and falls in booms. In contrast it may be more helpful to think of risk as increasing during upswings, as financial imbalances build up, and materializing in recessions."

- Sir Andrew Crockett, 11th International Conference of Banking Supervisors, 2000.

This quote provides a great counter-intuitive way of thinking about risk through time. The risk seen in recessions is just a build-up from causes during the growth phase of the cycle. This view is a simple variation on the "Minsky Moment". The reach for and breadth of speculative excess leads to the inevitable realization of volatility when the downturn comes. The shift in expectations, the loss of liquidity, the uncertainty during the transition, and the divergence in fundamentals all lead to higher volatility.

Risk has to be viewed through the transition of time and not as a snapshot in isolation from the past or where markets may be headed in the future. The excess today will lead to the volatility of tomorrow. The trick is finding the correct excesses and the lead-lag relationships. The trends in fundamentals hold the key to the trends and volatility of prices.