“As we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones” – Donald Rumsfeld, 12/2/2002.
“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so” – Mark Twain
The inability of humans to think in terms of probabilities and uncertainties creates quite a few difficulties. The high-profile miscarriages of justice suffered by both Sally Clark and Angela Cannings were caused in part by the inaccurate and misleading use of statistics by medical experts, which led to unsafe convictions. If even experts can make such errors, it is not surprising to find widespread misconceptions amongst investors themselves.
Critics of the Efficient Markets Hypothesis (EMH) point to market volatility as an indication of inefficiency. For example, the S&P 500 has exhibited 16.6% annualised volatility over the past year, which equates to a weekly volatility of 2.3%, or a two-standard-deviation weekly move of up to 4.6% [1]. But volatility itself is not an indication of inefficiency (though it might be one of illiquidity). It may merely reflect changes in the probabilities investors attach to one scenario or another [2]. (It could also be due to Keynes’ Beauty Contest effect, whereby investors react to their perceptions of others’ perceptions: investors worry that other investors will worry, and so on.) Bubbles and crashes are partly generated by this effect.
Research analysts are constantly looking for new ways to “beat” the market. In the rush to market “Smart Beta”, “Factor Timing” or whatever comes next, there is little or (more often) no replication work done – testing the hypothesis under differing conditions, on different data sets and so on – such that the chance of finding false positives (where a factor is shown to “work” when in fact it doesn’t) is extremely high. Back-testing is the preferred method of formulating a theory, and here too there are problems: this study recommends (page 6) that as the number of tests increases, so should the number of years of data tested (at least 10 years, in order to eliminate the role of chance in the results), yet some practitioners apply only 2-3 years of data. The advance of computing power has led to exponential growth in the number of factors “found” to work. In short, data mining on an epic scale is now possible, so analysts have begun to dig. Whether any of these factors work after transaction costs, market slippage and so on is highly dubious. One needs to establish causation, not merely correlation (look at these charts if you are looking for some new investment ideas). We need to know why a strategy outperforms, not just that it does; beware of a fund manager who fails to demonstrate the former.
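As a back-of-the-envelope illustration of how easily mass testing manufactures false positives, the Python sketch below (all numbers purely hypothetical) back-tests 200 “factors” that are pure noise over a short 36-month window; at a conventional significance threshold, roughly ten of them will still look like genuine discoveries.

```python
import numpy as np

rng = np.random.default_rng(42)

n_factors = 200    # hypothetical number of candidate factors tested
n_months = 36      # a short 3-year back-test, as some practitioners use
t_threshold = 2.0  # a conventional ~5% significance cut-off

# Every factor here is pure noise, so any that "passes" is a false positive.
returns = rng.normal(0.0, 1.0, size=(n_factors, n_months))
t_stats = returns.mean(axis=1) / (returns.std(axis=1, ddof=1) / np.sqrt(n_months))

false_positives = np.sum(np.abs(t_stats) > t_threshold)
print(f"{false_positives} of {n_factors} pure-noise factors look 'significant'")
# With 200 tests at a ~5% threshold, we expect around 10 spurious discoveries.
```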
Then there is the problem of sample size. This is a possible explanation for the perennial failure of opinion pollsters to predict nearly every recent election result: opinion polls are often conducted by phone or internet and thus fail to pick up large swathes of the population. The problem is exacerbated when it comes to financial markets – not only are returns NOT normally distributed, but we don’t know whether the last 115 years of price returns represent a true sample of what is to come. As in nature, market returns are often distributed according to Power Laws, whereby “extreme events” occur much more frequently than conventional probability distributions assume (a “fat tail” in statistical jargon). In markets even big samples contain a huge margin for error, as investor returns are even more volatile than public opinion. Thus big falls often take participants by surprise; but they shouldn’t. In the last 115 years, UK real equity returns have averaged +5.6% p.a. with a standard deviation of 21 percentage points, which gives a 95% confidence interval for the true long-run average return of roughly +1.7% to +9.5% per annum [3]. This is a very wide range, and indicative of the precariousness of making long-term forecasts. We may have 115 years of data, but how relevant are they to the next 20-30 years? We cannot say.
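To see how much difference a fat tail makes, here is a small sketch using a Student-t distribution as a stand-in for a power-law-tailed return series (an assumption for illustration only), comparing how often a four-standard-deviation day should occur under each:

```python
import numpy as np
from scipy.stats import norm, t

# Chance of a single-day move beyond 4 standard deviations under a normal
# distribution versus a fat-tailed Student-t with 3 degrees of freedom
# (whose tails decay as a power law).
df = 3
unit_scale = np.sqrt(df / (df - 2))  # t(3) has variance 3; rescale to unit variance

p_normal = 2 * norm.sf(4)             # two-sided tail beyond 4 sigma
p_fat = 2 * t.sf(4 * unit_scale, df)

print(f"Normal:    one such day every {1 / p_normal:,.0f} trading days (~63 years)")
print(f"Student-t: one such day every {1 / p_fat:,.0f} trading days (~8 months)")
```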
Another area of uncertainty revolves around what Schroders’ Kevin Murphy calls the “Paradox of Unanimity”, whereby investors’ chances of buying undervalued assets decrease the more of them subscribe to a particular opinion. If everyone is bullish about a stock, bond, region or asset class, it is likely that it has already vacated “Value” territory and moved into a location marked “over-priced”. Investors’ failure to understand the extremely small chance of this unanimity being correct is the principal reason for the success of a contrary-opinion investment strategy, which has worked well for us in the past. The demise of Final Salary Pension Schemes may well be a result of fund actuaries looking (in the 1980s) at past equity returns and extrapolating them forward indefinitely (along with the concomitant reduction in company funding thereof). Reversion to the mean (that is, good performers subsequently under-perform, and vice versa for laggards) is a powerful phenomenon, but it is continually misunderstood by investors, many of whom appear all too eager to buy after strong gains and sell after large falls.
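Reversion to the mean falls naturally out of any process where luck dominates skill. A minimal simulation (all figures hypothetical): 500 “funds” whose annual returns are a small persistent skill plus a large dose of noise. The past winners’ edge all but vanishes in the following period.

```python
import numpy as np

rng = np.random.default_rng(1)

# 500 hypothetical funds: small persistent 'skill', large random 'luck'.
n_funds = 500
skill = rng.normal(0.0, 2.0, n_funds)             # % p.a., persists across periods
period1 = skill + rng.normal(0.0, 10.0, n_funds)  # skill swamped by luck
period2 = skill + rng.normal(0.0, 10.0, n_funds)

# Pick the period-1 'winners' (top decile) and see how they fare next.
winners = period1 >= np.percentile(period1, 90)
print(f"Winners, period 1: {period1[winners].mean():+.1f}% vs all funds {period1.mean():+.1f}%")
print(f"Winners, period 2: {period2[winners].mean():+.1f}% vs all funds {period2.mean():+.1f}%")
# Most of the winners' period-1 edge was luck, so it does not persist.
```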
There is a huge difference between risk (howsoever defined) and uncertainty. We can model risk, but we have to live with uncertainty. Maybe we need to know ourselves first: what amount of risk, and, as importantly, what type of risk can we bear? The level of risk can depend on age, income and job type (the young and those in more economically stable jobs should be able to take more risk). Then again, this study suggests that we cannot reliably predict our future preferences, so any risk decisions we make today may not be valid in the future. So we need to ask ourselves: what do we know? The answer may well be, very little.
[1] Annual volatility is 16.61% according to FE data. Dividing 16.61 by 7.21 (the square root of the number of weeks in a year) gives a weekly volatility of 2.30%. A two-standard-deviation event (a 95% chance of containing the market’s price action in a given week) would mean a move of up to 4.61% in either direction.
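For anyone wishing to check the arithmetic, a few lines of Python reproduce it:

```python
import math

annual_vol = 16.61  # % p.a., the FE figure cited above
weeks_per_year = 52

weekly_vol = annual_vol / math.sqrt(weeks_per_year)  # 16.61 / 7.21 = 2.30%
two_sd_move = 2 * weekly_vol                         # = 4.61%

print(f"Weekly volatility: {weekly_vol:.2f}%")
print(f"95% of weeks should see a move within +/-{two_sd_move:.2f}%")
```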
[2] If the FTSE 100 is at 6000, that may reflect a 25% chance of it moving to 7000, a 50% chance of it remaining at 6000, and a 25% chance of it falling to 5000 (the probability-weighted sum: 7000 × 25% + 6000 × 50% + 5000 × 25% = 6000). Should participants become more optimistic, assigning only a 15% chance to 5000 and a 35% chance to 7000, then we should see the FTSE 100 move up to 6200.
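The same scenario arithmetic in a few lines of Python:

```python
# Probability-weighted level of the FTSE 100 under the two scenario sets above.
levels = [7000, 6000, 5000]
before = [0.25, 0.50, 0.25]
after = [0.35, 0.50, 0.15]  # participants turn more optimistic

ev_before = sum(p * l for p, l in zip(before, levels))  # = 6000
ev_after = sum(p * l for p, l in zip(after, levels))    # = 6200

print(f"Expected level before: {ev_before:.0f}")
print(f"Expected level after:  {ev_after:.0f}")
# A rise of ~3.3% with no new information, only a shift in probabilities.
```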
[3] The Standard Error is the standard deviation divided by the square root of the sample size: 21 divided by 10.72 (the square root of 115) gives 1.96. Two standard errors is therefore 3.92 percentage points, so we can say with 95% confidence that the true long-run average annual return lies between +1.7% (i.e. 5.6 − 3.92) and +9.5% (5.6 + 3.92).
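And the standard-error calculation itself:

```python
import math

mean_return = 5.6  # % p.a., UK real equity returns over 115 years
std_dev = 21.0     # percentage points
n_years = 115

std_error = std_dev / math.sqrt(n_years)  # 21 / 10.72 = 1.96
lower = mean_return - 2 * std_error       # = +1.7%
upper = mean_return + 2 * std_error       # = +9.5%

print(f"Standard error: {std_error:.2f} percentage points")
print(f"95% confidence interval for the long-run mean: {lower:+.1f}% to {upper:+.1f}%")
```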