The theory behind limiting exceptional stock losses is straightforward: extreme losses are caused by extreme fundamentals. The challenge lies in real-world implementation. Here’s a 20-stock portfolio designed to limit losses by identifying extreme fundamentals and applying rules that eliminate stocks so afflicted.

A Change of Pace

The approach taken here differs from the traditional quant-based methods that seek to minimize, or at least reduce, Beta, Standard Deviation, Downside Deviation, Value at Risk, etc. Despite what’s often said about such metrics, they are not legitimate indicators of risk. They are nothing more than statistical report cards showing the results of what took place at specific times in the past. What counts is not the report card, but the underlying factors that caused the report card to appear as it did. Just as a B+ student can slip to Cs in the next marking period if he stops studying, so, too, can a stock with a good report card (e.g., a low Beta) turn riskier or even speculative if conditions take a turn for the worse. As with the kids’ report cards, the grade won’t tell you what to expect in the next marking period; you have to look to the persistence, or slackening, of whatever caused the grade: study habits and so on.
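To underscore the report-card point: a metric like Beta is nothing but a statistic fit over some past window. Here’s a minimal sketch (my illustration, not part of the strategy; it assumes two aligned pandas Series of daily returns) showing that the number is purely a summary of whatever window you feed it.

```python
import pandas as pd

def trailing_beta(stock_returns: pd.Series,
                  market_returns: pd.Series,
                  window: int = 252) -> float:
    """Beta estimated over the last `window` trading days.

    This is just a covariance ratio over a past window, a "grade"
    for one marking period. Change the window and the grade changes.
    """
    s = stock_returns.tail(window)
    m = market_returns.tail(window)
    return s.cov(m) / m.var()
```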

There are strands of modern quantitative analysis that look to the asymmetry of investor desires (we like upside volatility but hate big downside moves) and seek stocks with a demonstrated propensity for delivering returns skewed toward the upside (in statistical jargon, the “right tail”), while avoiding stocks with track records of big downside moves (“left tail” returns). As discussed on 9/21/15, that’s the wrong way to go. The characteristics capable of inspiring big gains are equally likely to produce extreme losses if things break badly for the company.
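To make the right-tail/left-tail jargon concrete, here’s a hypothetical sketch (assuming a pandas Series of prices) of the kind of measurement these strategies rest on: the skewness of daily returns. Positive skew is the upside-heavy profile they chase; negative skew marks the big-loss history they avoid.

```python
import pandas as pd

def return_skewness(prices: pd.Series) -> float:
    """Sample skewness of daily returns.

    Positive skew: returns tilted toward large gains ("right tail").
    Negative skew: a history of outsized losses ("left tail").
    """
    returns = prices.pct_change().dropna()
    return returns.skew()
```

The point of the paragraph above, of course, is that a backward-looking number like this says nothing about whether the tilt will persist.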

Any investor who aims to maximize upside volatility while minimizing downside risk is engaging in a dangerous fantasy. If you have one, you have the other, no matter how much so-called research you can show that suggests otherwise. For this strategy, I’m going to keep it real. The goal is to reduce the probability of extreme losses. I won’t go out of my way to close my eyes to the upside, and I believe the model can deliver market-beating gains. But don’t look for quick doublings or ten-baggers, at least not by design. We’ll catch those only if we get lucky.

Setting Expectations re: Maximum Drawdown

Maximum Drawdown, or Max DD, is a measure of the worst peak-to-trough decline experienced by a stock or a portfolio (or any asset, for that matter). Many consider a modest number here a sign of good control of downside risk.
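For reference, here’s a minimal sketch (my illustration, assuming a pandas Series of prices) of how Max DD is typically computed: track the running peak and take the worst decline from it.

```python
import pandas as pd

def max_drawdown(prices: pd.Series) -> float:
    """Worst peak-to-trough decline, as a negative fraction.

    E.g., -0.55 means the series at some point stood 55% below
    its prior high.
    """
    running_peak = prices.cummax()           # highest price seen so far
    drawdowns = prices / running_peak - 1.0  # decline from that peak
    return drawdowns.min()
```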

I wish it were true. If Max DD really could be thoughtfully analyzed and modeled, I’d be all over it. But alas, it’s not so.

It’s easy to reduce Max DD during the process of strategy development. All we need do is look at the period in which Max DD occurred in the past (for studies with long testing periods, this will turn out to be a brief span in late 2008), identify characteristics associated with the best and/or worst performers during that period, build those into our model, and voilà, we can show the world a strategy that reduces Max DD. But there’s a huge problem: this is an example of what’s called data mining, or sham research. To really control Max DD in the future, which, when push comes to shove, is what we really care about, we need to be confident in the persistence of the factors that caused bad Max DD experiences in the past. And it’s hard, if not downright impossible, to even set up a decent, non-data-mined research effort because Max DD is so sensitive to the peculiarities of one specific point in time.
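The trap is easy to fall into in code. This hypothetical sketch (my own, assuming a pandas DataFrame of daily prices with tickers as columns) shows the anti-pattern just described: rank stocks on how they held up during the one historical crash window, then select on that ranking. Backtested Max DD improves by construction, which proves nothing about the future.

```python
import pandas as pd

def data_mined_filter(prices: pd.DataFrame, n_keep: int = 20) -> list:
    """The data-mining anti-pattern described above.

    Rank stocks by their return during the window where most long
    backtests register their Max DD (late 2008) and keep the ones
    that held up best. The backtest's Max DD shrinks by construction;
    out of sample, there's no reason to expect it to.
    """
    crash_window = prices.loc["2008-09-01":"2008-12-31"]
    crash_return = crash_window.iloc[-1] / crash_window.iloc[0] - 1.0
    return crash_return.nlargest(n_keep).index.tolist()
```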
