Mutual funds, hedge funds, and individuals measure portfolio returns over time windows such as 1, 3, or 5 years. In posts 2 and 6 I followed the herd, reporting model performance over such time windows. (Embarrassingly, I've been in the herd for more years than I'll list here.) But in post 7, "How is the model affected by Volatility?", I showed performance far more incisively.
That post and post 8, "Time is the wrong variable for Risk," clarified for me that volatility is also far more important than time when measuring performance. And crucially, triggers such as country interaction, momentum, and discounts should adjust to the volatility level.
This post compares the fixed-trigger model with one that adjusts triggers for volatility. I calculated four volatility quartiles ex post over 5 years, but they're reasonably consistent with 30-year averages, so it's not cheating. The ranges were 0–12.5, 12.5–13.8, 13.8–16, and 16–40.
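As a sketch of how such quartile ranges can be built: given a history of annualized volatility readings, the three quartile breakpoints split the observations into four equal-count buckets. The volatility series below is hypothetical, not the data behind the post's 0–12.5 / 12.5–13.8 / 13.8–16 / 16–40 ranges.

```python
import numpy as np

# Hypothetical annualized volatility readings (percent); the actual
# 5-year series behind the post's quartiles is not reproduced here.
vols = np.array([11.0, 12.1, 13.2, 13.9, 15.5, 16.8, 22.0, 12.9,
                 14.1, 13.5, 18.3, 10.8, 13.0, 15.9, 25.4, 12.4])

# Quartile breakpoints: 25th, 50th, and 75th percentiles of the history.
q1, q2, q3 = np.percentile(vols, [25, 50, 75])
print(q1, q2, q3)

def vol_bucket(v, q1=q1, q2=q2, q3=q3):
    """Map a volatility level to its quartile bucket (0 = calmest)."""
    if v < q1:
        return 0
    elif v < q2:
        return 1
    elif v < q3:
        return 2
    return 3
```

A model that "adjusts triggers for volatility" would then look up its trigger levels by `vol_bucket(current_vol)` instead of using one fixed set.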
The last column reports how many out of one million random trials beat the model. Fewer than 50,000 means significant at the 5% level; fewer than 10,000 means significant at the 1% level. Both models (last two rows) are extremely significant; 1% significance means 1 out of 100 random trials beating the model. For the relaxed model only 1 out of 11,500 (87 out of a million) won. For the volatility-adjusted model only 1 out of 333,333 (3 out of a million) won.
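The significance counting itself is simple: the empirical p-value is the fraction of random trials whose return matches or beats the model's. The sketch below assumes the random trials are summarized as a return per trial; how the post actually generates each random trial (random trade sequences) is not reproduced here, so the normal distribution is purely illustrative.

```python
import numpy as np

def mc_p_value(model_return, random_returns):
    """Fraction of random trials that beat (or tie) the model's return.
    Scaled by 1,000,000 this is the count in the table's last column."""
    random_returns = np.asarray(random_returns)
    wins = np.sum(random_returns >= model_return)
    return wins / len(random_returns)

# Hypothetical stand-in for one million random trials; the post's trials
# are random trade sequences, not draws from a normal distribution.
rng = np.random.default_rng(0)
random_returns = rng.normal(loc=0.07, scale=0.10, size=1_000_000)

p = mc_p_value(0.332, random_returns)  # 33.2% net return, from the post
print(f"p = {p:.6f} ({int(p * 1_000_000)} out of a million)")
```

Under this convention, p < 0.05 (fewer than 50,000 winners) is 5% significance and p < 0.01 (fewer than 10,000) is 1% significance.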
For the highest volatilities (rows 8 and 9), the Monte Carlo results (1,072 vs. 5,042) and net return (14.4% vs. 9.8%) are better for the adjusted model. Similarly for the lowest volatilities (rows 2 and 3), with Monte Carlo results of 4,975 vs. 55,806 and net returns of 6.7% vs. 4.2%. In both cases, adjusting triggers to the volatility level creates more signals and so a better return plus stronger statistics. (Yes, now perhaps we'll have more noon signals.) The two middle quartiles barely changed. Post 2 showed model performance at 20%; relaxing triggers in post 8 increased performance to 24.8%, and adjusting for volatility increases it to 33.2%.
But as importantly, or perhaps more importantly, knowing the current volatility provides an estimate of expected performance. Knowing the year or month offers no indication of performance. For instance, post 2 mentioned that the model wasn't performing well in early 2017, but I was following the herd using a time calendar. Instead, it was performing as expected given the low volatility during that period.
Some may argue that volatility changes. Yes it does, but it has persistence: today's volatility level is a good predictor of tomorrow's. In summary, a volatility calendar is far more important than a time calendar.
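The persistence claim is easy to check on any volatility series via its lag-1 autocorrelation: values near 1 mean the current level strongly predicts the next one. The AR(1) path below is a hypothetical stand-in exhibiting the volatility clustering seen in real markets.

```python
import numpy as np

def lag1_autocorr(x):
    """Correlation between the series and itself shifted by one step;
    high values mean today's level predicts tomorrow's (persistence)."""
    x = np.asarray(x, dtype=float)
    return np.corrcoef(x[:-1], x[1:])[0, 1]

# Hypothetical AR(1)-style volatility path (mean 14, persistence 0.9),
# illustrating clustering; not a real market series.
rng = np.random.default_rng(1)
vol = np.empty(500)
vol[0] = 14.0
for t in range(1, 500):
    vol[t] = 0.9 * vol[t - 1] + 0.1 * 14.0 + rng.normal(0.0, 1.0)

print(lag1_autocorr(vol))  # high, reflecting the persistence parameter
```

This is why conditioning triggers (and performance expectations) on the current volatility bucket is workable in practice: the bucket you are in today is likely the bucket you will still be in tomorrow.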
(*) significant at the 5% level; (**) significant at the 1% level; (***) extremely significant