Second of two parts.
From the Phoenicians to the late 1970s, the top two risks in banking were always seen to be credit followed by liquidity. That changed with the marriage of high interest-rate volatility and computer modeling. Only since 1980 have we defined the top two risks as credit followed by interest rate risk.
Views need to be realigned. Both rate risk and liquidity risk cover a spectrum of severity. But interest rate risk is a gun pointed at your wallet, while liquidity risk is a gun pointed at your head.
Rate risk can be minimized or hedged. Liquidity risk cannot. For years, risk managers and regulators have carefully controlled re-pricing mismatches. Liquidity mismatches received comparatively little attention.
A surprisingly large number of banks continue to rely on balance sheet ratios to measure and report liquidity risk. Even the best of these ratios only illuminate current or retrospective risk.
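For illustration only, the minimal sketch below (in Python, with made-up balance-sheet figures) shows the kind of point-in-time ratios involved. Whatever thresholds a bank attaches to them, each number describes the balance sheet as it stands today and says nothing about how funding behaves under stress.

```python
# Hypothetical balance-sheet figures, for illustration only (millions)
liquid_assets   = 120.0   # cash, central-bank balances, unencumbered Treasuries
total_assets    = 1_000.0
loans           = 700.0
deposits        = 800.0
wholesale_funds = 150.0

# Typical point-in-time liquidity ratios: each is a snapshot of the
# balance sheet as reported, i.e. current or retrospective risk only.
liquid_asset_ratio = liquid_assets / total_assets     # 12.0%
loan_to_deposit    = loans / deposits                 # 87.5%
wholesale_reliance = wholesale_funds / total_assets   # 15.0%

print(f"Liquid assets / total assets: {liquid_asset_ratio:.1%}")
print(f"Loans / deposits:             {loan_to_deposit:.1%}")
print(f"Wholesale funding / assets:   {wholesale_reliance:.1%}")
```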
At the opposite end of the measurement spectrum, seemingly sophisticated risk managers rely on value-at-risk models. VaR comes in three main mathematical flavors: historical, variance/covariance and Monte Carlo. As the name makes clear, the first depends on volatility data from selected historical observation periods. The second depends on correlation information obtained from analysis of historical changes. Only the Monte Carlo VaR model is forward-looking. But even Monte Carlo gets biased when historical data is used to define key parameters such as volatility.
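The distinction is easy to see in a few lines of code. The sketch below, in Python with illustrative data and hypothetical function names, computes a one-day VaR each of the three ways; note that even the "forward-looking" Monte Carlo estimate is parameterized by the same historically estimated mean and volatility.

```python
import numpy as np
from scipy.stats import norm

def historical_var(returns, confidence=0.99):
    """Historical VaR: the loss threshold read directly off the
    empirical distribution of past returns."""
    return -np.percentile(returns, (1 - confidence) * 100)

def parametric_var(returns, confidence=0.99):
    """Variance/covariance VaR: assumes normally distributed returns,
    with mean and volatility estimated from historical changes."""
    mu, sigma = returns.mean(), returns.std()
    return -(mu + norm.ppf(1 - confidence) * sigma)

def monte_carlo_var(returns, confidence=0.99, horizon_days=1,
                    n_sims=100_000, seed=0):
    """Monte Carlo VaR: simulates future outcomes -- but the drift and
    volatility driving the simulation are still taken from the same
    historical sample, which is where the bias creeps back in."""
    mu, sigma = returns.mean(), returns.std()   # historically estimated
    rng = np.random.default_rng(seed)
    simulated = rng.normal(mu * horizon_days,
                           sigma * np.sqrt(horizon_days),
                           n_sims)              # normality assumed here too
    return -np.percentile(simulated, (1 - confidence) * 100)

# Illustrative use with made-up daily returns
daily_returns = np.random.default_rng(1).normal(0.0005, 0.01, 1_000)
print(historical_var(daily_returns))   # driven entirely by the observation window
print(parametric_var(daily_returns))   # driven by historical correlations/volatility
print(monte_carlo_var(daily_returns))  # "forward-looking", yet parameterized by history
```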
Both the overly simple ratio metrics and the complex mathematical models significantly underestimated potential losses. In May 2008, four months before the nadir of the financial crisis, the President of the Federal Reserve Bank of Boston observed that “none of the major stress tests I am aware of – done by a variety of financial institutions – came close to capturing the depth of the problems that we are experiencing today.”
Copious indictments of VaR were published during the Great Meltdown and are still being published. The models have been faulted for their dependence on historical time periods, their assumptions of normality and their failure to capture extreme events in the tails of the distribution.
VaR is a fine risk measurement tool – but not for liquidity risk. Former Federal Reserve Chairman Paul Volcker made this point crystal clear in remarks highlighted by a Wall Street Journal blogger:
“Normal distribution curves,” Volcker said, “do not exist in financial markets. It’s not that they are fat tails. They don’t exist. I keep hearing about fat tails, and Jesus, it’s only supposed to occur every 100 years, and it appears every 10 years.”
More than 80 years ago, Frank Knight at the University of Chicago distinguished between three different types of probability. His first two types, “a priori probability” and “statistical probability,” equate to “risk” as the term is used in modern finance. Knight’s third group consists of randomness for which “there is no valid basis of any kind for classifying instances.” In this case, such data as exists does not lend itself to statistical analysis. Consequently, only “estimates” can be made.
The narrow, specialized definition of risk we’ve used for the past quarter century has facilitated risk modeling at the cost of excluding an entire category of risk.
Why did so many of the brightest, most innovative risk experts fall into this error? The simple answer is that using the narrow definition permitted the application of mathematical analysis that provides appealingly precise answers to important questions that cannot otherwise be answered quantitatively. Dragging in unquantifiable estimates is counter-productive to that exercise even if it is conceptually correct.
As Knight explained: “Business decisions … deal with situations which are far too unique, generally speaking, for any sort of statistical tabulation to have any value for guidance. The conception of an objectively measurable probability or chance is simply inapplicable.”
Leonard Matz is an independent liquidity risk consultant. This article is adapted from his book, “Liquidity Risk Measurement and Management.”