Wednesday, October 1, 2014

How to Build DISASTROUSLY WRONG Financial Models

Here’s the secret:  begin with the wrong goal.
(Adapted from How to Build Disastrous Financial Models, a Quant Perspectives column published by the Global Association of Risk Professionals)
Perhaps the greatest weakness we quantitative financial people have is that we assume at the outset of our careers that all colleagues and competitors share the philosophy that the goal of model development is to seek truth.  That is, imagine the current model task is to estimate the value of a loan or derivative trade, or the risk of a portfolio, or the proper credit rating of a bond, or the likelihood of repayment of a residential mortgage.  Clearly, we assume, everybody would prefer that the model have good accuracy (i.e., truth) in estimating value, or risk, or credit rating, or repayment likelihood.
Unfortunately, real life is different.  Many, though not all, actors in the financial world – business heads, traders, rating analysts, executives, regulators, consultants, auditors, politicians – desire models that describe and promote their reality.  As an example, the head of a trading desk wants models for derivative pricing that permit her group to win an adequate number of trades in competition with other firms.  (The direct experience of a friend of mine is that the tranche correlation desk of a first-tier investment bank rejected the quant team’s improved pricing model because it made the desk lose trades!)  In this case, rather than accuracy, the “reality” of the trading desk is that a good model will help win trades.
Another example is the difficulty of the CEBS (Committee of European Banking Supervisors) and EBA (European Banking Authority) in implementing stress tests for European banks beginning in 2009.  Stress tests are models.  For the CEBS and then the EBA, the “reality” of the stress test model is that it must be credible to the public and build confidence that the banks are adequately capitalized.  (See Kevin Dowd’s penetrating and entertaining “Math Gone Mad,” Cato Institute Policy Analysis No. 754, September 3, 2014.)  Needless to say, the goals of credibility and confidence are not synonymous with truth and accuracy.
Yet another, albeit indirect, example of a manipulated model is the U.S. Consumer Financial Protection Bureau (CFPB) determination that bank lenders enjoy a presumption of prudent mortgage lending practices under “Ability-to-Repay and Qualified Mortgage Standards.”  This “QM” standard specifically does not require the lender to limit, or even consider, the loan-to-value (LTV) ratio of the mortgage loan.  Yet, if the goal of mandated underwriting standards is to reduce loan defaults, which harm both lender and borrower, then omission of LTV from the “model” for a qualified mortgage is a huge oversight.  (See, for example, “Housing Industry Awaits Down-Payment Rule for Mortgages,” Bloomberg News, January 18, 2013.)  Unfortunately, the “reality” for the CFPB and self-appointed advocates is wide access to mortgage loans rather than low default risk on those loans.
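To see why the omission matters, consider a minimal sketch: a toy logistic default model with invented coefficients (nothing here comes from the CFPB rule or from any calibrated model).  Two loans that look identical on a debt-to-income screen alone can carry very different default risk once LTV enters the picture.

```python
# Illustrative only: a toy logistic default model with made-up coefficients.
# Neither the functional form nor the numbers come from the CFPB rule or any
# calibrated model; the point is simply that LTV moves default risk.
import math

def toy_default_probability(dti: float, ltv: float) -> float:
    """Hypothetical default probability for a mortgage, given debt-to-income
    (DTI) and loan-to-value (LTV), both expressed as fractions."""
    # Invented coefficients, chosen only so that higher DTI and higher LTV
    # raise the default probability.
    z = -7.0 + 3.0 * dti + 4.0 * ltv
    return 1.0 / (1.0 + math.exp(-z))

# Two borrowers with the same DTI but very different down payments:
low_ltv  = toy_default_probability(dti=0.43, ltv=0.60)   # 40% down
high_ltv = toy_default_probability(dti=0.43, ltv=0.97)   # 3% down
print(f"Toy default probability at 60% LTV: {low_ltv:.1%}")
print(f"Toy default probability at 97% LTV: {high_ltv:.1%}")
# A screen that ignores LTV treats these two loans as equally "qualified,"
# even though the toy model assigns the high-LTV loan several times the risk.
```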
There are numerous further examples of both high and low public notoriety in which practitioners create or adjust models in “helpful” directions only.  Lehman Brothers in 2007-08 (see page 180 of the bankruptcy Examiner’s Report) and J.P. Morgan in 2012, for example, tweaked their internal models to reduce apparent risk.
The focus on reaching desired end results rather than true and accurate results is certainly a misuse of financial models, but there’s a nuance to consider.  To judge truth and accuracy, one must inspect the model results and determine somehow whether the results “seem right.”  It could well be that the loan underwriter who watches competing lenders make loans that he had rejected will legitimately question the accuracy of his own bank’s model.  But how does one distinguish legitimate questioning of the model result from abusive adjustment of the model?
There is no simple answer other than to rely on the expert judgment of the quantitative model developer and for all analysts, users, and management to adhere to a principle of good faith.  This good-faith standard is the commitment to truth and accuracy.  Senior executives of the institution must understand that models are, by nature, malleable given their numerous judgments and assumptions.  With this understanding, the executives must then set, proclaim, and maintain a culture of good-faith, unbiased model construction and use.
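To illustrate how malleable a model can be without any bad faith at all, here is a minimal sketch with invented numbers (it is not any institution’s actual model): the same five-year loan valued under two assumption sets that a reasonable analyst could defend.  The two values differ by well over ten percent before anyone has adjusted anything abusively.

```python
# A minimal sketch with invented numbers, not any institution's actual model:
# the expected discounted cash flows of a five-year bullet loan under two
# plausible-sounding assumption sets.

def loan_value(notional: float, coupon: float, years: int,
               annual_pd: float, recovery: float, discount: float) -> float:
    """Value of a loan paying an annual coupon with principal at maturity,
    defaulting with probability annual_pd each year and recovering a fraction
    `recovery` of notional at the time of default."""
    value, survival = 0.0, 1.0
    for t in range(1, years + 1):
        df = (1.0 + discount) ** -t
        value += survival * annual_pd * recovery * notional * df  # recovery if default in year t
        survival *= 1.0 - annual_pd                               # probability still performing
        value += survival * coupon * notional * df                # coupon if still performing
    value += survival * notional * (1.0 + discount) ** -years     # principal at maturity
    return value

# Same loan, two defensible assumption sets (all numbers invented):
optimistic  = loan_value(100, 0.06, 5, annual_pd=0.01, recovery=0.60, discount=0.05)
pessimistic = loan_value(100, 0.06, 5, annual_pd=0.04, recovery=0.35, discount=0.07)
print(f"Value under optimistic assumptions:  {optimistic:6.2f}")
print(f"Value under pessimistic assumptions: {pessimistic:6.2f}")
```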

The best uses of quantitative models are:  (i) the learning, intuition, and judgment one develops while building the model and (ii) the test of the completeness and quality of the firm’s data that comes from exercising the model.  Because of their assumptions and the limits of available information, many financial models are less useful as generators of precise numerical results (e.g., for bank capital or loan default probability).  When it is imperative to have such numerical model results, the principle of good-faith model construction is critical.
