Managing Model Risk


Author: Bob Mark

This article first appeared as an Editorial in the Journal of Risk Management in Financial Institutions and is reprinted here with the permission of the author.

All too often, senior managers take it for granted that a mathematical model is accurate. Without a sophisticated model vetting process, however, this can be a costly mistake. Managing model risk has long been a priority in sophisticated, risk-literate organizations with a strong risk culture. Nevertheless, model risk turned out to be poorly understood in many organizations before and during the financial crisis of 2007–2009, and this is far from the only embarrassing failure of risk management in recent decades.

Our dependence on mathematical models to calculate regulatory capital has provided an incentive to make model risk more transparent. Models measuring financial risk (eg market risk in the trading book, credit risk in the wholesale and consumer lending books) and operational risk (eg credit card fraud, cyber risk) can be misleading: they can be misapplied, fed the wrong inputs, and their results are too often misinterpreted.

In an earlier editorial,[1] I pointed out that it is useful to categorize model risk into risk caused by implementing a model incorrectly and risk caused by model error. Implementing a model incorrectly refers to a model that is wrongly executed, either by accident or as part of a deliberate fraud. Model error refers to a model that contains a mathematical error or, more likely, is based on simplifying assumptions that are misleading or inappropriate. If the output of a model is used for the wrong purpose, or is misinterpreted, that is another form of model error: senior managers who do not know a model's limitations or underlying assumptions may draw the wrong conclusions from its results.

Part of the challenge in preventing model error resides in model complexity. For example, there has been a relentless increase in the complexity of valuation theories used to support financial innovations and a parallel rise in the threat from model risk. Technology has also played a key role. Computers are now so powerful that there is a temptation to develop ever more complex models that will inevitably be less and less understood by management. Some have argued that the Basel III regulatory framework is too complex and should be replaced by a significantly simpler, less model-dependent approach (eg simple ratios such as a leverage ratio, calculated as tangible equity to non-risk-weighted assets).[2]

The risk management function needs to guard against models being either too complex or too simple. The goal in any application is to design models that fit the data well and provide the best out-of-sample prediction. How do you know if a model is too complex? Given data and a choice of models, one parsimonious selection criterion is the minimum description length (MDL) principle,[3] whose purpose is to avoid overfitting the data. From these models of the data, we then choose the ones with the best out-of-sample predictions. In general, a practical and tangible way to compare the complexity of models is to construct and examine a list of transparent factors such as the number of assumptions, the compute time and the number of parameters to be estimated. But, to paraphrase others, 'I know model complexity when I see it; everything should be as simple as possible, but not simpler!' Think also of how long it would take you to explain the model to senior management.
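To make the MDL idea concrete, here is a minimal sketch in Python; the data, the polynomial model family and the train/test split are all hypothetical. For regression with Gaussian errors, a standard two-part approximation of the description length is (n/2)·log(RSS/n) + (k/2)·log(n), where k is the number of estimated parameters: the first term rewards fit, the second penalizes complexity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a noisy quadratic, so the "true" model has 3 parameters.
x = np.linspace(-2, 2, 80)
y = 1.0 + 0.5 * x - 1.2 * x**2 + rng.normal(0, 0.4, x.size)

# Hold out the last quarter of the range to test out-of-sample extrapolation.
train, test = slice(0, 60), slice(60, 80)

def mdl_score(k, rss, n):
    """Two-part code-length approximation for Gaussian regression:
    data cost (n/2)*log(rss/n) plus parameter cost (k/2)*log(n)."""
    return 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n)

for degree in range(1, 8):
    coefs = np.polyfit(x[train], y[train], degree)
    rss = np.sum((y[train] - np.polyval(coefs, x[train])) ** 2)
    oos = np.mean((y[test] - np.polyval(coefs, x[test])) ** 2)
    k = degree + 1  # number of estimated polynomial coefficients
    print(f"degree={degree}  MDL={mdl_score(k, rss, x[train].size):8.1f}  "
          f"out-of-sample MSE={oos:.3f}")
```

On data like this, the penalized score and the out-of-sample error typically both favour a low-degree fit, while high-degree fits that track the training noise extrapolate badly. The point is the procedure, not the specific numbers.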

The risk management community does not have a generally accepted risk management standard of practice (SoP) for individuals working across all the various risk professions and industries. Most professionals, including accountants, actuaries, lawyers and doctors, follow a SoP. A good example of a risk management SoP is an existing Actuarial SoP,[4] which identifies what the risk manager should consider, document and disclose when performing a professional assignment.

An important way to control model risk is to establish a formal external peer review process for vetting mathematical models.[5] An external peer review provides independent assurance that the mathematical model is reasonable. The review would include, but not be limited to, examining:

(1) What is the theory behind the model? For example:

• What assumptions are being made for the model to apply?

• Are there equally compelling theories?

– Which theory is the most parsimonious, or empirically testable?

– Why was this theory chosen?

• Is the model specification unique or is it a representative of an equivalence class?

(2) What parameters need to be calibrated? For example:

• Given sufficient data, is the calibration unique or are there multiple possible calibrations?

• How sensitive are model extrapolations to parameter choice?

• How are the parameters estimated?

• What statistical inference, optimization or utility function is being used, and why?

(3) Does the chosen model answer the question?

• A model may be parsimonious, robust, elegant and predictive. Nevertheless, it may be applied to a problem for which its assumptions are materially violated.

• A simple model applied judiciously with caveats may work better than a more sophisticated model applied blindly. In any event, the user needs judgment and prudence to apply a model appropriately.

• All models are wrong at some level of abstraction, since they are imperfect representations of reality and do not capture all the relevant frictions.

(4) Does the software actually implement the chosen model?

• How do you know?

• Does it correctly calculate known cases? (A sketch of such a check appears after this list.)

• What numerical approximations are implicit in the code?

• When does the algorithm used to implement a model break down, take too long or hit a phase transition?

(5) How can the theory, model, code and calibration be improved over time?

• What diagnostics are needed to monitor model performance?

• When things do not work, is it obvious?

• Why is it not working? If the reason is not clear, then document the model more clearly.

• Implement at least two models. If they do not agree, then learn why not and determine when and how to trust the results on the basis of theory, model assumptions, calibration, calculation algorithms and coding.
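As a concrete illustration of points (4) and (5), the following minimal Python sketch prices the same hypothetical instrument, a European call under Black-Scholes assumptions (chosen here purely as a stand-in for 'the chosen model'), with two independent implementations, a closed form and a Monte Carlo simulation, and then checks known cases. Disagreement between the two implementations, or failure of a known case, is the trigger to investigate theory, calibration, algorithm and code.

```python
import numpy as np
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def bs_call(S, K, T, r, sigma):
    """Implementation 1: Black-Scholes closed form for a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

def mc_call(S, K, T, r, sigma, n_paths=1_000_000, seed=1):
    """Implementation 2: an independent Monte Carlo pricer of the same model."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    ST = S * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * z)
    return exp(-r * T) * np.maximum(ST - K, 0.0).mean()

S, K, T, r, sigma = 100.0, 105.0, 1.0, 0.02, 0.25

# The two implementations should agree to within Monte Carlo error
# (a few cents at 10^6 paths); a larger gap means something is wrong.
print(f"closed form: {bs_call(S, K, T, r, sigma):.4f}")
print(f"Monte Carlo: {mc_call(S, K, T, r, sigma):.4f}")

# Known cases: at (near-)zero volatility the call must equal its
# discounted intrinsic value, and the price must increase with volatility.
assert abs(bs_call(100, 50, 1, 0.02, 1e-9) - (100 - 50 * exp(-0.02))) < 1e-6
assert bs_call(S, K, T, r, 0.30) > bs_call(S, K, T, r, 0.25)
```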

As described above, an independent external peer review process offers additional transparency. It also provides guidance to stakeholders (eg employers, shareholders and regulators) and forms the basis for professional opinions. It provides additional assurance that the results from the model offer a reasonable representation of reality and that the model has been implemented correctly. The external peer review process should be harmonized with an internal vetting process. The internal vetting process should be constructed to include, but not be limited to:

(1) Model documentation. A vetting book should be constructed to include full documentation of the model, covering both the assumptions underlying the model and its mathematical expression. This should be independent of any particular implementation (eg the type of computer code) and should include a term sheet (or a complete description of the transaction).

        A mathematical statement of the model should include an explicit statement of all the components of the model (eg variables and their processes, parameters, equations, etc). It should also include the calibration procedure for the model parameters. Implementation features such as inputs and outputs should be provided, along with a working version of the implementation.

(2) Soundness of model. The internal model vetting process needs to verify that the mathematical model produces results that are a useful representation of reality. At this stage, the risk manager should concentrate on the financial aspects and not become overly focused on the mathematics. Risk management model builders need to appreciate the real-world financial aspects of their models as well as defining their value within the organizations they serve. Risk management model builders also need to communicate limitations or particular uses of models to senior management who may not know all the technical details. Models can be used more safely in an organization when there is a full understanding of their limitations.

(3) Independent access to data. The internal model vetter should check that the middle office has independent access to an independent database to facilitate independent parameter estimation.

(4) Benchmark modelling. The internal model vetter should develop a benchmark model based on the assumptions that are being made and on the specifications of the deal. The results of the benchmark test can then be compared with those of the proposed model.

(5) Formal treatment of model risk. A formal model vetting treatment should be built into the overall risk management procedures and it should call for periodically re-evaluating models. It is essential to monitor and control model performance over time.

(6) Stress-test the model. The internal model vetter should stress-test the model: for example, a stress-test can examine limit scenarios in order to identify the range of parameter values for which the model provides accurate pricing (a sketch follows below). This is especially important for implementations that rely on numerical techniques. As the 2007–2009 financial crisis made clear, the stress-tests conducted by banks did not produce realistically large loss numbers.[6]
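To make point (6) concrete, here is a minimal sketch along the same lines as the one above, again using a Black-Scholes call purely as a hypothetical model under test. It sweeps the volatility parameter through limit scenarios whose answers are known, in order to map out the range of parameter values over which the implementation behaves sensibly.

```python
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def bs_call(S, K, T, r, sigma):
    """The model under stress: a closed-form Black-Scholes call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S * N(d1) - K * exp(-r * T) * N(d1 - sigma * sqrt(T))

S, K, T, r = 100.0, 100.0, 1.0, 0.02

# Sweep the volatility parameter from a near-zero limit to an extreme one.
# As sigma -> 0 the price should approach the discounted intrinsic value,
# max(S - K*exp(-r*T), 0); as sigma grows very large it should approach S.
for sigma in (1e-8, 1e-4, 0.05, 0.2, 1.0, 5.0, 50.0):
    price = bs_call(S, K, T, r, sigma)
    print(f"sigma={sigma:>10}: call={price:10.4f}")

# Any region of the sweep where the output jumps, turns negative or
# exceeds S marks parameter values for which this implementation
# (or a numerical variant of it) cannot be trusted without further work.
```

In a real vetting exercise the same sweep would be run against both the production implementation and the benchmark model of point (4), with any divergence recorded in the vetting book.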

The evolution towards sophisticated financial mathematics is an inevitable feature of modern financial risk management, and model risk is inherent in the use of models. Firms must avoid placing blind faith in the results offered by models and must hunt down all possible sources of inaccuracy in a model. In particular, they must learn to think through situations in which the failure of a mathematical model might have a significant impact. The board and senior managers need to be educated about the dangers of failing to properly vet mathematical models. They also need to insist that model risk is made transparent, and that all models are independently vetted and governed under a clearly articulated peer review process.

 

Bob Mark is Managing Partner of Black Diamond Risk Enterprises and an Editorial Board Member of the Journal of Risk Management in Financial Institutions. E-mail: bobmark@blackdiamondrisk.com

References and notes

1 Mark, R. (2008) 'Making risk transparent', Editorial, Journal of Risk Management in Financial Institutions, Vol. 1, No. 2, pp. 128–132.

2 See Haldane, A. G. (2012) ‘The dog and the Frisbee’, Bank of England, London. Speech given at the Federal Reserve Bank of Kansas City’s 36th economic symposium, ‘The Changing Policy Landscape’, Jackson Hole, WY, August. Haldane argued that the complexity of the regulatory system did not serve its purpose during the financial crisis of 2007–2009. He advocates the use of simple rules of thumb that may be more useful than sophisticated models.

3 See Rissanen, J. (1978) 'Modeling by shortest data description', Automatica, Vol. 14, No. 5, pp. 465–471. The MDL principle is a formalisation of Occam's razor and is an important concept in information theory.

4 ASB (2012) ‘Risk evaluation in enterprise risk management’, Actuarial Standard of Practice No. 46, September, Actuarial Standards Board, Washington, DC.

5 The author is grateful to Dr Gary Nan Tie from Mu Risk LLC for suggesting the use of a peer review process to examine mathematical models and his many useful suggestions regarding the material.

6 Crouhy, M., Galai, D. and Mark, R. (2014) 'The essentials of risk management', 2nd edn, McGraw-Hill, New York.

 
