IFRS 9 and CECL Credit Risk Modelling and Validation
A Practical Guide with Examples Worked in R and SAS
Tiziano Bellini
Milan University
Course description
This course provides a comprehensive guide on credit risk modelling and validation for IFRS 9 and CECL expected credit loss (ECL) estimates. As a distinctive practical feature, software examples in R and SAS accompany the reader through the journey. The choice of these tools is driven by their wide use in both banks and academia.
Despite the non-prescriptive nature of accounting standards, common practice suggests relying on the so-called probability of default (PD), loss given default (LGD), and exposure at default (EAD) framework. Other, simpler methods based on loss rates, vintages, and cash flows are considered as a corollary. In practice, banks estimate their ECLs as the present value of the product of these three parameters over a one-year or lifetime horizon. On this basis, a distinction arises between CECL and IFRS 9: while the former follows a lifetime perspective for all credits, the latter classifies accounts into three main buckets: stage 1 (one-year ECL), stage 2 (lifetime ECL), and stage 3 (impaired credits). The key innovation introduced by the new accounting standards is a shift towards a forward-looking and lifetime perspective. Therefore a link between macroeconomic variables (MVs), behavioural variables (BVs), and the above three parameters is crucial for our discussion. Such a framework is also a natural candidate for stress testing projections.
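The PD-LGD-EAD mechanics above can be sketched as follows. This is a minimal Python illustration (the course itself works in R and SAS); the parameter values, amortization profile, and discount rate are invented for the example, and marginal (not cumulative) PDs are assumed.

```python
# Sketch: ECL as the discounted sum of PD_t * LGD * EAD_t over the
# horizon. A one-year horizon corresponds to IFRS 9 stage 1; the full
# lifetime horizon corresponds to IFRS 9 stage 2 and to CECL.
# All figures are illustrative.

def expected_credit_loss(pd_by_year, lgd, ead_by_year, rate, horizon):
    """Discounted ECL over `horizon` years, assuming marginal PDs."""
    return sum(
        pd_by_year[t] * lgd * ead_by_year[t] / (1 + rate) ** (t + 1)
        for t in range(horizon)
    )

pd_by_year  = [0.02, 0.03, 0.04]   # marginal default probabilities
ead_by_year = [100.0, 80.0, 60.0]  # amortizing exposure profile
lgd, rate = 0.45, 0.05             # loss given default, discount rate

one_year = expected_credit_loss(pd_by_year, lgd, ead_by_year, rate, 1)
lifetime = expected_credit_loss(pd_by_year, lgd, ead_by_year, rate, 3)
print(round(one_year, 2), round(lifetime, 2))  # → 0.86 2.77
```

Moving an account from stage 1 to stage 2 under IFRS 9 amounts to switching from the one-year figure to the (larger) lifetime figure.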
Chapter 1 serves to introduce IFRS 9 and CECL, pointing out their similarities and differences. The focus then turns to the link connecting expected credit loss estimates and capital requirements. A coursebook overview is provided as a guide for the reader seeking a high-level picture of the entire ECL modelling and validation journey.
Chapter 2 focuses on one-year PD modelling. Two main reasons suggest treating one-year and lifetime PDs separately. Firstly, banks have been developing one-year PD models over the last two decades for Basel II regulatory requirements. Secondly, a building-block structure split into one-year and lifetime PD facilitates the learning process. As a starting point, this chapter focuses on how default events are defined for accounting purposes and how to build a consistent PD database. Moving to modelling, generalized linear models (GLMs) are first explored as a paradigm for one-year PD estimates. Secondly, machine learning (ML) algorithms are studied: classification and regression trees (CARTs), bagging, random forests, and boosting are investigated both to challenge existing models and to explore new PD modelling solutions. In line with the most recent literature, the choice of these approaches is driven both by their effectiveness and by their easy implementation. While wide data availability encourages the use of data-driven methods, low-default portfolios and data scarcity are other challenges one may need to face. Bespoke methods are scrutinized to address issues related to a limited number of defaults, and ad hoc procedures are explored to deal with a lack of deep historical data.
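To make the GLM paradigm concrete: a logistic regression (a GLM with logit link) maps a linear combination of account features to a one-year PD. The sketch below is in Python rather than the book's R/SAS, and the coefficients and features are hypothetical; in practice they are fitted on the PD database described above.

```python
import math

# Sketch: scoring a one-year PD with a fitted logistic regression
# (a GLM with logit link). Coefficients are invented for illustration.

def pd_score(intercept, coeffs, features):
    """Map a linear score to a one-year PD via the logistic function."""
    z = intercept + sum(b * x for b, x in zip(coeffs, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features: loan-to-value ratio, arrears indicator.
intercept, coeffs = -4.0, [2.0, 1.5]
pd_low  = pd_score(intercept, coeffs, [0.6, 0.0])  # low-risk account
pd_high = pd_score(intercept, coeffs, [0.9, 1.0])  # riskier account
print(round(pd_low, 4), round(pd_high, 4))
```

The ML alternatives mentioned above (CARTs, bagging, random forests, boosting) replace the linear score with a tree-based one but deliver the same object: a calibrated one-year PD per account.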
Chapter 5 is devoted to exposure at default (EAD) analysis. A key distinction operates between committed products (for example, loans) and uncommitted facilities (for instance, overdrafts). Loan-type products usually cover a multi-year horizon. Consequently, economic conditions may cause a deviation from the originally agreed repayment scheme. Full prepayments and overpayments (partial prepayments) are first investigated by means of generalized linear models (GLMs) and machine learning (ML). Hints on survival analysis are also provided to estimate and project prepayment outcomes. Researchers and practitioners devote growing attention to competing risks. As a second step of our investigation, the focus is on a framework to jointly model full prepayments, defaults, and overpayments. On the one hand, we model these events by means of a multinomial regression. In this case, when the outcome is not binary (for example, overpayment), a second step is needed: Tobit and beta regressions are used to capture overpayment-specific features. On the other hand, full prepayments and overpayments are jointly investigated by means of ML models. As a third focus area, uncommitted facilities are inspected by means of a bespoke framework. One needs to deal with additional challenges, including accounts with zero or negative utilization at the reporting date and positive exposure at default. All states of the world relevant for ECL computation are scrutinized from different angles.
Finally, Chapter 6 brings together all the ECL ingredients studied throughout the book. Given the role of scenarios, multivariate time series are investigated by means of vector auto-regression (VAR) and vector error-correction (VEC) models. Information regarding global vector auto-regression (GVAR) modelling is also provided. Case studies allow us to grasp how to compute ECL in practice. Emphasis is placed on the comparison between IFRS 9 and CECL. Finally, full ECL validation is scrutinized. Indeed, the presumed independence between risk parameters and the lack of concentration effects are challenged by means of historical validation and forward-looking portfolio analysis.
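The forward-looking element reduces, in its simplest form, to a probability-weighted average of ECLs computed under alternative macroeconomic scenarios. The Python sketch below uses invented scenario weights and ECLs; in practice the scenario paths for the macroeconomic variables come from the VAR/VEC models discussed above.

```python
# Sketch: probability-weighted ECL across macroeconomic scenarios.
# Weights and per-scenario ECLs are illustrative numbers only.

scenarios = {
    "baseline": (0.60, 2.5),   # (scenario weight, ECL under scenario)
    "upside":   (0.20, 1.8),
    "downside": (0.20, 4.1),
}

weighted_ecl = sum(w * ecl for w, ecl in scenarios.values())
print(round(weighted_ecl, 2))  # → 2.68
```

Note that the weighted figure exceeds the baseline-only ECL whenever the downside scenario is severe enough, which is precisely why a single central scenario understates a forward-looking allowance.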