The importance of data for risk management systems

As financial regulators demand significant enhancements to risk management practices, banks have to implement additional stress tests, and the data underpinning them has to be right for the process to work


Many banks take a typically top-down perspective on enterprise risk management (ERM). This approach underestimates the importance of data, the core bottom-up enabler of ERM, and compromises the bigger-picture requirements of a sound ERM framework, including the longer-term strategic advantages of a solid data foundation.

ERM is about timely scrutiny and proactive management of risk across the business. Risk assessment might concern the extent to which concentrations are being built up; whether industry or geography limits are being eroded too fast; or whether pricing is too low (for profitability) or too high (for competitive positioning). For an ERM framework to be deemed a success, it must be seen to deliver better informed and timelier decision-making capabilities. Examples of sound ERM practice include the ability to monitor, in near real time, the combined impact of lending decisions being made by originators in a branch network, or the cumulative effect of trading decisions being made on the trading floor, each day.

Automated, or systemised, and centralised reporting enables these things to be visible at the enterprise-wide level, but only as long as such reporting is informed by granular, bottom-up data. ERM requires the right data capture, feeding automated workflow-type systems, to give operations management access to the data required for daily activity purposes. In turn (via a central data repository), this gives executive management access to the data required for business intelligence purposes.
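The bottom-up roll-up described above can be sketched in a few lines. The record fields, limit figures and helper functions below are illustrative assumptions, not any particular bank's schema or system:

```python
from collections import defaultdict

# Hypothetical transaction-level records as they might arrive from
# branch origination systems (all names and amounts are illustrative).
loans = [
    {"branch": "north", "industry": "retail",   "geography": "UK", "amount": 5_000_000},
    {"branch": "south", "industry": "property", "geography": "UK", "amount": 12_000_000},
    {"branch": "south", "industry": "property", "geography": "DE", "amount": 8_000_000},
]

# Illustrative enterprise-wide concentration limits per industry.
industry_limits = {"retail": 20_000_000, "property": 15_000_000}

def aggregate_exposure(records, key):
    """Roll granular, bottom-up records into an enterprise-level view."""
    totals = defaultdict(int)
    for record in records:
        totals[record[key]] += record["amount"]
    return dict(totals)

def limit_breaches(exposure, limits):
    """Flag dimensions where cumulative exposure has breached the limit."""
    return {k: v for k, v in exposure.items() if v > limits.get(k, float("inf"))}

by_industry = aggregate_exposure(loans, "industry")
print(limit_breaches(by_industry, industry_limits))
# property exposure (20m across two branches) exceeds its 15m limit
```

The point of the sketch is that the breach is only visible once granular records from different branches are aggregated centrally; no single originating system can see it.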

Profiling risk management
Capturing the correct data at the point of origination can prove critical to ensuring that the right people discuss, monitor and manage the risks appropriate for consideration at each level of the organisation. Origination workflow represents an excellent opportunity to gain both board commitment and business unit engagement on the topic of holistic data capture and data management. For example, business management will have the opportunity to assess whether large deals are meeting the hurdle rates for different risk profiles, while executive management can review whether business for a particular segment or region is meeting its targeted risk-adjusted returns. The ultimate objective – to ensure that unusual, unintended, or unacceptable risks are isolated and proactively managed – can then also be met.
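The hurdle-rate review described above can be illustrated in miniature. The deal fields, rates and hurdle figures below are purely hypothetical, chosen only to show the shape of the check:

```python
# Illustrative deal records captured at origination (fields are assumptions).
deals = [
    {"segment": "corporate", "risk_profile": "low",  "return_pct": 0.09, "hurdle_pct": 0.08},
    {"segment": "corporate", "risk_profile": "high", "return_pct": 0.11, "hurdle_pct": 0.15},
    {"segment": "retail",    "risk_profile": "low",  "return_pct": 0.07, "hurdle_pct": 0.06},
]

def below_hurdle(records):
    """Deals whose risk-adjusted return misses the hurdle for their risk profile."""
    return [d for d in records if d["return_pct"] < d["hurdle_pct"]]

flagged = below_hurdle(deals)
print([(d["segment"], d["risk_profile"]) for d in flagged])
# → [('corporate', 'high')]
```

Because the risk profile and the return are captured together at the point of origination, the deal that looks profitable in absolute terms (11 per cent) is still isolated as unacceptable for its risk class.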

For an ERM framework to be deemed a success, it must be seen to deliver better informed and timelier decision-making capabilities

Deficiencies in raw data are not the only obstacle to achieving this objective. When poor data is combined with the management of risk in silos, ERM is fundamentally undermined. Whether silos are viewed in terms of operational entity, line of business or type of risk, the end result is the separation of data for finance and risk management purposes. As a consequence, data management for the corporate and retail banking groups, or for country ‘A’ and country ‘B’, or for liquidity management and credit risk management, happens on systems that do not communicate with each other.

Silos are perhaps inevitable for day-to-day, local operational purposes, but this approach to management of risks is inadequate for the organisation as a whole. New regulations are driving banks to become more holistic and are forcing a breakdown of the organisational cultures within many banks that perpetuate silos. For example, under Basel II, credit risk was typically managed by the risk department, and liquidity risk was managed by the ALM/treasury department. Now, under Basel III, calculation of liquidity ratios requires data from both entities. The old way is no longer sustainable.

New requirements for enterprise-wide stress testing shine a light on the problems of poor data and data held in silos. Typically, banks have to throw armies of people and huge technical resources at satisfying each external reporting obligation. These tactical solutions deliver one-off results, leaving the underlying problem unresolved. So the real, untapped opportunity for increasing operational effectiveness is to address the separation of the organisation’s databases.

Once a holistic view of key risk data has been achieved, banks can deliver material improvements in operational efficiency both at the local level as well as at the enterprise level.

To understand how limitations in data availability across the enterprise frustrate the holistic management of individual firms, one need only look at the recent subprime crisis, which morphed into the liquidity crisis, and then the economic crisis, which in turn led to the wider contagion experienced post-2008. Ultimately, banks did not have access to the data needed to enable the robust management of risk across the enterprise.

Source: Moody's

Tools to do the job
Clearly, banks need the ability to collate raw risk-related data; combine it with non-risk data; model it to transform it into meaningful information; and then further aggregate it for business intelligence purposes. Doing this successfully means that, at the centre of the bank, at the press of a button, management can assess key risk dimensions and drill down into them. Sustainable performance requires everyday data capture, analysis and reporting capabilities that combine information from multiple databases and models, across different business lines and geographies, while using different technologies.
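The "aggregate, then drill down" pattern can be sketched as a roll-up along one dimension followed by a re-aggregation of a chosen bucket along another. The record fields and figures below are illustrative assumptions:

```python
# Hypothetical granular records held centrally (fields are illustrative).
records = [
    {"line": "corporate", "geography": "UK", "risk": "credit", "exposure": 40.0},
    {"line": "corporate", "geography": "DE", "risk": "credit", "exposure": 25.0},
    {"line": "retail",    "geography": "UK", "risk": "credit", "exposure": 30.0},
]

def rollup(rows, dim):
    """Aggregate exposure along a single dimension."""
    out = {}
    for r in rows:
        out[r[dim]] = out.get(r[dim], 0.0) + r["exposure"]
    return out

def drill_down(rows, dim, value, sub_dim):
    """Filter to one bucket of `dim`, then re-aggregate by `sub_dim`."""
    subset = [r for r in rows if r[dim] == value]
    return rollup(subset, sub_dim)

print(rollup(records, "line"))                                # top-level view
print(drill_down(records, "line", "corporate", "geography"))  # drill into it
```

The drill-down only works because the same granular records sit behind both views; pre-aggregated silo reports cannot be decomposed this way.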

The point about data being the foundation of all things ERM triggered the Basel Committee on Banking Supervision’s January 2013 paper on risk data aggregation and risk reporting. This paper set out 14 principles to strengthen risk data management, in four broad categories: overarching governance and infrastructure; risk data aggregation capabilities; risk reporting capabilities; and supervisory review, tools and cooperation. Although the paper was, at face value, a top-down perspective on data aggregation and reporting, it has at its heart the need for bottom-up, granular data flows.

The biggest challenge is to be able to do this across all risk types – not just credit, market and operational risk, but also for liquidity, capital, interest rate, settlement, IT and other risks.

Regulatory stress testing again illustrates the value to banks of getting this right. The heart of a well-functioning stress testing process is a single data repository in which the relevant risk and finance data required for the regulatory stress tests is consolidated and readily available. With the key data layer/datamart element in place, models, workflow tools and reporting modules can be layered on top. Once this structure is in place, banks are afforded a scalable and powerful capability, which enables them to effectively report on a broad array of enterprise-wide stress tests in a timely and cost-efficient manner.
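A stress test run over such a consolidated data layer can be sketched as applying scenario shocks to risk parameters held in one repository. The portfolio records, scenario multipliers and simple expected-loss formula below are simplified assumptions, not a regulatory specification:

```python
# Hypothetical consolidated portfolio records (figures are illustrative).
portfolio = [
    {"exposure": 100.0, "pd": 0.02, "lgd": 0.40},
    {"exposure": 250.0, "pd": 0.05, "lgd": 0.45},
]

# Illustrative scenario shocks applied as multipliers to PD and LGD.
scenarios = {
    "baseline": {"pd_mult": 1.0, "lgd_mult": 1.0},
    "adverse":  {"pd_mult": 2.0, "lgd_mult": 1.25},
}

def expected_loss(book, pd_mult=1.0, lgd_mult=1.0):
    """EL = sum of exposure x stressed PD x stressed LGD (each capped at 1)."""
    return sum(e["exposure"] * min(e["pd"] * pd_mult, 1.0)
                             * min(e["lgd"] * lgd_mult, 1.0)
               for e in book)

results = {name: round(expected_loss(portfolio, **shock), 2)
           for name, shock in scenarios.items()}
print(results)
```

Because the models sit on top of one datamart, adding a new scenario is a new entry in `scenarios`, not a new data-gathering exercise, which is the scalability point made above.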

Ultimately, banks did not have access to the data needed to enable the robust management of risk across the enterprise

In addition to supporting stress testing, this same capability offers substantial insight to senior management about a bank’s risk profile for day-to-day business management purposes, across transaction, portfolio, balance sheet and performance management activities. In turn, it facilitates medium-term planning and annual budget rounds, capital allocation and wider enterprise management, consistently, across the organisation. Of course, all this needs to be within a strategic context, with consistent, well-informed policies, and with governance providing the right checks and balances (see Fig. 1).

Regulators are demanding significant enhancements to risk management practices with regularly issued and updated guidelines. In turn, they want extra information and reports, with increasing granularity and frequency. All this, combined with requirements for additional stress tests and even more capital, means that compliance costs for banks, whether in terms of people, systems or capital, are mounting.

No investment in solutions to these pressures will work without good data, which brings us full circle: data at the bottom of the organisation must be addressed, so that the ERM framework is underpinned by increasingly accurate, relevant and timely data. Delivering an enhanced enterprise risk management framework requires considerable planning and substantial investment, but it is well worth the effort. Much of the investment recently made in risk management has been reactive – in response to regulation – rather than proactive. Banks now need to think proactively about improving profitability through better management of risk: by understanding the return-to-risk dynamics of both individual exposures and portfolio-level risks, and by ensuring more efficient use of capital.

The good news is that raising the standard of the firm’s ERM framework is a case of taking advantage of established advances in risk management practices, moving from a siloed approach to a holistic view of risk, and simultaneously increasing the focus on data and its management. Ultimately, such process reengineering is about delivering centrally accessible data – for example, risk data, volumes data, performance data, migration data, and point-in-time and trend analysis. Returns of three times the original investment are typical and potentially even understate the benefits. Certainly, when thinking in these terms, process reengineering really does begin to make a compelling business case.

For further information visit moodys.com