In September 2021, the Prudential Regulation Authority (PRA) released its thematic findings on the reliability of regulatory reporting in the UK. The PRA expressed concern at the significant number of deficiencies identified across banks’ processes for producing accurate and reliable regulatory returns. In particular, the regulator highlighted that many banks had fragmented, cross-functional, end-to-end legacy processes that prioritised tactical fixes and relied on ineffective controls. These issues are by no means limited to banks in the UK. Following the 2007/2008 Financial Crisis, regulators around the world, including the Basel Committee on Banking Supervision (BCBS), placed significant pressure on banks to improve their data aggregation and reporting capabilities. Deeply embedded data landscapes and increasingly granular regulatory reporting requirements have, however, together hindered industry progress in this area. These chronic pain points have not only led to concerning findings by regulators, but also continue to stifle banks’ strategic decision-making and risk management capabilities.
To establish a robust integrated data foundation for downstream data and reporting processes, financial institutions – especially banks – will need to tackle some unglamorous but nonetheless critical tasks. This will include organisational and data structure redesign, data governance reviews, and active remediation of years’ worth of tactical fixes and data workarounds.
Bank functions, including risk, finance, and treasury, have often taken siloed approaches to sourcing and utilising data across the enterprise. These disparate data solutions – each with its own data marts and customised extract-transform-load (ETL) processes – are undoubtedly useful, having given these functions the independence and flexibility to meet their own data needs. However, while the same source data is often utilised across functions, the disparate transformations applied along the end-to-end processes of the various risk areas produce divergent, difficult-to-reconcile outputs. An uncoordinated approach to finance and risk creates pain points throughout the enterprise, with a lack of standardisation, control, and transparency often resulting in significant data aggregation and reporting failings. These become most apparent when multiple datasets converge across the functions but cannot easily be compared or consolidated due to their varying views and granularity.
Because each discipline builds its own processes to satisfy its own reporting, ETL procedures within the bank are often duplicated. Adding another layer of complexity, these data extraction procedures are often built on different technologies, with different teams responsible for their maintenance. Changes to front-end systems must then be propagated across multiple downstream processes, and if a change is not applied consistently, it can amplify reconciliation differences that already exist.
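To illustrate the alternative, the following is a minimal sketch – all names and mappings are hypothetical – of a single shared transformation that is defined once, owned by one team, and consumed by every function’s pipeline rather than re-implemented in each ETL procedure:

```python
import pandas as pd

def classify_exposure(raw: pd.DataFrame) -> pd.DataFrame:
    """Shared transformation: one agreed mapping from product codes to
    exposure classes, maintained in a single place by a single team."""
    product_to_class = {  # hypothetical mapping for illustration
        "MTG": "Retail - Mortgage",
        "CRD": "Retail - Revolving",
        "CORP": "Corporate",
    }
    out = raw.copy()
    out["exposure_class"] = out["product_code"].map(product_to_class)
    return out

# Finance and risk pipelines consume the same logic instead of each
# maintaining its own copy:
finance_view = classify_exposure(pd.DataFrame({"product_code": ["MTG", "CORP"]}))
risk_view = classify_exposure(pd.DataFrame({"product_code": ["CRD"]}))
```

With one owning team maintaining the mapping, a front-end change is made once and propagates consistently to every downstream consumer.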
Inconsistent Application of Adjustments
The need to adjust data and correct data quality issues – in financial or reference data, for example – throughout the aggregation and compilation of regulatory reports is often highlighted as a major driver of a more flexible, ad hoc approach. Adjustments made within each process do, however, introduce a significant risk of alterations being applied inconsistently, causing further discrepancies in the data used for reporting purposes. Furthermore, required data updates are often not communicated to the original owner of the data, resulting in the same adjustments being applied every month.
Analysts who should be focussed on identifying business insights and trends in the available data, and on providing management information for decision-making, often spend the largest part of their working day reconciling data and explaining the differences observed. Although there may sometimes be valid reasons for differences between reports, the processes do not allow these to be identified easily, and so a substantial amount of effort is spent reconciling, analysing, and explaining the data used in the various processes.
Data ownership between finance and risk has become increasingly difficult to determine. For example, regulatory capital requirements rely on highly complex risk modelling but begin with, and are reconciled to, financial exposure data, creating uncertainty over who owns this data and the resulting reporting. In many cases there is no right or wrong answer as to who should own the data, although the preference is often to assign ownership where the data is created. Nonetheless, institutions that do not define and agree ownership and responsibility upfront run the risk of limited oversight and accountability, which is only exacerbated by fragmented data processes.
As demand for data increases, financial institutions without appropriate oversight and assigned stewardship of their data risk creating a “wild west” environment. A lack of accountability to drive standardisation and quality controls leads to stakeholders sourcing and transforming data as they see fit, undermining any effort to establish a golden source. This results in data being used for purposes for which it was never intended. Similarly, data adjustments and fixes can be difficult to implement without a data owner overseeing and driving these updates.
Ultimately, the effective consolidation of various output datasets – which enables capabilities such as a single view of the customer – is impaired by the lack of integration outlined above. Fragmented data processes lead functions within an institution to focus on their own objectives without considering the requirements of enterprise-wide services. For example, if one area fails to produce results at a sufficient level of granularity, its output cannot be leveraged by another – say, to build customer value management insights.
Similarly, finance’s month-end close process becomes a reflection of the data architecture that underlies it. Continual maintenance and monitoring through reconciliations, controls, and manual adjustments place chronic strain on deadlines and staff, and ultimately reduce the function’s strategic value.
Given the commonality of data for finance and risk, as well as their established symbiotic relationship, finance and risk integration has become a critical initiative to embed and enforce alignment and collaboration to address these pain points.
Finance and risk integration primarily aims to reduce data and process duplications, overlap, and other inefficiencies that result from a siloed data environment. While there is no standard solution that should be applied, certain principles should be followed to address the issues detailed above and the various concerns that regulators have raised.
Scope: Integration Priority by Risk Type
1. Credit Risk
2. Liquidity Risk
3. Market Risk (Banking Book)*
4. Market Risk (Trading Book)*
Exclude Operational Risk due to its variable requirements and metrics
* Must consider reconciliation frequency disparities: Banking Book – monthly; Trading Book – daily
Stakeholders often assume that the only answer to an integrated risk, finance, and treasury landscape is to centralise all processes, but this is not necessarily the case. Although centralisation of data sourcing is not a prerequisite for finance and risk integration, it can remove the complexity of managing multiple, fragmented data marts across the various datasets, which include customer, trade, product, reference, economic, transactional, and account data. Institutions can then make use of a common data landing zone and data quality layer, as well as reduced break points, either through a subledger or a common data repository. The intended solution should be able to integrate finance and risk output datasets to facilitate the creation of reconciled and aligned data for use in customer valuation analytics. Tagging general ledger, cost centre, and account attributes across datasets at their most granular level allows powerful insights through customer value management and advanced analytics to be built off a solid foundation, as illustrated in the sketch following the list below.
Solves for:
Disparate Data Solutions
Adjustments and Corrections
Reconciliations and Controls
Data Consolidation and Insights
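As a minimal sketch of the tagging principle described above – the field names are illustrative assumptions, not a prescribed schema – each record in the common repository carries general ledger, cost centre, and account attributes at the most granular level, so that any downstream function can aggregate along whichever dimension it needs:

```python
from dataclasses import dataclass

@dataclass
class GranularRecord:
    """One transaction-level record in a common data repository, tagged
    with the attributes that finance, risk, and customer analytics each
    aggregate by."""
    transaction_id: str
    customer_id: str
    gl_account: str    # general ledger attribute
    cost_centre: str   # cost centre attribute
    product_code: str
    amount: float

record = GranularRecord(
    transaction_id="T-001", customer_id="C-123",
    gl_account="1100-Loans", cost_centre="CC-Retail",
    product_code="MTG", amount=250_000.0,
)
```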
Data interpretations and classifications must be defined and agreed upon in order for data to be viewed and utilised consistently across the organisation. Data that is critical to finance and risk processes and reporting (“golden sources”), along with any transformations, should be identified, defined, and explained through documented metadata and business information models, while periodic reviews and appropriate sign-off should be enforced to keep this documentation relevant. A minimal metadata sketch follows the list below.
Solves for:
Reconciliations and Controls
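One lightweight way to capture this – a sketch only, with illustrative fields rather than a prescribed standard – is to record each golden source and its transformations as structured, version-controllable metadata that can be periodically reviewed and signed off:

```python
# A minimal, version-controllable metadata entry for a golden source.
# All field names and values are illustrative only.
golden_source_metadata = {
    "dataset": "counterparty_exposures",
    "definition": "Month-end drawn and undrawn exposure per counterparty",
    "owner": "Credit Risk Data Office",        # accountable owner
    "source_system": "loan_management_system",
    "transformations": [
        "FX conversion to reporting currency at month-end spot rate",
        "Aggregation from facility to counterparty level",
    ],
    "last_reviewed": "2024-06-30",             # periodic review cadence
    "signed_off_by": "Head of Credit Risk Reporting",
}
```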
Although the application of month-end adjustments is often a pain point for organisations, it is not feasible to implement a solution that does not allow for any adjustments. When adjustments to month-end data are required, it is important that controls are put in place to ensure consistent application. In addition, a process should be designed to ensure that incorrect data being adjusted is fixed at source, avoiding a situation where the same adjustments are applied every month. By creating more transparency around the adjustments made, and by aligning adjustments between departments, the organisation is also better positioned to resolve data issues consistently at source, avoiding the stakeholder misalignment that is often a large hurdle to such fixes. A sketch of such a control follows the list below.
Solves for:
Adjustments and Corrections
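The following sketch assumes a central adjustments register (all names and fields are hypothetical): each adjustment is recorded once with a reason, applied identically wherever the data is consumed, and flagged to the source-data owner so the underlying issue is fixed rather than re-adjusted every month:

```python
import pandas as pd

# Central register: one documented entry per month-end adjustment.
adjustments = pd.DataFrame([
    {"record_id": "T-001", "field": "amount", "new_value": 250_500.0,
     "reason": "Incorrect FX rate in source feed",
     "raised_with_source_owner": True},  # triggers a fix at source
])

def apply_adjustments(data: pd.DataFrame, register: pd.DataFrame) -> pd.DataFrame:
    """Apply every registered adjustment identically, whichever function
    (finance, risk, or treasury) consumes the data."""
    out = data.set_index("record_id")
    for _, adj in register.iterrows():
        out.loc[adj["record_id"], adj["field"]] = adj["new_value"]
    return out.reset_index()

# Control: an adjustment not yet raised with the source owner is a breach.
unresolved = adjustments[~adjustments["raised_with_source_owner"]]
assert unresolved.empty, "All adjustments must be fed back to the data owner"
```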
The application of BCBS 239 – “Principles for Effective Risk Data Aggregation and Risk Reporting” – goes hand in hand with integration projects and is fundamental to their longevity and effectiveness. It forces organisations to confront the issues of enterprise-wide data ownership and responsibility, as well as to determine the controls required across key datasets and metrics – an administrative and intricate task. A robust control framework becomes critical to proper data governance, particularly for complex processes spanning multiple data sources and systems (for example, trade capture to FRTB and IRRBB, or loan management systems to IFRS 9’s ECL). Banks should aim to identify and document their operating model to determine the required controls and their ultimate effectiveness. Reconciliations, in particular reconciling to the general ledger, are of paramount importance to ensure alignment between finance and risk. Reconciliations should be formalised into a control framework guided by comprehensive documentation of the various end-to-end data processes.
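As a simple sketch of such a reconciliation control – the tolerance and figures are illustrative assumptions – a risk aggregate can be checked against its general ledger balance and any break above threshold escalated:

```python
def reconcile_to_gl(risk_total: float, gl_balance: float,
                    tolerance: float = 0.001) -> dict:
    """Compare a risk aggregate with its general ledger balance; breaks
    above the tolerance are escalated under the control framework."""
    difference = risk_total - gl_balance
    within = abs(difference) <= tolerance * abs(gl_balance)
    return {"difference": difference, "within_tolerance": within}

result = reconcile_to_gl(risk_total=1_002_500.0, gl_balance=1_000_000.0)
if not result["within_tolerance"]:
    print(f"Reconciliation break of {result['difference']:,.2f} - escalate")
```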
Data ownership, while a laborious and often tricky task, is fundamental to ensuring senior management is actively engaged in managing its data and enforcing oversight and responsibility. This is particularly important for key regulatory interpretations and ambiguous data processes, such as the calculation and reporting of Risk Weighted Assets, and for off-balance sheet data, such as the calculation of limit balances, which is further complicated by differing roll-ups for risk (by entity) and finance (by account). One way to address the issue of data ownership is to implement a principle of “publish what you own”: if data is transformed or enriched through a particular process, the enhanced data is owned by the stakeholders of that process, and they must take responsibility for publishing it for enterprise use. A sketch of this principle follows the list below.
Solves for:
Data Governance and Ownership
Reconciliations and Controls
Data Consolidation and Insights
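A sketch of the “publish what you own” principle – the interfaces and the enrichment step are hypothetical – in which the process that enriches a dataset attaches its ownership before publishing, making accountability explicit to every consumer:

```python
from dataclasses import dataclass, field

@dataclass
class PublishedDataset:
    name: str
    owner: str    # the team whose process transformed/enriched the data
    records: list = field(default_factory=list)

def enrich_and_publish(raw: list, owning_team: str) -> PublishedDataset:
    """The process that enriches the data takes ownership of the output
    and publishes it for enterprise-wide use."""
    enriched = [{**r, "risk_weight": 0.35} for r in raw]  # illustrative enrichment
    return PublishedDataset(name="rwa_inputs", owner=owning_team, records=enriched)

published = enrich_and_publish([{"exposure_id": "E-1"}], owning_team="Credit Risk")
print(published.owner)  # ownership travels with the published dataset
```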
Maturity Assessment: Each bank is unique; it has its own issues, culture, business processes, and data infrastructure. We therefore recommend that the starting point for any integration initiative be a thorough maturity assessment. Before a solution and framework can be designed, the intricacies of the bank’s current data landscape and processes should be identified and evaluated across its finance and risk functions to determine where the key pain points lie and what the future state will require.
By using the month-end close process as a starting point and then working backwards towards the data-originating source systems, banks can identify the various pain points that are dragging out their month-end close processes across multiple weeks and leverage these insights to design and plan for a future state.
Initiatives to resolve regulatory reporting reliability issues and misalignment between finance and risk are common within banks. However, many such projects do not deliver adequate results due to capacity constraints, loss of momentum and, ultimately, a lack of delivery. We recommend that project managers and key stakeholders consider the following when launching and managing a project of this nature:
Finance and risk integration is an opportunity to analyse, optimise, and simplify rather than battle with the complexity of a full transformation initiative.
In its thematic findings on the reliability of regulatory reporting, the PRA raised the issue of a lack of strategic investment in regulatory reporting data infrastructure, which it determined was driven by a culture of prioritising tactical fixes over strategic ones. While the thematic findings focus primarily on COREP, with the regulator commenting that banks have historically prioritised financial reporting, many of the findings can be addressed through finance and risk integration initiatives. This is particularly important given the PRA’s call for oversight of front-to-back and cross-functional processes.
Inadequate reconciliations were also raised by the PRA, which confirmed the need for formalised and comprehensive processes to reconcile capital and risk data to the general ledger.
Key interpretations and judgements have been hard-coded into risk processes without frameworks in place to manage changes. As it stands, a change requires maintenance across various ETL processes and risks downstream misalignment. Finance and risk integration looks to address this pain point through formalised governance frameworks and, where appropriate, centralisation of ETL processes, as sketched below.
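A sketch of this remediation – the mapping, version, and approver fields are purely illustrative – moves a hard-coded interpretation out of individual ETL processes and into a single governed, versioned reference, so a change is made once under change control:

```python
# Before: the interpretation is hard-coded inside each ETL process, e.g.
#   risk_weight = 0.35 if product == "MTG" else 1.0

# After: one governed, versioned reference consumed by every process.
interpretation_register = {
    "version": "2024-06",
    "approved_by": "Regulatory Reporting Committee",  # governance sign-off
    "risk_weights": {"MTG": 0.35, "CORP": 1.00},
}

def risk_weight(product_code: str) -> float:
    """Single lookup used by all downstream ETL; a change to the
    interpretation happens once, under change control."""
    return interpretation_register["risk_weights"][product_code]
```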
The various PRA findings across governance, data infrastructure, and controls relate closely to BCBS 239 – in particular, its principles on governance and on data aggregation and reporting capabilities. Financial institutions can apply a maturity assessment, as described above, to thoroughly assess and document* their end-to-end risk processes and set corrective measures that improve the reliability of regulatory reporting and enhance how risk processes and data integrate with finance.
*Documentation was highlighted repeatedly across process mapping, key regulatory interpretations, controls, model management, and manual interventions.
At Monocle, we have over 20 years of expertise in finance and risk integration. We are focused on delivering long-term, customised solutions to our clients and, for this reason, we align our finance and risk integration approaches to each client’s specific pain points and characteristics. Our expertise includes:
Comprehensive enterprise-wide maturity assessments that accurately identify, define, and document the current state of finance and risk integration, including pain points and inefficiencies.
Technical expertise regarding data and process architectural design, with consideration of data governance, including BCBS 239, as well as process automation and optimisation, process flow management and reporting optimisation, and data visualisation.
Programme and project management with experience across various finance and risk integration initiatives, including subledger implementation, common data repository implementation, data governance and control design and embedment, as well as extensive process optimisation.
Monocle’s sustained presence in the banking industry in the United Kingdom, Europe and across Southern Africa ensures we are well acquainted with each of our clients’ respective infrastructure and organisational structure. This competitive advantage allows us to drive and accomplish integration initiatives that inherently require significant collaboration across functions and their individual requirements.
Monocle is one of the largest independent management consulting firms in South Africa specialising in banking and insurance. Since our establishment in 2002, we have worked with industry-leading banks and insurance companies world-wide.
We design and execute bespoke change projects, from start to finish, bridging the divide between business stakeholders’ needs and the complex systems, processes and data that sit under the hood. We offer several unique capabilities to our clients, which have been forged over time through the combination of a highly specialised skillset and extensive experience working with the systems, processes and people that are at the heart of the financial services industry.