Generative AI in Financial Services: Eight Risks and How to Overcome Them

Leading companies can control the chaos of generative AI requests through smart risk segmentation.


Within most banks, insurers, and other financial services companies, risk and control groups have been overwhelmed by a large and growing volume of requests to deploy generative AI across different use cases. Leading companies have found that an effective response is to categorize the risks and work with other departments to develop mitigation strategies for each category.

Eight categories of risk

Segmenting requests this way allows each category of risk to be handled by the right experts. Vendor risk, for instance, can be tackled through controls developed by a cross-functional team of IT, procurement, and risk professionals.

Here's a quick look at each risk and how leading companies are addressing them.

Risk No. 1: Data integrity is compromised

Inadequate or inappropriate practices, strategies, or frameworks on data ownership, security, and privacy may compromise data integrity.

Mitigation tactics: Implement well-governed data management practices that monitor how data is used and protect data privacy.

Risk No. 2: Model misuse leads to hallucinations

Decisions are based on inaccurate or misused models, or on AI models that lack transparency.

Mitigation tactics: Apply regulatory expectations on model risk to AI use cases based on criticality and materiality.

Risk No. 3: Vendor issues are not addressed

Vendors fail to adhere to contractual stipulations, which can disrupt operations.

Mitigation tactics: Conduct due diligence on technology and software partners, onboard and monitor them, and ensure service-level agreements are in place.

Risk No. 4: Incomplete technology integration

AI models do not fully integrate with existing technology.

Mitigation tactics: Embed controls into IT and architecture, enhance AI governance, and improve process control with enhanced testing.

Risk No. 5: Information security failures

Shared access to AI models may compromise data security when there are limited access controls and a lack of filters.

Mitigation tactics: Enhance identity and access management and use a virtual private cloud to protect risk data and models.

Risk No. 6: Missed legal and regulatory requirements

Businesses fail to comply with laws and regulations, or bias embedded in AI model training compromises output.

Mitigation tactics: Cross-functional teams should carefully select use cases. Identify and test data elements for potential bias.

Risk No. 7: Reputational damage

Negative stakeholder perception may result in a loss of trust or value.

Mitigation tactics: Create a stakeholder management plan, with program management and change management. Define escalation protocols and prepare communications scripts.

Risk No. 8: Strategic misalignment

Failing to mobilize around AI can reduce shareholder value, making non-adoption a strategic threat.

Mitigation tactics: Create board awareness, a clear AI strategy, and a plan to capture value.

This set of mitigation strategies will reduce most of the risks, leaving only a small set of residual risks for the company to deal with. Spending time up front on mitigation strategies is far more effective than responding to each new AI request with ad hoc control measures.
