Brief

Adapting Your Organization for Responsible AI

How a comprehensive, responsible approach to AI helps leading companies accelerate and amplify the value they get from the technology.

At a Glance
  • Leading companies should commit to managing six specific system risks and to ensuring that key responsible AI commitments, governance processes, technology, and culture are in place.

In short order, generative artificial intelligence has engendered both remarkable enthusiasm and significant concern. While stakeholders are excited to harness the power of this new technology to improve products, productivity, and competitiveness, they also have important questions about ethics, bias, data privacy, and job loss, and regulation is beginning to take shape on these points.

For companies, these issues can only be addressed with a comprehensive approach to responsible AI. This approach must include organizational measures, such as assigning clear roles and responsibilities, as well as technology measures, such as model testing and monitoring.

Companies with a comprehensive, responsible approach to AI earn twice as much profit from their AI efforts, in our experience (see Figure 1). These leaders aren’t afraid of possible risks, and they aren’t tentative about what they pursue and deploy. Rather, they quickly implement use cases and adopt sophisticated applications, accelerating and amplifying the value they get from AI. Importantly, they also identify the uses of AI they will not pursue—at least until the technology develops further or their organization is mature enough to manage those uses.

Figure 1
An effective approach to responsible AI doubles its profit impact

It’s possible for any company to develop this kind of responsible approach. Generative AI technologies are new, but machine learning and AI are not. The financial services industry was the first to formally establish the practice of model risk management, through the US Office of the Comptroller of the Currency’s guidance on model validation (first published in 2000) and the interagency guidance on model risk management (2011). These policies fostered good practices for developing robust, documented, and validated models and for implementing effective challenge and monitoring procedures. Separately, throughout the 2010s, leading technology companies such as Google evolved machine learning testing and operations practices, which established additional understanding of how to ensure the security, accuracy, and stability of machine learning systems.

Beyond long-established risks such as bias, explainability, and malicious use, generative AI brings additional risks, including hallucinations, training data provenance, and ownership of output. Building on the experiences of the financial services and technology industries, organizations should make six commitments to managing AI system risks (see Figure 2).

Figure 2
Responsible AI commitments need to span the most critical areas of risk across an organization and within each application

Enabling responsible AI

A comprehensive approach to responsible AI has three components.

  1. Aspirations and commitments. To demonstrate to their stakeholders that they will be responsible stewards, companies must clearly explain how they intend to manage the risks from these new technologies. This starts with acknowledging the new and enhanced challenges—that they include not only technology questions but also equity and societal concerns, and that they require proactive attention, disclosure, and communication.
  2. Governance processes, roles, and technology. Companies will need to augment existing approaches with new technology and practices that address the unique life cycle of AI systems. Data governance and management practices will need to cover new security, privacy, and ownership challenges, for example. Roles, accountabilities, forums, and councils will all need to be revised and extended to effectively monitor these new systems and how they are used. This could include appointing a chief AI ethics officer and an AI ethics council.
  3. Culture. Given the broad impact and rapid advancement and adoption of generative AI technologies, organization-wide training and engagement covering their use—as well as the organization’s aspirations and commitments—will be needed. By ensuring these efforts are iterative, a company can nurture a culture of vigilance and learning that continuously improves its ability to use AI responsibly.

We’ll discuss each of these components in detail below.

Matching aspirations and commitments to risk tolerance

Stakeholders, including customers, employees, shareholders, investors, regulators, and communities, are keen to see organizations explore AI solutions in a responsible manner. They expect companies to invest in ensuring that their systems are secure, accurate, and unbiased, that they are used ethically, and that they are designed with potential future regulations and compliance requirements in mind.

Of course, each organization will tune its commitments to its capabilities, potential exposures, and the specific requirements of its markets and locations. Strategy and risk tolerance will determine which AI uses a company develops and, in turn, the investment those uses require and the value they can be expected to generate. For example, some organizations in regulated industries have steered clear of direct customer-facing applications of generative AI until they better understand the technology.

Building effective organizational governance

Once companies have articulated their commitments and set out to pursue opportunities to deploy AI-powered products and solutions, they need to make sure that the appropriate structures, policies, and technology are in place.

Structure and accountabilities. The impact of responsible AI spans business units and corporate functions. This pervasiveness means it is crucial to build cross-functional governance that includes key stakeholders from relevant groups. Clear roles and responsibilities must be communicated and understood by all, and business leaders must be accountable for integrating responsible AI into their offerings and operations. This will help develop an ownership mindset across the organization, but companies may still wish to elevate a single organization-wide leader for responsible AI to ensure clear accountability for outcomes.

Nearly a quarter of the Fortune 20 already have one. Microsoft, for example, has a dedicated chief responsible AI officer within its Office of Responsible AI who is charged with defining the company’s approach and empowering employees to become active champions of responsible AI. This executive collaborates with Aether, Microsoft’s internal AI and ethics committee, which performs research and provides recommendations on significant responsible AI issues.

Most organizations will also need to review their existing governance mechanisms, including those for technology, data, vendors, and information security, and identify any necessary changes to address the new and amplified risks of AI.

Policies and procedures. The right policies and processes, whether new or augmented, will codify responsible AI expectations and guardrails at each level of the organization. Enterprise policies and procedures, such as codes of conduct for using services like ChatGPT in the enterprise and requirements for sourcing foundation model providers, will help ensure these technologies are deployed responsibly.

Many companies will find value in establishing or updating a clear code of conduct, either through the adoption of broad digital citizenship or data responsibility codes, or through more specific codes of ethics for AI. These might include an AI acceptable use policy that outlines specific dos and don’ts, for example, or that defines the risk assessments to be done when assessing individual AI use cases. Microsoft’s Responsible AI Standard defines requirements and provides concrete and actionable guidance, tools, and practices employees can use to apply responsible AI principles in their daily work. Alongside this standard, Microsoft has established a Responsible AI Impact Assessment, which evaluates the effects an AI system might have on people, organizations, and society. Other companies, such as Telefónica, have similar processes that allow employees to develop new AI systems with confidence that they are adhering to the company’s greater responsible AI aspirations.
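To make the idea of a codified use-case assessment more concrete, here is a minimal sketch of how a team might score a proposed AI use case against a few risk dimensions and route it to the right level of review. Everything in it is hypothetical: the dimensions, weights, thresholds, and names (`UseCase`, `assess_risk`) are illustrative only and are not drawn from Microsoft's Responsible AI Standard, its Impact Assessment, or Telefónica's process.

```python
# Hypothetical sketch of a lightweight AI use-case risk assessment.
# Dimensions, weights, and review thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    customer_facing: bool      # direct exposure to customers
    uses_personal_data: bool   # privacy and data-protection exposure
    automated_decisions: bool  # decisions made without human review
    regulated_domain: bool     # e.g., lending, healthcare, hiring

def assess_risk(uc: UseCase) -> str:
    """Map a use case's risk factors to a review tier."""
    score = (
        2 * uc.customer_facing
        + 2 * uc.uses_personal_data
        + 3 * uc.automated_decisions
        + 3 * uc.regulated_domain
    )
    if score >= 6:
        return "high: requires ethics council review before development"
    if score >= 3:
        return "medium: requires impact assessment and sign-off"
    return "low: proceed under standard acceptable use policy"

if __name__ == "__main__":
    print(assess_risk(UseCase("internal meeting summarizer", False, False, False, False)))
    print(assess_risk(UseCase("automated loan pre-screening", True, True, True, True)))
```

The value of even a simple rubric like this is consistency: every proposed use case passes through the same questions, and the answers determine how much scrutiny it receives before development begins.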

Technology platforms and frameworks. Modern AI systems are too complex and dynamic to govern through manual efforts alone. Effective AI technology platforms and application development frameworks are critical to enabling the rapid development and deployment of AI technology while embedding controls required to deliver on responsible AI commitments. An AI platform comprises reusable and scalable AI components and services with built-in guardrails that make it possible for companies to deploy AI systems safely and rapidly. Application development frameworks accelerate the adoption of best practices that enable AI system developers to leverage standardized approaches to automation, testing, evaluation, and monitoring across the AI system life cycle. They also facilitate long-term system maintenance and make the performance of AI systems centrally visible, increasing confidence that the company is meeting its responsible AI commitments.
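As a concrete illustration of a built-in guardrail, the sketch below wraps simple policy checks around a model's output and logs each decision so that system behavior is centrally visible and auditable. It is a minimal, hypothetical example: the names (`check_output`, `GuardrailResult`) and the checks themselves are illustrative assumptions, not the API of any real platform.

```python
# Minimal illustrative sketch of a platform-level output guardrail.
# All names, checks, and thresholds are hypothetical.
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("responsible_ai.guardrails")

BLOCKED_TERMS = {"ssn", "credit card"}  # stand-in for a real sensitive-data check
MAX_RESPONSE_CHARS = 2000               # stand-in for an output-size control

@dataclass
class GuardrailResult:
    allowed: bool
    reasons: list[str] = field(default_factory=list)

def check_output(text: str) -> GuardrailResult:
    """Run simple policy checks and log the outcome for central visibility."""
    reasons = []
    lowered = text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        reasons.append("possible sensitive data in output")
    if len(text) > MAX_RESPONSE_CHARS:
        reasons.append("response exceeds configured length limit")
    result = GuardrailResult(allowed=not reasons, reasons=reasons)
    # Central logging is what makes guardrail behavior visible over time.
    log.info("guardrail_check allowed=%s reasons=%s", result.allowed, result.reasons)
    return result

if __name__ == "__main__":
    print(check_output("Here is a summary of the quarterly report."))
    print(check_output("The customer's SSN is on file."))
```

Because every response passes through the same checkpoint, application teams inherit the controls for free, and the central log gives governance teams the visibility the brief describes.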

Hallmarks of a responsible AI culture

Successful AI requires embedding responsibility in the organization’s culture. Well-governed, high-performing companies ensure that

  • responsible AI principles are ingrained in their organizational mindset;
  • leaders understand the organization’s existing capabilities and only take on risks they are capable of managing and mitigating;
  • managers are held accountable for cross-functional collaboration on the policies, processes, and governance for responsible AI;
  • team members are provided with the resources and skills to use AI tools effectively and responsibly; and
  • the organization communicates, monitors, and reinforces its commitments to responsibility and maintains an active dialogue with its stakeholder groups on the balance between risks and benefits.

This is complicated terrain to navigate, but generative AI can’t be ignored. The scope of the technological and economic change it is likely to bring is just too great.

For more on the potential of AI, see “You’re Out of Time to Wait and See on AI” from Bain’s Technology Report 2023.
