There has been a great deal of activity around the globe in recent months on ethical artificial intelligence (AI). A forthcoming landmark policy in the European Union will likely have significant ramifications across industries, including telecommunications. Hyperscalers and innovators are revising their ethical AI principles and mobilization strategies. Financial investors are asking their portfolio companies to report how they plan to tackle ethical AI, part of investors’ risk management amid a larger push on environmental, social, and governance (ESG) initiatives and diversity, equity, and inclusion. Similarly, auditors are increasingly expected to include ethical AI risks as part of their evaluations.
Leading telecom executives recognize the growing urgency to develop an ethical AI strategy. It’s not only a matter of complying with expected regulatory requirements (and avoiding potentially significant financial penalties). Ethical AI also represents a chance for fast-moving carriers to differentiate themselves in the eyes of consumers, employees, and investors, as well as to advance the company’s business and ESG strategies. Given the complexity of delivering ethical AI across functions, and the importance of data and AI to succeed in a fast-changing industry, leaders would be wise to start mobilizing now.
Ethical AI risks
Over the years, AI has gone from strength to strength. Through AI, complex scientific problems such as protein folding have finally been solved. Innovative, general-purpose machine learning algorithms such as AlphaZero can surpass, within hours of training, the best specialized computer systems in their domains. Fueled by abundant data, computing power, and software democratization, AI has become ubiquitous for companies across all industries. In the telecom industry, early adopters have seen significant improvements in return on investment related to such core activities as network planning and churn reduction.
Yet there are inherent risks associated with the powerful capabilities of AI. Take, for example, unwanted bias. The key strength of machine learning technologies—the ability to independently learn from observations—implies that any pattern in the observed data influences the results of these algorithms. Typically, this is exactly what developers want AI to do: find, for instance, the next-best cross-selling offer for an individual telecom customer, based on insights gleaned from network data, social media, or customer service interactions.
The problem is that this underlying data can be inherently biased in ways that are often unknown. Machine learning algorithms readily pick up and amplify these biases, resulting in unwanted decision outcomes. For example, when tasked with identifying the recruiting candidates with the best chances of succeeding in an open position, AI technologies have tended to shortchange women. Why? The data used to train the algorithms is often dominated by profiles of successful men.
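To make the mechanism concrete, here is a minimal, purely illustrative Python sketch (all data invented, and far simpler than any real recruiting system): a naive frequency-based scorer "trained" on historically skewed hiring data ends up ranking candidates by group membership alone.

```python
from collections import Counter

# Hypothetical historical data: past hires, dominated by men ("M")
historical_hires = ["M"] * 80 + ["F"] * 20

# "Training": the model learns a score per group from raw frequencies
counts = Counter(historical_hires)
total = sum(counts.values())
learned_score = {group: n / total for group, n in counts.items()}

# Two candidates with identical qualifications, differing only by group
score_m = learned_score["M"]  # 0.8
score_f = learned_score["F"]  # 0.2

# The scorer prefers the male candidate purely because of the historical
# imbalance: the pattern in the data becomes the bias in the decision.
print(score_m > score_f)  # True
```

Real models are far more sophisticated, but the failure mode is the same: whatever regularities exist in the training data, intended or not, are encoded into the model's outputs.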
There are many more well-documented examples of what can go wrong with AI. Ethical AI has quickly evolved into a complex management challenge for executives and boards, as they wrestle with how best to scale AI in their organizations given the goals of:
- Equality. No harmful unintended bias against particular groups of people based on, for example, gender, race, ethnicity, age, or disability
- Robustness. Cybersecure and resilient AI systems and data value chains
- Privacy. Data governance aligned with consumers’ rights and interests
- Explainability. Clear understanding of (repeatable) results from AI systems
- Transparency. Disclosure of when AI is used and of the algorithms’ underlying operating principles
Why care now
The issues associated with ethical AI are complex, and so are the solutions. Why should telecom business leaders and executive boards pay attention to this topic now?
First, AI is everywhere, changing fast, and critical for telcos to master. Narrowly focused AI has evolved into a core capability that can improve competitiveness for companies in all industries. Software democratization fuels the pace of innovation and proliferation of AI use cases in products and services. Exponential growth of multifaceted data assets further propels AI algorithm capacities, notably in the telecom industry, given its many customer and Internet of Things touchpoints. Frequent training and retraining of AI models increases their exposure to new ethical risks.
Second, AI is receiving increased scrutiny from regulators worldwide. Policymakers are introducing unprecedented legal requirements that will affect telcos as critical infrastructure players. One example is the proposed EU AI Act, which lays out a fine structure for violating responsible AI guidelines that could imply, in theory, noncompliance penalties of up to 6% of annual revenue. Different regulatory approaches across regions are resulting in fragmented compliance strategies and unclear timelines, and the potential consequences of policy changes are contributing to uncertainty among business leaders.
Third, company stakeholders are becoming more sensitive to the issue. As AI increasingly affects day-to-day life, and incidents of unintended consequences become known, many customers’ trust is faltering. Employees are also questioning the role of their companies in ensuring “fair” use of AI. Investors are asking for sustainable business strategies and clear roadmaps for bridging digital and ESG transformations.
Lastly, AI is growing more complex. Digitalization is driving a data paradigm shift from select internal silos to massive crowdsourcing of data powering new products and services. The AI delivery stack is delayering; more data and software vendors are involved in AI production. Control points are blurring, making accountability across organizations less clear. There’s a growing need to establish distinct responsibilities with AI delivery partners and system integrators. More companies are considering adopting enterprise AI governance solutions to assist in automating compliance.
Implications for telcos
While more companies, including some telcos, have been highlighting their emphasis on ethical AI in public statements, few have really begun to translate those goals into concrete strategies, governance systems, or operating guidelines. No one has all the answers yet, but telecom executives recognize they can’t wait to act until they have a “perfect” solution for the entire organization. Leading companies start by focusing on four areas.
- Value at stake. Most leaders understand the importance of directing their ethical AI capabilities to use cases with the highest potential value. However, for many boards, the value associated with ethical AI (both risks and upside potential) is unclear. What will companies legally be required to do? What are the costs of noncompliance? What do customers, shareholders, employees, and business partners expect? Can firms differentiate themselves through leadership in ethical AI? Think of talent that turns away from large tech companies because of concerns about the use of data and AI.
- Sources of value. Given the many potential use cases for AI, it’s not always immediately clear where executives should focus first. Are they aware of all of the instances where AI is currently being deployed or considered within the organization? Where is the highest value at stake, and how complex will it be to deliver? Some telcos are prioritizing ethical AI investments by mapping their potential use cases, organized by the value at stake, complexity to deliver, and annual financial impact (see Figure 1).
Figure 1: How one telco assessed the value and complexity of critical AI use cases
For example, using AI in talent recruiting will likely be considered a potentially “high risk” use case under the EU AI Act, invoking a series of regulatory requirements. The same goes for the management of “critical infrastructure,” which might well include parts of telecom networks. It remains to be seen how the AI Act will define an acceptable level of transparency, including the level of testing mandated for various AI activities to ensure they’ve built in adequate ethics.
- How to mobilize. Enacting an ethical AI mission requires the expertise of employees across a wide variety of functions, including technology, strategy, legal, risk and compliance, marketing, and public affairs. Where should companies start small, learn, and then scale? Telecom executives need to think hard about evolving their operating model, allowing frontline leaders to easily tap this expertise while not undermining speed of innovation or time to market. Specialized talent will need to be recruited in some areas, but many more existing employees will need training to identify opportunities to deploy AI in business operations and to manage the risks that accompany those efforts.
- Automation. Using fast-paced AI at scale while aligning day-to-day activities with ethical principles requires strong automation of workflows. Which governance software systems can companies employ to ensure ethical AI quality with minimal or no human oversight? What do companies need to do themselves, and what can be asked of technology delivery partners? How can companies seamlessly and efficiently react to AI policy and technology changes across multiple markets?
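The use-case mapping described under "Sources of value" can be sketched in a few lines of Python. This is a hypothetical illustration, not any telco's actual method: the use cases, value figures, and complexity scores are invented, and the priority metric (value at stake divided by delivery complexity) is one simple choice among many.

```python
# Invented candidate AI use cases with an estimated value at stake
# (in $M per year) and a 1-5 delivery-complexity score.
use_cases = [
    {"name": "network planning",  "value_musd": 40, "complexity": 3},
    {"name": "churn reduction",   "value_musd": 25, "complexity": 2},
    {"name": "talent recruiting", "value_musd": 10, "complexity": 4},
]

# Simple priority score: value per unit of delivery complexity.
for uc in use_cases:
    uc["priority"] = uc["value_musd"] / uc["complexity"]

# Rank use cases so the highest value-to-complexity ratio comes first.
ranked = sorted(use_cases, key=lambda uc: uc["priority"], reverse=True)
print([uc["name"] for uc in ranked])
# ['network planning', 'churn reduction', 'talent recruiting']
```

In practice, a mapping like Figure 1 would add further dimensions, such as the regulatory risk classification of each use case under the EU AI Act, but even a simple ranking forces the explicit value-versus-complexity conversation the article describes.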
Ultimately, using AI technologies in line with ethical principles is a rapidly evolving capability at the core of digital transformations, ESG ambitions, and the tectonic shifts disrupting the telecom industry. Although regulations trigger the need for carriers to act now and start addressing the complex delivery challenge, doing this well and fast can also earn the trust of consumers, employees, and investors and turn ethical AI into a competitive advantage for telcos.
The authors wish to thank Lukas Droege for his contributions to this article.