Brief

Tackling AI's Unintended Consequences

Whether or not they know it, nearly everyone has had an experience that exposes just how dependent we have become on artificial intelligence (AI). It often comes in the back seat of a car.

That’s where I was a few months ago, sitting in a rideshare from suburban Scarsdale, New York, to New York City. The driver had recently emigrated from Nepal, and his ability to find work so quickly neatly illustrates how ride-sharing platforms open economic opportunities. Once upon a time, my driver would have had to learn the area well before he could drive a customer from place to place. Now AI mapping had him hard at work just weeks after landing.

The New York metropolitan region is one of the most complicated urban areas in the world, however, and even with the map, my driver struggled. After a few illegal maneuvers and an unplanned stop for gas, he did get me to my destination, but I exited the car reflecting that I had once been able to expect a far more capable driver.

As AI infiltrates more of our experiences and organizations, it’s important to recognize not only its many benefits but its unintended consequences as well. AI protects us from known and unknown threats, helps us connect to one another, and provides better answers faster and cheaper than humans do. And, of course, it’s great that AI frees us from routine tasks such as reading a map. But are we recognizing and addressing the loss of human expertise that accompanies that new freedom?

For business leaders and others investing in the technology, certain high-gain questions can help them begin to grapple with leadership in the AI age: how to manage the unique properties and risks of AI, how to bring clarity and focus to its deployment, and ultimately how to put it to better use (see Figure 1).


Figure 1: High-gain questions for leadership in the AI age

A half-dozen risks should also inform those conversations.

Risk No. 1: AI can create hidden errors

Unlike traditional rules-based programming, AI models are statistical representations of the world. They provide answers based on their learning, but they are imperfect. The opacity of many AI models and their ability to quickly scale make it possible for real errors to remain hidden from view. We are familiar with chatbots unleashed on social media that pick up racist views from their data set because that example is on public display. But what about autonomous driving or flying systems? Their training data is growing exponentially and the models based on it are improving dramatically, yet errors in those algorithms continue to be discovered, sometimes only after loss of life.
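
To make the risk concrete, here is a minimal sketch in Python, with entirely hypothetical numbers, of how an error can stay hidden: a model that looks excellent in aggregate can still fail badly on a rare slice of cases that headline metrics never surface.

```python
import numpy as np

rng = np.random.default_rng(0)

# 10,000 "common" cases the model gets right about 99% of the time...
common = rng.random(10_000) > 0.01
# ...and 100 "rare" cases it gets wrong about half the time.
rare = rng.random(100) > 0.50

correct = np.concatenate([common, rare])
print(f"Overall accuracy: {correct.mean():.1%}")         # looks superb
print(f"Accuracy on the rare slice: {rare.mean():.1%}")  # the hidden failure
```

The practical discipline this suggests: audit an algorithm’s performance on meaningful slices of its decisions, not just in aggregate.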

Risk No. 2: AI can lead to a loss of skill, critical thinking and understanding

It’s not only new rideshare drivers who are in danger of becoming excessively reliant on AI. One Silicon Valley engineer recently said that his site’s recommendation algorithm means his team doesn’t have to think as much. Whether you run the finance department of a company that relies on algorithmic sales forecasts or you are a salesperson getting leads from one, it’s dangerous to lose your understanding of the fundamentals of your business and of what’s truly driving demand.

Risk No. 3: AI can open new hazards

Similar to human workers, algorithms are subject to manipulation. But while a worker is observed by management and makes relatively few decisions in the course of his or her day, an algorithm will make many decisions—often unseen. Spammers learned long ago how to get the best of machine learning systems, and there’s every reason to believe that hackers are only getting started on AI. Look at the election-season manipulations of social media newsfeeds or the cottage industry of search engine optimization. Algorithms can be and are being exploited. As algorithms take on broader roles—setting a price on an e-commerce site, determining a car insurance rate, hiring someone—cause for concern increases. Now managers must anticipate how an algorithm might be manipulated and adjust accordingly.
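
One defensive habit, sketched below in Python with made-up data, is to monitor the inputs an algorithm consumes for abrupt distribution shifts, an early signal that someone may be gaming it. The two-sample Kolmogorov-Smirnov test used here is one standard statistical check among many, not a prescription.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Historical distribution of a signal the pricing algorithm relies on...
baseline = rng.normal(loc=5.0, scale=1.0, size=5_000)
# ...versus today's traffic, which has shifted suspiciously upward.
today = rng.normal(loc=7.5, scale=1.0, size=5_000)

statistic, p_value = ks_2samp(baseline, today)
if p_value < 0.01:
    print(f"Input distribution shifted (KS statistic {statistic:.2f}); flag for review.")
```

A shift alone does not prove manipulation, but it tells a human where to look.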

Risk No. 4: AI can institutionalize bias

Most AI machines learn by studying examples in curated data sets. AI experts may understand how an algorithm reached its conclusion, or it may be a black box that is mysterious even to experts in the field. This lack of transparency raises concerns about bias, since any algorithm trained on historical data will logically come to conclusions that reflect bias present in that data. In the mortgage industry, for instance, lenders had better be certain their algorithms conform to regulations prohibiting discrimination based on characteristics such as race and gender. Bias does not have to be so clearly wrong for it to lead to bad outcomes, either. In customer analytics, for example, an algorithm trained on data culled from an existing customer base will favor those customers’ preferences. But what about the tastes of the many people not yet served? With algorithms now involved in everything from hiring to the delivery of social services to the needy, one very real risk is simply repeating how things have always been done.
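
What checking for institutionalized bias can look like in practice: below is a minimal Python sketch, with hypothetical decisions and groups, that applies the widely used "four-fifths" screening rule to compare an algorithm’s approval rates across groups. It is a rough screen, not a legal test.

```python
import numpy as np

# Hypothetical model decisions (1 = approved) for applicants in two groups.
approved = np.array([1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0])
group = np.array(["A"] * 7 + ["B"] * 7)

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Approval rates: A {rate_a:.0%}, B {rate_b:.0%} (ratio {ratio:.2f})")
if ratio < 0.8:  # the four-fifths screening threshold
    print("Potential disparate impact; examine the training data and features.")
```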

Risk No. 5: AI can contribute to a loss of empathy

As more companies use bots and other machines for consumer interactions, organizations run the risk of losing touch with their customers. To executives, the concerns of workers managed by algorithms, as rideshare drivers are today, may feel similarly remote. Distance could lessen managers’ empathy and ability to listen to either group, but it doesn’t have to. Though I fly often for work, I have taken just one Virgin Atlantic flight over the past few months, and it was delayed 45 minutes. When I landed, the airline’s systems had already spotted the issue and sent an email apology and a voucher for a discount on my next flight. Rather than irritating me, the experience left me impressed.

Risk No. 6: AI can cause a loss of control

The convenience and speed of AI-driven decision making are attractive, but sometimes humans need to be involved. There is no clearer example than the integral role that human drone pilots play in the remote bombing of military targets. Today, it’s accepted that human judgment must be involved, but as we grow more accustomed to this technology, it is plausible that this could change. Will that be OK? Many such difficult questions will arise around AI’s erosion of human control. It will be essential that leaders grapple with them.

Governance matters. Top executives need to be involved in establishing the goals and guardrails around the AI that is increasingly enabling their businesses. For decades, financial services organizations that rely heavily on credit algorithms have been expected to stringently govern risk management; a similar elevation of AI governance may now be needed for organizations broadly embedding the technology.

Every materially important algorithm in the business should also have a product manager—a human reviewing and testing the algorithm, auditing its outcomes, and assessing and improving its performance.
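
Part of that product-management work can be automated. Here is a minimal Python sketch, with hypothetical forecasts and a made-up tolerance, of the kind of recurring outcome audit such an owner might run:

```python
import numpy as np

def audit(predictions, actuals, tolerance=0.10):
    """Pass when mean absolute error stays within a share of actual volume."""
    predictions = np.asarray(predictions, dtype=float)
    actuals = np.asarray(actuals, dtype=float)
    mae = np.mean(np.abs(predictions - actuals))
    return mae <= tolerance * actuals.mean()

# Hypothetical weekly sales forecasts versus what actually happened.
forecast = [120, 135, 150, 160]
actual = [118, 140, 149, 230]  # the final week diverges sharply

print("Within tolerance" if audit(forecast, actual) else "Escalate for human review")
```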

Strong, human listening systems are essential. The key constituents of an important algorithm must be regularly solicited for input and feedback, whether they are customers or employees or other partners. Empathy must guide the management and deployment of any algorithm. The organization must be able to recognize when a reset is necessary.

How this plays out for any organization depends on the industry and context. Each will have its own particular AI opportunities and potential pitfalls. There are, however, certain questions that can help any executive or board member stimulate the right conversation around AI:

  • How well does this algorithm match the essential tenets of our business? How will it work with those key principles?
  • Who is going to ensure that we capture the benefits of its deployment and avoid the downsides?
  • Who are the key constituents affected by this algorithm? Are we soliciting their feedback now? How will we be sure we continue to seek their insight in the future?
  • Who is going to operate the algorithm? What are their goals for increasing its impact and innovation?

The pervasiveness and scalability of AI mean that algorithms can rapidly affect millions. Competition and progress require its use, but technology is neither necessarily moral nor intrinsically improving. That’s up to the humans who leverage it. In a world shaped by AI, human leadership matters more than ever.

Chris Brahm is a Bain partner based in the San Francisco office; he leads the firm’s Global Advanced Analytics practice.
