How do we hold AI accountable?

Test case: an Uber driverless car waits in traffic during a test drive in San Francisco (Photograph by Eric Risberg/AP)

A self-driving car operated by Uber struck and killed a woman on Sunday in Tempe, Arizona.

Few details have emerged, but it’s reportedly the first pedestrian fatality involving a self-driving vehicle.

In January, a Pittsburgh car crash sent two people to the hospital; the accident involved a self-driving Ford Fusion operated by Ford-backed Argo AI.

The Fusion was hit by a truck that ran a red light; at the last second, the human back-up driver reportedly switched the car out of autonomous mode and took control of the wheel.

Could these crashes have been avoided?

If a human had been driving either of those cars, she could have given police an account of what went wrong, her level of fault could have been assessed, and she could have given her first-person thoughts on how a similar accident could be prevented. But “asking” the artificial intelligence under the hood is a lot more challenging.

AI is already a growing presence in our world, used for tasks such as analysing MRIs, giving financial advice and even composing music. Among the many questions about the ethics and governance of using it, the most important may be this: Should AI be required to explain its decisions?

To make these kinds of crashes a rarity, the software that propels self-driving cars — the “brains” behind this and other AI functions — must be interrogated, understood and improved upon. One way of doing this is by designing artificial intelligence so that it can describe, in human-readable terms, the reasons for a specific decision.

In other words, we need programs that can not only think like humans but also explain their thoughts as a human could.

In some cases, we can find out what an algorithm is doing by examining the logic it uses or by reverse engineering it from the sensor measurements going in and the driving decisions coming out. But other algorithms, especially those developed using advanced machine learning techniques, like deep neural networks, can be so complex that not even the designers fully understand how they make decisions.

Inside a deep neural network, complex inputs such as sensor data are processed by multiple layers of self-modifying algorithms that try to mimic the structure of the human brain. As a result, there is a complicated set of possible pathways that any given input could take through the system, and designers are not always able to determine, post hoc, which pathway was used.
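
To make the point concrete, here is a minimal sketch of such a network: a toy two-layer model with made-up weights and invented sensor values that turns a “sensor reading” into a braking decision. The hidden activations it computes along the way illustrate why the internal pathway is so hard to read.

```python
import numpy as np

# Invented "sensor reading": distance to an object (m), relative speed (m/s), lane offset (m).
x = np.array([12.0, -3.5, 0.2])

# Two layers of weights standing in for a trained network (random here, purely illustrative).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 3))
W2 = rng.normal(size=(2, 8))

h = np.maximum(0.0, W1 @ x)   # hidden layer: eight numbers with no obvious human meaning
scores = W2 @ h               # scores for two candidate actions: "brake" vs "continue"
decision = ["brake", "continue"][int(np.argmax(scores))]

print("hidden activations:", np.round(h, 2))
print("action scores:     ", np.round(scores, 2))
print("decision:          ", decision)
# The hidden activations are simply the numbers this input happened to produce on its
# pathway through the layers; nothing in them says why the decision came out as it did.
```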

In this sense, trying to understand advanced AI is like trying to understand the inner workings of another person’s mind.

Unlike a human, though, AI isn’t necessarily able to explain its internal processes to an outsider — unless we build in the capacity to do so. Entrusting important decisions to a system that can’t explain itself presents obvious dangers.

Take the case of Eric Loomis, a Wisconsin man sentenced to six years in prison for eluding police while driving a car that had been used in a drive-by shooting. The judge’s sentence was based in part on a risk score for Loomis generated by COMPAS, a commercial risk-assessment tool used, according to one study, “to assess more than 1 million offenders” in the last two decades.

Loomis appealed his sentence, challenging the court’s use of the AI-generated risk score because it relied on a proprietary algorithm whose exact methodology is unknown. COMPAS is designed to estimate an individual’s likelihood of committing another crime in the future, but evidence suggests that it may be no better at predicting risk than untrained observers.

In addition, because they rely heavily on prior convictions, algorithmic risk scores can reinforce existing biases within the criminal justice system.

Without the ability to review and interrogate such algorithms, judges, lawyers and defendants can’t know whether a given risk score is fair or not. (Loomis lost his case in the Wisconsin Supreme Court on the grounds that the trial judge did not rely exclusively on the COMPAS report.)

Similar tools are also increasingly used to inform determinations about probation, parole and bail, as well as for non-criminal-justice decisions like approving loans and weeding through job candidates.

New AI-driven lending programs, for example, wade through massive amounts of personal data to help determine whether someone should be granted a loan and on what terms. These programs may feed all kinds of information, from a person’s text messages to their SAT scores, into complex algorithms to make loan evaluations.

Proponents of AI-driven lending argue that it will be more fair than traditional determinations based primarily on income and credit score, but sceptics are concerned that biases like the ones seen with COMPAS may infect these tools as well.

How do we hold artificial intelligence accountable — how do we assess the trustworthiness of decisions beyond our understanding?

In some cases, we may be able to rely on theoretical guarantees — mathematical or logical proof that AI is incapable of making a specific error — but such situations are rare. (If, for example, we knew and could account for all the ways in which stop signs and yield signs could appear on the road, we might be able to prove that AI would never or rarely confuse them.)
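
As a rough illustration of how such a guarantee could work, the sketch below enumerates a small, hypothetical space of sign appearances and checks a stand-in classifier against every case; because the space is finite and fully covered, the check amounts to a proof for that toy setting. The shapes, colours and classifier are invented for illustration only.

```python
from itertools import product

# Hypothetical, fully enumerated space of sign appearances: shape x colour x wear.
SHAPES = ["octagon", "triangle"]   # stop signs are octagonal, yield signs triangular
COLOURS = ["red", "faded_red"]
WEAR = ["clean", "dirty", "partially_obscured"]

def classify(shape, colour, wear):
    """Stand-in for the perception system: decide 'stop' or 'yield' from a description."""
    return "stop" if shape == "octagon" else "yield"

def ground_truth(shape, colour, wear):
    """The correct label for each appearance."""
    return "stop" if shape == "octagon" else "yield"

# Because the space of appearances is finite and fully enumerated, checking every
# case amounts to a proof that this classifier never confuses the two signs.
for shape, colour, wear in product(SHAPES, COLOURS, WEAR):
    assert classify(shape, colour, wear) == ground_truth(shape, colour, wear)

print("verified: no stop/yield confusion on any enumerated appearance")
```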

In others, we may be able to use aggregated statistical measures: Is the AI approach better overall? But aggregated statistics are sometimes insufficient, and we will need AI to explain a specific decision.

How was Loomis’s risk score or the Fusion’s speed determined? What factors and criteria were involved? Would changing a certain input — like road conditions or an individual’s criminal record — alter the outcome? These are all issues we would interrogate when evaluating the actions of a human decision-maker, like a loan officer, a driver or a judge.

In the same situations, we need to be able to do this with AI. While we don’t expect humans to justify all their decisions, there are many situations in which they have an ethical, social or legal obligation to do so. And in some ways, AI already lends itself to looking under the hood more readily than a human decision-maker does.

You can’t peer into a human’s conscience, but you can review the progression of information that a program processed.

Yet that material can be difficult to understand. Machine-learning algorithms typically process massive data sets, and AI decision-making rules can be organised in ways that are completely different from how the human brain processes information. If a self-driving car could spit out a raw record of what it was “thinking” at the time of a crash, it would probably be meaningless to a human.

To make these explanations useful, we’ll need to develop ways for AI to translate its “thinking” into terms humans can understand. Doing this, of course, will add extra time and cost to the development of a system. If it requires revealing how a technology works or releasing proprietary data, there may be concerns about intellectual-property rights.

But from a technical perspective, we believe this is a feasible goal. With the right programming, AI can probably explain its decisions even more clearly than a human could, retracing the inputs far more precisely than a human with fuzzy memory.
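
One simple way to produce such an explanation is to perturb the inputs one at a time and report, in plain language, which changes would flip the outcome. The sketch below does this for an invented risk model; the features, weights and threshold are hypothetical and are not meant to represent COMPAS or any real tool.

```python
# Counterfactual-style explanation: perturb one input at a time and report which
# changes would flip the model's decision. The "risk model" and features are invented.

FEATURES = {"prior_convictions": 3, "age": 29, "employed": 0}

def risk_score(f):
    # Stand-in linear model, not a real risk-assessment tool.
    return 2.0 * f["prior_convictions"] - 0.05 * f["age"] - 1.5 * f["employed"]

def decision(f, threshold=3.0):
    return "high risk" if risk_score(f) >= threshold else "low risk"

def explain(f, candidate_changes):
    """Report, in plain terms, which single-input changes would alter the outcome."""
    baseline = decision(f)
    lines = [f"baseline decision: {baseline} (score {risk_score(f):.2f})"]
    for name, new_value in candidate_changes:
        alt = dict(f, **{name: new_value})
        outcome = decision(alt)
        if outcome != baseline:
            lines.append(f"changing {name} to {new_value} would flip the decision to {outcome}")
        else:
            lines.append(f"changing {name} to {new_value} would not change the decision")
    return "\n".join(lines)

print(explain(FEATURES, [("prior_convictions", 1), ("employed", 1), ("age", 45)]))
```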

Researchers, manufacturers and, yes, lawyers must start thinking about accountability as they’re designing AI. Programs must include feedback that can be used to properly apportion legal liability. For example, in the Pittsburgh crash, was AI partially at fault because it didn’t detect the truck that ran the light?

Or was its decision optimal, because any other choice would have created greater danger? The answers to these questions would be central to determining fault and damages in any legal proceedings after such a crash.

Artificial intelligence has incredible potential, but good outcomes can’t be taken for granted.

Developing systems that can explain their “thinking” will let lawyers, policymakers and ethicists create standards that allow us to hold flawed or biased AI accountable under the law. Through our research, conducted along with our colleagues, we want to ensure that as this tech advances and spreads, we can hold our tools to the same — or higher — standards we ask of our fellow humans.

• Finale Doshi-Velez is an assistant professor of computer science at Harvard University’s John A. Paulson School of Engineering and Applied Sciences. Mason Kortz is a clinical instructional fellow at the Harvard Law School Cyberlaw Clinic, part of the Berkman Klein Center for Internet & Society