
Explainable AI: What’s Happening Inside the Black Box


By Scottie Barsotti

Why do algorithms make the decisions they make? The process is complicated and can be opaque, but Heinz College machine learning expert Leman Akoglu wants algorithms and their decisions to be better explained to the humans who use them.

If you’re sitting across from a colleague and they voice an opinion that you don’t agree with, you can ask them how they came to hold that opinion. You might say, “Tell me why you think that,” or ask, “What’s leading you to that conclusion?” Your colleague should be able to answer those questions and describe the path they took to arrive at their way of thinking. Maybe they convince you, or maybe a debate ensues. In any case, there’s an explanation available—as humans we are used to having our logic questioned and having to explain it in dialogue.

But what about algorithms? Who’s asking them to explain themselves?

Because algorithms have no feet to hold to the fire, many experts believe they need to be designed with more transparency in mind. Part of that transparency is clear explanations of outcomes.

Professor Leman Akoglu is one of Heinz College’s resident experts on machine learning and data mining, specializing in anomaly detection models.

What’s anomaly detection? Say your department at work handles thousands of expense reports. A successful anomaly detection model will comb through transactions on those reports and flag items that seem out of place so they can be investigated. Anomaly detection can apply to many domains and in some cases directly impact people’s lives, such as alerting a social worker when a report of child abuse significantly stands out from the others, or when data from ERs and social media might indicate a potential disease outbreak or societal unrest.
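To make the expense-report case concrete, here is a minimal sketch of how such a flagging step might look. It uses hypothetical data and scikit-learn's off-the-shelf IsolationForest detector, not any model from Akoglu's research.

```python
# Minimal anomaly-detection sketch (illustrative only; not Akoglu's method).
# The expense records and feature names below are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

expenses = pd.DataFrame({
    "amount":       [42.10, 38.75, 55.00, 47.20, 3980.00, 51.30],
    "days_to_file": [2, 3, 1, 2, 45, 3],
})

# Fit an Isolation Forest and score each record.
model = IsolationForest(contamination=0.1, random_state=0)
expenses["flag"] = model.fit_predict(expenses[["amount", "days_to_file"]])

# A flag of -1 marks a suspected anomaly; those records would be routed
# to a human reviewer rather than acted on automatically.
print(expenses[expenses["flag"] == -1])
```

In practice, a detector like this only says that a record looks unusual; the question Akoglu raises is how to also say why.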

Akoglu’s work in this area spans many topics, from identifying emerging news that could constitute risk for corporate partners, to identifying fraudulent users and fabricated reviews on sites like Yelp and TripAdvisor.

But it’s not always enough to know that an anomaly exists, Akoglu says. Humans using the outcomes of detection models must understand what the anomaly means.

 

We have to trust that our systems are truly doing things that are logical in their context, and that can be understood by a human at the end of the process. Leman Akoglu

“If an auditor is looking at an insurance claim or expense report detected by an algorithm as anomalous, they cannot simply say ‘this is anomalous, so we are not paying it.’ They have to investigate, verify and specifically pinpoint the misinformation in the report if any. Similarly, if a user account is flagged as anomalous by a detector, the system administrator would not simply want to shut down the account, but rather investigate and validate the suspicious behavior if any. In such cases it is beneficial to be able to tell why something is anomalous,” she said. “If the algorithm can tell not only that there is a potential error or suspicious activity, but explain why it thinks that, and how the anomalies stand out, the human analyst can make sense of the situation and look into it.”

Explanations are particularly essential for these types of anomaly detection scenarios, which cannot be fully automated and require a human in the loop (like an auditor) for verification. However, explanations can help beyond sense-making. In some cases, they may expose issues with the detection algorithm itself by revealing its reliance on unexpected or undesirable clues, for example flagging terrorist activity based on someone’s nationality or biometric data.

The problem? Many algorithms operate inside what are called “black boxes,” meaning the human beings who use an algorithm may know what its outcome was, but not necessarily how it reached that determination.

Akoglu says that explanations are a step toward algorithmic transparency, a societal issue that touches many disciplines and is of great concern to many at Heinz College and the new Block Center for Technology and Society.

Akoglu says that better explanations rely in part on the creation of “interpretable rules.”

“We can determine the distinctive characteristics of an anomaly, and identify instances that are similar to an anomalous instance and yet are not anomalous,” said Akoglu. “While the former shows how an anomaly stands out from the rest, the latter outlines how far it is from being considered a non-anomaly in an interpretable way.”
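A rough sketch of that idea, using the same hypothetical expense data as above rather than Akoglu's actual technique, might report which features make a flagged record stand out and then point to the closest record that was not flagged:

```python
# Sketch of a contrastive explanation: which features make a flagged record
# stand out, and what the nearest non-flagged record looks like.
# Data and feature names are hypothetical.
import numpy as np

features = ["amount", "days_to_file"]
normal = np.array([[42.1, 2], [38.8, 3], [55.0, 1], [47.2, 2], [51.3, 3]])
anomaly = np.array([3980.0, 45])

# "How it stands out": per-feature deviation from typical records,
# expressed in standard deviations so a reviewer can read it directly.
mu, sigma = normal.mean(axis=0), normal.std(axis=0)
for name, score in zip(features, (anomaly - mu) / sigma):
    print(f"{name}: {score:+.1f} standard deviations from typical records")

# "How far from a non-anomaly": the closest record that was not flagged
# serves as a contrasting, non-anomalous reference point.
nearest = normal[np.argmin(np.linalg.norm(normal - anomaly, axis=1))]
print("Closest non-anomalous record:", dict(zip(features, nearest)))
```

The output reads like a plain-language rule, for example that the amount is far outside the range of comparable reports, which is the kind of interpretable summary a human analyst can act on.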

This could be useful for reducing false positives (when a model flags an outcome that never materializes) and false negatives (when an outcome occurs that the model had classified as unlikely). It could also give us insight into why an algorithm took, or declined to take, a specific action, in contrast to the one a human being might have taken in the same situation with the same input.

In some contexts, this can also directly benefit consumers. 

“Say a mortgage application is marked for rejection by an algorithm. The consumer can then be given useful information about why it was tagged that way and what they could seek to change to get a better outcome,” said Akoglu.
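A hedged illustration of that kind of actionable explanation, sometimes called recourse, appears below. The scoring model, weights, and feature names are hypothetical placeholders, not a description of any real lending system.

```python
# Sketch of an actionable explanation ("recourse") for a rejected application.
# The linear model, weights, and feature names are hypothetical placeholders.
import numpy as np

features = ["credit_score", "debt_to_income", "years_employed"]
weights = np.array([0.02, -5.0, 0.3])   # assumed scoring weights
bias = -12.0
applicant = np.array([640.0, 0.45, 1.0])

def approval_score(x):
    # Logistic score in [0, 1]; above 0.5 would be approved in this toy setup.
    return 1.0 / (1.0 + np.exp(-(weights @ x + bias)))

print(f"Current approval score: {approval_score(applicant):.2f}")

# For each feature, show how a plausible improvement would change the score,
# giving the applicant concrete information about what to work on.
suggested_changes = {"credit_score": +40, "debt_to_income": -0.10, "years_employed": +1}
for i, name in enumerate(features):
    changed = applicant.copy()
    changed[i] += suggested_changes[name]
    print(f"If {name} changed by {suggested_changes[name]:+}: "
          f"score becomes {approval_score(changed):.2f}")
```

Even a simple report like this turns a bare rejection into something the applicant can question and respond to, which is the spirit of Akoglu's point.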

The inner workings of machine learning algorithms can be opaque, which raises many ethical concerns. However, Akoglu believes that “explainable AI” is both possible and desirable. Devising clearer explanation frameworks will give both the people who use these algorithms and the people affected by their outcomes better information, and could improve trust in these technologies over time.

“We have to trust that our systems are truly doing things that are logical in their context, and that can be understood by a human at the end of the process,” said Akoglu.