A module that aids explainability by surfacing particularly salient examples from the dataset that relate to the current decision
Risk-Management・Operations・Medical Prognosis・Financial Modeling

Work In Progress

Our Elements guide is still in progress, and therefore lacks full visual and technical assets. We hope to release them by summer of 2020. Thanks for reading Lingua Franca!


A difficult decision may require additional evidence, and the same is true when humans must act on the recommendations of an AI. However, instead of asking the AI to make a final decision and back up its claim (a hard task), the AI can be designed to provide only evidence, as an assistive tool for the user. This task is relatively easier, since the system can afford occasional failures: humans are quite capable of disregarding evidence they see as unimportant.

An application of such evidence could occur in medical prognostics. Doctors often make decisions (say, detecting cancer from a radiology scan) by comparing the scan to past cases they recall. An AI system may step in to assist the doctor by retrieving a highly similar scan from a large dataset, tagged with its actual outcome. By weighing this evidence against the case at hand, the doctor may end up making a better judgment.
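The retrieval step described above can be sketched as a nearest-neighbor search over feature vectors, each tagged with its known outcome. This is a minimal illustration, not the guide's implementation: the feature vectors, outcome labels, and cosine-similarity choice are all assumptions for the sake of the example.

```python
import numpy as np

def top_k_similar(query_vec, dataset_vecs, outcomes, k=3):
    """Return the k dataset entries most similar to the query,
    each paired with its recorded outcome (cosine similarity)."""
    q = query_vec / np.linalg.norm(query_vec)
    d = dataset_vecs / np.linalg.norm(dataset_vecs, axis=1, keepdims=True)
    sims = d @ q                       # similarity of each entry to the query
    idx = np.argsort(sims)[::-1][:k]   # indices of the k highest similarities
    return [(int(i), outcomes[i], float(sims[i])) for i in idx]

# Toy example: four "scans" as (hypothetical) feature vectors, tagged with outcomes.
vecs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
tags = ["benign", "malignant", "benign", "malignant"]
query = np.array([0.95, 0.05])
evidence = top_k_similar(query, vecs, tags, k=2)
```

The system would present the retrieved scans and their outcomes alongside the query, leaving the final judgment to the doctor.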


While automated systems can make terrible judgments on even very basic problems, humans are relatively robust. It is therefore no surprise that most important activities still require humans, even in cases where AI can outperform them. However, humans also display persistent biases and cognitive traps that lead us to make false judgments. Evidence from past examples can overcome some of these traps, as it may surface an edge case or other nuance the human overlooked.

Purely automated decisions carry biases of their own, especially for low-probability events where little data exists. The narrower task of finding similar examples, however, mitigates this bias, as the AI can draw on the full dataset.


Of course, there is no free lunch, and similarity matching carries its own biases, especially when different data points vary greatly in information value.


Most evidence-based schemes involve a form of similarity matching, which is generally an unsupervised learning task—specifically, representation learning[1]. The machine must learn a concise representation of each data point, and in doing so is forced to draw ‘lines in the sand’ where examples differ from one another.
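To make the idea of a concise representation concrete, here is a minimal sketch using PCA as a simple stand-in for a learned representation (a real system would more likely use a trained encoder; the dimensions and data here are arbitrary assumptions):

```python
import numpy as np

def pca_codes(X, dim=2):
    """Compress each row of X to a `dim`-dimensional code via PCA:
    a simple stand-in for a learned representation."""
    Xc = X - X.mean(axis=0)                # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:dim].T                 # project onto the top principal axes

# 100 points with 10 raw features each, compressed to 2-dimensional codes.
# Similarity matching then happens in this compact space, where the
# representation decides which differences between examples matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
codes = pca_codes(X, dim=2)
```

Whatever structure the representation discards is exactly where the ‘lines in the sand’ get drawn—two examples that differ only in discarded features will match as similar.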


  1. Representation Learning: A Review and New Perspectives by Yoshua Bengio, Aaron Courville, and Pascal Vincent ↩︎