Work In Progress
Our Elements guide is still in progress, and therefore lacks full visual and technical assets. We hope to release them by summer of 2020. Thanks for reading Lingua Franca!
A candidate is part of a user interaction in which the output of an AI model is presented to a user for immediate processing. The purpose of this interaction is to create a collaboration between human and machine in which the human’s job is to take downstream actions based on the model’s recommendation. This lets the model focus on its own task of presenting the best candidates to the user, since taking action may be beyond the AI’s capabilities. Similarly, processing many data points quickly may be beyond the human’s capabilities.
Many jobs may benefit from restructuring around the candidate model of interaction. Especially where individuals must parse many profiles (e.g. recruitment or sales), AI systems can provide value without the organization incurring significant risk: users may simply ignore the candidates, or provide feedback to the AI to improve its recommendations. Alternatively, humans may be employed to train the AI system to recommend better candidates over time by labeling and processing examples.
Candidates may help in professional contexts where expertise is still required. For example, in a medical scenario, perhaps a user (a doctor) is asked to take action on patients with a high risk of life-threatening illness, and the AI surfaces the five riskiest patients from a list of patients approved for discharge. A typical doctor may not have time to observe all of these patients, but can easily review the five the AI presents.
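The medical scenario above amounts to a top-k selection over model scores. A minimal sketch of that step follows; the `Patient` class, field names, and risk scores are hypothetical placeholders, not part of any real system described in this guide.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    risk_score: float  # model-estimated risk of life-threatening illness (0–1)

def top_candidates(patients, k=5):
    """Return the k highest-risk patients as candidates for the doctor to review."""
    return sorted(patients, key=lambda p: p.risk_score, reverse=True)[:k]

# Hypothetical list of patients already approved for discharge.
approved_for_discharge = [
    Patient("A", 0.12), Patient("B", 0.91), Patient("C", 0.44),
    Patient("D", 0.67), Patient("E", 0.05), Patient("F", 0.80),
]

shortlist = top_candidates(approved_for_discharge, k=3)
```

The key design point is that the model only ranks and truncates; the doctor retains full authority over what action, if any, to take on each candidate.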
Humans and AI work well together when agency is shared but clearly separated, so that both parties can perform their duties best. The core design behind the candidate is a UI that allows humans to intelligently take further action. This creates a clear division of responsibilities between human and machine, allowing each to be individually trained to do its job best.
A feedback mechanism from human to AI can help iteratively improve the system’s combined capabilities. By adding feedback to the system, users may both encode their own preferences and clarify mistakes by the AI (see Clarification).
There is no technology ‘magic’ behind the candidate element—it simply involves displaying a limited number of outputs to the user, allowing them (or perhaps requiring them) to take immediate action. Feedback may be implemented as a simple re-labeling of the input, or as collaborative filtering, where the user’s preferences are measured in relation to other users’ preferences.
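The simpler of the two feedback options, re-labeling, can be sketched in a few lines: each time the user accepts or rejects a candidate, that judgment is stored as a fresh training example for the model. The function and field names here are hypothetical, and any real system would persist the log rather than keep it in memory.

```python
def record_feedback(feedback_log, candidate_id, features, user_label):
    """Re-label a candidate: the user's accept/reject decision becomes a
    labeled training example for the next model update."""
    feedback_log.append({
        "id": candidate_id,   # which candidate the user acted on
        "x": features,        # the inputs the model used to rank it
        "y": user_label,      # the user's judgment, e.g. "accept" / "reject"
    })
    return feedback_log

# Hypothetical usage: the user accepts one candidate and rejects another.
log = []
record_feedback(log, "candidate-1", {"risk_score": 0.91}, "accept")
record_feedback(log, "candidate-2", {"risk_score": 0.44}, "reject")
```

Periodically retraining on this log is what lets the human’s preferences and corrections flow back into the model over time.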