Work In Progress

Our Elements guide is still in progress, and therefore lacks full visual and technical assets. We hope to release them by summer of 2020. Thanks for reading Lingua Franca!

Preface

In our day-to-day lives, we interact with systems that have certain implicit rules. We call each system's set of rules its design language. Take, for example, the experience of walking into an unfamiliar office. Almost every office has a ‘main entrance’ for visitors. Upon entering, visitors are often guided visually to a front desk. This front desk may not be labeled as such, but we know from social and cultural cues that we should probably walk up to it and introduce ourselves to the person sitting behind it. This entire experience can be quite vexing if the office has no clear entrance or front desk — in other words, these elements form a design language that facilitates an intuitive and seamless office experience.

Our design language for AI includes an extensive set of reusable components, which we call Elements. These elements should serve as starting points for your design, or as inspiration for new features and modalities. However, unlike visual design frameworks (e.g. iOS Human Interface Guidelines[1]), Lingua Franca does not define a specific ‘look’. Instead, we focus on the ‘feel’ of AI in a way that may translate across interactions as diverse as voice, gesture, conversation, and data exploration. Our elements are designed modularly, and each provides guidelines, examples, and best practices for integration into your next project.

Assortment

An algorithmically generated group of items, often shown as follow-up recommendations or related actions

Read More

Candidate

A single recommendation or group of recommendations that takes focus in the application, allowing a human operator to make an immediate decision

Read More

Clarification

Ability to amend a model's decision to reflect a user's true intent

Read More

Comparison

A way to compare the results of alternate models in order to give human executive oversight that prevents a single model’s biases from taking prominence

Read More

Correlation

Auxiliary information that contextualizes a model's inference by showing correlated fields or properties

Read More

Evidence

Module that assists in explainability by displaying particularly salient examples from the dataset that relate to the current decision

Read More

Forensics

Tools to identify types or regimes for which the system fails to operate successfully

Read More

Guard Rails

Simple, straightforward rules that limit the behavior of an AI

Read More
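Where this pattern meets code, a guard rail can be as small as a plain, auditable predicate applied to a model's output before it is acted on. The sketch below is a hypothetical illustration only — the speed-limit rule and the function names are assumptions for demonstration, not part of Lingua Franca:

```python
def within_speed_limit(proposed_speed_kph: float, limit_kph: float = 120.0) -> bool:
    """A guard rail: a simple rule, written and audited independently of the model."""
    return 0.0 <= proposed_speed_kph <= limit_kph


def apply_guard_rail(model_suggestion: float, limit_kph: float = 120.0) -> float:
    """Clamp the model's suggestion to the allowed range rather than trusting it blindly."""
    if not within_speed_limit(model_suggestion, limit_kph):
        return min(max(model_suggestion, 0.0), limit_kph)
    return model_suggestion
```

The key design property is that the rule stays legible to humans: no matter what the model proposes, the guard rail bounds the system's behavior.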

History

Interaction component that allows users to view past actions and return to them if desired

Read More
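As a minimal sketch of this interaction component (class and method names are our own assumptions, not a prescribed API), a history can be a simple stack of past actions that users may inspect and return to:

```python
class History:
    """Minimal history component: record actions, view them, and return to any past one."""

    def __init__(self) -> None:
        self._actions = []

    def record(self, action: str) -> None:
        """Append a new user or system action to the log."""
        self._actions.append(action)

    def past_actions(self) -> list:
        """Expose past actions for display (a copy, so callers cannot mutate the log)."""
        return list(self._actions)

    def return_to(self, index: int) -> str:
        """Revert to an earlier action by discarding everything after it."""
        self._actions = self._actions[: index + 1]
        return self._actions[-1]
```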

Intent

In real-time systems, an indication of a model's decision that is given within a reasonable human response time

Read More

Latent Space

In creative domains, the ability to explore by navigating hybrid representations of input data

Read More

Mark

Visual indication of a model’s training signals when it may help users better interact over time

Read More

Model Card

Auditable and legally precise description of a model with associated details and known caveats

Read More

Multi-Modal

Allowing the user to give or receive information via multiple modes of interaction

Read More

Override

Ability to take partial or complete human command of a system, or to handoff such command to an autonomous system

Read More

Re-Engagement

Non-intrusive piece of delightfully personalized information to continually engage users

Read More

Signal

Point of interest or additional context that can be overlaid on a model's output to further indicate behavior

Read More

Variadic

Allowing users to input multiple items or examples into a model for inference

Read More
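In code, a variadic interface often amounts to accepting an arbitrary number of examples in one call. The sketch below is hypothetical — the stand-in `predict` function (which scores an example by its length) is an assumption used only to make the pattern runnable:

```python
def predict(example: str) -> float:
    """Stand-in for a real model; scores an example by its length (hypothetical)."""
    return float(len(example))


def predict_variadic(*examples: str) -> list:
    """Variadic interface: callers may pass one example or many in a single call."""
    return [predict(example) for example in examples]
```

For instance, `predict_variadic("a", "abc")` scores both inputs in one inference call, so the interface scales naturally from one example to many.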

Warm-Up

Practice or pre-training period for a user to gain familiarity with the system, and vice versa

Read More

Footnotes


  1. iOS Human Interface Guidelines by Apple ↩︎