Intuition-building is a complex ritual of learning metaphors, altering expectations and other inscrutable processes of the human mind. It is one of the few mental activities that starts as fully conscious and ends somewhere deep in the unconscious. Intuition is perhaps best described as practical forgetting.
Building intuitive AI requires more than well-worded buttons and explanatory tutorials. It requires understanding both users’ immediate grasp of your system and how their perception of it evolves over time. Because AIs typically exhibit a wide range of behaviors, learning curves can be steep: your users must develop a nuanced intuition that may not come naturally.
Some of the most rewarding tools take time to learn, and some tools just aren’t for everyone. Take a skateboard—infamously frustrating, yet capable of astonishing feats when wielded by experts. Today, many designers shy away from learning curves of any kind, assuming that their products can only succeed if users understand them at first glance. We have paradigms of UI design that leave nothing to interpretation. Designers conduct usability studies that ask users to manipulate a UI they have never seen before, observe where they get lost or confused, and then attempt to eliminate those moments entirely.
The same rigid logic of UI paradigms creates an impossible standard for AI: to be totally intuitive, yet capable of automating processes more intelligently than humans can. An intelligent AI is bound to make decisions a human would not, which is an inevitable source of friction. However, this friction is also a crucial opportunity for design to provide a gradual learning curve so that users may eventually benefit from artificial intelligence (see Design Tradeoffs). Eliminating all moments of confusion and uncertainty will not work in an age of AI, where users must crucially have some degree of trust in the system’s decisions.
At its core, an intuitive AI is one that has built trust. Trust comes in many forms and often functions differently depending on domain—it can be notoriously inscrutable. In some cases, an AI can build trust simply by providing the same output every time for a given input, so that a person doesn’t feel the system is acting ‘randomly’. In other cases, an AI may build trust by providing different outputs each time it runs with the same inputs, allowing a user to retry until they receive a result that suits their tastes. Unfortunately, trust-building may require extensive iteration and trial-and-error, especially in so-called ‘expert’ domains where your user must make frequent judgment calls of their own.
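The two trust-building strategies above—reproducible outputs versus user-driven rerolls—can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical `generate` function standing in for a real model:

```python
import random

def generate(prompt, seed=None):
    """Hypothetical stand-in for a generative model: maps a prompt to one
    of several candidate outputs. A real system would call a model here."""
    variants = [f"{prompt} (variant {i})" for i in range(5)]
    rng = random.Random(seed)  # a fixed seed makes the choice reproducible
    return rng.choice(variants)

# Deterministic mode: same input and same seed always yield the same
# output, so the system never feels 'random' to the user.
assert generate("summarize report", seed=42) == generate("summarize report", seed=42)

# Exploratory mode: no seed, so each call may differ; the user can
# reroll until a result suits their taste.
rerolls = {generate("summarize report") for _ in range(50)}
```

Which mode builds trust depends on the domain: a navigation system that reroutes differently each time feels erratic, while an image generator that returns the same picture forever feels broken.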
When a user loses trust in an AI, they may grow nervous using it (consider how users come to distrust their phone’s autocorrect tool). Sometimes, users may actually attempt to sabotage the AI, providing false or misleading signals in an attempt to subvert its decision-making. Because AIs are highly dynamic and responsive, these forms of sabotage can quite easily pay off for the antagonized user.
Interpretation and Confusion
Humans interpret things in wildly different ways—test your AI out on users, asking what they assume is happening under the hood, and their explanations might surprise you. Many engineers expect that a sufficiently intelligent AI will be ‘self-evident’, with decisions that are always intuitive and users who trust the system wholeheartedly. However, this rarely occurs in complex situations. Users bring in context and understanding from experience, while AIs bring context from vast quantities of data. Data can often disagree with, or misinterpret, human experience, and the AI system may need to justify its decisions to doubtful users (see Transparency). Unfortunately, this interpretability cannot be achieved by simply conveying ‘how the system works’. Instead, consider interpretability as a design space, and seek to always provide users with meaningful assistance rather than inexplicable decisions.
Clarify where your dataset comes from, or brand the dataset according to its source, to help users attribute behavior.
User actions that directly affect a recommender system or other ML decision making system should be called out in real time.
Users should be able to easily internalize the capabilities of your system with an intuitive mental model.
Interactions that are highly malleable can negatively affect user expectations, as the user is constantly undermined by missed expectations.
People tend to create narratives around how a system behaves, and tell those narratives to others.
Provide users with a rich history of their actions, allowing users to navigate highly dynamic interfaces.
Users can build false and damaging intuition about your product, jeopardizing further use and data collection.
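The action-history guideline above can be made concrete with a small sketch: an append-only log that a dynamic interface could surface so users can retrace what they did and how the system responded. All names here are hypothetical, not any particular library’s API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ActionEvent:
    """One entry in a user-visible action history (hypothetical schema)."""
    action: str  # what the user did, e.g. "liked article #12"
    effect: str  # how the system responded, e.g. "boosted technology topics"
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ActionHistory:
    """Append-only log users can browse to navigate a dynamic interface."""
    def __init__(self):
        self._events = []

    def record(self, action, effect):
        self._events.append(ActionEvent(action, effect))

    def recent(self, n=10):
        """Most recent events first, for display in a history panel."""
        return list(reversed(self._events[-n:]))

history = ActionHistory()
history.record("liked article #12", "boosted technology topics")
history.record("dismissed notification", "reduced alert frequency")
```

Pairing each user action with the system effect it triggered serves both guidelines at once: it calls out the AI’s response in a reviewable form and gives users material for accurate, rather than false, narratives about the system.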
- Recommender systems: from algorithms to user experience by Joseph A. Konstan & John Riedl