Work In Progress
Our Elements guide is still in progress, and therefore lacks full visual and technical assets. We hope to release them by summer of 2020. Thanks for reading Lingua Franca!
Real-time AI systems are now pervasive across robotics and mechanical devices. One might assume autonomous systems need few UIs, but in fact there are strong reasons to provide user-friendly, even interactive, UIs for such systems. To gain trust and confidence in an AI, users often want to visualize its intent, whether as a route or plan, or as a real-time display of the ‘next action’.
One mode of displaying intent is an interactive, even searchable, dashboard that allows the user to confirm that the AI’s course of action fits their own mental model. This mode frames the user as a manager or overseer of the AI. Crucially, the user does not need an individual view of the complete AI model and every intermediate signal or inference. The user only needs a ‘high-level’ understanding of the overall system, with key decisions communicated much as a subordinate would report them.
Finally, intent should be communicated within a reasonable human response time, especially if the user is to feel in control of the AI. If intent is communicated only as or after the system makes the decision, the user will feel a lack of agency. Conversely, even a ‘heads-up’ period of a single second is enough to satisfy users of certain real-time systems.
Real-time intent notification relates to subconscious human indicators of agency, as studied in Robert Miller’s seminal 1968 research paper. Miller determined that a latency of ≤100 ms between action and result is perceived as instantaneous by humans, while a latency of 1–10 s can still feel causally connected but begins to strain attention. Since then, agency has been widely studied in human-computer interaction, with attention paid to numerous factors such as movement, color, and feedback.
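Miller’s thresholds can be captured as a small lookup. This is an illustrative sketch, not anything from Miller’s paper itself: the function name and the label strings are our own, and only the 100 ms and 10 s boundaries come from the discussion above.

```python
# Illustrative encoding of Miller's latency bands (1968), as summarized
# above. Band labels and function name are our own invention.

def perceived_latency(seconds: float) -> str:
    """Classify how a human is likely to perceive a response delay."""
    if seconds <= 0.1:
        return "instantaneous"       # <= 100 ms feels immediate
    if seconds <= 10.0:
        return "causally connected"  # still linked, but strains attention
    return "disconnected"            # beyond ~10 s the action-result link fades

print(perceived_latency(0.05))  # -> instantaneous
print(perceived_latency(30))    # -> disconnected
```

A system designer might use such a check in tests, asserting that the end-to-end notification path stays inside the band the product promises.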
Intent, however, requires communicating only a modest degree of agency (full human agency would require that the system not be autonomous at all). Depending on context, this calls for different kinds of interaction design. For example, a remote autonomous system might send the user a ‘heads-up’ SMS notification 15 minutes before acting, giving the human a reasonable window of time to react or respond. A real-time system whose user is fully attentive to the AI’s behavior may only need to show a notification within two seconds of acting.
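The two scenarios above can be sketched as a context-to-lead-time lookup. This is a hypothetical sketch: the context labels and the mapping structure are our own, and only the 15-minute and 2-second figures come from the text.

```python
# Hypothetical mapping of interaction context to heads-up lead time,
# per the examples above. Context names are illustrative.

HEADS_UP_SECONDS = {
    "remote": 15 * 60,  # user may be away: SMS well in advance of acting
    "attended": 2,      # user is watching: short notice suffices
}

def notify_lead_time(context: str) -> int:
    """Return how far in advance (seconds) to announce an action."""
    return HEADS_UP_SECONDS[context]
```

The point of the table is that lead time is a product decision keyed on context, not a single constant baked into the system.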
Notification of intent requires that the system have a slower overall response time, as with any ‘human-in-the-loop’ system. Thoughtful interaction design can mitigate this, however, by taking ‘preparatory’ steps without the user’s input and delaying only the final action. For example, an autonomous vehicle might announce an upcoming turn and simultaneously slow down, but only take the turn after five seconds. The user then has a five-second window to override the action, while the vehicle can still steer safely.
- Tesla Autopilot: Feature Table on Wikipedia
Response time in man-computer conversational transactions, on the ACM Digital Library ↩︎
The experience of agency in human-computer interactions: a review, by Limerick et al. ↩︎