Today artificial intelligence advances at an exponential rate, and so do the increasingly autonomous systems (e.g., drones, self-driving cars) that machine intelligence enables. Yet a major issue threatens this progress: how can humans learn to work with these autonomous systems as collaborators and teammates?
Take, for example, airline pilots’ questions about their automation: “What did it do?” “Why did it do that?” “What will it do next?” Recent incidents, such as the airline disaster in which pilots, confused by their Airbus A330’s highly complex automated system, flew their entirely airworthy plane to its doom, show what happens when clear, timely human-automation teaming is absent. The automation leaves people behind.
In 2014, Dr. Nhut Ho recognized that this gap was one of our own making, and that unless we confront the boundaries and barriers that currently divide people from automation, the confusion, helplessness, and distrust will only grow. A professor at California State University, Northridge, Nhut ran a multidisciplinary laboratory of students from engineering and the behavioral sciences, investigating trust in automation for the Air Force and NASA. This work made him keenly aware that the success of advanced systems depends on the sustained integration of the engineering and behavioral sciences.
He saw that the solution to automation leaving the human behind was to design systems that foster, rather than hinder, mutual understanding between humans and automation of each other’s goals and intentions. Nhut also sensed an opportunity to be part of that solution. The mission of HATS Inc. is to ensure that these challenges do not stand in the way of achieving appropriate trust in, and reliance on, our new automated partners.