Calibrating Trust in Autonomous Systems in a Dynamic Environment

Abstract

Appropriately calibrating trust in autonomous systems is essential for successful collaboration between humans and the systems. Over-trust and under-trust often occur in dynamically changing environments, and they can be major causes of serious safety and efficiency issues. Many studies have examined the role of continuous system transparency in maintaining proper trust calibration; however, few studies have focused on how to detect poor trust calibration or how to mitigate it. In our proposed method of trust calibration, a behavior-based approach is used to detect improper trust calibration, and cognitive cues called "trust calibration cues" are presented to users as triggers for trust calibration. We conducted an online experiment with a drone simulator. Seventy participants performed pothole inspection tasks manually or relied on the drone's automatic inspection. The results demonstrated that adaptively presenting a simple cue could significantly promote trust calibration in both over-trust and under-trust cases.
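The abstract describes the method only at a high level. As a rough illustration of what a behavior-based detector of trust miscalibration with adaptive cue presentation might look like, the Python sketch below uses assumed signals (per-task reliance decisions and automation correctness) and arbitrary thresholds; the names, window size, and decision rule are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of behavior-based trust-calibration monitoring.
# All thresholds, names, and signals are illustrative assumptions;
# the abstract does not specify the paper's detection rule.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Observation:
    relied_on_automation: bool    # did the user delegate this inspection task?
    automation_was_correct: bool  # outcome of the drone's automatic inspection

def detect_miscalibration(history: List[Observation],
                          window: int = 10,
                          margin: float = 0.3) -> Optional[str]:
    """Return 'over-trust', 'under-trust', or None based on recent behavior."""
    recent = history[-window:]
    if len(recent) < window:
        return None
    reliance = sum(o.relied_on_automation for o in recent) / window
    reliability = sum(o.automation_was_correct for o in recent) / window
    # Relying heavily on an unreliable system suggests over-trust;
    # avoiding a reliable system suggests under-trust.
    if reliance > 1 - margin and reliability < 1 - margin:
        return "over-trust"
    if reliance < margin and reliability > 1 - margin:
        return "under-trust"
    return None

def maybe_present_cue(history: List[Observation]) -> None:
    """Adaptively present a simple trust calibration cue when miscalibration is detected."""
    state = detect_miscalibration(history)
    if state is not None:
        print(f"Trust calibration cue: possible {state} detected; please reassess your reliance.")
```

In this sketch, the cue is a simple text prompt; the key design choice mirrored from the abstract is that the cue is presented adaptively, only when the behavioral signal indicates over-trust or under-trust, rather than continuously.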
