Evaluating Human Understanding of a Mixed Reality Interface for Autonomous Robot-Based Change Detection
Christopher M. Reardon, Kerstin S. Haring, J. Gregory, J. Rogers
2021 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), October 25, 2021
DOI: 10.1109/SSRR53300.2021.9597854
Citations: 3
Abstract
Online change detection performed by mobile robots has significant potential to impact safety and security applications. While robots are superior to humans at detecting changes, humans remain better at interpreting this information and will be responsible for making critical decisions in these contexts. For these reasons, robot-to-human communication of detected changes is a fundamental requirement for successful human-robot teams operating in such scenarios. In this work, we seek to improve this communication and present the results of a study that evaluates the interpretability of autonomous robot-based change detections conveyed via mixed reality to untrained human participants. Our results show that humans are able to identify changes and understand the visualizations employed without prior training. Our analysis of the limitations of this initial study should inform future work in this domain.
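The abstract does not describe how the robot computes change detections or how the mixed reality interface renders them. As a rough, hypothetical illustration of the general idea only (not the authors' method), the sketch below flags cells whose occupancy state differs between a prior map and a newly observed map; the grid representation, the 0.5 occupancy threshold, and the change criterion are all assumptions made for this example.

```python
# Minimal sketch of grid-based change detection (illustrative assumption, not the
# pipeline used in the paper): mark cells whose occupancy state differs between a
# previously built map and a newly observed one.
import numpy as np

def detect_changes(prior_map: np.ndarray, current_map: np.ndarray,
                   occ_threshold: float = 0.5) -> np.ndarray:
    """Return a boolean mask of grid cells whose occupancy state changed."""
    prior_occ = prior_map >= occ_threshold      # occupied in the prior map
    current_occ = current_map >= occ_threshold  # occupied in the new observation
    return prior_occ != current_occ             # cells that appeared or disappeared

# Toy usage: a 4x4 occupancy grid where one obstacle is removed and another appears.
prior = np.zeros((4, 4)); prior[1, 1] = 0.9
current = np.zeros((4, 4)); current[2, 3] = 0.8
changed = detect_changes(prior, current)
print(np.argwhere(changed))  # -> [[1 1], [2 3]]
```

In a study like the one described, cells or objects flagged this way would then need to be conveyed to the human, e.g. as mixed reality highlights, which is the communication step the paper evaluates.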