Seongcheol Kim, Casey C. Bennett, Zachary Henkel, Jinjae Lee, Cedomir Stanojevic, Kenna Baugus, Cindy L. Bethel, Jennifer A. Piatt, Selma Šabanović
{"title":"Generative replay for multi-class modeling of human activities via sensor data from in-home robotic companion pets","authors":"Seongcheol Kim, Casey C. Bennett, Zachary Henkel, Jinjae Lee, Cedomir Stanojevic, Kenna Baugus, Cindy L. Bethel, Jennifer A. Piatt, Selma Šabanović","doi":"10.1007/s11370-023-00496-0","DOIUrl":null,"url":null,"abstract":"<p>Deploying socially assistive robots (SARs) at home, such as robotic companion pets, can be useful for tracking behavioral and health-related changes in humans during lifestyle fluctuations over time, like those experienced during CoVID-19. However, a fundamental problem required when deploying autonomous agents such as SARs in people’s everyday living spaces is understanding how users interact with those robots when not observed by researchers. One way to address that is to utilize novel modeling methods based on the robot’s sensor data, combined with newer types of interaction evaluation such as ecological momentary assessment (EMA), to recognize behavior modalities. This paper presents such a study of human-specific behavior classification based on data collected through EMA and sensors attached onboard a SAR, which was deployed in user homes. Classification was conducted using <i>generative replay</i> models, which attempt to use encoding/decoding methods to emulate how human dreaming is thought to create perturbations of the same experience in order to learn more efficiently from less data. Both multi-class and binary classification were explored for comparison, using several types of generative replay (variational autoencoders, generative adversarial networks, semi-supervised GANs). The highest-performing binary model showed approximately 79% accuracy (AUC 0.83), though multi-class classification across all modalities only attained 33% accuracy (AUC 0.62, F1 0.25), despite various attempts to improve it. 
The paper here highlights the strengths and weaknesses of using generative replay for modeling during human–robot interaction in the real world and also suggests a number of research paths for future improvement.</p>","PeriodicalId":48813,"journal":{"name":"Intelligent Service Robotics","volume":"72 1","pages":""},"PeriodicalIF":2.3000,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligent Service Robotics","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s11370-023-00496-0","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ROBOTICS","Score":null,"Total":0}
Citations: 0
Abstract
Deploying socially assistive robots (SARs) at home, such as robotic companion pets, can be useful for tracking behavioral and health-related changes in humans during lifestyle fluctuations over time, like those experienced during COVID-19. However, a fundamental problem when deploying autonomous agents such as SARs in people’s everyday living spaces is understanding how users interact with those robots when not observed by researchers. One way to address that is to utilize novel modeling methods based on the robot’s sensor data, combined with newer types of interaction evaluation such as ecological momentary assessment (EMA), to recognize behavior modalities. This paper presents such a study of human-specific behavior classification based on data collected through EMA and sensors attached onboard a SAR, which was deployed in user homes. Classification was conducted using generative replay models, which use encoding/decoding methods to emulate how human dreaming is thought to create perturbations of the same experience in order to learn more efficiently from less data. Both multi-class and binary classification were explored for comparison, using several types of generative replay (variational autoencoders, generative adversarial networks, semi-supervised GANs). The highest-performing binary model showed approximately 79% accuracy (AUC 0.83), though multi-class classification across all modalities only attained 33% accuracy (AUC 0.62, F1 0.25), despite various attempts to improve it. The paper highlights the strengths and weaknesses of using generative replay for modeling during human–robot interaction in the real world and also suggests a number of research paths for future improvement.
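The core idea behind generative replay described above — training generative models on earlier classes, then mixing their synthetic "replayed" samples with real data from a new class so a classifier can be trained on all classes at once — can be sketched as follows. This is a minimal illustration only, not the paper's actual models: the per-class Gaussian sampler stands in for the VAE/GAN generators the authors used, and the synthetic "sensor" features, class layout, and nearest-centroid classifier are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-class sensor feature vectors (8 features each).
def make_class(center, n=50):
    return center + 0.1 * rng.standard_normal((n, 8))

centers = [np.full(8, c) for c in (0.0, 2.0, 4.0)]
old_data = [make_class(centers[0]), make_class(centers[1])]  # previously seen classes
new_data = make_class(centers[2])                            # newly arriving class

# Per-class generator: a fitted Gaussian standing in for a trained VAE/GAN.
class GaussianReplay:
    def __init__(self, X):
        self.mu = X.mean(axis=0)
        self.sigma = X.std(axis=0)

    def sample(self, n):
        return self.mu + self.sigma * rng.standard_normal((n, self.mu.size))

generators = [GaussianReplay(X) for X in old_data]

# Generative-replay step: combine replayed old-class samples with real new data,
# so the classifier never needs the original old-class recordings.
replayed = [g.sample(50) for g in generators]
X_train = np.vstack(replayed + [new_data])
y_train = np.concatenate([np.full(50, k) for k in range(3)])

# Simple nearest-centroid classifier trained on the mixed (replay + new) set.
centroids = np.stack([X_train[y_train == k].mean(axis=0) for k in range(3)])

def predict(X):
    d = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)
    return d.argmin(axis=1)

# Evaluate on fresh real samples from all three classes.
X_test = np.vstack([make_class(c, 20) for c in centers])
y_test = np.concatenate([np.full(20, k) for k in range(3)])
acc = (predict(X_test) == y_test).mean()
print(f"accuracy on all classes after replay training: {acc:.2f}")
```

In a real continual-learning setting, each `GaussianReplay` would be replaced by a generator trained on that class's data, and the classifier would be retrained whenever a new class arrives, with replayed samples preserving performance on the old ones.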
About the journal:
The journal directs special attention to the emerging significance of integrating robotics with information technology and cognitive science (such as ubiquitous and adaptive computing, information integration in a distributed environment, and cognitive modelling for human-robot interaction), which spurs innovation toward a new multi-dimensional robotic service to humans. The journal intends to capture and archive this emerging yet significant advancement in the field of intelligent service robotics. The journal will publish original papers of innovative ideas and concepts, new discoveries and improvements, as well as novel applications and business models which are related to the field of intelligent service robotics described above and are proven to be of high quality. The areas that the journal will cover include, but are not limited to:

- Service Robotics: intelligent robots serving humans in daily life or in a hazardous environment, such as home or personal service robots, entertainment robots, education robots, medical robots, healthcare and rehabilitation robots, and rescue robots;
- Embedded Robotics: intelligent robotic functions in the form of embedded systems for applications to, for example, intelligent space, intelligent vehicles and transportation systems, intelligent manufacturing systems, and intelligent medical facilities;
- Networked Robotics: the integration of robotics with network technologies, generating such services and solutions as distributed robots, distance robotic education-aides, and virtual laboratories or museums.