Seongcheol Kim, Casey C. Bennett, Zachary Henkel, Jinjae Lee, Cedomir Stanojevic, Kenna Baugus, Cindy L. Bethel, Jennifer A. Piatt, Selma Šabanović
{"title":"通过家庭宠物机器人的传感器数据对人类活动进行多类建模的生成式重放","authors":"Seongcheol Kim, Casey C. Bennett, Zachary Henkel, Jinjae Lee, Cedomir Stanojevic, Kenna Baugus, Cindy L. Bethel, Jennifer A. Piatt, Selma Šabanović","doi":"10.1007/s11370-023-00496-0","DOIUrl":null,"url":null,"abstract":"<p>Deploying socially assistive robots (SARs) at home, such as robotic companion pets, can be useful for tracking behavioral and health-related changes in humans during lifestyle fluctuations over time, like those experienced during CoVID-19. However, a fundamental problem required when deploying autonomous agents such as SARs in people’s everyday living spaces is understanding how users interact with those robots when not observed by researchers. One way to address that is to utilize novel modeling methods based on the robot’s sensor data, combined with newer types of interaction evaluation such as ecological momentary assessment (EMA), to recognize behavior modalities. This paper presents such a study of human-specific behavior classification based on data collected through EMA and sensors attached onboard a SAR, which was deployed in user homes. Classification was conducted using <i>generative replay</i> models, which attempt to use encoding/decoding methods to emulate how human dreaming is thought to create perturbations of the same experience in order to learn more efficiently from less data. Both multi-class and binary classification were explored for comparison, using several types of generative replay (variational autoencoders, generative adversarial networks, semi-supervised GANs). The highest-performing binary model showed approximately 79% accuracy (AUC 0.83), though multi-class classification across all modalities only attained 33% accuracy (AUC 0.62, F1 0.25), despite various attempts to improve it. The paper here highlights the strengths and weaknesses of using generative replay for modeling during human–robot interaction in the real world and also suggests a number of research paths for future improvement.</p>","PeriodicalId":48813,"journal":{"name":"Intelligent Service Robotics","volume":"72 1","pages":""},"PeriodicalIF":2.3000,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Generative replay for multi-class modeling of human activities via sensor data from in-home robotic companion pets\",\"authors\":\"Seongcheol Kim, Casey C. Bennett, Zachary Henkel, Jinjae Lee, Cedomir Stanojevic, Kenna Baugus, Cindy L. Bethel, Jennifer A. Piatt, Selma Šabanović\",\"doi\":\"10.1007/s11370-023-00496-0\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Deploying socially assistive robots (SARs) at home, such as robotic companion pets, can be useful for tracking behavioral and health-related changes in humans during lifestyle fluctuations over time, like those experienced during CoVID-19. However, a fundamental problem required when deploying autonomous agents such as SARs in people’s everyday living spaces is understanding how users interact with those robots when not observed by researchers. One way to address that is to utilize novel modeling methods based on the robot’s sensor data, combined with newer types of interaction evaluation such as ecological momentary assessment (EMA), to recognize behavior modalities. This paper presents such a study of human-specific behavior classification based on data collected through EMA and sensors attached onboard a SAR, which was deployed in user homes. 
Classification was conducted using <i>generative replay</i> models, which attempt to use encoding/decoding methods to emulate how human dreaming is thought to create perturbations of the same experience in order to learn more efficiently from less data. Both multi-class and binary classification were explored for comparison, using several types of generative replay (variational autoencoders, generative adversarial networks, semi-supervised GANs). The highest-performing binary model showed approximately 79% accuracy (AUC 0.83), though multi-class classification across all modalities only attained 33% accuracy (AUC 0.62, F1 0.25), despite various attempts to improve it. The paper here highlights the strengths and weaknesses of using generative replay for modeling during human–robot interaction in the real world and also suggests a number of research paths for future improvement.</p>\",\"PeriodicalId\":48813,\"journal\":{\"name\":\"Intelligent Service Robotics\",\"volume\":\"72 1\",\"pages\":\"\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2023-12-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Intelligent Service Robotics\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s11370-023-00496-0\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ROBOTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligent Service Robotics","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s11370-023-00496-0","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ROBOTICS","Score":null,"Total":0}
引用次数: 0
Abstract
Deploying socially assistive robots (SARs) at home, such as robotic companion pets, can be useful for tracking behavioral and health-related changes in humans during lifestyle fluctuations over time, like those experienced during COVID-19. However, a fundamental problem that arises when deploying autonomous agents such as SARs in people's everyday living spaces is understanding how users interact with those robots when not observed by researchers. One way to address this is to use novel modeling methods based on the robot's sensor data, combined with newer types of interaction evaluation such as ecological momentary assessment (EMA), to recognize behavior modalities. This paper presents such a study of human-specific behavior classification based on data collected through EMA and sensors attached onboard a SAR, which was deployed in user homes. Classification was conducted using generative replay models, which use encoding/decoding methods to emulate how human dreaming is thought to create perturbations of the same experience, enabling more efficient learning from less data. Both multi-class and binary classification were explored for comparison, using several types of generative replay (variational autoencoders, generative adversarial networks, semi-supervised GANs). The highest-performing binary model showed approximately 79% accuracy (AUC 0.83), though multi-class classification across all modalities only attained 33% accuracy (AUC 0.62, F1 0.25), despite various attempts to improve it. The paper highlights the strengths and weaknesses of using generative replay for modeling during human-robot interaction in the real world and also suggests a number of research paths for future improvement.
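To make the generative-replay idea above more concrete, the following minimal PyTorch sketch shows one way a variational autoencoder could produce "replayed" perturbations of sensor windows to augment classifier training. The paper does not publish its code, so every class name, tensor shape, and hyperparameter here (SensorVAE, the 64-feature windows, the 0.1 perturbation scale, the 5 activity classes) is an assumption for illustration, not the authors' implementation.

```python
# Illustrative sketch only: VAE-based generative replay over sensor windows.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SensorVAE(nn.Module):
    """VAE over flattened sensor windows (e.g., touch/accelerometer features)."""
    def __init__(self, n_features: int = 64, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.to_mu = nn.Linear(32, latent_dim)
        self.to_logvar = nn.Linear(32, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, n_features)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to a standard normal prior.
    recon_err = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl

def generative_replay(vae, x, y, n_copies: int = 3):
    """Decode perturbed latent codes to create pseudo-samples ("dream" replays)."""
    vae.eval()
    with torch.no_grad():
        h = vae.encoder(x)
        mu = vae.to_mu(h)
        replays, labels = [x], [y]
        for _ in range(n_copies):
            z = mu + 0.1 * torch.randn_like(mu)  # small perturbation of the latent code
            replays.append(vae.decoder(z))
            labels.append(y)  # pseudo-samples keep the label of the source window
    return torch.cat(replays), torch.cat(labels)

if __name__ == "__main__":
    # Toy stand-in data: 128 windows of 64 sensor features, 5 activity classes.
    x = torch.randn(128, 64)
    y = torch.randint(0, 5, (128,))

    vae = SensorVAE()
    opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
    for _ in range(50):  # brief VAE pre-training on the real windows
        recon, mu, logvar = vae(x)
        loss = vae_loss(recon, x, mu, logvar)
        opt.zero_grad(); loss.backward(); opt.step()

    x_aug, y_aug = generative_replay(vae, x, y)
    print(x_aug.shape, y_aug.shape)  # augmented set is (n_copies + 1) times larger

    # A downstream classifier (multi-class or binary) would then be trained on
    # (x_aug, y_aug) rather than the original (x, y).
    clf = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 5))
```

Under the same framing, the GAN and semi-supervised GAN variants mentioned in the abstract would replace the perturbed-decoder step with samples drawn from a trained generator.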
Journal description:
The journal directs special attention to the emerging significance of integrating robotics with information technology and cognitive science (such as ubiquitous and adaptive computing, information integration in a distributed environment, and cognitive modelling for human-robot interaction), which spurs innovation toward a new multi-dimensional robotic service to humans. The journal intends to capture and archive this emerging yet significant advancement in the field of intelligent service robotics. The journal will publish original papers of innovative ideas and concepts, new discoveries and improvements, as well as novel applications and business models which are related to the field of intelligent service robotics described above and are proven to be of high quality. The areas that the Journal will cover include, but are not limited to:
Intelligent robots serving humans in daily life or in a hazardous environment, such as home or personal service robots, entertainment robots, education robots, medical robots, healthcare and rehabilitation robots, and rescue robots (Service Robotics);
Intelligent robotic functions in the form of embedded systems for applications to, for example, intelligent space, intelligent vehicles and transportation systems, intelligent manufacturing systems, and intelligent medical facilities (Embedded Robotics);
The integration of robotics with network technologies, generating such services and solutions as distributed robots, distance robotic education-aides, and virtual laboratories or museums (Networked Robotics).