Joao Marcos Correia Marques, Patrick Naughton, Jing-Chen Peng, Yifan Zhu, James Seungbum Nam, Qianxi Kong, Xuanpu Zhang, Aman Penmetcha, Ruifan Ji, Nairen Fu, Vignesh Ravibaskar, Ryan Yan, Neil Malhotra, Kris Hauser
International Journal of Social Robotics (Q2, Robotics; Impact Factor 3.8)
DOI: 10.1007/s12369-023-01090-1
Published: 2024-01-14 (Journal Article)
Immersive Commodity Telepresence with the AVATRINA Robot Avatar
Immersive robotic avatars have the potential to aid and replace humans in a variety of applications such as telemedicine and search-and-rescue operations, reducing the need for travel and the risk to people working in dangerous environments. Many challenges, such as kinematic differences between people and robots, reduced perceptual feedback, and communication latency, currently limit how well robot avatars can achieve full immersion. This paper presents AVATRINA, a teleoperated robot designed to address some of these concerns and maximize the operator's capabilities while using a commodity, lightweight human–machine interface. Team AVATRINA took 4th place at the recent $10 million ANA Avatar XPRIZE competition, which required contestants to design avatar systems that could be controlled by novice operators to complete various manipulation, navigation, and social interaction tasks. This paper details the components of AVATRINA and the design process that contributed to our success at the competition. We highlight a novel study on one of these components, namely the effects of camera-baseline-to-interpupillary-distance matching and head mobility on immersive stereo vision and hand-eye coordination.
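The baseline–interpupillary distance (IPD) matching mentioned in the abstract can be illustrated with a simple small-angle stereo model (a sketch for intuition, not taken from the paper): the angular disparity a stereo camera pair produces is roughly baseline / depth, and a viewer interprets that disparity using their own eye separation, so perceived depth scales by IPD / baseline. The function name and numbers below are illustrative assumptions.

```python
def perceived_depth(true_depth_m, camera_baseline_m, viewer_ipd_m):
    """Small-angle geometric model of stereo depth perception.

    Angular disparity produced by the camera pair ~= baseline / depth.
    A viewer with eye separation `viewer_ipd_m` maps that disparity back
    to depth as ipd / disparity, so perceived depth = (ipd / baseline) * depth.
    """
    disparity_rad = camera_baseline_m / true_depth_m
    return viewer_ipd_m / disparity_rad

# Matched baseline and IPD (e.g. both 63 mm): depth is reproduced veridically.
matched = perceived_depth(2.0, 0.063, 0.063)

# Baseline wider than the viewer's IPD (hyperstereo): depth appears compressed.
hyperstereo = perceived_depth(2.0, 0.10, 0.063)
```

Under this model, a mismatch of even a few centimeters between camera baseline and operator IPD rescales all perceived distances by a constant factor, which is one plausible reason baseline–IPD matching matters for hand-eye coordination in teleoperation.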
Journal Introduction:
Social Robotics is the study of robots that are able to interact and communicate among themselves, with humans, and with the environment, within the social and cultural structure attached to their roles. The journal covers a broad spectrum of topics related to the latest technologies, new research results, and developments in the area of social robotics at all levels, from advances in core enabling technologies to system integration, aesthetic design, applications, and social implications. It provides a platform for like-minded researchers to present their findings and latest developments in social robotics, covering relevant advances in engineering, computing, the arts, and the social sciences.
The journal publishes original, peer-reviewed articles and contributions on innovative ideas and concepts, new discoveries and improvements, and novel applications by leading researchers and developers. These cover the latest fundamental advances in the core technologies that form the backbone of social robotics, distinguished developmental projects in the area, and seminal works in aesthetic design, ethics and philosophy, and studies of social impact and influence pertaining to social robotics.