Human teleoperation - a haptically enabled mixed reality system for teleultrasound

IF 4.5 | JCR Q1 (COMPUTER SCIENCE, CYBERNETICS) | CAS Tier 2 (Engineering & Technology) | Human-Computer Interaction | Pub Date: 2023-06-30 | DOI: 10.1080/07370024.2023.2218355
David Black, Yas Oloumi Yazdi, Amir Hossein Hadi Hosseinabadi, Septimiu Salcudean
Citations: 0

Human teleoperation - a haptically enabled mixed reality system for teleultrasound
ABSTRACT

Current teleultrasound methods include audiovisual guidance and robotic teleoperation, which constitute tradeoffs between precision and latency versus flexibility and cost. We present a novel concept of "human teleoperation" which bridges the gap between these two methods. In the concept, an expert remotely teleoperates a person (the follower) wearing a mixed-reality headset by controlling a virtual ultrasound probe projected into the person's scene. The follower matches the pose and force of the virtual device with a real probe. The pose, force, video, ultrasound images, and 3-dimensional mesh of the scene are fed back to the expert. This control framework, where the actuation is carried out by people, allows more precision and speed than verbal guidance, yet is more flexible and inexpensive than robotic teleoperation. The purpose of this paper is to introduce this concept as well as a prototype teleultrasound system with limited haptics and local communication. The system was tested to show its potential, including mean teleoperation latencies of 0.32 ± 0.05 seconds and steady-state errors of 4.4 ± 2.8 mm and 5.4 ± 2.8° in position and orientation tracking, respectively. A preliminary test with an ultrasonographer and four patients was completed, showing lower measurement error and a completion time of 1:36 ± 0:23 minutes using human teleoperation compared to 4:13 ± 3:58 using audiovisual teleguidance.

KEYWORDS: teleoperation; tele-ultrasound; mixed reality; haptics; human-computer interaction

Disclosure statement: No potential conflict of interest was reported by the authors.

Supplementary material: Supplemental data for this article can be accessed online at https://doi.org/10.1080/07370024.2023.2218355

Funding: The work was supported by the Natural Sciences and Engineering Research Council of Canada [RGPIN-2016-04618].

Notes on contributors

David Black completed a BASc in engineering physics at the University of British Columbia (UBC), Canada, in 2021. He is currently a Vanier Scholar and PhD candidate in electrical and computer engineering at UBC. During his studies, he has worked at A&K Robotics, Vancouver, Canada, the Robotics and Control Laboratory (RCL) at UBC, and at the BC Cancer Research Centre. From 2018 to 2019 he worked as a systems engineer in Advanced Development at Carl Zeiss Meditec AG, Oberkochen, Germany, and has continued as a consultant and collaborator since 2019.

Yas Oloumi Yazdi completed a BASc in engineering physics at the University of British Columbia (UBC), Canada, in 2022. She is currently a PhD student in biomedical engineering at UBC. She has completed internships at the Michael Smith Genome Sciences Centre, the BC Cancer Research Centre, and the UBC BioMEMS lab.

Amir Hossein Hadi Hosseinabadi received BSc and MASc degrees in mechanical engineering in 2011 and 2013 from the Sharif University of Technology, Tehran, Iran, and the University of British Columbia (UBC), Vancouver, Canada, respectively. He completed a PhD in electrical and computer engineering at UBC with the Robotics and Control Laboratory (RCL). From 2013 to 2020, he was a Robotics & Control Engineer at Dynamic Attractions, Port Coquitlam, Canada. He completed internships at Microsoft, Redmond, WA, USA, and Intuitive Surgical, Sunnyvale, CA, USA. He is now a hardware engineer at Apple, Cupertino, California, USA.

Septimiu E. Salcudean was born in Cluj, Romania. He received the BEng (Hons.) and MEng degrees from McGill University, Montreal, Quebec, Canada, in 1979 and 1981, respectively, and his PhD degree from the University of California, Berkeley, USA, in 1986, all in electrical engineering. He was a Research Staff Member at the IBM T.J. Watson Research Center from 1986 to 1989. He then joined the University of British Columbia (UBC) and is currently a Professor in the Department of Electrical and Computer Engineering, where he holds the C.A. Laszlo Chair in Biomedical Engineering and a Canada Research Chair. He has courtesy appointments with the UBC School of Biomedical Engineering and the Vancouver Prostate Centre. He has been a co-organizer of the Haptics Symposium, a Technical Editor and Senior Editor of the IEEE Transactions on Robotics and Automation, and on the program committees of the ICRA, MICCAI, and IPCAI conferences. He is currently on the steering committee of the IPCAI conference and on the Editorial Board of the International Journal of Robotics Research. He is a Fellow of the IEEE, a Fellow of MICCAI, and a Fellow of the Canadian Academy of Engineering.
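The steady-state tracking errors the abstract reports (4.4 ± 2.8 mm in position, 5.4 ± 2.8° in orientation) are distances between the expert's commanded virtual-probe pose and the follower's real-probe pose. A minimal sketch of how such metrics are commonly computed, assuming positions as 3-vectors in mm and orientations as unit quaternions in (w, x, y, z) order (the function names and conventions are illustrative, not taken from the paper):

```python
import numpy as np

def position_error_mm(p_expert, p_follower):
    """Euclidean distance between commanded and achieved probe positions (mm)."""
    return float(np.linalg.norm(np.asarray(p_expert, dtype=float) -
                                np.asarray(p_follower, dtype=float)))

def orientation_error_deg(q_expert, q_follower):
    """Angle (degrees) of the rotation taking the follower's probe
    orientation to the expert's, from unit quaternions (w, x, y, z)."""
    q1 = np.asarray(q_expert, dtype=float)
    q2 = np.asarray(q_follower, dtype=float)
    q1 /= np.linalg.norm(q1)
    q2 /= np.linalg.norm(q2)
    # |dot| handles the quaternion double cover: q and -q are the same rotation.
    dot = abs(float(np.dot(q1, q2)))
    return float(np.degrees(2.0 * np.arccos(min(dot, 1.0))))
```

Averaging these two quantities over the settled portion of each trial would yield steady-state errors in the units reported above.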
Source journal: Human-Computer Interaction (Engineering & Technology – Computer Science, Cybernetics)
CiteScore: 12.20
Self-citation rate: 3.80%
Articles per year: 15
Review time: >12 weeks
Journal description: Human-Computer Interaction (HCI) is a multidisciplinary journal defining and reporting on fundamental research in human-computer interaction. The goal of HCI is to be a journal of the highest quality that combines the best research and design work to extend our understanding of human-computer interaction. The target audience is the research community with an interest in both the scientific implications and practical relevance of how interactive computer systems should be designed and how they are actually used. HCI is concerned with the theoretical, empirical, and methodological issues of interaction science and system design as it affects the user.
Latest articles from this journal:
- File hyper-searching explained
- Social fidelity in cooperative virtual reality maritime training
- The future of PIM: pragmatics and potential
- Clarifying and differentiating discoverability
- Design and evaluation of a versatile text input device for virtual and immersive workspaces