Guest Editorial: Integrating sensor fusion and perception for human–robot interaction

Cognitive Computation and Systems | Published: 2021-08-28 | DOI: 10.1049/ccs2.12031
Hang Su, Jing Guo, Wen Qi, Mingchuan Zhou, Yue Chen
{"title":"Guest Editorial: Integrating sensor fusion and perception for human–robot interaction","authors":"Hang Su,&nbsp;Jing Guo,&nbsp;Wen Qi,&nbsp;Mingchuan Zhou,&nbsp;Yue Chen","doi":"10.1049/ccs2.12031","DOIUrl":null,"url":null,"abstract":"<p>This is the Special Issue ‘Integrating Sensor Fusion and Perception for Human–Robot Interaction’ of <i>IET Cognitive Computation and System</i> that introduces the latest advances in sensor fusion and perception in the human–robot interaction (HRI) field.</p><p>In recent years, as intelligent systems have developed, HRI has attracted increasing research interest. In many areas, including factories, rehabilitation robots and operating rooms, HRI technology can be exploited to enhance safety by using intelligence for human operations. However, both available practical robotic systems and some ongoing investigations lack intelligence due to their limited capabilities in perceiving their environment. Nowadays, the HRI method usually focusses on a single sensing system without integrating algorithms and hardware, such as tactile perception and computer vision. Sensor fusion and perception with artificial intelligence (AI) techniques have been successful in environment perception and activity recognition by fusing information from a multi-modal sensing system and selecting the most appropriate information to perceive the activity or environment. Consequently, combining the technique of multi-sensor fusion and perception for HRI is an exciting and promising topic.</p><p>This Special Issue aims to track the latest advances and newly appeared technology in the integrated sensor fusion and perception for HRI. After careful peer reviews and revision, four representative papers were accepted for publication in this Special Issue. These papers represent four important application areas of multi-sensor fusion and perception technology and can be assigned into four topics. The related summary of every topic is given below. We strongly recommend reading the entire paper if interested. They will bring some new ideas and inspire the mind.</p><p>In the paper ‘Deep learning techniques-based perfection of multi-sensor fusion oriented human-robot interaction system for identification of dense organisms’, Li et al. present an HRI system based on deep learning and sensors' fusion to study the species and density of dense organisms in the deep-sea hydrothermal vent. In this paper, several deep learning models based on convolutional neural network (CNN) are improved and compared to study the species and density of dense organisms in deep-sea hydrothermal vent, which are fused with related environmental information provided by position sensors and conductivity–temperature–depth (CTD) sensors, so as to perfect the multi-sensor fusion-oriented HRI system. First, the authors combined different meta-architectures and different feature extractors and obtained five object identification algorithms based on CNN. Then, they compared the computational cost of feature extractors and weighed the pros and cons of each algorithm from mean detection speed, correlation coefficient and mean class-specific confidence score to confirm that Faster Region-based CNN (R-CNN)_InceptionNet is the best algorithm applicable to the hydrothermal vent biological dataset. Finally, they calculated the cognitive accuracy of <i>rimicaris exoculata</i> in dense and sparse areas, which were 88.3% and 95.9%, respectively, to analyse the performance of the Faster R-CNN_InceptionNet. 
Experiments show that the proposed method can automatically detect the species and quantity of dense organisms with higher speed and accuracy. And it is feasible and of realistic value that the improved multi-sensor fusion-oriented HRI system is used to help biologists analyse and maintain the ecological balance of deep-sea hydrothermal vents.</p><p>Integrating sensor fusion and perception is not limited to the extraction and processing of the physic data. It also plays an important role in multi-system coupling. In the paper ‘Research on intelligent service of customer service system’, Nie et al. illustrate a new generation of customer service systems based on the sense coupling of the outbound call system, enterprise internal management system and knowledge base. This study introduces the principle of the outbound call system, the enterprise internal management system, and the knowledge base and explains the network structure of the intelligent customer service system. Also, the methods of accessing the intelligent customer service system and the whole workflow are described. Through data sharing and information exchange between multiple systems, the new customer service system proposed achieves the perceptual integration of the outbound call system, the enterprise internal management system and the knowledge base to provide service for customers intelligently. Based on the application of cloud service and IoT technology, the intelligent customer service system establishes a dynamically updated knowledge base and forms a management model dominated by the knowledge base.</p><p>In recent years, wearable sensors have developed fast, especially in the medical and health field. More mature commercial wearable sensors have appeared and generated a new network formation—the body area network (BAN). The BAN is composed of every wearable device network on the body to share information and data, which is applied in medical and health devices, especially in intelligent clothing. This Special Issue includes a review article on wearable sensor and body area network by Ren et al. Based on the wearable sensor in the critical factor of wearable device fusion, this paper analyses the classification, technology, and current situation of a wearable sensor, discusses the problems of a wearable sensor for the BAN from the aspects of human–computer interaction experience, data accuracy, multiple interaction modes, and battery power supply, and summarises the direction of multi-sensor fusion, compatible biosensor materials, and low power consumption and high sensitivity. Furthermore, a sustainable design direction of visibility design, identification of use scenarios, short-term human–computer interaction, interaction process reduction and integration invisibility are introduced.</p><p>Augmented reality is one of the most inspiring technology in recent years. There is no doubt that augment reality will lead the trend of the immerse application in the industry and medical field. This Special Issue includes a paper on augmented reality display of neurosurgery craniotomy lesions based on feature contour matching by Hao et al. In this paper, an augmented reality display method for neurosurgical craniotomy lesions based on feature contour matching is proposed, which uses an augmented reality display method to provide doctors with accurate lesion information. It can visualise the patient's intracranial information and help doctors plan the path of scalp cutting and craniectomy. 
This method also performs non-rigid matching for the patient, eliminates additional injury to the patient, reduces the extra work of doctors to paste marking points for the patient, and reduces the burden of multiple medical scans for the patient. Through experiments to compare the feature point cloud matching and feature contour matching methods, it is proved that the feature contour matching method has a better display effect. In addition, a user interface is designed. The doctor can determine the patient's personal information through the text displayed in the upper left corner of the interface and zoom in, zoom out, and rotate the virtual model on a mobile terminal screen by pressing buttons. It provides a visual basis for the doctor's preoperative preparation. The method described in this article effectively improves the efficiency of the doctor's operation as well as patient safety. The proposed augmented reality matching method based on feature contours also provides basic theoretical help for applying augmented reality to neurosurgery in the future.</p><p>All of the papers selected for this Special Issue show the significant effect and application potential of sensor fusion and perception in HRI application. Multi-sensor fusion and perception can effectively improve system accuracy, increase stability and improve the human–computer interaction experience. There are still many challenges in this field that require future research attention, such as the fusion method and the fusion result evaluation. With further development, the integration of sensor fusion and perception in HRI will have broad application.</p><p>Examples of published guest editorials are given below for your reference:</p><p>https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/iet-gtd.2020.1493</p><p>https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/iet-pel.2020.0051</p><p>https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/iet-rsn.2020.0089</p>","PeriodicalId":33652,"journal":{"name":"Cognitive Computation and Systems","volume":null,"pages":null},"PeriodicalIF":1.2000,"publicationDate":"2021-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12031","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Computation and Systems","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/ccs2.12031","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

This is the Special Issue ‘Integrating Sensor Fusion and Perception for Human–Robot Interaction’ of Cognitive Computation and Systems, which introduces the latest advances in sensor fusion and perception in the human–robot interaction (HRI) field.

In recent years, as intelligent systems have developed, HRI has attracted increasing research interest. In many settings, including factories, rehabilitation robotics and operating rooms, HRI technology can enhance the safety of human operations by adding machine intelligence. However, many practical robotic systems, as well as some ongoing investigations, lack such intelligence because their ability to perceive the environment is limited. Current HRI methods usually rely on a single sensing modality, such as tactile perception or computer vision, without integrating algorithms and hardware across modalities. Sensor fusion and perception combined with artificial intelligence (AI) techniques have been successful in environment perception and activity recognition: they fuse information from a multi-modal sensing system and select the most appropriate information to perceive the activity or environment. Consequently, combining multi-sensor fusion and perception techniques for HRI is an exciting and promising topic.
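As a purely illustrative example (not taken from any paper in this issue), the following Python sketch shows one common way to fuse estimates from two sensing modalities by weighting each modality with its reported confidence. The modality names, readings and confidence values are hypothetical.

```python
# Minimal sketch of confidence-weighted multi-modal fusion.
# Modality names, readings and confidences are hypothetical and only
# illustrate selecting/weighting the most reliable information from
# a multi-modal sensing system.

def fuse_estimates(estimates):
    """Fuse scalar estimates from several modalities.

    estimates: list of (value, confidence) pairs, confidence in (0, 1].
    Returns the confidence-weighted average.
    """
    total_weight = sum(conf for _, conf in estimates)
    if total_weight == 0:
        raise ValueError("at least one modality must report non-zero confidence")
    return sum(value * conf for value, conf in estimates) / total_weight


if __name__ == "__main__":
    # Hypothetical distance-to-human estimates (metres) from two modalities.
    vision_estimate = (1.20, 0.9)   # computer vision, high confidence
    tactile_estimate = (1.05, 0.4)  # tactile/proximity sensing, lower confidence
    print(fuse_estimates([vision_estimate, tactile_estimate]))  # ~1.154
```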

This Special Issue aims to track the latest advances and emerging technologies in integrated sensor fusion and perception for HRI. After careful peer review and revision, four representative papers were accepted for publication. They cover four important application areas of multi-sensor fusion and perception technology and are grouped into four topics, each summarised below. We strongly recommend reading the full papers; they offer new ideas and inspiration.

In the paper ‘Deep learning techniques-based perfection of multi-sensor fusion oriented human-robot interaction system for identification of dense organisms’, Li et al. present an HRI system based on deep learning and sensor fusion for studying the species and density of dense organisms at deep-sea hydrothermal vents. Several convolutional neural network (CNN)-based deep learning models are improved and compared, and their outputs are fused with environmental information provided by position sensors and conductivity–temperature–depth (CTD) sensors to refine the multi-sensor fusion-oriented HRI system. First, the authors combined different meta-architectures with different feature extractors to obtain five CNN-based object identification algorithms. They then compared the computational cost of the feature extractors and weighed the pros and cons of each algorithm in terms of mean detection speed, correlation coefficient and mean class-specific confidence score, confirming that Faster Region-based CNN (R-CNN) with an InceptionNet backbone is the algorithm best suited to the hydrothermal vent biological dataset. Finally, they measured the recognition accuracy for Rimicaris exoculata in dense and sparse areas, which was 88.3% and 95.9%, respectively, to analyse the performance of Faster R-CNN_InceptionNet. Experiments show that the proposed method can automatically detect the species and quantity of dense organisms with high speed and accuracy, and the improved multi-sensor fusion-oriented HRI system is a feasible and practical tool to help biologists analyse and maintain the ecological balance of deep-sea hydrothermal vents.
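The exact pipeline is described in Li et al.'s paper; purely as an illustrative sketch of the fusion step, the snippet below pairs hypothetical per-image detection counts from an object detector with the nearest-in-time CTD reading, so that organism density can be reported together with its environmental context. All field names and values are assumptions, not the authors' data format.

```python
# Illustrative sketch only: associate per-image detection counts with the
# nearest-in-time CTD (conductivity-temperature-depth) reading.
# Record formats, field names and values are hypothetical.
from bisect import bisect_left
from dataclasses import dataclass


@dataclass
class CTDReading:
    timestamp: float       # seconds since start of dive
    temperature_c: float
    depth_m: float


def nearest_ctd(readings, timestamp):
    """Return the CTD reading closest in time to `timestamp`.

    `readings` must be sorted by timestamp.
    """
    times = [r.timestamp for r in readings]
    i = bisect_left(times, timestamp)
    candidates = readings[max(0, i - 1):i + 1]
    return min(candidates, key=lambda r: abs(r.timestamp - timestamp))


def fuse_detections_with_ctd(detections, readings):
    """detections: list of (timestamp, organism_count) pairs, one per image."""
    fused = []
    for ts, count in detections:
        ctd = nearest_ctd(readings, ts)
        fused.append({
            "timestamp": ts,
            "count": count,
            "depth_m": ctd.depth_m,
            "temperature_c": ctd.temperature_c,
        })
    return fused


if __name__ == "__main__":
    ctd = [CTDReading(0.0, 3.1, 2495.0), CTDReading(10.0, 3.4, 2498.0)]
    dets = [(2.0, 37), (9.5, 12)]   # hypothetical per-frame detection counts
    for row in fuse_detections_with_ctd(dets, ctd):
        print(row)
```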

Integrating sensor fusion and perception is not limited to the extraction and processing of physical data; it also plays an important role in coupling multiple systems. In the paper ‘Research on intelligent service of customer service system’, Nie et al. describe a new generation of customer service system based on the sense coupling of an outbound call system, an enterprise internal management system and a knowledge base. The study introduces the principles of these three components, explains the network structure of the intelligent customer service system, and describes the methods of accessing it and the overall workflow. Through data sharing and information exchange between the subsystems, the proposed customer service system achieves the perceptual integration of the outbound call system, the enterprise internal management system and the knowledge base to serve customers intelligently. Building on cloud services and IoT technology, the system maintains a dynamically updated knowledge base and forms a management model centred on that knowledge base.
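Nie et al. describe the system at an architectural level; the toy sketch below only illustrates the general idea of coupling several subsystems around a shared, dynamically updated knowledge base. The class and method names are hypothetical and are not taken from the paper.

```python
# Toy sketch of subsystems coupled through a shared knowledge base.
# Names and behaviour are hypothetical illustrations, not the paper's design.

class KnowledgeBase:
    """Central store that the other subsystems read from and update."""

    def __init__(self):
        self._entries = {}

    def update(self, topic, answer):
        self._entries[topic] = answer          # dynamic update

    def lookup(self, topic):
        return self._entries.get(topic, "No answer yet; escalating to a human agent.")


class ManagementSystem:
    """Stands in for the enterprise internal management system."""

    def __init__(self, kb: KnowledgeBase):
        self.kb = kb

    def publish_policy(self, topic, answer):
        self.kb.update(topic, answer)          # information exchange into the KB


class OutboundCallSystem:
    """Stands in for the outbound call / customer-facing system."""

    def __init__(self, kb: KnowledgeBase):
        self.kb = kb

    def answer(self, question):
        return self.kb.lookup(question)        # data sharing out of the KB


if __name__ == "__main__":
    kb = KnowledgeBase()
    ManagementSystem(kb).publish_policy("refund", "Refunds are processed within 7 days.")
    print(OutboundCallSystem(kb).answer("refund"))
    print(OutboundCallSystem(kb).answer("warranty"))
```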

In recent years, wearable sensors have developed rapidly, especially in the medical and health field. Mature commercial wearable sensors have appeared and given rise to a new network formation, the body area network (BAN), in which the wearable devices on the body are networked to share information and data; it is applied in medical and health devices, especially in intelligent clothing. This Special Issue includes a review article on wearable sensors and body area networks by Ren et al. Taking the wearable sensor as the critical factor in wearable device fusion, the paper analyses the classification, technology and current state of wearable sensors; discusses the problems of wearable sensors for the BAN from the aspects of human–computer interaction experience, data accuracy, multiple interaction modes and battery power supply; and summarises the directions of multi-sensor fusion, compatible biosensor materials, and low power consumption with high sensitivity. Furthermore, sustainable design directions are introduced: visibility design, identification of use scenarios, short-term human–computer interaction, reduction of the interaction process and integration invisibility.
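As a hedged illustration of the multi-sensor fusion direction the review points towards (not a method from the review itself), the sketch below combines heart-rate estimates from several hypothetical BAN devices using inverse-variance weighting, a standard way to favour the more reliable sensor.

```python
# Minimal sketch: inverse-variance fusion of readings from several
# body-area-network devices. Device names, readings and variances are
# hypothetical; this is a textbook fusion rule, not the review's method.

def inverse_variance_fusion(readings):
    """readings: list of (value, variance) pairs with variance > 0.

    Returns (fused_value, fused_variance).
    """
    weights = [1.0 / var for _, var in readings]
    fused_value = sum(w * v for w, (v, _) in zip(weights, readings)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused_value, fused_variance


if __name__ == "__main__":
    # Hypothetical heart-rate estimates (beats per minute) and variances.
    wrist_ppg = (72.0, 9.0)    # optical wrist sensor, noisier
    chest_ecg = (70.0, 1.0)    # ECG chest patch, more reliable
    print(inverse_variance_fusion([wrist_ppg, chest_ecg]))  # close to the ECG value
```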

Augmented reality is one of the most inspiring technologies of recent years, and it will undoubtedly lead the trend of immersive applications in industry and medicine. This Special Issue includes a paper on the augmented reality display of neurosurgical craniotomy lesions based on feature contour matching by Hao et al. The proposed method uses an augmented reality display to provide doctors with accurate lesion information: it can visualise the patient's intracranial information and help doctors plan the path of scalp incision and craniectomy. The method performs non-rigid matching for the patient, avoids additional injury, spares doctors the extra work of attaching marker points to the patient and reduces the burden of repeated medical scans. Experiments comparing feature point cloud matching with feature contour matching show that the feature contour matching method has the better display effect. In addition, a user interface is designed: the doctor can check the patient's personal information from the text displayed in the upper left corner of the interface and can zoom in, zoom out and rotate the virtual model on a mobile terminal screen by pressing buttons, providing a visual basis for preoperative preparation. The described method effectively improves the efficiency of the doctor's operation as well as patient safety, and the proposed feature contour-based augmented reality matching method also provides a basic theoretical foundation for applying augmented reality to neurosurgery in the future.
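Hao et al.'s method performs non-rigid matching on feature contours; the details are in their paper. Purely as a simplified illustration of comparing two extracted contours, the sketch below uses OpenCV's Hu-moment-based shape matching on synthetic binary masks; it is not the authors' non-rigid algorithm, and the shapes are made up.

```python
# Simplified illustration of contour similarity with OpenCV (Hu moments).
# This is NOT the non-rigid feature-contour matching of Hao et al.;
# it only shows the general idea of comparing extracted contours.
import cv2
import numpy as np


def largest_contour(mask):
    """Return the largest external contour of a binary mask (OpenCV 4.x API)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)


# Two synthetic binary masks standing in for a preoperative model outline
# and a contour extracted from the live camera view (both hypothetical).
model_mask = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(model_mask, (100, 100), 60, 255, thickness=-1)

scene_mask = np.zeros((200, 200), dtype=np.uint8)
cv2.ellipse(scene_mask, (100, 100), (62, 58), 0, 0, 360, 255, thickness=-1)

# Lower score = more similar shapes (method I1 compares Hu moments).
score = cv2.matchShapes(largest_contour(model_mask),
                        largest_contour(scene_mask),
                        cv2.CONTOURS_MATCH_I1, 0.0)
print(f"contour dissimilarity: {score:.4f}")
```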

All of the papers selected for this Special Issue demonstrate the significant impact and application potential of sensor fusion and perception in HRI. Multi-sensor fusion and perception can effectively improve system accuracy, increase stability and enhance the human–computer interaction experience. Many challenges in this field still require future research attention, such as fusion methods and the evaluation of fusion results. With further development, the integration of sensor fusion and perception in HRI will find broad application.

