Guest Editorial: Integrating sensor fusion and perception for human–robot interaction
Hang Su, Jing Guo, Wen Qi, Mingchuan Zhou, Yue Chen
Cognitive Computation and Systems, 28 August 2021. DOI: 10.1049/ccs2.12031
Abstract
This is the Special Issue ‘Integrating Sensor Fusion and Perception for Human–Robot Interaction’ of Cognitive Computation and Systems, which introduces the latest advances in sensor fusion and perception in the human–robot interaction (HRI) field.
In recent years, as intelligent systems have developed, HRI has attracted increasing research interest. In many settings, including factories, rehabilitation robotics and operating rooms, HRI technology can be exploited to enhance the safety of human operations through machine intelligence. However, both deployed robotic systems and many ongoing investigations still lack such intelligence because of their limited capability to perceive the environment. Current HRI methods usually rely on a single sensing modality, such as tactile perception or computer vision, without integrating algorithms and hardware across modalities. Sensor fusion and perception with artificial intelligence (AI) techniques have succeeded in environment perception and activity recognition by fusing information from multi-modal sensing systems and selecting the most appropriate information for perceiving the activity or environment. Consequently, combining multi-sensor fusion and perception techniques for HRI is an exciting and promising topic.
This Special Issue aims to track the latest advances and emerging technologies in integrated sensor fusion and perception for HRI. After careful peer review and revision, four representative papers were accepted for publication. These papers cover four important application areas of multi-sensor fusion and perception technology and can be grouped into four topics, each summarised below. We strongly recommend reading the full papers of interest; they offer new ideas and inspiration.
In the paper ‘Deep learning techniques-based perfection of multi-sensor fusion oriented human-robot interaction system for identification of dense organisms’, Li et al. present an HRI system based on deep learning and sensor fusion to study the species and density of dense organisms at deep-sea hydrothermal vents. Several deep learning models based on convolutional neural networks (CNNs) are improved and compared, and their outputs are fused with environmental information from position sensors and conductivity–temperature–depth (CTD) sensors to perfect the multi-sensor fusion-oriented HRI system. First, the authors combined different meta-architectures with different feature extractors, obtaining five CNN-based object identification algorithms. Then, they compared the computational cost of the feature extractors and weighed the pros and cons of each algorithm in terms of mean detection speed, correlation coefficient and mean class-specific confidence score, confirming that Faster Region-based CNN (R-CNN)_InceptionNet is the algorithm best suited to the hydrothermal vent biological dataset. Finally, they measured the recognition accuracy for Rimicaris exoculata in dense and sparse areas, 88.3% and 95.9% respectively, to analyse the performance of Faster R-CNN_InceptionNet. Experiments show that the proposed method can automatically detect the species and quantity of dense organisms with improved speed and accuracy. It is therefore feasible, and of practical value, to use the improved multi-sensor fusion-oriented HRI system to help biologists analyse and maintain the ecological balance of deep-sea hydrothermal vents.
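To make this kind of pipeline concrete, the following is a minimal sketch rather than Li et al.'s implementation: it pairs an off-the-shelf Faster R-CNN detector from torchvision (which ships a ResNet-50 backbone, substituted here for the paper's InceptionNet) with a co-recorded CTD reading, so each frame's detections carry the environmental context a downstream fusion stage would use.

```python
# Minimal sketch (assumes torch and torchvision are installed); not the
# authors' code, and ResNet-50 stands in for the paper's InceptionNet backbone.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect_with_context(frame, ctd_reading, score_thresh=0.5):
    """Detect organisms in one frame and attach the co-recorded CTD sample.

    frame       -- float tensor of shape (3, H, W), values in [0, 1]
    ctd_reading -- e.g. {"conductivity": 3.3, "temp_c": 4.1, "depth_m": 2600}
    """
    with torch.no_grad():
        out = model([frame])[0]
    keep = out["scores"] > score_thresh
    return {
        "boxes": out["boxes"][keep],
        "labels": out["labels"][keep],
        "scores": out["scores"][keep],
        "ctd": ctd_reading,        # environmental context for the fusion stage
        "count": int(keep.sum()),  # per-frame detection count as a density proxy
    }
```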
Integrating sensor fusion and perception is not limited to the extraction and processing of physical data; it also plays an important role in multi-system coupling. In the paper ‘Research on intelligent service of customer service system’, Nie et al. describe a new generation of customer service system based on the sense coupling of an outbound call system, an enterprise internal management system and a knowledge base. The study introduces the principles of these three subsystems and explains the network structure of the intelligent customer service system; the methods of accessing the system and its complete workflow are also described. Through data sharing and information exchange between the subsystems, the proposed customer service system achieves the perceptual integration of the outbound call system, the enterprise internal management system and the knowledge base to serve customers intelligently. Drawing on cloud services and IoT technology, the system establishes a dynamically updated knowledge base and forms a management model dominated by that knowledge base.
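As an illustration of such coupling, here is a minimal, hypothetical sketch (not Nie et al.'s implementation): the three subsystems exchange data over a shared event bus, so a customer query from the outbound call system is answered by the knowledge base and logged by the management system without point-to-point wiring.

```python
# Hypothetical sketch of coupling three subsystems through a shared event bus.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()

# Knowledge base: answers customer queries and republishes the result.
knowledge = {"opening hours": "9:00-18:00"}
bus.subscribe("customer.query",
              lambda q: bus.publish("customer.answer",
                                    knowledge.get(q, "escalate to agent")))

# Management system: records every interaction, which is also how the
# knowledge base could be updated dynamically over time.
audit_log = []
bus.subscribe("customer.answer", audit_log.append)

# Outbound call system: feeds a query into the coupled pipeline.
bus.publish("customer.query", "opening hours")
print(audit_log)  # -> ['9:00-18:00']
```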
In recent years, wearable sensors have developed rapidly, especially in the medical and health field. Increasingly mature commercial wearable sensors have appeared and have given rise to a new network form: the body area network (BAN). A BAN networks the wearable devices on the body so that they can share information and data, and it is applied in medical and health devices, especially intelligent clothing. This Special Issue includes a review article on wearable sensors and body area networks by Ren et al. Treating the wearable sensor as the critical factor in wearable device fusion, the paper analyses the classification, technology and current state of wearable sensors; discusses the problems wearable sensors pose for the BAN in terms of human–computer interaction experience, data accuracy, multiple interaction modes and battery power supply; and summarises future directions in multi-sensor fusion, compatible biosensor materials, and low power consumption with high sensitivity. Furthermore, sustainable design directions are introduced: visibility design, identification of use scenarios, short-term human–computer interaction, reduction of the interaction process and invisible integration.
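As a small illustration of the multi-sensor fusion direction the review highlights, the following sketch (illustrative only, not taken from the paper) fuses two noisy readings of the same quantity from different on-body sensors with an inverse-variance weighted average, a standard fusion rule that trusts the less noisy sensor more.

```python
# Illustrative only: inverse-variance weighted fusion of on-body sensors.
def fuse(readings):
    """readings: list of (value, variance) pairs from different sensors."""
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    return sum(w * v for w, (v, _) in zip(weights, readings)) / total

# E.g. skin temperature from a chest patch (low noise) and a wristband
# (higher noise); the fused estimate leans toward the more trusted sensor.
print(fuse([(36.9, 0.01), (37.4, 0.09)]))  # ~36.95
```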
Augmented reality is one of the most inspiring technologies of recent years, and there is little doubt that it will lead the trend of immersive applications in industry and medicine. This Special Issue includes a paper by Hao et al. on the augmented reality display of neurosurgical craniotomy lesions based on feature contour matching. The proposed method provides doctors with accurate lesion information: it visualises the patient's intracranial anatomy and helps doctors plan the path for scalp incision and craniectomy. The method also performs non-rigid matching for the patient, avoiding additional injury, sparing doctors the extra work of pasting marker points on the patient and reducing the burden of repeated medical scans. Experiments comparing feature point cloud matching with feature contour matching show that the contour-based method achieves a better display effect. In addition, a user interface is designed: the doctor can read the patient's personal information in the upper left corner of the interface and can zoom and rotate the virtual model on a mobile terminal screen via buttons, providing a visual basis for preoperative preparation. The method effectively improves both the efficiency of the doctor's operation and patient safety, and the proposed contour-based augmented reality matching method also provides a theoretical foundation for applying augmented reality to neurosurgery in the future.
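To give a flavour of contour-based matching, here is a minimal sketch using OpenCV's Hu-moment shape comparison; it is a simplified stand-in, not Hao et al.'s non-rigid method: the lesion model's outline is compared against contours extracted from a camera frame, and the virtual model would then be anchored to the best-matching contour.

```python
# Simplified stand-in for contour matching (not the paper's non-rigid method).
import cv2

def best_contour_match(template_mask, frame_gray):
    """Return the frame contour most similar in shape to the template's outline.

    template_mask -- binary uint8 image of the lesion model's silhouette
    frame_gray    -- grayscale uint8 camera frame
    """
    t_cnts, _ = cv2.findContours(template_mask, cv2.RETR_EXTERNAL,
                                 cv2.CHAIN_APPROX_SIMPLE)
    template = max(t_cnts, key=cv2.contourArea)

    edges = cv2.Canny(frame_gray, 50, 150)
    f_cnts, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                 cv2.CHAIN_APPROX_SIMPLE)
    # Lower matchShapes score = more similar (invariant to scale/rotation).
    return min(f_cnts,
               key=lambda c: cv2.matchShapes(template, c,
                                             cv2.CONTOURS_MATCH_I1, 0.0))
```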
All of the papers selected for this Special Issue demonstrate the significant impact and application potential of sensor fusion and perception in HRI. Multi-sensor fusion and perception can effectively improve system accuracy, increase stability and enhance the human–computer interaction experience. Many challenges in this field still require future research attention, such as the design of fusion methods and the evaluation of fusion results. With further development, the integration of sensor fusion and perception in HRI will find broad application.