Pub Date: 2022-07-17 | DOI: 10.1109/RCAR54675.2022.9872286
Meiyuan Zou, Jiajie Yu, Bo Lu, Wenzheng Chi, Lining Sun
Title: Active Pedestrian Detection for Excavator Robots based on Multi-Sensor Fusion
As common multi-functional engineering equipment, excavators are widely used in civil construction, coal mining, power engineering, and other fields. Their performance not only greatly improves work efficiency during construction but also effectively saves labor costs. However, because the excavator's working environment is complex and the machine itself has large blind areas, the driver cannot always judge the surrounding environment in time, which poses a potential threat to pedestrians. To address this problem, this paper proposes a multi-sensor fusion detection method for excavators that provides visual assistance to the driver, thereby reducing the risk of pedestrian casualties. Joint calibration determines the transformation between the camera and lidar coordinate systems. By combining the detections of the YOLOv5 pedestrian detector with segmented image information, the pedestrian's position in the image is inversely mapped onto the 3D point cloud via this transformation, which accurately locates the pedestrian in the point cloud and compensates for the image's lack of depth information. Experimental results show that the method effectively extracts pedestrian locations from complex background environments and raises timely pedestrian alarms.
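The inverse mapping this abstract describes, projecting lidar points through the calibrated camera model and keeping those that fall inside a detection box, can be sketched as follows. The intrinsics, extrinsics, and box values here are illustrative assumptions, not the paper's calibration results.

```python
import numpy as np

def pedestrian_points(points_lidar, K, R, t, box):
    """Return the lidar points whose image projection falls inside a
    2D pedestrian detection box (x_min, y_min, x_max, y_max)."""
    pts_cam = points_lidar @ R.T + t          # lidar frame -> camera frame
    in_front = pts_cam[:, 2] > 0.0            # keep points ahead of the camera
    pts_cam = pts_cam[in_front]
    uv = pts_cam @ K.T                        # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]
    x0, y0, x1, y1 = box
    inside = ((uv[:, 0] >= x0) & (uv[:, 0] <= x1) &
              (uv[:, 1] >= y0) & (uv[:, 1] <= y1))
    return points_lidar[in_front][inside]

# Toy example: identity extrinsics and a simple pinhole camera.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 10.0],    # projects to the image centre (320, 240)
                [5.0, 0.0, 10.0]])   # projects to (570, 240), outside the box
hits = pedestrian_points(pts, K, R, t, (300.0, 220.0, 340.0, 260.0))
```

The selected subset of `points_lidar` carries the depth information the image alone lacks.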
Pub Date: 2022-07-17 | DOI: 10.1109/RCAR54675.2022.9872185
Chi Zhang, Zhenna Liu, Yaoguang Wei, Dong An, Jincun Liu
Title: Improved RRT*-A*-based Three-Dimensional Path Planning Algorithm for the Robotic Dolphin
Dolphins possess high turning maneuverability in both the vertical and horizontal planes. This paper proposes a modified path planning algorithm for a robotic dolphin that fuses the rapidly-exploring random tree (RRT) with graph-based search. Considering the minimum yaw radius and minimum pitch radius constraints simultaneously, a method is proposed for computing a three-dimensional (3D) Dubins curve from two 2D Dubins curves by interpolation. The 3D Dubins curves and their lengths serve as the paths and costs of the planner, so the planned paths satisfy the robotic dolphin's motion constraints. Furthermore, to improve planning speed and path optimality, a variant RRT algorithm combined with the A* algorithm generates feasible paths with lower path cost and computation time. Finally, a tendon-driven continuum robotic dolphin provides the simulation platform for verifying the effectiveness of the proposed methods.
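A loose sketch of the search skeleton behind such a planner is below: a tree grown with goal-biased sampling and a lowest-cost parent rule (as in RRT*). It uses straight-line 2D edges and omits the 3D Dubins curvature constraints that the paper's planner enforces; the workspace bounds and parameters are illustrative.

```python
import math
import random

def plan(start, goal, is_free, step=1.0, iters=2000, radius=2.0):
    """Grow a tree from start toward goal; each new node picks the nearby
    parent minimising cost-from-start plus edge length."""
    nodes = {start: None}   # node -> parent
    g = {start: 0.0}        # cost from start
    for _ in range(iters):
        sample = goal if random.random() < 0.1 else (
            random.uniform(0.0, 20.0), random.uniform(0.0, 20.0))
        # Steer one step from the nearest node toward the sample.
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        new = tuple(a + step * (b - a) / d for a, b in zip(near, sample))
        if new in nodes or not is_free(new):
            continue
        # Lowest-cost parent among nodes within the rewiring radius.
        cand = [n for n in nodes if math.dist(n, new) <= radius]
        parent = min(cand, key=lambda n: g[n] + math.dist(n, new))
        nodes[new] = parent
        g[new] = g[parent] + math.dist(parent, new)
        if math.dist(new, goal) <= step:   # close enough: backtrack the path
            path = [new]
            while nodes[path[-1]] is not None:
                path.append(nodes[path[-1]])
            return path[::-1]
    return None

random.seed(0)   # reproducible demo in obstacle-free space
path = plan((0.0, 0.0), (10.0, 10.0), lambda p: True)
```

Replacing `math.dist` edges with 3D Dubins curve lengths, as the paper does, is what makes the resulting costs respect the yaw and pitch radius limits.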
Pub Date: 2022-07-17 | DOI: 10.1109/RCAR54675.2022.9872231
Yangyan Deng, Ding Yuan, Hong Zhang
Title: Two-stage Self-supervised MVS Network using Adaptive Depth Sampling
With the development of deep learning, multi-view stereo has recently achieved significant progress. Because three-dimensional supervision is expensive, self-supervised methods hold more potential. In this work, a novel two-stage self-supervised learning framework for multi-view stereo is proposed to overcome photometric dependency and the effect of foreshortening. Since an accurate depth hypothesis plays an important role in depth estimation, this work focuses on an adaptive depth sampling module based on propagation across neighboring spatial patches, guided by normal maps. A two-stage process is therefore adopted: the first stage produces coarse initial depth maps and normal maps, and the second-stage network refines the depth sampling module by taking the influence of foreshortening into account. Furthermore, the loss functions include a feature-metric consistency term to overcome photometric inconsistency caused by lighting variation, as well as a consistency term between the depth maps and normal maps. Experiments on the DTU dataset demonstrate that the self-supervised framework performs strongly compared with the baseline methods.
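Normal-guided depth propagation of the kind mentioned above is commonly implemented with a local plane assumption: the depth hypothesis at a neighboring pixel follows from intersecting that pixel's viewing ray with the plane through the source pixel's 3D point. A minimal sketch (not the authors' exact module; the camera intrinsics are illustrative):

```python
import numpy as np

def propagate_depth(K, p, d_p, n, q):
    """Depth hypothesis at pixel q propagated from pixel p, assuming the
    surface is locally the plane through p's 3D point with normal n."""
    Kinv = np.linalg.inv(K)
    X_p = d_p * Kinv @ np.array([p[0], p[1], 1.0])   # back-project p
    ray_q = Kinv @ np.array([q[0], q[1], 1.0])       # viewing ray of q
    return float(n @ X_p / (n @ ray_q))              # ray-plane intersection

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
n = np.array([0.0, 0.0, 1.0])                        # fronto-parallel surface
d_q = propagate_depth(K, (320.0, 240.0), 2.0, n, (330.0, 240.0))
```

For a fronto-parallel normal the propagated hypothesis equals the source depth; a tilted normal produces the slanted-surface hypotheses that make such sampling robust to foreshortening.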
Pub Date: 2022-07-17 | DOI: 10.1109/RCAR54675.2022.9872180
Wen Fu, Yanjie Li, Zhaohui Ye, Qi Liu
Title: Decision Making for Autonomous Driving Via Multimodal Transformer and Deep Reinforcement Learning*
Based on the environmental information processed by the sensing module, the decision module of an autonomous driving system integrates environmental and vehicle information so that the vehicle produces safe and reasonable driving behavior. Given the complexity and variability of driving environments, researchers have in recent years applied deep reinforcement learning (DRL) to autonomous driving control strategies. In this paper, we apply a framework combining a multimodal transformer with DRL to the autonomous driving decision problem in complex scenarios. A ResNet and a transformer extract features from the LiDAR point cloud and camera images, the Deep Deterministic Policy Gradient (DDPG) algorithm performs the subsequent decision-making task, and an information bottleneck improves the sampling efficiency of RL. Evaluation in the CARLA simulator shows that our approach allows the agent to learn better driving strategies.
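One standard ingredient of DDPG is the soft (Polyak) update that makes the target networks slowly track the online networks. A minimal sketch on raw parameter arrays (the paper's actual networks are ResNet/transformer models, not shown here):

```python
import numpy as np

def soft_update(target_params, online_params, tau=0.005):
    """DDPG soft target update: theta_t <- tau*theta + (1 - tau)*theta_t.
    Slowly tracking targets stabilise the bootstrapped critic estimates."""
    return [tau * w + (1.0 - tau) * wt
            for wt, w in zip(target_params, online_params)]

online = [np.ones((2, 2))]     # stand-in for actor/critic weight tensors
target = [np.zeros((2, 2))]
target = soft_update(target, online, tau=0.1)   # every entry becomes 0.1
```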
Pub Date: 2022-07-17 | DOI: 10.1109/RCAR54675.2022.9872264
Shubo Cao, Shitao Liu, Yunfei Shi, Yubo Pan, Lifang Han, Yiwei Yang
Title: A semi-supervised support vector machines approach for condition monitoring of construction equipment
In this paper, a semi-supervised learning-based method for condition monitoring of construction equipment is developed. The method suits vibration datasets collected from mechanical equipment on construction sites, for which class definitions are difficult to obtain. The collected vibration signals are analyzed in the time and frequency domains, respectively. Statistical features of the vibration data are combined with expert information to obtain category labels for a very small number of samples, and the fast Fourier transform (FFT) of the vibration signal is used for feature extraction to strengthen the classifier. Finally, the limited labeled samples and a large number of unlabeled samples together form the training set for a condition monitoring model based on semi-supervised support vector machines. The method is evaluated on real datasets collected from three different mechanical devices, achieving correct classification rates of 98.87%, 97.37%, and 95.33%, respectively, which indicates that it is suitable for condition monitoring of multiple types of mechanical equipment.
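The pipeline outlined above, FFT-based features plus training on few labeled and many unlabeled samples, can be illustrated with a self-training loop. Here a nearest-centroid rule stands in for the semi-supervised SVM and the vibration signals are synthetic sinusoids; both are illustrative stand-ins, not the paper's setup.

```python
import numpy as np

def fft_features(signal, bands=4):
    """Band-energy features from the FFT magnitude spectrum."""
    mag = np.abs(np.fft.rfft(signal))
    return np.array([chunk.sum() for chunk in np.array_split(mag, bands)])

def self_train(X_lab, y_lab, X_unlab, rounds=5):
    """Self-training sketch: repeatedly pseudo-label the unlabeled set with
    the current model and refit on everything."""
    X, y = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        cents = np.array([X[y == c].mean(axis=0) for c in np.unique(y)])
        pseudo = np.argmin(
            ((X_unlab[:, None, :] - cents[None]) ** 2).sum(-1), axis=1)
        X = np.vstack([X_lab, X_unlab])
        y = np.concatenate([y_lab, pseudo])
    return lambda Z: np.argmin(
        ((Z[:, None, :] - cents[None]) ** 2).sum(-1), axis=1)

# Two synthetic "machine conditions": low- and high-frequency vibration.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
slow, fast = np.sin(2 * np.pi * 4 * t), np.sin(2 * np.pi * 60 * t)
X_lab = np.array([fft_features(slow), fft_features(fast)])
X_unlab = np.array([fft_features(0.8 * slow), fft_features(1.2 * fast)])
clf = self_train(X_lab, np.array([0, 1]), X_unlab)
pred = clf(np.array([fft_features(0.9 * slow)]))
```

The unlabeled samples sharpen the class centroids before the final classifier is returned, which is the same leverage the S3VM gets from unlabeled data.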
Pub Date: 2022-07-17 | DOI: 10.1109/RCAR54675.2022.9872301
Fan Feng, Zefeng Liu, Yongfeng Cao, Le Xie
Title: Design of A Continuum Robot System with Object Detection for the Diagnosis of Vocal Fold Lesions
Continuum robots are now widely used in robot-assisted minimally invasive surgery, while deep learning is widely used in medical image detection and recognition. However, no existing robotic system integrates the two technologies for detecting vocal fold tissue lesions. In this paper, we therefore design a continuum robot for diagnosing vocal fold lesions based on a helical flexible joint and derive its master-slave kinematic mapping. In addition, we conduct object detection experiments on vocal fold lesions in a laryngeal model using YOLOv5 implemented in the PyTorch framework.
Pub Date: 2022-07-17 | DOI: 10.1109/RCAR54675.2022.9872241
Yue Zhao, Xiaoming Liu, Junnan Chen, M. Kojima, Qiang Huang, T. Arai
Title: Teleoperation of Dexterous Micro-Nano Hand with Haptic Devices
Micro-nano manipulation refers to high-precision operation on targets at the micro and nano scale. It is widely used in the assembly of small devices, single-cell manipulation and analysis, and cell assembly in tissue engineering. At present, many micro-operations still rely on manual operation, which suffers from poor accuracy, low efficiency, and low controllability. In this paper, a teleoperation system is designed that combines a three-degree-of-freedom parallel micro-nano manipulator driven by piezoelectric ceramics with the 3D Systems Touch haptic device. The system is compact, precise, fast, and convenient to operate. It greatly lowers the technical threshold for the operator and makes micro-nano manipulation tasks more intuitive and efficient, giving it strong market prospects.
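A position-mode master-slave mapping of the kind such a teleoperation system needs can be sketched as below. The 1:500 motion-scaling factor and 100 um travel limit are illustrative assumptions, not values from the paper.

```python
import numpy as np

class MasterSlaveMap:
    """Map haptic-stylus displacements (mm) to micro-manipulator
    displacements (um) with a fixed down-scaling factor."""

    def __init__(self, scale=1.0 / 500.0, workspace_um=100.0):
        self.scale = scale          # master-to-slave motion scaling
        self.limit = workspace_um   # manipulator travel limit (um)
        self.origin = None

    def map(self, master_mm):
        master_mm = np.asarray(master_mm, dtype=float)
        if self.origin is None:     # first sample defines the origin
            self.origin = master_mm
        slave_um = (master_mm - self.origin) * self.scale * 1000.0
        # Clamp commands to the piezo stage's travel range.
        return np.clip(slave_um, -self.limit, self.limit)

m = MasterSlaveMap()
m.map([0.0, 0.0, 0.0])              # sets the origin
out = m.map([1.0, 0.0, 0.0])        # 1 mm master motion -> 2 um slave motion
```

Scaling down the operator's hand motion is what lets an unskilled user command micron-level moves intuitively.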
Pub Date: 2022-07-17 | DOI: 10.1109/RCAR54675.2022.9872287
Yuanhe Chen, Qingsong Xu
Title: Design of a Movable Rotating Magnetic Field Actuation System for Target Delivery in 3-D Vascular Model
This paper presents a new movable rotating magnetic field actuation system that integrates a rotating permanent magnet as the end-effector of a robot arm. The permanent magnet is rotated by a stepper motor, creating a rotating magnetic field that drives a millimeter-scale magnetic robot in 3D space. Trajectory tracking control of the miniature robot in a 3D vascular model filled with different liquids is realized by programming the movement of the robot arm. An experimental study tests the performance of the magnetic millirobot for catheter-based targeted delivery. The results demonstrate that the millirobot can track predefined 2D planar and 3D spatial trajectories in the vascular model under wireless control by the movable rotating magnetic field. The reported magnetic actuation system provides a promising solution for targeted delivery in vascular navigation.
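Far from the magnet, a rotating permanent magnet is well approximated by a point dipole whose moment spins about the motor axis, so the field it produces at the millirobot rotates at the motor frequency. A sketch of that field (the moment magnitude and offset are illustrative, not the paper's hardware values):

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability (T*m/A)

def dipole_field(m, r):
    """Flux density of a point dipole with moment m (A*m^2) at offset r (m):
    B = mu0/(4*pi) * (3*rhat*(m.rhat) - m) / |r|^3."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4.0 * np.pi) * (3.0 * rhat * (m @ rhat) - m) / rn**3

def rotating_moment(m0, freq, t):
    """Moment of the motor-spun magnet, rotating in the x-y plane."""
    a = 2.0 * np.pi * freq * t
    return m0 * np.array([np.cos(a), np.sin(a), 0.0])

# Field 5 cm along the rotation axis at t = 0, for a 1 A*m^2 magnet at 5 Hz.
B = dipole_field(rotating_moment(1.0, 5.0, 0.0), np.array([0.0, 0.0, 0.05]))
```

Because the robot arm carries the magnet, translating the arm translates this rotating field, which is what steers the millirobot along the vascular model.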
Pub Date: 2022-07-17 | DOI: 10.1109/RCAR54675.2022.9872302
Binglin Li, Qiang Lei, Pai Li, Y. Lian
Title: Pipeline Robot Positioning System Based on Machine Learning
With the continuous development of artificial intelligence, sewage pipeline robots are gradually becoming intelligent, and such systems depend on machine perception and machine learning. To address the problem that a robot inside a sewage pipeline cannot localize itself accurately, a pipeline robot positioning system based on machine learning is designed. From a computer vision perspective, a fully convolutional neural network localizes the robot from a single RGB (red, green, blue) image captured at the current viewpoint, and the positioning results are combined with the robot's mobile platform to complete the navigation task. Tests in a simulated sewage pipeline scene verify the practical value of the method: the experimental data show that the positioning and navigation system achieves high positioning accuracy and strong stability.
Pub Date: 2022-07-17 | DOI: 10.1109/RCAR54675.2022.9872232
Jing Zhao, Zhongyi Li, Chunyang Li, Fanqing Zhang
Title: Wireless Ionic sensor on microrobots for Medical Application
Health monitoring and early diagnosis of disease have become topics of wide concern. However, existing medical detection equipment is limited by large size or insufficient sensitivity and cannot satisfy the growing demand. Microrobots combined with sensors therefore offer a new route to in-situ detection with greater sensitivity and precision in real time, owing to their tiny scale and flexible movement. Here we introduce a micro wireless ionic sensor based on an LC resonant circuit; it requires no on-board power and can easily be fabricated on a microrobot to realize real-time wireless transmission of the sensing signal. Furthermore, the sensor fabricated on the microrobot can realize remote sensing through changes in the local imaging signal during navigation in magnetic-field-based medical imaging equipment such as MRI or MPI. The non-invasive integration of sensors on microrobots will open up more applications for future in vivo monitoring technology.
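An LC tank resonates at f = 1/(2*pi*sqrt(L*C)), and an ion-sensitive capacitance shifts that resonance, which is what a wireless readout coil can detect without any on-board power. A numeric sketch with illustrative component values (not from the paper):

```python
import math

def resonant_freq(L, C):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# A 1 uH coil with a 10 pF ion-sensitive capacitor resonates near 50 MHz;
# a capacitance increase caused by the ionic environment shifts the
# resonance downward, and the shift is read out wirelessly.
f0 = resonant_freq(1e-6, 10e-12)   # baseline, about 50.3 MHz
f1 = resonant_freq(1e-6, 12e-12)   # lower: capacitance went up
```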