Pub Date: 2022-12-05. DOI: 10.1109/ROBIO55434.2022.10011795
Jinsung Ahn, Y. Yamakawa
This paper presents a new image-processing method for line tracing, a simple and fundamental task applied in unmanned systems. The method uses multiple regions of interest to draw on information from the entire image that traditional image processing discards, enabling more accurate and flexible line tracing. The acquired machine-vision image is divided into three regions: a feedback region, a prediction region, and an inspection region. A different process is applied to each region, extracting parameters suited to that region's characteristics that can enhance line-tracing performance. The extracted parameters are fed into a proportional control method, implemented on a robot arm with a camera, and evaluated against basic proportional control by comparing adaptability to a sharp curve. The new method proved more adaptable in line tracing than the traditional single-region-of-interest method.
"Full Utilization of a Single Image by Characterizing Multiple Regions of Interest for Line Tracing," in 2022 IEEE International Conference on Robotics and Biomimetics (ROBIO).
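The three-region split and the proportional steering law described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the band boundaries, the gain `kp`, and the binary-image representation are all assumptions made for this sketch.

```python
# Minimal sketch of multi-ROI line tracing (illustrative, not the paper's code).
# The image is a binary 2D list: 1 = line pixel. Three horizontal bands play the
# roles of the inspection (far), prediction (middle) and feedback (near)
# regions; only the feedback band drives the proportional controller.

def band_centroid(img, r0, r1):
    """Mean column index of line pixels in rows r0..r1-1, or None if empty."""
    cols = [c for r in range(r0, r1) for c, v in enumerate(img[r]) if v]
    return sum(cols) / len(cols) if cols else None

def steer_command(img, kp=0.02):
    h, w = len(img), len(img[0])
    inspection = band_centroid(img, 0, h // 3)           # farthest band
    prediction = band_centroid(img, h // 3, 2 * h // 3)  # upcoming band
    feedback = band_centroid(img, 2 * h // 3, h)         # nearest band
    if feedback is None:
        return 0.0, prediction, inspection
    error = feedback - (w - 1) / 2.0   # lateral offset from image centre
    return -kp * error, prediction, inspection
```

A line lying right of centre yields a corrective command of opposite sign, while the prediction and inspection centroids remain available for anticipating the sharp curves on which the paper evaluates adaptability.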
This paper proposes a four-wheel mobile robot that uses a passive star-wheel configuration to climb stairs. Without adding any control complexity, the robot climbs a standard indoor staircase (15 x 28 cm steps) at 0.7 s per step. It also adapts to stairs of other sizes, avoiding the slipping problem that arises when active star wheels are used for climbing. The paper analyzes the obstacle-surmounting conditions of the four-star-wheel robot during stair climbing and uses statics to calculate the driving torque. Simulation validates the torque-consumption analysis and confirms the stability of the robot's central trajectory during climbing. These results provide a basis for quantifying the robot's stair-climbing capability under a given load. A robot prototype platform was built and physical experiments were conducted to validate its performance.
"A Stair-Climbing Robot with Star-wheel Configuration," Tongxin Cui, Wenhui Wang, Zheng Zhu, Jing Wu, Zhenzhong Jia, in 2022 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2022-12-05. DOI: 10.1109/ROBIO55434.2022.10011928
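As a back-of-the-envelope companion to the statics analysis mentioned above, the classic single-wheel curb-climbing torque bound can be written down directly. This is a textbook simplification under stated assumptions (one rigid wheel pivoting quasi-statically about the step edge, carrying the full supported mass), not the paper's four-star-wheel derivation.

```python
import math

def curb_climb_torque(mass_kg, wheel_radius_m, riser_m, g=9.81):
    """Quasi-static torque (N*m) for one rigid wheel of radius R to pivot
    over a step edge of height h <= R: the supported weight m*g acts on a
    horizontal lever arm sqrt(2*R*h - h^2) about the edge contact point."""
    R, h = wheel_radius_m, riser_m
    if h > R:
        raise ValueError("riser higher than wheel radius")
    return mass_kg * g * math.sqrt(2.0 * R * h - h * h)
```

For example, 10 kg supported on a 0.1 m wheel against a 0.1 m riser needs about 9.8 N*m; reducing the height each contact must surmount is one motivation for star-wheel spokes.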
Pub Date: 2022-12-05. DOI: 10.1109/ROBIO55434.2022.10011683
W. He, Tao Wang
To address the insufficient load capacity of soft robots, a robotic arm based on a rigid-soft coupling structure is proposed. It consists of externally mounted fluidic soft actuators and a central rigid skeleton whose stiffness variation is realized by the jamming principle. The rigid skeleton is a Y-shaped linkage mechanism that can bend and elongate simultaneously. The coupled structure has the same degrees of freedom as the original three-chamber soft robotic arm without the rigid skeleton. A prototype of the robotic arm and an experimental setup were developed, and load-capacity and dynamic-response experiments were carried out. The results verify that the proposed rigid-soft coupling arm is superior to the skeletonless soft robotic arm in both load-carrying capacity and dynamic response performance.
"Design and Experiments of a Robotic Arm with a Rigid-Soft Coupling Structure*," in 2022 IEEE International Conference on Robotics and Biomimetics (ROBIO).
Pub Date: 2022-12-05. DOI: 10.1109/ROBIO55434.2022.10011651
Yunlong Wu, Jinghua Li, Haoxiang Jin, Jiexin Zhang, Yanzhen Wang
In this paper, we propose RBT-HCI, a reliable behavior tree (BT) planning method with human-computer interaction, aimed at generating an interpretable and human-acceptable BT. Compared with other BT generation methods, RBT-HCI reliably plans a BT from a knowledge base. When no usable BT can be planned automatically, instead of terminating or relaxing the rules, RBT-HCI takes a different approach: the decision is deferred to human-computer interaction, enhancing the method's reliability and robustness. The effectiveness of RBT-HCI is verified on a robot object-grasping example, showing that reliable and robust planning results can be obtained through knowledge-based automatic planning combined with human-computer interaction.
"RBT-HCI: A Reliable Behavior Tree Planning Method with Human-Computer Interaction," in 2022 IEEE International Conference on Robotics and Biomimetics (ROBIO).
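The fallback idea above can be caricatured in a few lines: assemble a behavior tree from a knowledge base, and when no rule covers a goal, defer that decision to a human callback instead of aborting. The node classes and knowledge-base format below are invented for this sketch; they are not RBT-HCI's actual data structures.

```python
# Toy behaviour-tree planner with a human-in-the-loop fallback.

class Action:
    def __init__(self, name):
        self.name = name

    def tick(self):
        return "SUCCESS"   # stand-in: real actions would execute and report

class Sequence:
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() != "SUCCESS":
                return "FAILURE"
        return "SUCCESS"

def plan_bt(goals, knowledge_base, ask_human):
    children = []
    for goal in goals:
        if goal in knowledge_base:
            children.append(Action(knowledge_base[goal]))
        else:
            # No automatic plan exists: the human-computer interaction step.
            children.append(Action(ask_human(goal)))
    return Sequence(children)
```

In a grasping example, goals the knowledge base covers are planned automatically and only the uncovered ones trigger a human query, which is the reliability property the paper emphasizes.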
Pub Date: 2022-12-05. DOI: 10.1109/ROBIO55434.2022.10011873
Boyue Zhang, Shaowei Cui, C. Zhang, Jingyi Hu, Shuo Wang
In robot grasping and dexterous manipulation tasks, tactile sensing is important for adjusting manipulator control. In this paper, we present a novel low-cost parallel gripper with high-resolution tactile sensing, named the GelStereo Gripper. We further propose an adaptive grasp strategy that endows the gripper with the ability to maintain grasp stability through tactile feedback. The gripper was installed on our robot platform and evaluated in various grasp experiments using the proposed control methods. The results verify the reliability of the GelStereo Gripper and the effectiveness of the proposed strategy on experimental objects with different features.
"Visuotactile Feedback Parallel Gripper for Robotic Adaptive Grasping," in 2022 IEEE International Conference on Robotics and Biomimetics (ROBIO).
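A stability-maintaining strategy of the kind described can be caricatured as a one-step force regulator: tighten when slip is detected, otherwise relax slowly toward a floor so the object is neither dropped nor crushed. The slip signal, bounds, and step sizes below are invented for illustration and say nothing about the GelStereo sensor's actual processing.

```python
# One step of an illustrative tactile-feedback grip regulator.

def adjust_grip(force, slipping, f_min=1.0, f_max=10.0, step=0.5):
    if slipping:
        return min(force + step, f_max)    # tighten quickly on detected slip
    return max(force - 0.1 * step, f_min)  # relax slowly when stable
```

Run once per tactile frame, this keeps the commanded force bounded while reacting asymmetrically: fast on slip, gentle when stable.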
Pub Date: 2022-12-05. DOI: 10.1109/ROBIO55434.2022.10011881
Wenjing Shi, Yihui Li, Y. Guan, Xiaohan Chen, Shengtian Yang, Senyu Mo
Robots can play musical instruments such as the piano, but existing robots are not sufficiently automated for this purpose. In this paper, we design an automatic finger and arm planning system for dual-arm robots whose input is a commonly used two-voice musical score with high- and low-pitched parts. The digital score can be obtained with methods from the field of Optical Score Recognition. Combining the characteristics of the score and the robot, we automatically generate fingering with a dynamic programming approach, then generate the movements of the robot's arms from the annotated fingering. The system is demonstrated in simulated experiments. Our method is more applicable to robots than general fingering-generation algorithms.
"Optimized Fingering Planning for Automatic Piano Playing Using Dual-arm Robot System," in 2022 IEEE International Conference on Robotics and Biomimetics (ROBIO).
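Dynamic-programming fingering of the kind described can be sketched as follows: assign one of five fingers to each key so that the accumulated transition cost is minimal. The cost model below (penalizing mismatch between key distance and finger distance, plus same-finger jumps) is an invented stand-in for whatever robot-specific cost the paper optimizes.

```python
# Hedged DP sketch: minimal-cost finger assignment over a note sequence.

def transition_cost(key_a, fin_a, key_b, fin_b):
    stretch = abs((key_b - key_a) - (fin_b - fin_a))
    same_finger_jump = 3 if fin_a == fin_b and key_a != key_b else 0
    return stretch + same_finger_jump

def plan_fingering(keys, fingers=(1, 2, 3, 4, 5)):
    cost = {f: 0 for f in fingers}   # best cost of a plan ending on finger f
    back = []                        # back[i][f] = best predecessor finger
    for i in range(1, len(keys)):
        new, prev = {}, {}
        for f in fingers:
            best = min(fingers,
                       key=lambda g: cost[g] + transition_cost(keys[i - 1], g, keys[i], f))
            new[f] = cost[best] + transition_cost(keys[i - 1], best, keys[i], f)
            prev[f] = best
        cost = new
        back.append(prev)
    f = min(cost, key=cost.get)      # cheapest final finger, then backtrack
    seq = [f]
    for prev in reversed(back):
        f = prev[f]
        seq.append(f)
    return list(reversed(seq))
```

On an ascending run of whole tones the planner naturally spreads the fingers (e.g. 1-3-5), while a repeated key keeps the same finger at zero cost.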
In this paper, we propose a precise LiDAR SLAM method in an optimization framework that uses plane-like objects as landmarks. Unlike general methods, a finite-plane feature is used to represent each landmark, and a new residual model is designed so that constraints from the landmark's edges can limit the LiDAR's position parallel to the landmark, leading to more accurate results. Moreover, a floor plan is used to provide global poses for drift reduction, and a feature-orientation prior prevents map distortion when updating inaccurate parts of the floor plan. Experiments are conducted on data collected in a real environment. The results qualitatively show that the proposed method builds a corrected, distortion-free map from the floor plan, and quantitatively verify that it outperforms baseline methods in accuracy.
"Precise LiDAR SLAM in Structured Scene Using Finite Plane and Prior Constraint," Yuhui Xie, Wentao Zhao, Jiahao Wang, Jingchuan Wang, Weidong Chen, in 2022 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2022-12-05. DOI: 10.1109/ROBIO55434.2022.10011847
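The finite-plane idea, keeping the usual point-to-plane distance but letting the landmark's bounded extent (its edges) decide which points constrain it, can be sketched as below. Vector math is done on plain tuples; the rectangular-extent parameterization is an assumption for this sketch, not the paper's exact residual model.

```python
# Point-to-FINITE-plane residual sketch: plane given by unit normal n and
# offset d (n.p + d = 0), plus a rectangular extent around a centre point
# spanned by two orthonormal in-plane axes.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def finite_plane_residual(point, normal, d, center, axes, half_extent):
    """Signed distance n.p + d, accepted only when the point's projection
    lies inside the landmark's extent. Returning None outside the extent is
    what distinguishes a finite plane from the infinite-plane model."""
    dist = dot(normal, point) + d
    proj = tuple(p - dist * n for p, n in zip(point, normal))
    rel = tuple(a - b for a, b in zip(proj, center))
    u, v = dot(rel, axes[0]), dot(rel, axes[1])
    if abs(u) > half_extent[0] or abs(v) > half_extent[1]:
        return None
    return dist
```

In an optimization backend, only accepted points contribute residuals, so the landmark's edges implicitly constrain the pose component parallel to the plane, the effect the abstract describes.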
In recent years, CNNs (convolutional neural networks), with their powerful feature representation and feature learning capabilities, have played an important role in gesture recognition based on sparse multichannel surface EMG signals. Since each muscle group in the upper limb plays a different role in a particular hand movement, we propose a hybrid CNN model that considers the spatial distribution of muscle groups across the myoelectric channels to improve hand-gesture recognition accuracy. The model takes continuous wavelet transform (CWT) spectrograms as input, decomposes the channels into multiple input streams according to their spatial distribution, lets the CNN learn each stream's features separately, fuses the learned features gradually (slow fusion), and then classifies the gesture. Finally, the results of several such stream-division schemes are fused for decision making to obtain the classification accuracy. The proposed model was validated and tested on the NinaPro DB4 dataset, and its average accuracy improved over both traditional machine-learning methods and multi-stream CNN models that do not take the spatial distribution of channels into account.
"Channel-distribution Hybrid Deep Learning for sEMG-based Gesture Recognition," Keyi Lu, Hao Guo, Fei Qi, Peihao Gong, Zhihao Gu, Lining Sun, Haibo Huang, in 2022 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2022-12-05. DOI: 10.1109/ROBIO55434.2022.10011951
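The channel-grouping and slow-fusion pipeline can be sketched without a deep-learning framework: split sEMG channels into streams by (assumed) muscle-group location, compute a per-stream feature, here RMS stands in for a CNN branch, then merge streams pairwise until one vector remains for a classifier. The grouping is illustrative, not NinaPro DB4's actual channel layout.

```python
import math

def rms(samples):
    """Root-mean-square amplitude of one channel's sample window."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def stream_features(channels, groups):
    """channels: {name: sample list}; groups: lists of channel names,
    one list per muscle-group stream."""
    return [[rms(channels[name]) for name in group] for group in groups]

def slow_fusion(streams):
    """Concatenate adjacent streams pairwise until one vector remains,
    mimicking gradual (slow) fusion rather than one-shot concatenation."""
    while len(streams) > 1:
        merged = []
        for i in range(0, len(streams), 2):
            pair = streams[i] + (streams[i + 1] if i + 1 < len(streams) else [])
            merged.append(pair)
        streams = merged
    return streams[0]
```

In the real model each stream would pass through its own convolutional branch over CWT spectrograms before fusion; this sketch only shows the data-routing skeleton.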
Pub Date: 2022-12-05. DOI: 10.1109/ROBIO55434.2022.10011904
Jixiu Li, Yisen Huang, W. Ng, T. Cheng, Xixin Wu, Q. Dou, Helen M. Meng, P. Heng, Yunhui Liu, S. Chan, D. Navarro-Alarcon, Calvin Sze Hang Ng, Philip Wai Yan Chiu, Zheng Li
In minimally invasive surgery (MIS), controlling the endoscope view is crucial to the operation. Many robotic endoscope holders have been developed to address this problem, relying on a joystick, a foot pedal, simple voice commands, or similar interfaces to control the robot. These methods demand extra effort from surgeons and are not intuitive enough. In this paper, we propose a speech-vision multi-modal AI approach that integrates deep-learning-based instrument detection, automatic speech recognition, and robot visual servo control. Surgeons communicate with the endoscope by speech to indicate their view preference, such as which instrument to track. The instrument is detected by a deep neural network, and the endoscope then takes the detected instrument as the target and follows it with the visual servo controller. The method is applied to a magnetic anchored and guided endoscope and evaluated experimentally.
Preliminary results demonstrate that the approach is effective and requires little effort from the surgeon to control the endoscope view intuitively.
"Speech-Vision Based Multi-Modal AI Control of a Magnetic Anchored and Actuated Endoscope," in 2022 IEEE International Conference on Robotics and Biomimetics (ROBIO).
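One image-based visual-servo step of the kind the pipeline ends with can be sketched directly: drive the camera so that the detected instrument's bounding-box centre moves toward the image centre. The proportional gain and the (pan, tilt) velocity convention are assumptions for this sketch, not the paper's controller.

```python
# Illustrative proportional visual-servo step on a detected bounding box.

def servo_step(bbox, img_w, img_h, k=0.005):
    """bbox = (x0, y0, x1, y1) of the detected instrument, in pixels.
    Returns (pan, tilt) velocity commands that reduce the pixel error
    between the box centre and the image centre."""
    cx = (bbox[0] + bbox[2]) / 2.0
    cy = (bbox[1] + bbox[3]) / 2.0
    ex = cx - img_w / 2.0              # pixel error from image centre
    ey = cy - img_h / 2.0
    return -k * ex, -k * ey
```

A box already centred yields zero commands; a box in the top-left corner yields positive pan and tilt commands that (under the assumed sign convention) recentre the view.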
Pub Date: 2022-12-05. DOI: 10.1109/ROBIO55434.2022.10011976
Ning Zhang, Yixuan Kong, Hailin Huang, Shuang Song, Bing Li
Flexible robots with torsional motion can be used in transnasal surgery to improve the flexibility of the end-effector during the operation. However, existing torsion or twisting functions still cannot satisfy the requirements of complex cavities with different structures. This paper proposes a concentric torsionally-steerable (CTS) flexible surgical robot with novel concentric tendon-driven tubes. A 2L-RPRPR model based on a rigidized equivalence model is established to guide the spatial motion of the CTS robot. Based on this model, cooperative motion between the inner and outer tubes, such as linear movement and rotation, can be realized. Meanwhile, the concentric tendon-driven tubes can take on different bending directions and curvatures to match various cavities, and the C-shapes or S-shapes with different curvatures required by surgical operation can also be achieved. Simulation and experimental results show that the proposed CTS robot has a larger workspace and higher operational flexibility, which are sufficient for surgical operation.
"Design and Kinematic Modeling of a Concentric Torsionally-Steerable Flexible Surgical Robot," in 2022 IEEE International Conference on Robotics and Biomimetics (ROBIO).
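Kinematics of a tendon-driven bending segment are commonly written with the piecewise-constant-curvature (PCC) model; the sketch below gives the generic PCC tip position for orientation only, it is not the paper's 2L-RPRPR rigidized-equivalence model, and chaining two such segments with opposite bending planes is one way to form the S-shapes the abstract mentions.

```python
import math

def cc_tip(arc_len, kappa, phi):
    """Tip position (x, y, z) of one constant-curvature segment with arc
    length L, curvature kappa and bending-plane angle phi; reduces to a
    straight segment when kappa is (numerically) zero."""
    if abs(kappa) < 1e-12:
        return (0.0, 0.0, arc_len)
    r = 1.0 / kappa
    x = r * (1.0 - math.cos(kappa * arc_len))   # in-plane deflection
    return (x * math.cos(phi), x * math.sin(phi), r * math.sin(kappa * arc_len))
```

For a quarter-circle bend (kappa * L = pi/2) the tip lies at (2/pi, 0, 2/pi) times L, a quick sanity check on the formula.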