DenseXFormer: An Effective Occluded Human Instance Segmentation Network based on Density Map for Nursing Robot
Sihao Qi, Jiexin Xie, Haitao Yan, Shijie Guo
Pub Date: 2023-12-04 | DOI: 10.1109/ROBIO58561.2023.10354873 | 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1-6
Human instance segmentation under occlusion remains a challenging task, especially in nursing scenarios, and this hinders the development of nursing robots. Existing approaches cannot focus the network's attention on occluded areas, which leads to unsatisfactory results. To address this issue, this paper proposes a novel and effective density-map-based network for instance segmentation. Density-map-based neural networks perform well when human bodies occlude each other and can be trained without additional annotation. First, a density map generator (DMG) produces accurate density information from the feature maps computed by the backbone. Second, the density fusion module (DFM) uses the density map to enhance features, focusing the network on high-density areas as well as occluded areas. Additionally, to remedy the lack of occlusion-focused datasets for nursing instance segmentation, a new dataset, NSR, is introduced. Extensive experiments on public datasets (NSR and COCO-PersonOcc) show that the proposed method is a powerful instrument for human instance segmentation, with prominent improvements in both efficiency and accuracy. The dataset is available at https://github.com/Monkey0806/NSR-dataset.
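The abstract does not describe how the density map modulates the features. As a rough illustration only, a density-guided enhancement could re-weight backbone features by normalized density; all names and the `alpha` knob below are hypothetical, not the paper's DFM:

```python
import numpy as np

def density_enhance(features: np.ndarray, density: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Re-weight a feature map by a normalized per-pixel density map.

    features: (C, H, W) backbone feature map.
    density:  (H, W) predicted person-density map.
    alpha:    strength of the density-based emphasis (hypothetical knob).
    """
    d = density / (density.max() + 1e-8)   # normalize density to [0, 1]
    return features * (1.0 + alpha * d)    # boost high-density (crowded/occluded) regions

# Toy check: a crowded region is amplified, an empty region is unchanged.
feats = np.ones((2, 4, 4))
dens = np.zeros((4, 4))
dens[1:3, 1:3] = 2.0
out = density_enhance(feats, dens)
```

The point is only the mechanism: features in high-density regions get a larger multiplier, steering downstream attention toward mutually occluding bodies.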
Real-Time RGB-D Pedestrian Tracking for Mobile Robot
Wenhao Liu, Wanlei Li, Tao Wang, Jun He, Yunjiang Lou
Pub Date: 2023-12-04 | DOI: 10.1109/ROBIO58561.2023.10354856 | 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1-6
Pedestrian tracking is an important research direction in mobile robotics. To complete tasks efficiently without interfering with pedestrians' intended paths, mobile robots need to track pedestrians accurately in real time. In this paper, we propose a real-time RGB-D pedestrian tracking framework. First, we propose a pedestrian segmentation detection algorithm to detect pedestrians and obtain their two-dimensional positions. Second, given limited computational resources and the rarity of missed pedestrian detections, we use a nearest-neighbor tracker. To address inaccurate pedestrian localization, our detection algorithm obtains pedestrian centers from RGB images and combines them with point clouds to recover the 2D coordinates of pedestrians. Our method enables accurate pedestrian tracking in the world coordinate frame by adaptively fusing RGB images with their corresponding depth-based point clouds. Moreover, our lightweight detection and tracking algorithms guarantee real-time pedestrian tracking for realistic mobile robot applications. To validate effectiveness and real-time performance, we conduct experiments on multiple pedestrian datasets of approximately half a minute in length, captured from two different perspectives. To validate practicality and accuracy in real-world scenarios, we extend the tracking algorithm to trajectory prediction.
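The nearest-neighbor tracker mentioned above is a classical association scheme; a minimal greedy sketch (the distance gate and greedy order are assumptions, not the paper's exact tracker) looks like this:

```python
import numpy as np

def nn_track(prev_positions, detections, max_dist=0.8):
    """Greedy nearest-neighbor association of detections to existing tracks.

    prev_positions: (N, 2) last known 2-D positions of tracked pedestrians.
    detections:     (M, 2) detected positions in the current frame.
    Returns a list of (track_idx, det_idx) matches; max_dist (metres) is a
    hypothetical gating threshold rejecting implausible jumps.
    """
    matches, used = [], set()
    for ti, p in enumerate(prev_positions):
        dists = np.linalg.norm(detections - p, axis=1)   # distance to every detection
        for di in np.argsort(dists):                     # try closest first
            if int(di) not in used and dists[di] < max_dist:
                matches.append((ti, int(di)))
                used.add(int(di))
                break
    return matches

tracks = np.array([[0.0, 0.0], [3.0, 3.0]])
dets = np.array([[3.1, 2.9], [0.1, -0.1]])
print(nn_track(tracks, dets))  # [(0, 1), (1, 0)]
```

When detections are rarely missed, this cheap association is often sufficient, which matches the paper's motivation of limited on-board compute.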
Feature Fusion Module Based on Gate Mechanism for Object Detection
Zepeng Sun, Dongyin Jin, Jian Deng, Mengyang Zhang, Zhenzhou Shao
Pub Date: 2023-12-04 | DOI: 10.1109/ROBIO58561.2023.10354575 | 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1-6
In recent years, deep-learning-based feature fusion has drawn significant attention in information integration due to its strong representational and generative capabilities. However, existing methods struggle to preserve essential information effectively. To this end, this paper proposes a gate-based fusion module for object detection that integrates information from distinct feature layers of convolutional neural networks. The gate structure adaptively selects features from neighboring layers, storing valuable information in memory units and passing it to the subsequent layer, thereby fusing high-level semantic and low-level detailed features. Experimental validation on the public Pascal VOC dataset demonstrates that adding the gate-based fusion module to the detection task increases average accuracy by up to 5%.
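The gating idea can be sketched in a few lines: a learned gate decides, per location, how much of each neighboring layer to keep. This numpy sketch is a minimal stand-in for the module (the 1x1-conv weights `w` and the single-gate form are assumptions; the paper's memory-unit design is richer):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(low: np.ndarray, high: np.ndarray, w: np.ndarray, b: float = 0.0):
    """Fuse a low-level and a high-level feature map with a learned gate.

    low, high: (C, H, W) feature maps from neighboring layers (same shape;
    high is assumed already upsampled). w: (C,) hypothetical 1x1-conv
    weights producing a per-pixel gate in (0, 1).
    """
    gate = sigmoid(np.tensordot(w, high, axes=([0], [0])) + b)  # (H, W)
    return gate * high + (1.0 - gate) * low                     # convex blend per pixel

rng = np.random.default_rng(0)
low = rng.random((3, 8, 8))
high = rng.random((3, 8, 8))
fused = gated_fusion(low, high, w=np.ones(3))
```

Because the gate is a convex weight, every fused value lies between the two inputs, so neither layer's information is discarded outright, which is the preservation property the paper targets.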
Fog-based Distributed Camera Network system for Surveillance Applications
Mvs Sakethram, Ps Saikrishna
Pub Date: 2023-12-04 | DOI: 10.1109/ROBIO58561.2023.10355008 | 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1-6
The Internet of Things (IoT) refers to a network of interconnected physical devices embedded with sensors, software, and network connectivity that enables them to collect and exchange data; cloud computing refers to the delivery of computing resources and services over the Internet. The time it takes for IoT data to travel to the cloud and back can substantially degrade performance, especially for applications that need low latency, and fog computing has been proposed to address this constraint. Many issues must still be resolved to fully exploit the real-time analytics capabilities of the fog and IoT paradigms. In this paper, we work extensively with the iFogSim simulator to model IoT and fog environments with real-world challenges, focusing on data transmission between fog nodes. We describe a case study and add constraints that create a realistic fog environment with a Distributed Camera Network System (DCNS).
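The cloud-vs-fog latency trade-off the paper simulates can be captured by a first-order round-trip model; the numbers and the model itself below are illustrative assumptions (real deployments add queuing, jitter, and multi-hop paths):

```python
def round_trip_ms(payload_kb, bandwidth_mbps, link_latency_ms, proc_ms):
    """One-hop round trip: transmit up, process, transmit the result back.

    A hypothetical first-order model of the cloud-vs-fog trade-off:
    total = 2 * (propagation + transfer time) + processing time.
    """
    transfer_ms = (payload_kb * 8.0 / 1000.0) / bandwidth_mbps * 1000.0
    return 2 * (link_latency_ms + transfer_ms) + proc_ms

# A 200 kB camera frame: nearby fog node vs distant cloud (illustrative numbers).
fog = round_trip_ms(200, bandwidth_mbps=100, link_latency_ms=2, proc_ms=30)    # 66 ms
cloud = round_trip_ms(200, bandwidth_mbps=20, link_latency_ms=60, proc_ms=10)  # 290 ms
```

Even with a slower processor, the fog node wins here because the frame never crosses the high-latency wide-area link, which is the core argument for placing camera analytics at the edge.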
Shape Analysis and Control of a Continuum Objects*
Yuqiao Dai, Peng Li, Shilin Zhang, Yunhui Liu
Pub Date: 2023-12-04 | DOI: 10.1109/ROBIO58561.2023.10354616 | 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1-6
Soft robots are a hot spot in today's robotics research because most of them exist as continuums, yet current continuum robots have difficulty recognizing their own shape and reproducing a target shape. In this paper, we propose a method in which the shape features of the flexible continuum are obtained by contour centerline extraction and binocular camera reconstruction, and the relationship between motor input and continuum shape output is modeled with neural networks. A simulation environment is set up to test shape estimation and shape control of the flexible continuum. Results show that this method can predict and reproduce the shape of the continuum well and can be used for shape control of continuum robots.
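Binocular reconstruction of centerline points rests on the standard stereo relation z = f * b / d; a minimal sketch (values illustrative, not from the paper):

```python
def triangulate_depth(focal_px, baseline_m, disparity_px):
    """Stereo depth from disparity: z = focal * baseline / disparity.

    The standard rectified-stereo relation that would recover the 3-D
    depth of each extracted centerline pixel from the two camera views.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A centerline pixel with 40 px disparity, 800 px focal length, 12 cm baseline:
z = triangulate_depth(800, 0.12, 40)  # 2.4 m
```

Applying this per centerline pixel turns the 2-D contour skeleton into a 3-D curve, which is the shape feature the neural network then maps to and from motor inputs.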
Research on Horizontal Following Control of a Suspended Robot for Self-Momentum Targets
Dan Xiong, Yiyong Huang, Yanjie Yang, Hongwei Liu, Zhijie Jiang, Wei Han
Pub Date: 2023-12-04 | DOI: 10.1109/ROBIO58561.2023.10354971 | 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1-6
Micro/low gravity is one of the most prominent features of the space environment; it significantly alters the force state and dynamics of spacecraft and astronauts compared with Earth's gravitational environment. Simulating micro/low gravity on the ground is crucial for astronaut training and spacecraft testing. The suspension method uses a pulley-and-sling mechanism to create a micro/low gravity environment, counteracting the object's gravity with rope tension. The simulation quality depends heavily on the accuracy of the horizontal following system, which is the central subsystem of the suspension device. In this paper, we propose a dual-arm following system to solve the problem of horizontal following for self-momentum targets. In addition, we investigate adaptive suppression of flexible rope swing and coupled control between a robotic arm and a crane. Physical experiments on the robotic system verify the effectiveness of the proposed approach.
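The gravity-offload principle behind the sling is simple force balance: the rope tension cancels the part of Earth's gravity above the target level, T = m(g_earth - g_target). A worked example (masses and targets illustrative):

```python
def offload_tension(mass_kg, g_target, g_earth=9.81):
    """Constant sling tension leaving a residual downward acceleration g_target.

    Force balance on the suspended body: m*g_earth - T = m*g_target,
    so T = m * (g_earth - g_target). Values below are illustrative.
    """
    if not 0.0 <= g_target <= g_earth:
        raise ValueError("g_target must lie in [0, g_earth]")
    return mass_kg * (g_earth - g_target)

# Simulating lunar gravity (~1.62 m/s^2) for an 80 kg suited subject:
t_moon = offload_tension(80.0, 1.62)   # 655.2 N
t_micro = offload_tension(80.0, 0.0)   # full offload: 784.8 N
```

Holding this tension constant while the target moves is exactly why the horizontal following accuracy studied in the paper matters: any lag tilts the rope and injects spurious horizontal forces.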
Estimation of Deformation for Self-balancing Lower Limb Exoskeleton Only Using Force/Torque Sensors
Ziqiang Chen, Ming Yang, Feng Li, Wentao Li, Jinke Li, Dingkui Tian, Jianquan Sun, Yong He, Xinyu Wu
Pub Date: 2023-12-04 | DOI: 10.1109/ROBIO58561.2023.10354999 | 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1-6
This paper presents a general deformation estimation method for the self-balancing lower limb exoskeleton (SBLLE). In particular, we propose a Bi-LSTM deformation estimator (BLDE) to predict and compensate for the deformation of the SBLLE from the current force and torque data measured by force/torque (F/T) sensors. First, we choose four movements (squatting down and up, waist twisting, left foot lifting, and right foot lifting) to mimic the constituent actions of walking. The deformation data set is obtained through a motion capture analysis system and offline planned trajectories, and the corresponding F/T data set is obtained from the F/T sensors embedded in the feet of the SBLLE. Second, the Bi-LSTM network is trained to learn the relationship between deformation and F/T data and is verified on the test set. BLDE is then added to the SBLLE control system to estimate and compensate for the deformation. Finally, the same four movements and a walking experiment are conducted on the exoskeleton AutoLEE-G2 with BLDE. The experimental results show that BLDE can predict and compensate for deformation using only F/T sensors.
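The key point of a bidirectional estimator is that each time step's output sees both past and future F/T samples. This numpy sketch uses a plain tanh recurrence as a stand-in for the paper's trained Bi-LSTM (all weights and sizes are hypothetical; only the bidirectional structure is the point):

```python
import numpy as np

def bidirectional_rnn(x, Wf, Wb, Uf, Ub):
    """Minimal bidirectional recurrence (tanh-RNN stand-in for a Bi-LSTM).

    x: (T, D) sequence of F/T sensor readings.
    Wf/Wb: (H, D) input weights and Uf/Ub: (H, H) recurrent weights for
    the forward and backward passes. Returns (T, 2H) concatenated states,
    the shape a downstream deformation regressor would consume.
    """
    T, H = x.shape[0], Wf.shape[0]
    hf, hb = np.zeros((T, H)), np.zeros((T, H))
    h = np.zeros(H)
    for t in range(T):                    # forward pass over time
        h = np.tanh(Wf @ x[t] + Uf @ h)
        hf[t] = h
    h = np.zeros(H)
    for t in reversed(range(T)):          # backward pass over time
        h = np.tanh(Wb @ x[t] + Ub @ h)
        hb[t] = h
    return np.concatenate([hf, hb], axis=1)

rng = np.random.default_rng(0)
out = bidirectional_rnn(rng.normal(size=(5, 6)),
                        rng.normal(size=(4, 6)), rng.normal(size=(4, 6)),
                        rng.normal(size=(4, 4)), rng.normal(size=(4, 4)))
```

A real implementation would replace the tanh cell with LSTM gates and train the weights against the motion-capture deformation labels described above.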
Automatic Control System for Reach-to-Grasp Movement of a 7-DOF Robotic Arm Using Object Pose Estimation with an RGB Camera
Shuting Bai, Jiazhen Guo, Yinlai Jiang, Hiroshi Yokoi, Shunta Togo
Pub Date: 2023-12-04 | DOI: 10.1109/ROBIO58561.2023.10354531 | 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1-6
In this study, we develop an automatic control system for the reach-to-grasp movement of a 7-DOF (degrees of freedom) robotic arm that has the same DOFs as a human arm and an end-effector shaped like a human hand. The 6-DOF pose of the object to be grasped is estimated in real time from RGB images alone, using a neural-network-based object pose estimation model. Based on this estimate, motion planning automatically controls the arm's reach-to-grasp movement. In the evaluation experiment, the 7-DOF robotic arm performs reach-to-grasp movements toward a household object in different poses using the developed control system. The results show that the control system can automatically drive the reach-to-grasp movement to an object in an arbitrary pose.
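Turning an estimated 6-DOF pose into a motion-planning goal typically means assembling a homogeneous transform and composing it with the camera extrinsics; a generic sketch (frames and numbers are illustrative, not the paper's interface):

```python
import numpy as np

def pose_to_matrix(position, rotation):
    """Assemble a 4x4 homogeneous transform from a 6-DOF pose estimate.

    position: (3,) translation, rotation: (3, 3) rotation matrix, both as
    a pose estimator might output them in the camera frame.
    """
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = position
    return T

# Compose camera->object with a fixed base->camera extrinsic to get the
# grasp target in the robot base frame (identity rotations for clarity):
T_cam_obj = pose_to_matrix([0.1, 0.0, 0.5], np.eye(3))
T_base_cam = pose_to_matrix([0.3, 0.0, 0.4], np.eye(3))
T_base_obj = T_base_cam @ T_cam_obj
```

The planner then solves inverse kinematics toward `T_base_obj` (or a pre-grasp offset of it); with 7 DOFs the arm has a redundant degree of freedom to exploit, as in the human arm the system imitates.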
Decoupled Control of Bipedal Locomotion Based on HZD and H-LIP
Yinong Ye, Yongming Yue, Wei Gao, Shiwu Zhang
Pub Date: 2023-12-04 | DOI: 10.1109/ROBIO58561.2023.10354624 | 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1-6
The walking control of bipedal robots poses challenges due to inherent coupling among the robot's degrees of freedom. This paper introduces an approach that addresses this challenge with decoupled control in the sagittal and frontal planes, using Hybrid Zero Dynamics (HZD) for sagittal-plane dynamics and the Hybrid Linear Inverted Pendulum (H-LIP) for frontal-plane dynamics. The hybrid controller is successfully validated on the bipedal robot RobBIE, whose relatively high torso inertia, if not adequately controlled, can easily violate the point-mass assumption of many previously developed reduced-order-model-based walking controllers. With the help of the full-model-based HZD, the robot achieves stable walking at different velocities and adapts to various terrains and even moderate disturbances.
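The LIP model underlying H-LIP has closed-form dynamics: with pendulum frequency lambda = sqrt(g / z0), the CoM state relative to the stance pivot evolves through cosh/sinh terms. A sketch of one continuous phase (parameters illustrative; the paper's H-LIP additionally adds the step-to-step hybrid map):

```python
import numpy as np

def lip_step(x, v, T, z0=0.8, g=9.81):
    """Closed-form evolution of the linear inverted pendulum over time T.

    x, v: CoM position/velocity relative to the stance pivot; z0 is the
    constant CoM height. Solves x'' = (g / z0) * x exactly:
        x(T) = cosh(lam*T)*x + sinh(lam*T)*v/lam
        v(T) = lam*sinh(lam*T)*x + cosh(lam*T)*v
    """
    lam = np.sqrt(g / z0)
    c, s = np.cosh(lam * T), np.sinh(lam * T)
    return c * x + s * v / lam, lam * s * x + c * v

x1, v1 = lip_step(0.05, 0.1, 0.4)
```

A useful sanity check is that the orbital energy v^2 - lam^2 * x^2 is invariant under this flow; frontal-plane stepping controllers like H-LIP choose foot placements that steer this quantity to a stable periodic orbit.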
Optimum Design and Stiffness Analysis of a 3-RCU Parallel Manipulator *
Chenhao Xu, F. Xie, Xin-Jun Liu
Pub Date: 2023-12-04 | DOI: 10.1109/ROBIO58561.2023.10354888 | 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1-5
A large tilt angle is required of parallel manipulators in many applications, and achieving it remains a challenging issue in the field. In this paper, the optimum design of a 3-RCU parallel manipulator with 1T2R degrees of freedom is carried out to realize large tilt-angle output. The parameter-finiteness normalization method is used to build the parameter design space, with motion/force transmission and constraint performance indices as the evaluation criteria, and performance charts are generated on this basis. Taking into account the constraint of achieving a 45° tilt angle in all directions, an optimum region of the parameter design space is derived and a group of optimized parameters is obtained. From the optimized design, a CAD model of the manipulator is built, and a stiffness analytical model is established based on the perturbation method and the principle of virtual work. Finally, the stiffness is investigated, and the accuracy of the analytical model is verified by comparison with finite element analysis. This work lays the foundation for the development of the manipulator.
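For orientation, the textbook baseline that virtual-work stiffness models refine is the joint-to-Cartesian stiffness mapping K_x = J^{-T} K_q J^{-1}. The sketch below is that classical rigid-joint baseline only (it neglects the Jacobian-derivative term and is not the paper's perturbation model):

```python
import numpy as np

def cartesian_stiffness(J, k_joint):
    """Map joint stiffness to Cartesian stiffness: K_x = J^{-T} K_q J^{-1}.

    Derivation: tau = K_q dq, F = K_x dx, dx = J dq, tau = J^T F.
    J: (n, n) manipulator Jacobian; k_joint: (n,) joint stiffnesses.
    """
    Kq = np.diag(k_joint)
    Jinv = np.linalg.inv(J)
    return Jinv.T @ Kq @ Jinv

# Toy check: a uniform Jacobian scaling of 2 reduces Cartesian stiffness 4x.
K = cartesian_stiffness(np.eye(3) * 2.0, np.array([100.0, 100.0, 100.0]))
```

Comparing such an analytical model against finite element results, as the paper does, is the standard way to quantify how much compliance the rigid-joint assumption misses.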