Purpose: This paper presents a soft robot for detection tasks that uses a hybrid drive to reach its target point faster, enabling it to carry out detection at a relatively high speed.

Design/methodology/approach: The robot is driven by a combination of motors and pneumatic pressure: pneumatic pressure bends the soft actuator, while the motors propel the robot forward. The actuator design is based on finite element simulation in ABAQUS, and the motion space of the soft actuator is analyzed by combining a constant-curvature differential model with the D-H method.

Findings: The robot's ability to adapt to its environment and cross obstacles was demonstrated by building prototypes and testing them in complex environments such as grass, gravel, sand and pipes.

Originality/value: The design improves the speed and smoothness of the robot's motion while retaining the environmental flexibility, adaptability and obstacle-crossing ability of soft robots. The proposed robot has broad prospects in fields such as pipeline inspection and field exploration.
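The constant-curvature model used to analyze the actuator's motion space can be illustrated with a minimal sketch: the forward kinematics of a single bending segment parameterized by arc length, curvature and bending-plane angle. The function name and parameterization below are illustrative assumptions, not taken from the paper.

```python
import math

def cc_segment_tip(length, kappa, phi):
    """Tip position of one constant-curvature segment.

    length: arc length of the segment
    kappa:  curvature (1 / bend radius); kappa -> 0 means straight
    phi:    bending-plane angle about the base z-axis
    """
    if abs(kappa) < 1e-9:          # straight segment: tip lies on the z-axis
        return (0.0, 0.0, length)
    r = 1.0 / kappa                # bend radius
    theta = kappa * length         # total bend angle
    # In-plane coordinates of the arc end point
    x_plane = r * (1.0 - math.cos(theta))
    z = r * math.sin(theta)
    # Rotate the bending plane by phi about the base z-axis
    return (x_plane * math.cos(phi), x_plane * math.sin(phi), z)
```

Sweeping `kappa` and `phi` over the actuator's feasible range and collecting the tip positions yields the reachable workspace that the paper analyzes with the D-H method.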
Zhang, K., Wei, H. and Bi, Y. (2023), "Design and experimental research of the hybrid-driven soft robot", Industrial Robot: The International Journal of Robotics Research and Application, Vol. 38 No. 1, pp. 648-658, published 3 February 2023. doi: 10.1108/ir-08-2022-0214
Purpose: This study proposes a human–robot interaction (HRI) framework that enables operators to communicate remotely with robots in a simple and intuitive way. It focuses on situations in which operators with no programming skills must accomplish teleoperated tasks involving randomly located, different-sized objects in an unstructured environment. The aims are to reduce operator stress, increase accuracy and shorten task completion time. A particular target application is radioactive isotope production facilities. The approach combines the reactivity of the operator's direct control with vision-based object classification and localization.

Design/methodology/approach: Perceptive real-time gesture control based on a Kinect sensor is formulated by fusing human intuitiveness with an augmented reality-based vision algorithm. Objects are localized using a feature-based vision algorithm in which the homography is estimated and the Perspective-n-Point (PnP) problem is solved. The 3D object position and orientation are stored in the robot end-effector memory for final mission adjustment, awaiting a gesture control signal to autonomously pick or place an object. Object classification uses a one-shot Siamese neural network (NN) to train a proposed deep NN; other well-known models are included for comparison.

Findings: The system was contextualized in a nuclear industry application, radioactive isotope production, and validated through a user study in which 10 participants of different backgrounds took part. The results demonstrated the effectiveness of the proposed teleoperation system and its potential to let users with no robotics experience accomplish remote robot tasks effectively.

Social implications: The proposed system reduces risk and increases safety when applied in hazardous environments such as nuclear facilities.

Originality/value: The contribution lies in a well-integrated HRI system that addresses the four aforementioned circumstances in an effective and user-friendly way. High operator–robot reactivity is retained through direct control, while much of the cognitive load is removed by an elective autonomous mode for manipulating randomly located objects of different configurations. This required building an effective deep learning algorithm, benchmarked against well-known methods, that recognizes objects under varying illumination levels, shadows and postures.
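At inference time, one-shot Siamese classification reduces to nearest-support matching over learned embeddings: the network maps each image to a vector, and a query is assigned the label of the single stored example it is most similar to. The sketch below shows only that matching step, with hypothetical embeddings and labels; the embedding network itself is trained separately, as in the paper.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def one_shot_classify(query_emb, support):
    """support: {label: embedding}, one stored example per class.

    Returns the label whose support embedding is most similar
    to the query embedding.
    """
    return max(support, key=lambda lbl: cosine_sim(query_emb, support[lbl]))
```

With a well-trained embedding, adding a new object class needs only one labeled example inserted into `support`, which is what makes the one-shot setup attractive for changing object sets.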
Salman, A.E. and Roman, M. (2023), "Augmented reality-assisted gesture-based teleoperated system for robot motion planning", Industrial Robot: The International Journal of Robotics Research and Application, Vol. 116 No. 1, pp. 765-780, published 2 February 2023. doi: 10.1108/ir-11-2022-0289
Purpose: This paper presents an iterative path-following method with joint limits to address the high computation cost, motion exceeding joint limits and poor path-following accuracy in path planning for hyper-redundant snake-like manipulators.

Design/methodology/approach: Given a desired path, a new configuration of the snake-like manipulator is obtained through a geometrical approach; the joints are then repositioned iteratively until all rotation angles satisfy the imposed joint limits. A new arrangement is obtained from the analytic solution of the inverse kinematics of the hyper-redundant manipulator. Finally, simulations and experiments are carried out to analyze the performance of the proposed path-following method.

Findings: Simulation results show an average computation time of 0.1 ms per step for a hyper-redundant manipulator with 12 degrees of freedom, with the tip position deviation kept below 0.02 mm. Experiments show that all rotation angles remain within joint limits.

Research limitations/implications: Currently the manipulator works in open loop, and the elasticity of the driving cable causes positioning error. In future work, closed-loop control based on real-time attitude detection will be combined with the path-following method to achieve high-precision trajectory tracking.

Originality/value: Through a series of iterations, the proposed method brings the manipulator as close as possible to the desired path within the joint constraints, with high precision and low computation time.
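The idea of iteratively repositioning joints until every angle respects its limit can be sketched as a clamp-and-redistribute loop: each over-limit angle is clipped and the clipped excess is pushed onto the next joint so the overall bend is preserved. This is a simplified stand-in for the paper's geometric iteration, with hypothetical names and a symmetric limit.

```python
def enforce_joint_limits(angles, limit, max_iters=100):
    """Clamp each joint angle to [-limit, limit], pushing the clipped
    excess onto the next joint, and iterate until nothing changes."""
    angles = list(angles)
    for _ in range(max_iters):
        changed = False
        for i in range(len(angles)):
            excess = 0.0
            if angles[i] > limit:
                excess = angles[i] - limit
                angles[i] = limit
            elif angles[i] < -limit:
                excess = angles[i] + limit
                angles[i] = -limit
            # Redistribute the clipped amount to the following joint
            if excess and i + 1 < len(angles):
                angles[i + 1] += excess
                changed = True
        if not changed:
            break
    return angles
```

Because the excess is forwarded rather than discarded, the total bend angle of the chain is preserved whenever the limits are feasible, which keeps the tip close to the desired path.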
Wang, C., Xie, H. and Yang, H. (2023), "An iterative path-following method for hyper-redundant snake-like manipulator with joint limits", Industrial Robot: The International Journal of Robotics Research and Application, Vol. 170 No. 1, pp. 505-519, published 2 February 2023. doi: 10.1108/ir-04-2022-0106
Purpose: With the increasing demands of industrial applications, robots must achieve good contact interaction with dynamic environments. This research therefore proposes an adaptive fractional-order admittance control scheme to realize robot–environment contact with high accuracy, small overshoot and fast response.

Design/methodology/approach: Fractional calculus is introduced to reconstruct the classical admittance model, describing more accurately the complex physical relationship between position and force during robot–environment interaction. A pre-PID controller and a fuzzy controller are adopted to improve force tracking in highly dynamic unknown environments; the fuzzy controller improves the trajectory, transient and steady-state response by adjusting the pre-PID integral gain online. The stability and robustness of the algorithm are demonstrated theoretically and experimentally.

Findings: The force tracking performance of the proposed algorithm was verified by constructing highly dynamic unstructured environments in simulations and experiments. In both, the algorithm shows satisfactory force tracking with fast response, small overshoot and strong robustness.

Practical implications: The control scheme is practical and simple for industrial and medical scenarios that require accurate robot force control.

Originality/value: A new fractional-order admittance controller is proposed and experimentally verified, achieving excellent force tracking performance in dynamic unknown environments.
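The admittance model underlying this scheme can be illustrated with its classical integer-order counterpart: the force error drives a virtual mass-damper-spring, M·ẍ + B·ẋ + K·x = f_err, whose output x is the position correction. The sketch below is a semi-implicit Euler discretization of that classical model, not the paper's fractional-order version, and all parameter values are hypothetical.

```python
def admittance_step(x, v, f_err, M, B, K, dt):
    """One semi-implicit Euler step of M*a + B*v + K*x = f_err.

    x, v:  current position correction and its velocity
    f_err: force tracking error (desired minus measured contact force)
    Returns the updated (x, v).
    """
    a = (f_err - B * v - K * x) / M   # virtual acceleration
    v = v + a * dt                    # integrate velocity first (stability)
    x = x + v * dt                    # then position
    return x, v
```

Under a constant force error the correction settles at f_err / K, the compliance of the virtual spring; replacing the integer-order derivatives with fractional ones (as the paper does) adds extra tuning freedom between these stiffness- and damping-dominated behaviors.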
Li, K., He, Y., Li, K.-L. and Liu, C. (2023), "Adaptive fractional-order admittance control for force tracking in highly dynamic unknown environments", Industrial Robot: The International Journal of Robotics Research and Application, Vol. 37 No. 1, pp. 530-541, published 1 February 2023. doi: 10.1108/ir-09-2022-0244
Yawen Li, G. Song, Shuang Hao, Juzheng Mao, Aiguo Song
Purpose: Most traditional visual simultaneous localization and mapping (V-SLAM) algorithms presuppose that most objects in the environment are static or moving slowly. Such algorithms rely on geometric information about the environment and are restricted in scenarios with dynamic objects. Semantic segmentation can extract deep features from images to identify dynamic objects in the real world, so V-SLAM fused with semantic information can reduce their influence and achieve higher accuracy. This paper presents a new semantic stereo V-SLAM method for outdoor dynamic environments, aimed at more accurate pose estimation.

Design/methodology/approach: First, the Deeplabv3+ semantic segmentation model is adopted to recognize dynamic objects in outdoor scenes. Second, an approach combining prior knowledge determines the dynamic hierarchy of movable objects based on pixel movement between frames. Finally, a semantic stereo V-SLAM built on ORB-SLAM2 computes accurate trajectories in dynamic environments by selecting feature points in static regions and eliminating those in dynamic regions.

Findings: The proposed method was verified on the public KITTI data set and a self-collected real-world ZED2 data set. The system extracts semantic information and tracks feature points stably in dynamic environments. Absolute pose error and relative pose error were used to evaluate feasibility; the results show significant improvements in root mean square error and standard deviation on both the KITTI data set and an unmanned aerial vehicle, indicating that the method can be effectively applied to outdoor environments.

Originality/value: The main contribution is a new semantic stereo V-SLAM method with greater robustness and stability that reduces the impact of moving objects in dynamic scenes.
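The core filtering step, keeping feature points in static regions and discarding those that land on dynamic objects, can be sketched as a lookup into the per-pixel segmentation result. The label set and data layout below are assumptions for illustration, not the paper's.

```python
# Assumed set of semantic classes treated as dynamic (illustrative only)
DYNAMIC_CLASSES = {"person", "car", "bicycle"}

def filter_keypoints(keypoints, seg_mask, dynamic=DYNAMIC_CLASSES):
    """keypoints: list of (u, v) pixel coordinates.
    seg_mask:  2D grid (rows of labels) from semantic segmentation,
               indexed as seg_mask[v][u].
    Returns only the keypoints that do not lie on a dynamic-class pixel.
    """
    return [(u, v) for (u, v) in keypoints
            if seg_mask[v][u] not in dynamic]
```

Only the surviving points would then be fed to the ORB-SLAM2 tracking and bundle adjustment stages, so moving objects no longer corrupt the pose estimate.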
Li, Y., Song, G., Hao, S., Mao, J. and Song, A. (2023), "Semantic stereo visual SLAM toward outdoor dynamic environments based on ORB-SLAM2", Industrial Robot: The International Journal of Robotics Research and Application, Vol. 2012 No. 1, pp. 542-554, published 27 January 2023. doi: 10.1108/ir-09-2022-0236
Purpose: This paper illustrates the growing role of robots in environmental monitoring.

Design/methodology/approach: Following an introduction, the paper first considers aerial robots for monitoring atmospheric pollution. It then discusses the role of aerial, surface and underwater robots in monitoring aquatic environments. Examples of robotic monitoring of the terrestrial environment follow, and finally brief conclusions are drawn.

Findings: Robots play an important role in numerous environmental monitoring applications and have overcome many limitations of traditional methodologies. They operate in all media and frequently provide data with enhanced spatial and temporal coverage. Besides detecting pollution and characterising environmental conditions, they can assist in locating illicit activities. Drones have benefited from the availability of small, lightweight imaging devices and sensors that can detect airborne pollutants and characterise features of aquatic and terrestrial environments. As with other robotic applications, environmental drone imagery is benefiting from the use of AI techniques. From short-term local deployments to extended-duration oceanic missions, aquatic robots are increasingly being used to monitor and characterise freshwater and marine environments.

Originality/value: This paper provides a detailed insight into the growing number of ways robots are being used to monitor the environment.
Bogue, R. (2023), "The role of robots in environmental monitoring", Industrial Robot: The International Journal of Robotics Research and Application, Vol. 15 No. 1, pp. 369-375, published 16 January 2023. doi: 10.1108/ir-12-2022-0316
Yongyao Li, Guanyu Ding, Chao Li, Sen Wang, Qinglei Zhao, Qi Song
Purpose: This paper presents a comprehensive pallet-picking approach for forklift robots, comprising a pallet identification and localization algorithm (PILA) to detect and locate the pallet and a vehicle alignment algorithm (VAA) to align the vehicle fork arms with the targeted pallet.

Design/methodology/approach: In contrast to purely vision-based methods or point-cloud-only strategies, a low-cost RGB-D camera is used, so PILA exploits both RGB and depth data to recognize and localize the pallet quickly and precisely. The developed method achieves a high identification rate from RGB images and more precise 3D localization than a depth camera alone. A deep neural network (DNN) detects and locates the pallet in the RGB images; the point cloud data is correlated with the labeled region of interest (RoI) in the RGB images, and the pallet's front-face plane is extracted from the point cloud. Furthermore, PILA introduces a universal geometrical rule that identifies the pallet's center as a "T-shape" without depending on specific pallet types. Finally, VAA implements the vehicle approach and pallet-picking operations as a proof of concept to test PILA's performance.

Findings: Experimentally, the orientation angle and center location of two kinds of pallets were investigated without any artificial markers. The results show that the pallet could be located with a three-dimensional localization accuracy of 1 cm and an angular resolution of 0.4 degrees at a distance of 3 m using the vehicle control algorithm.

Research limitations/implications: PILA's performance is limited by the current depth camera's range (<= 3 m); this is expected to improve with a better depth measurement device in the future.

Originality/value: The results demonstrate that pallets can be located with an accuracy of 1 cm along the x, y and z directions and an angular resolution of 0.4 degrees at a distance of 3 m, within 700 ms.
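Extracting the pallet's front-face plane from the point cloud can be sketched in its simplest form: a plane through three non-collinear 3D points, from which the face normal, and hence the pallet's orientation angle, follows. A real pipeline would fit the plane robustly over the whole RoI cloud (e.g. with RANSAC); this three-point version only illustrates the geometry.

```python
def plane_from_points(p1, p2, p3):
    """Plane n . x + d = 0 through three non-collinear 3D points."""
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    n = cross(sub(p2, p1), sub(p3, p1))   # plane normal (unnormalized)
    d = -sum(ni * pi for ni, pi in zip(n, p1))
    return n, d
```

Projecting the fitted normal onto the ground plane gives the pallet's yaw relative to the camera, which is what VAA would servo to zero while approaching.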
{"title":"A systematic strategy of pallet identification and picking based on deep learning techniques","authors":"Yongyao Li, Guanyu Ding, Chao Li, Sen Wang, Qinglei Zhao, Qi Song","doi":"10.1108/ir-05-2022-0123","url":"https://doi.org/10.1108/ir-05-2022-0123","journal":"Industrial Robot-The International Journal of Robotics Research and Application","pages":"353-365","publicationDate":"2023-01-11"}
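The front-face plane extraction and orientation estimation that PILA performs on the point cloud can be illustrated with a minimal sketch. Here the 3D plane fit is reduced to a top-down least-squares line fit over (x, z) samples of the pallet face; the function name `pallet_yaw` and this 2D simplification are illustrative assumptions, not the paper's implementation.

```python
import math

def pallet_yaw(points):
    """Estimate the pallet front-face yaw angle (degrees) from top-down
    (x, z) samples of its front plane, via a least-squares line fit.
    Illustrative stand-in for PILA's plane-extraction step."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    mz = sum(z for _, z in points) / n
    # Least-squares slope of z against x along the front-face trace.
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxz = sum((x - mx) * (z - mz) for x, z in points)
    return math.degrees(math.atan(sxz / sxx))
```

In the full method the same idea runs in 3D: the plane normal of the extracted front face gives the orientation angle, and the "T-shape" rule then fixes the pallet center on that plane.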
Purpose The grasping task of robots in densely cluttered scenes from a single view has not been solved perfectly, and grasping success rates remain low. This study aims to propose an end-to-end grasp generation method to solve this problem. Design/methodology/approach A new grasp representation method is proposed, which uses the normal vector of the table surface to derive the grasp baseline vectors and maps the grasps to pointed points (PP), so that no orthogonality constraints between vectors need to be added when a neural network predicts the rotation matrices of grasps. Findings Experimental results show that the proposed method benefits the training of the neural network, and the model trained on a synthetic dataset also achieves a high grasping success rate and completion rate in real-world tasks. Originality/value The main contribution of this paper is a new grasp representation method that maps 6-DoF grasps to a PP and an angle relative to the tabletop normal vector, thereby eliminating the need for orthogonality constraints between vectors when directly predicting grasps with neural networks. The proposed method can generate hundreds of grasps covering the whole surface in about 0.3 s. The experimental results show that the proposed method has clear advantages over other methods.
{"title":"PP-GraspNet: 6-DoF grasp generation in clutter using a new grasp representation method","authors":"Enbo Li, Haibo Feng, Yili Fu","doi":"10.1108/ir-08-2022-0196","url":"https://doi.org/10.1108/ir-08-2022-0196","journal":"Industrial Robot-The International Journal of Robotics Research and Application","pages":"496-504","publicationDate":"2023-01-02"}
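The key property of the representation described above — a grasp baseline encoded by a single angle about the tabletop normal — can be sketched minimally. Assuming the normal is +z (a simplification; the function names and fixed frame are illustrative, not the authors' code), one scalar fixes a baseline vector that is orthogonal to the normal by construction, so the network never has to learn an orthogonality constraint.

```python
import math

def baseline_from_angle(theta):
    """Derive a grasp baseline vector from a single angle about the
    tabletop normal (assumed +z). The result lies in the table plane,
    hence is orthogonal to the normal by construction."""
    return (math.cos(theta), math.sin(theta), 0.0)

def angle_from_baseline(b):
    """Inverse map: recover the scalar angle from a baseline vector."""
    return math.atan2(b[1], b[0])
```

The round trip angle → vector → angle is exact, which is why predicting the angle (plus the PP) is enough to reconstruct a valid grasp rotation.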
{"title":"FESM-based approach for stiffness modeling, identification and updating of collaborative robots","authors":"Mingwei Hu, Hongwei Sun, Liangchuang Liao, Jiajian He","doi":"10.1108/IR-02-2022-0042","url":"https://doi.org/10.1108/IR-02-2022-0042","journal":"Industrial Robot-The International Journal of Robotics Research and Application","pages":"35-44","publicationDate":"2023-01-01"}
{"title":"A global, continuous calibration curvature strategy for bending sensors of soft fingers","authors":"Ling-Jie Gai, Xiaofeng Zong, Jie Huang","doi":"10.1108/IR-02-2022-0041","url":"https://doi.org/10.1108/IR-02-2022-0041","journal":"Industrial Robot-The International Journal of Robotics Research and Application","pages":"562-570","publicationDate":"2023-01-01"}