Pub Date: 2024-06-14  DOI: 10.1016/j.rcim.2024.102794
Kaige Shi, Xin Li
When gripping delicate workpieces such as silicon wafers, contact should be minimized to protect the workpiece. Some existing suction grippers can grip a workpiece with only three contact points on its upper surface, the minimum number required to fully constrain it. Reducing the contact points further leaves the workpiece under-constrained and thus difficult to grip. This paper develops a new suction gripper that can grip an under-constrained workpiece with only two contact points at the edge of its upper surface. The novelty of the gripper lies in its use of feedback control to stabilize the otherwise unstable motion of the under-constrained workpiece. First, to overcome the negative-stiffness effect that makes under-constrained gripping unstable, a zero-stiffness suction unit based on closed-loop pressure feedback is developed via optimal design. Next, a cooperative actuating mechanism based on four suction units is designed to actuate the workpiece in four DOFs individually, so that the workpiece can be levitated stably with the contact forces under control. Finally, the dynamics of the gripping system are modeled, and an adaptive robust controller is designed based on the dynamics model. With the proposed controller, the gripper can handle workpieces with unknown inertial parameters and irregular upper surfaces. Experiments were conducted to verify the new suction gripper with the proposed controller.
{"title":"Development of a new suction gripper for gripping under-constrained workpiece with minimized contact","authors":"Kaige Shi , Xin Li","doi":"10.1016/j.rcim.2024.102794","DOIUrl":"https://doi.org/10.1016/j.rcim.2024.102794","url":null,"abstract":"<div><p>When gripping delicate workpieces such as a silicon wafer, contact should be minimized to protect the workpiece. Some existing suction grippers can grip a workpiece with only three contact points on its upper surface, which is minimal to fully constrain the workpiece. Further reducing the contact points will make the workpiece under-constrained and thus difficult to grip. This paper develops a new suction gripper that can grip an under-constrained workpiece with only two contact points at the edge of its upper surface. The uniqueness of the new gripper lies in that it uses feedback control to stabilize the unstable motion of the under-constrained workpiece. First, to overcome the negative-stiffness effect that makes the under-constrained gripping unstable, a zero-stiffness suction unit based on closed-loop pressure feedback is developed via optimal design. Next, a cooperative actuating mechanism based on four suction units is designed to actuate the workpiece in four different DOFs individually, so that the workpiece can be levitated stably with the contact forces being controlled. Finally, the dynamics of the gripping system is modeled, and an adaptive robust controller is designed based on the dynamics model. With the proposed controller, the gripper can handle workpieces with unknown inertial parameters and irregular upper surfaces. 
Experiments were conducted to verify the new suction gripper with the proposed controller.</p></div>","PeriodicalId":21452,"journal":{"name":"Robotics and Computer-integrated Manufacturing","volume":"90 ","pages":"Article 102794"},"PeriodicalIF":10.4,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141323233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
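The stabilizing role of pressure feedback can be illustrated with a toy one-DOF simulation: a negative-stiffness suction force alone drives the workpiece away from equilibrium, while an added feedback stiffness restores stability. This is a generic sketch of the principle, not the paper's design; the mass, damping, and stiffness values below are invented for illustration.

```python
def simulate(k_neg, k_fb, steps=2000, dt=1e-3):
    """Semi-implicit Euler integration of m*x'' = (k_neg - k_fb)*x - c*x'.

    k_neg > 0 models the destabilizing negative-stiffness suction force;
    k_fb is the stiffness restored by closed-loop pressure feedback.
    All parameter values are illustrative, not from the paper.
    """
    m, c = 0.1, 2.0            # mass [kg] and damping [N*s/m] (assumed)
    x, v = 1e-3, 0.0           # initial 1 mm offset from the levitation gap
    for _ in range(steps):
        a = ((k_neg - k_fb) * x - c * v) / m
        v += a * dt
        x += v * dt            # uses the updated velocity (semi-implicit)
    return x

open_loop = simulate(k_neg=200.0, k_fb=0.0)      # net negative stiffness: diverges
closed_loop = simulate(k_neg=200.0, k_fb=400.0)  # net positive stiffness: settles
print(abs(open_loop) > 1.0, abs(closed_loop) < 1e-4)  # → True True
```

The feedback term simply over-compensates the negative stiffness so the net restoring force points back toward equilibrium, which is the role the zero-stiffness suction unit plays in the gripper.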
Pub Date: 2024-06-08  DOI: 10.1016/j.rcim.2024.102796
Xu Zhu, Guilin Chen, Chao Ni, Xubin Lu, Jiang Guo
Worn tools can substantially degrade the surface integrity of workpieces in precision/ultra-precision machining. Most previous research has relied heavily on a single source of information, which may be insufficient to ascertain tool conditions and guarantee workpiece accuracy. This paper proposes a CNN-LSTM hybrid model that directly uses tool images to predict the surface roughness of machined parts for tool condition assessment. The work first prunes the UNet3+ architecture to eliminate redundant structures while integrating attention mechanisms to enhance the model's focus on the target region. On this basis, tool wear region information is thoroughly mined and heterogeneous data are screened using Spearman correlation analysis. Subsequently, a hybrid model integrating CNN and RNN is proposed, endowing the model with the ability to process both spatial and sequential information. The effectiveness of the proposed methodology is validated using practical data obtained from cutting experiments. The results indicate that the proposed tool condition assessment methodology improves the segmentation accuracy of the tool wear region to 94.52 % (Dice coefficient) and predicts the surface roughness of machined parts with an accuracy exceeding 93.1 % (R2). The developed methodology may thus provide an effective solution for accurate tool condition assessment and tool health management.
{"title":"Hybrid CNN-LSTM model driven image segmentation and roughness prediction for tool condition assessment with heterogeneous data","authors":"Xu Zhu , Guilin Chen , Chao Ni , Xubin Lu , Jiang Guo","doi":"10.1016/j.rcim.2024.102796","DOIUrl":"https://doi.org/10.1016/j.rcim.2024.102796","url":null,"abstract":"<div><p>Worn tools might lead to substantial detrimental implications on the surface integrity of workpieces for precision/ultra-precision machining. Most previous research has heavily relied on singular information, which might not be appropriate enough to ascertain tool conditions and guarantee the accuracy of workpieces. This paper proposes a CNN-LSTM hybrid model directly utilizing tool images to predict surface roughness on machined parts for tool condition assessment. This work first performs pruning based on UNet3+ architecture to eliminate redundant structures while integrating attention mechanisms to enhance the model's focus on the target region. On this basis, tool wear region information is intensely mined and heterogeneous data is optimized using Spearman correlation analysis. Subsequently, we innovatively proposed a hybrid model that integrates CNN and RNN, endowing the model with the ability to process spatial and sequential information. The effectiveness of the proposed methodology is validated using the practical data obtained from cutting experiments. The results indicate that the proposed tool condition assessment methodology significantly improves the segmentation accuracy of the tool wear region to 94.52 % (Dice coefficient) and predicts the surface roughness of machined parts with an accuracy exceeding 93.1 % (R<sup>2</sup>). 
It can be observed that the developed methodology may provide an effective solution for accurate tool condition assessment and the implementation of tool health management.</p></div>","PeriodicalId":21452,"journal":{"name":"Robotics and Computer-integrated Manufacturing","volume":"90 ","pages":"Article 102796"},"PeriodicalIF":10.4,"publicationDate":"2024-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141290693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
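The Dice coefficient reported above for segmentation accuracy is twice the mask overlap divided by the total mask area. A minimal sketch over flat binary masks (the mask values are invented for illustration):

```python
def dice_coefficient(pred, target):
    """Dice = 2|A n B| / (|A| + |B|) for flat binary masks (0/1 ints)."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * intersection / total if total else 1.0  # both empty: perfect

pred   = [1, 1, 0, 0, 1, 0]   # predicted wear-region mask (hypothetical)
target = [1, 0, 0, 0, 1, 1]   # ground-truth mask (hypothetical)
print(round(dice_coefficient(pred, target), 4))  # → 0.6667
```

For 2D segmentation masks the same formula applies after flattening; a score of 0.9452 corresponds to the 94.52 % figure quoted in the abstract.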
Pub Date: 2024-06-08  DOI: 10.1016/j.rcim.2024.102790
Liang Guo, Yunlong He, Changcheng Wan, Yuantong Li, Longkun Luo
In recent years, the rapid development of information technology, represented by the new generation of artificial intelligence, has brought unprecedented impacts, challenges, and opportunities to the transformation of the manufacturing industry and the evolution of manufacturing models. In the past decade, a variety of new manufacturing systems and models have been proposed, with cloud manufacturing being one representative example. This study analyzes the overall research progress and key open scientific issues in cloud manufacturing. Combining current cloud–edge collaboration, digital twin, edge computing, and related technologies, a deeply integrated human–machine–object manufacturing system based on cloud–edge collaboration is proposed, termed cloud–edge collaborative manufacturing (CeCM). The similarities and differences between CeCM and cloud manufacturing are analyzed at the system architecture level. CeCM is divided into three major spaces: a physical reality space, a virtual resource space, and a cloud service space. Based on this division, a five-layer architecture is proposed, comprising a manufacturing resource perception layer, an edge application service layer, a cloud–edge collaboration layer, a cloud–edge service layer, and a cloud–edge application layer. Together, these layers build a manufacturing system that deeply integrates manufacturing resources, computer systems, and humans, machines, and objects. The overall system operation process is explained based on this architecture, and 12 types of collaboration features of CeCM are described. The paper also summarizes 5 categories of key technology systems for CeCM and 21 supporting key technologies. Under this framework, a CeCM system for 3D printing was developed, and an application scenario in the petroleum equipment field was constructed. In summary, we believe CeCM offers a new opportunity for the networked, digital, and intelligent development of manufacturing, providing a new technical path for the evolution of the cloud manufacturing model and further promoting precision manufacturing services anytime, anywhere, and on demand.
{"title":"From cloud manufacturing to cloud–edge collaborative manufacturing","authors":"Liang Guo , Yunlong He , Changcheng Wan , Yuantong Li , Longkun Luo","doi":"10.1016/j.rcim.2024.102790","DOIUrl":"https://doi.org/10.1016/j.rcim.2024.102790","url":null,"abstract":"<div><p>In recent years, the rapid development of information technology represented by the new generation of artificial intelligence has brought unprecedented impacts, challenges, and opportunities to the transformation of the manufacturing industry and the evolution of manufacturing models. In the past decade, a variety of new manufacturing systems and models have been proposed, with cloud manufacturing being one such representative manufacturing system. In this study, the overall research progress and existing key scientific issues in cloud manufacturing are analyzed. Combining with current cloud–edge collaboration, digital twin, edge computing, and other technologies, a deeply integrated human–machine–object manufacturing system based on cloud–edge collaboration is proposed. We call it cloud-edge collaborative manufacturing (CeCM). The similarities and differences between cloud-edge collaborative manufacturing with cloud manufacturing are analyzed from the system architecture level. The cloud-edge collaborative manufacturing is divided into three major spaces, including a physical reality space, a virtual resource space, and a cloud service space. Based on the above division, a five-layer architecture for cloud-edge collaborative manufacturing is proposed, including a manufacturing resource perception layer, an edge application service layer, a cloud–edge collaboration layer, a cloud–edge service layer, and a cloud–edge application layer. All the layers build a manufacturing system that deeply integrates manufacturing resources, computer systems, and humans, machines, and objects. 
Its overall system operation process is explained based on the above architecture design, and its 12 types of collaboration features of cloud–edge collaborative manufacturing are explained. In this paper, we also summarize 5 categories of key technology systems for cloud-edge collaborative manufacturing and 21 supporting key technologies. Under the framework of the above, a cloud–edge collaborative manufacturing for 3D printing was developed, and an application scenario for the petroleum equipment field was constructed. In a word, we believe the cloud-edge collaborative manufacturing will offer a new opportunity for the development of manufacturing network, digitalization and intelligence, providing a new technical path for the evolution of cloud manufacturing model and further promoting precision manufacturing services anytime, anywhere, and on demand.</p></div>","PeriodicalId":21452,"journal":{"name":"Robotics and Computer-integrated Manufacturing","volume":"90 ","pages":"Article 102790"},"PeriodicalIF":10.4,"publicationDate":"2024-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141290692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
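One concrete collaboration behavior a cloud–edge architecture must implement is deciding where a task runs. The toy placement rule below is a hypothetical illustration of that decision, not the paper's method; the thresholds and task fields are invented.

```python
def place_task(task):
    """Route latency-critical or data-heavy work to the edge, the rest to the cloud.

    `task` is a hypothetical dict; the 100 ms and 500 MB thresholds are
    illustrative assumptions, not values from the paper.
    """
    if task["deadline_ms"] < 100:   # real-time control loop: keep it at the edge
        return "edge"
    if task["data_mb"] > 500:       # bulk sensor data: preprocess at the edge
        return "edge"
    return "cloud"                  # long-horizon analytics, model training, etc.

print(place_task({"deadline_ms": 20, "data_mb": 1}))     # → edge
print(place_task({"deadline_ms": 5000, "data_mb": 10}))  # → cloud
```

In a full CeCM system this decision would live in the cloud–edge collaboration layer, informed by the resource perception layer below it.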
Pub Date: 2024-06-07  DOI: 10.1016/j.rcim.2024.102792
Tong Li, Yuhang Yan, Chengshun Yu, Jing An, Yifan Wang, Gang Chen
Advancements in tactile sensors and machine learning techniques open new opportunities for intelligent grasping in robotics. Traditional robots are limited in their ability to perform autonomous grasping in unstructured environments. Although existing robotic grasping methods enhance a robot's understanding of its environment by incorporating visual perception, they still lack the capability for force perception and force adaptation. Therefore, tactile sensors are integrated into robot hands to enhance adaptive grasping capabilities in various complex scenarios through tactile perception. This paper primarily discusses the application of different types of tactile sensors in robotic grasping operations and the grasping algorithms based on them. Robotic grasping is divided into four stages: grasp generation, robot planning, grasp state discrimination, and grasp destabilization adjustment, and tactile-based and tactile–visual fusion methods are reviewed for each stage. The characteristics of these methods are comprehensively compared across different dimensions and indicators. Additionally, the challenges encountered in robotic tactile perception are summarized and insights into potential directions for future research are offered. This review aims to offer researchers and engineers a comprehensive understanding of the application of tactile perception techniques in robotic grasping operations, and to facilitate future work that further enhances the intelligence of robotic grasping.
{"title":"A comprehensive review of robot intelligent grasping based on tactile perception","authors":"Tong Li , Yuhang Yan , Chengshun Yu , Jing An , Yifan Wang , Gang Chen","doi":"10.1016/j.rcim.2024.102792","DOIUrl":"https://doi.org/10.1016/j.rcim.2024.102792","url":null,"abstract":"<div><p>The Advancements in tactile sensors and machine learning techniques open new opportunities for achieving intelligent grasping in robotics. Traditional robot is limited in its ability to perform autonomous grasping in unstructured environments. Although the existing robotic grasping method enhances the robot's understanding of its environment by incorporating visual perception, it still lacks the capability for force perception and force adaptation. Therefore, tactile sensors are integrated into robot hands to enhance the robot's adaptive grasping capabilities in various complex scenarios by tactile perception. This paper primarily discusses the adaption of different types of tactile sensors in robotic grasping operations and grasping algorithms based on them. By dividing robotic grasping operations into four stages: grasping generation, robot planning, grasping state discrimination, and grasping destabilization adjustment, a further review of tactile-based and tactile-visual fusion methods is applied in related stages. The characteristics of these methods are comprehensively compared with different dimensions and indicators. Additionally, the challenges encountered in robotic tactile perception is summarized and insights into potential directions for future research are offered. 
This review is aimed for offering researchers and engineers a comprehensive understanding of the application of tactile perception techniques in robotic grasping operations, as well as facilitating future work to further enhance the intelligence of robotic grasping.</p></div>","PeriodicalId":21452,"journal":{"name":"Robotics and Computer-integrated Manufacturing","volume":"90 ","pages":"Article 102792"},"PeriodicalIF":10.4,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141290495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
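As one concrete example of the grasp state discrimination stage, a classic tactile baseline is the friction-cone test: slip is imminent once the tangential contact force exceeds the friction coefficient times the normal force. This is a generic baseline for context, not a specific method from the review; the friction coefficient and force readings are assumed.

```python
import math

def grasp_state(f_normal, f_tan_x, f_tan_y, mu=0.6):
    """Classify a contact from tactile force readings via the friction cone.

    mu and the force values are illustrative assumptions; real tactile
    sensors would supply f_normal and the tangential components.
    """
    if f_normal <= 0.0:
        return "no-contact"
    f_tangential = math.hypot(f_tan_x, f_tan_y)
    return "slipping" if f_tangential > mu * f_normal else "stable"

print(grasp_state(5.0, 1.0, 1.0))  # → stable   (|ft| ≈ 1.41 < 0.6 * 5.0)
print(grasp_state(2.0, 1.5, 1.0))  # → slipping (|ft| ≈ 1.80 > 0.6 * 2.0)
```

A destabilization-adjustment stage would react to the "slipping" state, e.g. by increasing grip force or re-planning the grasp.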
The use of mobile robots for machining large components has received considerable research interest for the application of industrial robots in the machinery manufacturing sector. However, the low structural stiffness of industrial robots can result in poor machining quality under the action of cutting forces. Therefore, this paper proposes a method for simultaneously optimizing the mobile robot base position and cabin angle using a homogeneous stiffness domain (HSD) index for large spacecraft cabins. First, a nonlinear joint stiffness model that considers the gravity compensator mechanism is established to describe the stiffness characteristics of heavy-duty robots more accurately. Subsequently, an HSD index is proposed to evaluate the overall stiffness values and stiffness fluctuation across all robot postures in the machining program. An optimization model is then established based on the HSD under constraints of machining accessibility, joint angle limits, and singularity avoidance. The optimal base position and cabin angle are determined simultaneously using the sparrow search algorithm. Finally, simulation and milling experiments demonstrate that the proposed optimization method can effectively improve machining quality.
{"title":"Robot base position and spacecraft cabin angle optimization via homogeneous stiffness domain index with nonlinear stiffness characteristics","authors":"Zhiqi Wang, Dong Gao, Kenan Deng, Yong Lu, Shoudong Ma, Jiao Zhao","doi":"10.1016/j.rcim.2024.102793","DOIUrl":"10.1016/j.rcim.2024.102793","url":null,"abstract":"<div><p>The use of mobile robots for machining large components has received considerable research interest for the application of industrial robots in the machinery manufacturing sector. However, the low structural stiffness of industrial robots can result in poor machining quality under the action of cutting forces. Therefore, this paper proposes a simultaneous optimization method the mobile robot base position and cabin angle using homogeneous stiffness domain (HSD) index for large spacecraft cabins. First, a nonlinear joint stiffness model that considers the gravity compensator mechanism is established to describe the stiffness characteristics of heavy-duty robots more accurately. Subsequently, a HSD index is proposed to evaluate the overall stiffness values and stiffness fluctuation for all robot postures in the machining program. An optimization model is then established based on the HSD under the constraints of machining accessibility, joint angle limitation and singularity. The optimal base position and cabin angle are determined simultaneously using the sparrow search algorithm. 
Finally, simulation and milling experiments are used to demonstrate that the optimization method proposed in this paper can effectively improve the machining quality.</p></div>","PeriodicalId":21452,"journal":{"name":"Robotics and Computer-integrated Manufacturing","volume":"90 ","pages":"Article 102793"},"PeriodicalIF":9.1,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141281171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
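Posture-dependent stiffness evaluation of the kind an HSD index aggregates typically starts from the standard mapping of joint stiffness into Cartesian stiffness, K_x = J^(-T) K_q J^(-1). The sketch below applies this mapping to a planar 2R arm as a minimal stand-in; the link lengths and joint stiffness values are illustrative, and the paper's nonlinear joint stiffness model with gravity compensation is not reproduced.

```python
import math

def jacobian(q1, q2, l1=1.0, l2=0.8):
    """Geometric Jacobian of a planar 2R arm (illustrative link lengths)."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def inv2(m):
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[ m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det,  m[0][0] / det]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def cartesian_stiffness(q1, q2, kq=(2.0e5, 1.5e5)):
    """K_x = J^-T Kq J^-1 with a diagonal joint-stiffness matrix (assumed values)."""
    j_inv = inv2(jacobian(q1, q2))
    j_inv_t = [[j_inv[0][0], j_inv[1][0]], [j_inv[0][1], j_inv[1][1]]]
    kq_mat = [[kq[0], 0.0], [0.0, kq[1]]]
    return matmul(matmul(j_inv_t, kq_mat), j_inv)

kx = cartesian_stiffness(0.3, 0.9)
print(abs(kx[0][1] - kx[1][0]) < 1e-4)  # → True (K_x is symmetric, as expected)
```

An HSD-style index would evaluate such a K_x at every posture in the machining program and score both the overall stiffness level and its fluctuation.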
Pub Date: 2024-06-03  DOI: 10.1016/j.rcim.2024.102795
Wei Fang, Lixi Chen, Tienong Zhang, Hao Hu, Jiapeng Bi
Existing augmented reality (AR) assembly mainly provides visual instructions to operators from a first-person perspective, making it difficult for co-located workers on the shop floor to share individual working intents, especially in large-scale product assembly tasks that require multiple operators working together. To bridge this gap for practical deployments, this paper proposes Co2iAR, a co-located, audio-visual enabled mobile collaborative AR assembly system. First, based on a stereo visual-inertial fusion strategy, robust and accurate self-contained motion tracking is achieved on the resource-constrained mobile AR platform, followed by co-located alignment across multiple mobile AR clients on the shop floor. Then, a lightweight text-aware network for online wiring harness character recognition is proposed, together with an audio-based confirmation strategy, enabling natural audio-visual interaction among co-located workers within a shared immersive workplace; the system can also monitor the current wiring assembly status and activate step-by-step tutorials automatically. The novelty of this work lies in deploying audio-visual aware interaction on the same device that delivers the co-located collaborative AR work instructions, establishing shared operating intents among multiple co-located workers. Finally, comprehensive experiments on collaborative performance among multiple AR clients illustrate that Co2iAR alleviates cognitive load and achieves superior performance on co-located AR assembly tasks, providing a more human-centric collaborative assembly experience.
{"title":"Co2iAR: Co-located audio-visual enabled mobile collaborative industrial AR wiring harness assembly","authors":"Wei Fang , Lixi Chen , Tienong Zhang , Hao Hu , Jiapeng Bi","doi":"10.1016/j.rcim.2024.102795","DOIUrl":"https://doi.org/10.1016/j.rcim.2024.102795","url":null,"abstract":"<div><p>Existing augmented reality (AR) assembly mainly provides visual instructions for operators from a first-person perspective, and it is hard to share individual working intents for co-located workers on the shop floor, especially for large-scale product assembly task that requires multiple operators working together. To bridge this gap for practical deployments, this paper proposes Co<sup>2</sup>iAR, a co-located audio-visual enabled mobile collaborative AR assembly. Firstly, according to the stereo visual-inertial fusion strategy, robust and accurate self-contained motion tracking is achieved for the resource-constrained mobile AR platform, followed by a co-located alignment from multiple mobile AR clients on the shop floor. Then, a lightweight text-aware network for online wiring harness character recognition is proposed, as well as the audio-based confirming strategy, enabling natural audio-visual interaction among co-located workers within a shared immersive workplace, which can also monitor the current wiring assembly status and activate the step-by-step tutorials automatically. The novelty of this work is focused on the deployment of audio-visual aware interaction using the same device that is being used to deploy the co-located collaborative AR work instructions, establishing shared operating intents among multiple co-located workers. 
Finally, comprehensive experiments are carried out on the collaborative performance among multiple AR clients, and results illustrate that the proposed Co<sup>2</sup>iAR can alleviate the cognitive load and achieve superior performance for the co-located AR assembly tasks, providing a more human-centric collaborative assembly performance.</p></div>","PeriodicalId":21452,"journal":{"name":"Robotics and Computer-integrated Manufacturing","volume":"90 ","pages":"Article 102795"},"PeriodicalIF":10.4,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141242719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
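The flavor of visual-inertial fusion can be conveyed by a one-axis complementary filter, in which high-rate gyro integration supplies motion and a slower visual fix bounds the drift. This is a generic textbook stand-in for the paper's stereo visual-inertial strategy; the gain, sample rate, gyro bias, and visual measurement are all invented.

```python
def fuse(gyro_rates, visual_yaw, dt=0.01, alpha=0.98):
    """Blend integrated gyro rate (weight alpha) with a visual yaw fix.

    All numeric values are illustrative assumptions, not from the paper.
    """
    yaw = 0.0
    for rate in gyro_rates:
        yaw = alpha * (yaw + rate * dt) + (1.0 - alpha) * visual_yaw
    return yaw

# A biased gyro (true rate 0, bias +0.5 deg/s) drifts without correction,
# while the visual measurement (true yaw 0 deg) keeps the fused estimate bounded.
drift_only = 0.5 * 0.01 * 1000              # pure integration over 10 s
fused = fuse([0.5] * 1000, visual_yaw=0.0)  # converges near 0.245 deg
print(round(drift_only, 3), round(fused, 3))  # → 5.0 0.245
```

Production systems use far more sophisticated estimators (e.g. filtering or optimization over stereo features and IMU preintegration), but the drift-bounding role of the visual channel is the same.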
Pub Date: 2024-05-25  DOI: 10.1016/j.rcim.2024.102786
Qinglin Gao, Jianhua Liu, Huiting Li, Cunbo Zhuang, Ziwen Liu
Assembly processes for complex products primarily involve manual assembly and often encounter various disruptive events, such as the insertion of new orders, order cancellations, task adjustments, worker absences, and job rotations. The dynamic scheduling problem for complex product assembly workshops requires consideration of trigger events and time nodes for rescheduling, as well as the allocation of multi-skilled and multi-level workers. The application of digital twin technology in smart manufacturing enables managers to monitor and control disruptive events and production factors on the production site more effectively. Therefore, a dynamic scheduling strategy based on digital twin technology is proposed to enable real-time monitoring of dynamic events in the assembly workshop, trigger rescheduling when necessary, and adjust task processing sequences and team composition accordingly; a corresponding dynamic scheduling integer programming model is established. Additionally, an improved multi-objective evolutionary algorithm (IMOEA) based on NSGA-II is proposed, which uses the maximum completion time as the production efficiency indicator and the time deviation before and after rescheduling as the production stability indicator. Three new population initialization rules are designed, and the optimal parameter combination for these rules is determined. Finally, the effectiveness of the scheduling strategy is verified through the construction of a workshop digital twin system.
{"title":"Digital twin-driven dynamic scheduling for the assembly workshop of complex products with workers allocation","authors":"Qinglin Gao , Jianhua Liu , Huiting Li , Cunbo Zhuang , Ziwen Liu","doi":"10.1016/j.rcim.2024.102786","DOIUrl":"https://doi.org/10.1016/j.rcim.2024.102786","url":null,"abstract":"<div><p>Assembly processes for complex products primarily involve manual assembly and often encounter various disruptive events, such as the insertion of new orders, order cancellations, task adjustments, workers absences, and job rotations. The dynamic scheduling problem for complex product assembly workshops requires consideration of trigger events and time nodes for rescheduling, as well as the allocations of multi-skilled and multi-level workers. The application of digital twin technology in smart manufacturing enables managers to more effectively monitor and control disruptive events and production factors on the production site. Therefore, a dynamic scheduling strategy based on digital twin technology is proposed to enable real-time monitoring of dynamic events in the assembly workshop, triggering rescheduling when necessary, adjusting task processing sequences and team composition accordingly, and establishing a corresponding dynamic scheduling integer programming model. Additionally, based on NSGA-II, an improved multi-objective evolutionary algorithm (IMOEA) is proposed, which utilizes the maximum completion time as the production efficiency indicator and the time deviation before and after rescheduling as the production stability indicator. Three new population initialization rules are designed, and the optimal parameter combination for these rules is determined. 
Finally, the effectiveness of the scheduling strategy is verified through the construction of a workshop digital twin system.</p></div>","PeriodicalId":21452,"journal":{"name":"Robotics and Computer-integrated Manufacturing","volume":"89 ","pages":"Article 102786"},"PeriodicalIF":10.4,"publicationDate":"2024-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141094853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
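At the core of NSGA-II-style selection is the Pareto-dominance test over the two objectives named above: maximum completion time (efficiency) and time deviation after rescheduling (stability), both minimized. The sketch below shows dominance checking and front extraction over hypothetical (makespan, deviation) pairs; it illustrates the selection principle only, not the IMOEA or its initialization rules.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep the schedules not dominated by any other candidate."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical candidate schedules: (makespan, time deviation after rescheduling)
schedules = [(42, 3.0), (40, 5.0), (45, 1.0), (44, 4.0)]
print(sorted(pareto_front(schedules)))  # → [(40, 5.0), (42, 3.0), (45, 1.0)]
```

Here (44, 4.0) is dropped because (42, 3.0) is better on both objectives; the survivors form the efficiency-stability trade-off curve a decision-maker would choose from.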
Pub Date: 2024-05-23  DOI: 10.1016/j.rcim.2024.102791
Shengzhe Wang, Ziyan Xu, Yidan Wang, Ziyao Tan, Dahu Zhu
Region-based robotic machining is considered an effective strategy for automatically repairing paint film defects compared to conventional global machining. However, the process faces challenges due to irregularities in defect position, shape, and size. To overcome these challenges, this paper proposes a model-enabled robotic machining framework for repairing paint film defects that leverages the workpiece model as an enabling means. Within the system framework, an improved YOLOv5 algorithm is first presented to enhance the visual detection accuracy of paint film defects through improvements to the network structure and loss function. Additionally, a target positioning method based on pixel-point inverse projection is developed to map the 2D defect detection results onto the 3D workpiece model, with the primary aim of obtaining orientation information through the connection between the monocular vision unit and the model. Finally, an optimal tool deployment strategy based on the least projection coverage circle is proposed to determine the fewest machining positions and the shortest robot path by constructing a mapping between the defects and the tool operation size. The system framework is verified as effective and practical through experiments on region-based robotic grinding and repair of paint film defects on high-speed train (HST) body sidewalls.
Title: "Model-enabled robotic machining framework for repairing paint film defects" — Robotics and Computer-integrated Manufacturing, Volume 89, Article 102791 (published 2024-05-23).
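The "least projection coverage circle" step above groups the detected defects so that one tool placement can cover as many as possible. A minimal sketch of that geometric subproblem, assuming defects have been reduced to 2D centroids on the projection plane and the tool footprint is a circle of known radius (all function names here are hypothetical, not from the paper):

```python
import math
from itertools import combinations

def circle_from_two(a, b):
    # Smallest circle through two points: diameter circle.
    cx, cy = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
    return (cx, cy, math.dist(a, b) / 2)

def circle_from_three(a, b, c):
    # Circumcircle via perpendicular-bisector intersection; None if collinear.
    d = 2 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
    if abs(d) < 1e-12:
        return None
    ux = ((a[0]**2 + a[1]**2)*(b[1]-c[1]) + (b[0]**2 + b[1]**2)*(c[1]-a[1])
          + (c[0]**2 + c[1]**2)*(a[1]-b[1])) / d
    uy = ((a[0]**2 + a[1]**2)*(c[0]-b[0]) + (b[0]**2 + b[1]**2)*(a[0]-c[0])
          + (c[0]**2 + c[1]**2)*(b[0]-a[0])) / d
    return (ux, uy, math.dist((ux, uy), a))

def covers(circle, pts, eps=1e-9):
    cx, cy, r = circle
    return all(math.dist((cx, cy), p) <= r + eps for p in pts)

def min_enclosing_circle(pts):
    """Brute-force smallest enclosing circle; adequate for the handful of
    defect centroids inside one repair region."""
    if len(pts) == 1:
        return (pts[0][0], pts[0][1], 0.0)
    best = None
    for a, b in combinations(pts, 2):
        c = circle_from_two(a, b)
        if covers(c, pts) and (best is None or c[2] < best[2]):
            best = c
    for a, b, c3 in combinations(pts, 3):
        c = circle_from_three(a, b, c3)
        if c and covers(c, pts) and (best is None or c[2] < best[2]):
            best = c
    return best

def single_pass_covers(defect_centroids, tool_radius):
    """Return a tool center if one placement covers all defects, else None."""
    cx, cy, r = min_enclosing_circle(defect_centroids)
    return (cx, cy) if r <= tool_radius else None
```

If the enclosing radius exceeds the tool radius, the defect set would be split and the test repeated per cluster; the paper's actual strategy additionally optimizes the robot path over the resulting placements.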
Pub Date : 2024-05-22 | DOI: 10.1016/j.rcim.2024.102788
Xinyu Shi , Chaoran Wang , Liyu Shi , Haining Zhou , Tyson Keen Phillips , Kang Bi , Weijiu Cui , Chengpeng Sun , Da Wan
With the rapid advancement of three-dimensional (3D) printing, researchers have shifted their focus towards the mechanical systems and methods used in this field. While Fused Deposition Modelling (FDM) remains the dominant method, alternative printing methods such as Spatial 3DP (S-3DP) have emerged. However, most existing research on 3D printing technology has emphasized offline control, which cannot dynamically adjust the printing path in real time; this limitation reduces printing efficiency. This paper therefore proposes a human-robot interaction (HRI) method based on real-time gesture control for Robotic Spatial 3DP (RS-3DP). The method uses the YOLOv5 and Mediapipe algorithms to recognize gestures and convert the gesture information into real-time robot operations. Results show that this approach offers a feasible solution to the problem of discontinuous S-3DP nodes: it achieves a gesture-controlled robot movement accuracy of 91 % and an average system response time of approximately 0.54 s. The proposed HRI method is a pioneering advancement in real-time control for RS-3DP, paving the way for further exploration and development in this field.
Title: "Research on human-robot interaction for robotic spatial 3D printing based on real-time hand gesture control" — Robotics and Computer-integrated Manufacturing, Volume 89, Article 102788 (published 2024-05-22).
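The step that converts recognized gestures into robot operations can be sketched as a small dispatcher. This is a minimal illustration, assuming an upstream recognizer (e.g. a YOLOv5/Mediapipe pipeline) emits one gesture label per frame; the gesture set and motion mapping below are hypothetical, not taken from the paper:

```python
from collections import Counter, deque

# Hypothetical gesture-to-velocity table (m/s in x, y, z); the paper's
# actual gesture vocabulary is not given in the abstract.
GESTURE_TO_TWIST = {
    "palm_open":   (0.0, 0.0, 0.0),    # stop / hold position
    "point_up":    (0.0, 0.0, 0.05),   # move +z
    "point_left":  (-0.05, 0.0, 0.0),  # move -x
    "point_right": (0.05, 0.0, 0.0),   # move +x
}

class GestureCommander:
    """Majority-vote smoothing over the last `window` per-frame labels.

    With per-frame recognition accuracy around 91 %, a short voting window
    suppresses isolated misclassifications at the cost of a small added
    latency (roughly window / fps seconds)."""

    def __init__(self, window=5):
        self.window = window
        self.history = deque(maxlen=window)

    def update(self, label):
        """Feed one frame's label; return a velocity command once a label
        holds a strict majority of the window, else None."""
        self.history.append(label)
        winner, count = Counter(self.history).most_common(1)[0]
        if count > self.window // 2 and winner in GESTURE_TO_TWIST:
            return GESTURE_TO_TWIST[winner]
        return None  # no stable command yet
```

The returned twist would then be streamed to the robot controller each cycle; the voting window is one plausible way to trade the reported 0.54 s response time against robustness to recognition errors.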
Pub Date : 2024-05-18 | DOI: 10.1016/j.rcim.2024.102783
Philipp Scholl , Maged Iskandar , Sebastian Wolf , Jinoh Lee , Aras Bacho , Alexander Dietrich , Alin Albu-Schäffer , Gitta Kutyniok
Title: "Corrigendum to 'Learning-based adaption of robotic friction models' [Robotics and Computer-Integrated Manufacturing Volume 89, October 2024]" — Robotics and Computer-integrated Manufacturing, Volume 89, Article 102783 (published 2024-05-18, open access).