Pub Date: 2026-02-10, DOI: 10.1016/j.rcim.2026.103257
The visual-laser fusion measurement methodology for robotic in situ machining of large components with complex features
Yan Zheng, Wei Liu, Lei Han, Junqing Li, Hongguang Ding, Yang Zhang
Large components are core parts of high-end equipment and are characterized by large overall dimensions combined with small, complex local features that carry high-precision requirements. These characteristics pose significant challenges to measurement and machining. To address the demand for measuring and machining the complex features of large components, this paper develops a cross-scale, multi-sensor fusion robotic in situ measurement system based on vision and laser technologies. The system introduces a fusion algorithm based on image priors and point cloud enhancement, achieving the integration of multimodal measurement data. Through three-layer integration of equipment, algorithms, and data, the system attains high-precision measurements of both the global reconstruction and the local features of large components, providing essential data support for comprehensive 3D reconstruction of large components and for robotic in situ machining. Experimental results on a spacecraft component show that the robotic measurement system achieves a local measurement RMSE of no more than 0.015 mm and a global measurement average error of 0.985 mm, an improvement of approximately 49.4% over a binocular vision measurement system, with an overall efficiency increase of more than 90%. The final machining precision meets the stringent finishing requirements of the aerospace industry.
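As a rough illustration of how a local-feature error figure of this kind can be evaluated, the following sketch computes a nearest-neighbor RMSE between a measured point cloud and a nominal reference that are already registered into a common frame. It uses NumPy/SciPy; the grid, sampling, and noise level are hypothetical, and this is not the authors' fusion algorithm.

    # Illustrative sketch (not the authors' code): nearest-neighbor RMSE between a
    # measured point cloud and a nominal reference, assuming both are already in a
    # common coordinate frame. All data below is synthetic and hypothetical.
    import numpy as np
    from scipy.spatial import cKDTree

    def local_rmse(measured_pts, reference_pts):
        """RMS of distances from each measured point to its closest reference point."""
        tree = cKDTree(reference_pts)
        dists, _ = tree.query(measured_pts, k=1)
        return float(np.sqrt(np.mean(dists ** 2)))

    # Hypothetical example: a flat nominal patch (z = 0, in mm) and measured points
    # perturbed by roughly 5 um of noise.
    rng = np.random.default_rng(0)
    gx, gy = np.meshgrid(np.linspace(0, 10, 200), np.linspace(0, 10, 200))
    ref = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)])
    sub = ref[::7]
    meas = sub + rng.normal(scale=0.005, size=sub.shape)
    print(f"local RMSE: {local_rmse(meas, ref):.4f} mm")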
{"title":"The visual‒laser fusion measurement methodology for robotic in situ machining of large components with complex features","authors":"Yan Zheng, Wei Liu, Lei Han, Junqing Li, Hongguang Ding, Yang Zhang","doi":"10.1016/j.rcim.2026.103257","DOIUrl":"https://doi.org/10.1016/j.rcim.2026.103257","url":null,"abstract":"Large components are core parts of high‒end equipment and are characterized by their large overall dimensions and small, complex local features with high‒precision requirements. These characteristics pose significant challenges to measurement and processing. To address the demand for measuring and machining the complex features of large components, this paper develops a cross‒scale multi‒sensor fusion robotic in situ measurement system based on vision and laser technologies. The system introduces a fusion algorithm based on image priors and point cloud enhancement, achieving the integration of multimodal measurement data. Through three‒layer integration of equipment, algorithms, and data, the system attains high‒precision measurements of both the global reconstruction and local features of large components, which provides essential data support for the comprehensive 3D reconstruction of large components and robotic in situ machining. Finally, the experimental results for the spacecraft component show that the robotic measurement system achieves a local measurement RMSE of no >0.015 mm, and the global measurement average error is 0.985 mm. This represents an improvement of approximately 49.4% over the binocular vision measurement system, with an overall efficiency increase of >90%. The final machining precision meets the stringent finishing requirements of the aerospace industry.","PeriodicalId":21452,"journal":{"name":"Robotics and Computer-integrated Manufacturing","volume":"315 1","pages":""},"PeriodicalIF":10.4,"publicationDate":"2026-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146146583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-02-10, DOI: 10.1016/j.rcim.2026.103258
Spatiotemporal perception and motion similarity learning for real-time assembly verification
Q. Ye, Y.Q. Niu, A.Y.C. Nee, S.K. Ong
The increasing demand for personalized products and operational flexibility in Industry 4.0 has exposed critical limitations in traditional quality assurance systems, particularly within human-in-the-loop manual assembly. To address these challenges, this paper introduces an integrated vision-based framework that combines spatiotemporal object detection and motion similarity assessment for the automated verification of procedural correctness. A multi-modal acquisition system, utilizing synchronized RGB and depth data, is employed to support precise 3D hand keypoint reconstruction. For process segmentation, STM-YOLO (Spatiotemporal Multi-View You Only Look Once) was developed. By reconfiguring standard detection heads into stage-transition classifiers, this model achieves an average accuracy of 87%, outperforming the single-view, single-frame baseline (S-S) by an absolute margin of 27.67%, and further validates the complementary contributions of both temporal modeling and multi-view perception modules to performance improvement. To assess operator action conformity without the need for gesture-level annotation, the Siamese Transformer Encoder Network (STEN) is introduced, evaluating 3D hand motion trajectories against golden-standard references. Validated on the filtered FPHA (First-Person Hand Action) benchmark, STEN attains 99.28% accuracy and an F1-score of 0.9858, surpassing classical approaches. The proposed system operates at over 10 fps on laboratory hardware and demonstrates robust generalization across diverse operators and tasks, offering a scalable solution for procedural verification in flexible manufacturing environments.
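The motion-similarity idea behind STEN can be illustrated, under assumptions, as a shared-weight encoder that maps two trajectories into a common embedding space and scores their agreement. The minimal PyTorch sketch below is not the STEN architecture; the layer sizes, the 21-keypoint input dimension, and the cosine-similarity head are all assumptions intended only to show the Siamese comparison pattern.

    # Minimal Siamese-encoder sketch (assumptions, not the paper's STEN model):
    # a shared-weight transformer encoder embeds an operator trajectory and a
    # golden-standard reference, and conformity is scored by embedding similarity.
    import torch
    import torch.nn as nn

    class TrajectoryEncoder(nn.Module):
        def __init__(self, in_dim=63, d_model=64, nhead=4, num_layers=2):
            # in_dim=63 assumes 21 hand keypoints x 3 coordinates per frame (hypothetical).
            super().__init__()
            self.proj = nn.Linear(in_dim, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers)

        def forward(self, traj):                      # traj: (batch, frames, in_dim)
            h = self.encoder(self.proj(traj))
            return h.mean(dim=1)                      # temporal average pooling

    def similarity(encoder, traj_a, traj_b):
        """Cosine similarity between shared-weight embeddings of two trajectories."""
        za, zb = encoder(traj_a), encoder(traj_b)
        return nn.functional.cosine_similarity(za, zb)

    encoder = TrajectoryEncoder()                     # untrained weights, illustrative only
    golden = torch.randn(1, 120, 63)                  # hypothetical reference execution (120 frames)
    candidate = golden + 0.05 * torch.randn_like(golden)
    print(similarity(encoder, golden, candidate).item())   # high for near-identical motion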
{"title":"Spatiotemporal perception and motion similarity learning for real-time assembly verification","authors":"Q. Ye, Y.Q. Niu, A.Y.C. Nee, S.K. Ong","doi":"10.1016/j.rcim.2026.103258","DOIUrl":"https://doi.org/10.1016/j.rcim.2026.103258","url":null,"abstract":"The increasing demand for personalized products and operational flexibility in Industry 4.0 has exposed critical limitations in traditional quality assurance systems, particularly within human-in-the-loop manual assembly. To address these challenges, this paper introduces an integrated vision-based framework that combines spatiotemporal object detection and motion similarity assessment for the automated verification of procedural correctness. A multi-modal acquisition system, utilizing synchronized RGB and depth data, is employed to support precise 3D hand keypoint reconstruction. For process segmentation, STM-YOLO (Spatiotemporal Multi-View You Only Look Once) was developed. By reconfiguring standard detection heads into stage-transition classifiers, this model achieves an average accuracy of 87%, outperforming the single-view, single-frame baseline (S-S) by an absolute margin of 27.67%, and further validates the complementary contributions of both temporal modeling and multi-view perception modules to performance improvement. To assess operator action conformity without the need for gesture-level annotation, the Siamese Transformer Encoder Network (STEN) is introduced, evaluating 3D hand motion trajectories against golden-standard references. Validated on the filtered FPHA (First-Person Hand Action) benchmark, STEN attains 99.28% accuracy and an F1-score of 0.9858, surpassing classical approaches. The proposed system operates at over 10 fps on laboratory hardware and demonstrates robust generalization across diverse operators and tasks, offering a scalable solution for procedural verification in flexible manufacturing environments.","PeriodicalId":21452,"journal":{"name":"Robotics and Computer-integrated Manufacturing","volume":"93 1","pages":""},"PeriodicalIF":10.4,"publicationDate":"2026-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146146559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-02-09, DOI: 10.1016/j.rcim.2026.103252
Co-optimized elasto-geometrical and hand-eye calibration of industrial robots with integrated dual laser profile scanners
Moien Reyhani, Christian Hartl-Nesic, Andreas Kugi
Hand-eye calibration is a fundamental prerequisite for vision-based applications in industrial robotics. While this issue is largely addressed for 3D cameras, it remains a challenge for Laser Profile Scanners (LPSs) due to their inherent 2D measurement limitations. Performing hand-eye calibration with an uncalibrated robot often results in inaccurate calibration outcomes. Traditional methods for calibrating the robot prior to hand-eye calibration using highly accurate optical systems are often prohibitively expensive and time-consuming, making them impractical for the high demands of modern production environments. This study proposes a novel methodology that co-optimizes the elasto-geometrical calibration of the robot, including joint compliance, and the hand-eye calibration of a dual LPS mounted on the robot’s end-effector. The two LPSs are configured such that their laser lines intersect on the object, thereby forming an intersecting Laser Line System (LLS). This necessitates the calibration of the LPSs relative to each other, which constitutes eye-to-eye calibration in the context of multiple LPSs. Therefore, this work also introduces an approach for the eye-to-eye calibration of the LPSs. Both proposed approaches leverage the iterative closest point (ICP) algorithm within a bilevel optimization framework and utilize an artifact specifically designed for the fast and stable calibration of LPSs. Notably, these methods are extensible to multiple laser lines with diverse configurations and are applicable to analogous calibration artifacts. To the best of the authors’ knowledge, this is the first study to address the eye-to-eye calibration of LPSs and to explicitly incorporate joint stiffness effects into the hand-eye calibration of LPSs. The experimental results demonstrate that the proposed calibration strategy achieves a registration error of 0.154 mm when using a 3D-printed artifact, while employing a high-precision artifact results in improved accuracy with a registration error as low as 0.068 mm.
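As a toy illustration of the bilevel idea of wrapping an ICP-style registration cost inside an outer calibration search, the sketch below optimizes only a translational hand-eye offset against a synthetic artifact point model. All data and names are hypothetical; the paper's formulation additionally handles rotation, elasto-geometrical robot parameters, and joint compliance.

    # Toy bilevel sketch (assumed, not the authors' formulation): an outer search over
    # a hand-eye translation, with an inner registration cost given by nearest-neighbor
    # distances between shifted scan points and a hypothetical artifact model.
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    artifact = rng.uniform(0, 50, size=(2000, 3))        # hypothetical artifact point model (mm)
    true_offset = np.array([1.2, -0.8, 2.5])             # hypothetical true hand-eye translation
    scan = artifact[::5] - true_offset + rng.normal(scale=0.02, size=(400, 3))

    tree = cKDTree(artifact)

    def registration_cost(offset):
        # Inner registration step: apply the candidate offset, measure point-to-model residuals.
        d, _ = tree.query(scan + offset, k=1)
        return np.mean(d ** 2)

    res = minimize(registration_cost, x0=np.zeros(3), method="Nelder-Mead")
    print("estimated offset:", res.x)                    # should approach true_offset
    print("RMS residual (mm):", np.sqrt(res.fun))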
{"title":"Co-optimized elasto-geometrical and hand-eye calibration of industrial robots with integrated dual laser profile scanners","authors":"Moien Reyhani, Christian Hartl-Nesic, Andreas Kugi","doi":"10.1016/j.rcim.2026.103252","DOIUrl":"https://doi.org/10.1016/j.rcim.2026.103252","url":null,"abstract":"<mml:math altimg=\"si3.svg\" display=\"inline\"><mml:mtext>Hand-eye</mml:mtext></mml:math> calibration is a fundamental prerequisite for vision-based applications in industrial robotics. While this issue is largely addressed for 3D cameras, it remains a challenge for Laser Profile Scanners (LPSs) due to their inherent 2D measurement limitations. Performing <mml:math altimg=\"si334.svg\" display=\"inline\"><mml:mtext>hand-eye</mml:mtext></mml:math> calibration with an uncalibrated robot often results in inaccurate calibration outcomes. Traditional methods for calibrating the robot prior to <mml:math altimg=\"si334.svg\" display=\"inline\"><mml:mtext>hand-eye</mml:mtext></mml:math> calibration using highly accurate optical systems are often prohibitively expensive and time-consuming, making them impractical for the high demands of modern production environments. This study proposes a novel methodology for co-optimized elasto-geometrical calibration of the robot, while modeling joint compliance, and the <mml:math altimg=\"si334.svg\" display=\"inline\"><mml:mtext>hand-eye</mml:mtext></mml:math> calibration of a dual LPS mounted on the robot’s end-effector. The two LPSs are configured such that their laser lines intersect on the object, thereby forming an intersecting Laser Line System (LLS). This necessitates the calibration of the LPSs relative to each other, which constitutes <mml:math altimg=\"si241.svg\" display=\"inline\"><mml:mtext>eye-to-eye</mml:mtext></mml:math> calibration in the context of multiple LPSs. Therefore, this work also introduces an approach for the <mml:math altimg=\"si241.svg\" display=\"inline\"><mml:mtext>eye-to-eye</mml:mtext></mml:math> calibration of the LPSs. Both proposed approaches leverage the iterative closest point (ICP) algorithm within a bilevel optimization framework and utilize an artifact specifically designed for the fast and stable calibration of LPSs. Notably, these methods are extensible to multiple laser lines with diverse configurations and are applicable to analogous calibration artifacts. To the best of the authors’ knowledge, this is the first study to address the <mml:math altimg=\"si241.svg\" display=\"inline\"><mml:mtext>eye-to-eye</mml:mtext></mml:math> calibration of LPSs and to explicitly incorporate joint stiffness effects into the <mml:math altimg=\"si334.svg\" display=\"inline\"><mml:mtext>hand-eye</mml:mtext></mml:math> calibration of LPSs. 
The experimental results demonstrate that the proposed calibration strategy achieves a registration error of 0.154<ce:hsp sp=\"0.16667\"></ce:hsp>mm when using a 3D-printed artifact, while employing a high-precision artifact results in an improved accuracy with a registration error as low as 0.068<ce:hsp sp=\"0.16667\"></ce:hsp>mm.","PeriodicalId":21452,"journal":{"name":"Robotics and Computer-integrated Manufacturing","volume":"89 1","pages":""},"PeriodicalIF":10.4,"publicationDate":"2026-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146146584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-02-05, DOI: 10.1016/j.rcim.2026.103254
Synergy of giants and specialists: A large-and-small model integration framework driven by generative AI for smart manufacturing
Qingfeng Xu, Chao Zhang, Dongxu Ma, Yan Cao, Guanghui Zhou
{"title":"Synergy of giants and specialists: A large-and-small model integration framework driven by generative AI for smart manufacturing","authors":"Qingfeng Xu, Chao Zhang, Dongxu Ma, Yan Cao, Guanghui Zhou","doi":"10.1016/j.rcim.2026.103254","DOIUrl":"https://doi.org/10.1016/j.rcim.2026.103254","url":null,"abstract":"","PeriodicalId":21452,"journal":{"name":"Robotics and Computer-integrated Manufacturing","volume":"156 1","pages":""},"PeriodicalIF":10.4,"publicationDate":"2026-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146134505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-02-05, DOI: 10.1016/j.rcim.2026.103256
A review of applications of collaborative robot in welding and additive manufacturing
Mohammad Arjomandi, Tuhin Mukherjee
{"title":"A review of applications of collaborative robot in welding and additive manufacturing","authors":"Mohammad Arjomandi, Tuhin Mukherjee","doi":"10.1016/j.rcim.2026.103256","DOIUrl":"https://doi.org/10.1016/j.rcim.2026.103256","url":null,"abstract":"","PeriodicalId":21452,"journal":{"name":"Robotics and Computer-integrated Manufacturing","volume":"33 1","pages":""},"PeriodicalIF":10.4,"publicationDate":"2026-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146134506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-02-02, DOI: 10.1016/j.rcim.2026.103247
Intelligent assembly conformance verification for complex products: A rotationally invariant multi-view visual framework
Shengjie Jiang, Qijia Qian, Jianhong Liu, Pan Wang, Xiao Zhuang, Di Zhou, Weifang Sun, Jiawei Xiang
Conformance verification in assembly processes is crucial for ensuring manufacturing quality, yet it is often challenged in real production environments by viewpoint variations and individual differences in operator behavior. This paper presents a rotation-invariant conformance verification framework for intelligent assembly, adopting a hybrid modeling paradigm that synergizes data-driven learning with geometric priors. By jointly integrating action recognition, temporal logic validation, and spatial path evaluation, the framework enables fine-grained assessment of deviations from standard operating procedures. The research develops a spatio-temporal-semantic triple-attention network to achieve adaptive, high-accuracy procedural-level action recognition in a data-driven manner. Then, a dynamic state-transition model is introduced to capture temporal violations by online updating of operation transition probabilities. By combining differential chain codes with cyclic shift normalization, the proposed geometry-guided trajectory representation method enables rotation-robust quantification of path deviations in critical assembly processes without requiring multi-view training data. Experiments on our WZU complex product assembly process dataset show that the proposed framework achieves 96.17% accuracy in violation detection, significantly outperforming CNN-LSTM (+10.39%), I3D (+1.02%), and MobileNetV3 (+1.24%), with an end-to-end inference latency under 50 ms, making it suitable for edge deployment. This work provides an efficient, interpretable, and viewpoint-invariant vision-based solution for assembly process monitoring in industrial applications.
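The rotation-robust trajectory comparison can be illustrated with a minimal sketch, assuming a 2D closed path: differential turning angles are invariant to a global rotation, and a cyclic-shift search removes the dependence on the starting segment. This is illustrative only; the paper's differential chain codes operate on its own trajectory representation.

    # Illustrative sketch (assumptions, not the paper's exact algorithm): describe a
    # closed 2D path by its differential turning angles, then compare two paths under
    # cyclic shift so the score is unaffected by rotation or by the starting segment.
    import numpy as np

    def turning_angles(path):
        """Differential chain-code-style descriptor: turn angle at each vertex of a closed path."""
        closed = np.vstack([path, path[:1]])            # append the first point to close the loop
        seg = np.diff(closed, axis=0)                   # one segment per vertex
        ang = np.arctan2(seg[:, 1], seg[:, 0])
        dang = np.diff(np.append(ang, ang[0]))          # turn between consecutive segments
        return (dang + np.pi) % (2 * np.pi) - np.pi     # wrap to [-pi, pi)

    def cyclic_distance(desc_a, desc_b):
        """Minimum mean absolute difference over all cyclic shifts of one descriptor."""
        return min(np.mean(np.abs(desc_a - np.roll(desc_b, s))) for s in range(len(desc_b)))

    # Hypothetical example: the same rectangular path, rotated by 30 degrees and re-indexed.
    rect = np.array([[0, 0], [4, 0], [4, 2], [0, 2]], float)
    dense = np.concatenate([np.linspace(rect[i], rect[(i + 1) % 4], 10, endpoint=False)
                            for i in range(4)])
    theta = np.deg2rad(30)
    R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    rotated = np.roll(dense @ R.T, 7, axis=0)           # rotated, traversal starts elsewhere
    print(cyclic_distance(turning_angles(dense), turning_angles(rotated)))   # ~0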
{"title":"Intelligent assembly conformance verification for complex products: A rotationally invariant multi-view visual framework","authors":"Shengjie Jiang, Qijia Qian, Jianhong Liu, Pan Wang, Xiao Zhuang, Di Zhou, Weifang Sun, Jiawei Xiang","doi":"10.1016/j.rcim.2026.103247","DOIUrl":"https://doi.org/10.1016/j.rcim.2026.103247","url":null,"abstract":"Conformance verification in assembly processes is crucial for ensuring manufacturing quality, yet it is often challenged in real production environments by viewpoint variations and individual differences in operator behavior. This paper presents a rotation-invariant conformance verification framework for intelligent assembly, adopting a hybrid modeling paradigm that synergizes data-driven learning with geometric priors. By jointly integrating action recognition, temporal logic validation, and spatial path evaluation, the framework enables fine-grained assessment of deviations from standard operating procedures. The research develops a spatio-temporal-semantic triple-attention network to achieve adaptive, high-accuracy procedural-level action recognition in a data-driven manner. Then, a dynamic state-transition model is introduced to capture temporal violations by online updating of operation transition probabilities. By combining differential chain codes with cyclic shift normalization, the proposed geometry-guided trajectory representation method enables rotation-robust quantification of path deviations in critical assembly processes without requiring multi-view training data. Experiments on our WZU complex product assembly process dataset show that the proposed framework achieves 96.17% accuracy in violation detection, significantly outperforming CNN-LSTM (+10.39%), I3D (+1.02%), and MobileNetV3 (+1.24%), with an end-to-end inference latency under 50 ms, making it suitable for edge deployment. This work provides an efficient, interpretable, and viewpoint-invariant vision-based solution for assembly process monitoring in industrial applications.","PeriodicalId":21452,"journal":{"name":"Robotics and Computer-integrated Manufacturing","volume":"42 1","pages":""},"PeriodicalIF":10.4,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146098299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-31, DOI: 10.1016/j.rcim.2026.103245
Adaptive task planning and coordination in multi-agent manufacturing systems using large language models
Jonghan Lim, Jiabao Zhao, Ezekiel Hernandez, Ilya Kovalenko
As the demand for personalized products increases, manufacturing processes are becoming more complex due to greater variety and uncertainty in product requirements. Traditional manufacturing systems face challenges in adapting to product changes without manual interventions, leading to an increase in product delays and operational costs. Multi-agent manufacturing control systems, a decentralized framework consisting of collaborative agents, have been employed to enhance flexibility and adaptability in manufacturing. However, existing multi-agent system approaches are often initialized with predefined capabilities, limiting their ability to handle new requirements that were not modeled in advance. To address this challenge, this work proposes a large language model-enabled multi-agent framework that enables adaptive matching, translating new product requirements to manufacturing process control at runtime. A product agent, which is a decision-maker for a product, interprets unforeseen product requirements and matches with manufacturing capabilities by dynamically retrieving manufacturing knowledge during runtime. Communication strategies and a decision-making method are also introduced to facilitate adaptive task planning and coordination. The proposed framework was evaluated using an assembly task board testbed across three case studies of increasing complexity. Results demonstrate that the framework can process unforeseen product requirements into executable operations, dynamically discover manufacturing capabilities, and improve resource utilization.
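A conceptual sketch of the runtime matching step follows, with the LLM-based interpretation abstracted into a simple set-overlap heuristic. All agent names and capability labels are hypothetical; the sketch only conveys how a product agent could cover a new requirement from a capability registry discovered at runtime, not the paper's framework.

    # Conceptual sketch (hypothetical names; the paper performs interpretation and
    # matching with a large language model, abstracted here as set overlap).
    from dataclasses import dataclass, field

    @dataclass
    class ResourceAgent:
        name: str
        capabilities: set = field(default_factory=set)

    REGISTRY = [
        ResourceAgent("robot_cell_1", {"pick", "place", "screw_m3"}),
        ResourceAgent("robot_cell_2", {"pick", "place", "glue_dispense"}),
    ]

    def match_requirement(required_ops):
        """Return the resource agents that jointly cover a new product requirement."""
        plan, remaining = [], set(required_ops)
        for agent in REGISTRY:
            usable = remaining & agent.capabilities
            if usable:
                plan.append((agent.name, sorted(usable)))
                remaining -= usable
        return plan if not remaining else None      # None: requirement cannot be met

    print(match_requirement({"pick", "screw_m3", "glue_dispense"}))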
{"title":"Adaptive task planning and coordination in multi-agent manufacturing systems using large language models","authors":"Jonghan Lim , Jiabao Zhao , Ezekiel Hernandez , Ilya Kovalenko","doi":"10.1016/j.rcim.2026.103245","DOIUrl":"10.1016/j.rcim.2026.103245","url":null,"abstract":"<div><div>As the demand for personalized products increases, manufacturing processes are becoming more complex due to greater variety and uncertainty in product requirements. Traditional manufacturing systems face challenges in adapting to product changes without manual interventions, leading to an increase in product delays and operational costs. Multi-agent manufacturing control systems, a decentralized framework consisting of collaborative agents, have been employed to enhance flexibility and adaptability in manufacturing. However, existing multi-agent system approaches are often initialized with predefined capabilities, limiting their ability to handle new requirements that were not modeled in advance. To address this challenge, this work proposes a large language model-enabled multi-agent framework that enables adaptive matching, translating new product requirements to manufacturing process control at runtime. A product agent, which is a decision-maker for a product, interprets unforeseen product requirements and matches with manufacturing capabilities by dynamically retrieving manufacturing knowledge during runtime. Communication strategies and a decision-making method are also introduced to facilitate adaptive task planning and coordination. The proposed framework was evaluated using an assembly task board testbed across three case studies of increasing complexity. Results demonstrate that the framework can process unforeseen product requirements into executable operations, dynamically discover manufacturing capabilities, and improve resource utilization.</div></div>","PeriodicalId":21452,"journal":{"name":"Robotics and Computer-integrated Manufacturing","volume":"100 ","pages":"Article 103245"},"PeriodicalIF":11.4,"publicationDate":"2026-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146090305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-30, DOI: 10.1016/j.rcim.2026.103253
Integrating smart glasses and smart gloves in hybrid assembly/disassembly systems: an STPA-driven semi-automated risk management tool
Ali Karevan, Sylvie Nadeau
With the rise of Industry 5.0, wearables have become increasingly common in manufacturing, making effective risk management more critical than ever. Despite this trend, there remains a significant gap in research on the risks associated with the simultaneous use of multiple wearables, particularly in complex hybrid systems involving human operators. This study addresses this gap with an improved Systems-Theoretic Process Analysis combined with Particle Swarm Optimization (STPA-PSO) methodology. It introduces a circular, semi-automated methodology (incorporating mitigation measures) that can systematically identify, analyze, quantify, and mitigate risks, including those arising from human error, in the integration of multiple wearables. Three case studies, two assembly lines and one disassembly line, were used to test the effectiveness of the method. The findings indicate that increased interactions among system components can lead to elevated risk levels, and that highlighting hazardous areas, calibration regulations, and worker training are high-risk control-action scenarios whose risk must be reduced. The methodology supports a safer and more efficient integration of wearable technologies in human-centered manufacturing environments.
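As a minimal sketch of the PSO component, assuming a hypothetical residual-risk objective (the paper's STPA-derived risk model is not reproduced here), the following allocates a limited mitigation budget across three control-action scenarios; all constants are illustrative.

    # Minimal particle swarm optimization sketch over a hypothetical residual-risk objective.
    import numpy as np

    rng = np.random.default_rng(42)
    base_risk = np.array([8.0, 5.0, 6.5])              # hypothetical initial risk per scenario

    def residual_risk(x):
        # Diminishing returns of mitigation effort, plus a penalty if the budget (1.0) is exceeded.
        return np.sum(base_risk * np.exp(-3.0 * x)) + 50.0 * max(0.0, np.sum(x) - 1.0)

    n_particles, dim, iters = 30, 3, 200
    pos = rng.uniform(0, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([residual_risk(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)]

    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        vals = np.array([residual_risk(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]

    print("mitigation allocation:", np.round(gbest, 3),
          "residual risk:", round(residual_risk(gbest), 3))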
{"title":"Integrating smart glasses and smart gloves in hybrid assembly/disassembly systems: an STPA-driven semi-automated risk management tool","authors":"Ali Karevan, Sylvie Nadeau","doi":"10.1016/j.rcim.2026.103253","DOIUrl":"10.1016/j.rcim.2026.103253","url":null,"abstract":"<div><div>With the rise of Industry 5.0, wearables have become increasingly common in manufacturing, making effective risk management more critical than ever. Despite this trend, there remains a significant gap in research regarding the risks associated with the simultaneous use of multiple wearables, particularly in complex hybrid systems involving human operators. This study addresses this gap by using an improved Systems-Theoretic Process Analysis combined with Particle Swarm Optimization (STPA-PSO) methodology. Moreover, it introduces a circular, semi-automated methodology (incorporating mitigation measures) that can systematically identify, analyze, quantify, and mitigate risks, including those arising from human error, in the integration of multiple wearables. Three case studies, two assembly lines and one disassembly line, were tested to check the effectiveness of this method. The findings indicate that increased interactions among system components can lead to elevated risk levels. It demonstrates that highlighting the hazardous areas, calibration regulations, and training of workers are high-risk control action scenarios that need to be reduced. This methodology can provide a safer and more efficient integration of wearable technologies in human-centered manufacturing environments.</div></div>","PeriodicalId":21452,"journal":{"name":"Robotics and Computer-integrated Manufacturing","volume":"100 ","pages":"Article 103253"},"PeriodicalIF":11.4,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146089492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}