
Latest articles from Robotics and Computer-integrated Manufacturing

The visual-laser fusion measurement methodology for robotic in situ machining of large components with complex features
IF 10.4 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2026-02-10 | DOI: 10.1016/j.rcim.2026.103257
Yan Zheng, Wei Liu, Lei Han, Junqing Li, Hongguang Ding, Yang Zhang
Large components are core parts of high-end equipment and are characterized by their large overall dimensions and small, complex local features with high-precision requirements. These characteristics pose significant challenges to measurement and processing. To address the demand for measuring and machining the complex features of large components, this paper develops a cross-scale multi-sensor fusion robotic in situ measurement system based on vision and laser technologies. The system introduces a fusion algorithm based on image priors and point cloud enhancement, achieving the integration of multimodal measurement data. Through three-layer integration of equipment, algorithms, and data, the system attains high-precision measurements of both the global reconstruction and local features of large components, which provides essential data support for the comprehensive 3D reconstruction of large components and robotic in situ machining. Finally, the experimental results for the spacecraft component show that the robotic measurement system achieves a local measurement RMSE of no more than 0.015 mm and a global measurement average error of 0.985 mm. This represents an improvement of approximately 49.4% over the binocular vision measurement system, with an overall efficiency increase of more than 90%. The final machining precision meets the stringent finishing requirements of the aerospace industry.
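The reported accuracy figures are standard point-deviation statistics. As a rough illustration (not the paper's pipeline), the local RMSE and global average error can be computed from per-point deviations between a measured cloud and a reference cloud; the data below is synthetic and purely hypothetical:

```python
import numpy as np

# Hypothetical measured vs. reference point clouds (N x 3), stand-ins
# for the paper's local-feature and global-reconstruction comparisons.
rng = np.random.default_rng(0)
reference = rng.uniform(0, 100, size=(500, 3))
measured = reference + rng.normal(0, 0.01, size=(500, 3))  # ~10 um noise

# Per-point Euclidean deviation between the two clouds.
deviation = np.linalg.norm(measured - reference, axis=1)

rmse = float(np.sqrt(np.mean(deviation ** 2)))  # local-style RMSE
mean_error = float(np.mean(deviation))          # global-style average error
print(f"RMSE: {rmse:.4f} mm, mean error: {mean_error:.4f} mm")
```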
Citations: 0
Spatiotemporal perception and motion similarity learning for real-time assembly verification
IF 10.4 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2026-02-10 | DOI: 10.1016/j.rcim.2026.103258
Q. Ye, Y.Q. Niu, A.Y.C. Nee, S.K. Ong
The increasing demand for personalized products and operational flexibility in Industry 4.0 has exposed critical limitations in traditional quality assurance systems, particularly within human-in-the-loop manual assembly. To address these challenges, this paper introduces an integrated vision-based framework that combines spatiotemporal object detection and motion similarity assessment for the automated verification of procedural correctness. A multi-modal acquisition system, utilizing synchronized RGB and depth data, is employed to support precise 3D hand keypoint reconstruction. For process segmentation, STM-YOLO (Spatiotemporal Multi-View You Only Look Once) was developed. By reconfiguring standard detection heads into stage-transition classifiers, this model achieves an average accuracy of 87%, outperforming the single-view, single-frame baseline (S-S) by an absolute margin of 27.67%, and further validates the complementary contributions of both temporal modeling and multi-view perception modules to performance improvement. To assess operator action conformity without the need for gesture-level annotation, the Siamese Transformer Encoder Network (STEN) is introduced, evaluating 3D hand motion trajectories against gold-standard references. Validated on the filtered FPHA (First-Person Hand Action) benchmark, STEN attains 99.28% accuracy and an F1-score of 0.9858, surpassing classical approaches. The proposed system operates at over 10 fps on laboratory hardware and demonstrates robust generalization across diverse operators and tasks, offering a scalable solution for procedural verification in flexible manufacturing environments.
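A Siamese comparison of the kind STEN performs (the same encoder applied to both trajectories, then a distance in embedding space) can be sketched minimally; the linear "encoder", its weights, and the decision threshold below are hypothetical stand-ins for the learned transformer:

```python
import numpy as np

def encode(trajectory, weights):
    """Toy shared encoder: one linear layer + tanh, mean-pooled over time.
    Stand-in for STEN's learned transformer encoder."""
    return np.tanh(trajectory @ weights).mean(axis=0)

rng = np.random.default_rng(1)
weights = rng.normal(size=(3, 16))   # shared by both Siamese branches

# Hypothetical 3D hand trajectories (T x 3): a gold-standard reference
# and a slightly perturbed operator execution of the same motion.
golden = np.cumsum(rng.normal(size=(50, 3)), axis=0)
test = golden + rng.normal(0.0, 0.05, size=(50, 3))

# Siamese comparison: identical encoder on both inputs, then a distance.
d = float(np.linalg.norm(encode(golden, weights) - encode(test, weights)))
conforms = d < 0.5   # the threshold would be learned/validated in practice
print(f"embedding distance: {d:.3f}, conforms: {conforms}")
```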
Citations: 0
Co-optimized elasto-geometrical and hand-eye calibration of industrial robots with integrated dual laser profile scanners
IF 10.4 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2026-02-09 | DOI: 10.1016/j.rcim.2026.103252
Moien Reyhani, Christian Hartl-Nesic, Andreas Kugi
Hand-eye calibration is a fundamental prerequisite for vision-based applications in industrial robotics. While this issue is largely addressed for 3D cameras, it remains a challenge for Laser Profile Scanners (LPSs) due to their inherent 2D measurement limitations. Performing hand-eye calibration with an uncalibrated robot often results in inaccurate calibration outcomes. Traditional methods for calibrating the robot prior to hand-eye calibration using highly accurate optical systems are often prohibitively expensive and time-consuming, making them impractical for the high demands of modern production environments. This study proposes a novel methodology for co-optimized elasto-geometrical calibration of the robot, while modeling joint compliance, and the hand-eye calibration of a dual LPS mounted on the robot’s end-effector. The two LPSs are configured such that their laser lines intersect on the object, thereby forming an intersecting Laser Line System (LLS). This necessitates the calibration of the LPSs relative to each other, which constitutes eye-to-eye calibration in the context of multiple LPSs. Therefore, this work also introduces an approach for the eye-to-eye calibration of the LPSs. Both proposed approaches leverage the iterative closest point (ICP) algorithm within a bilevel optimization framework and utilize an artifact specifically designed for the fast and stable calibration of LPSs. Notably, these methods are extensible to multiple laser lines with diverse configurations and are applicable to analogous calibration artifacts. To the best of the authors’ knowledge, this is the first study to address the eye-to-eye calibration of LPSs and to explicitly incorporate joint stiffness effects into the hand-eye calibration of LPSs. 
The experimental results demonstrate that the proposed calibration strategy achieves a registration error of 0.154 mm when using a 3D-printed artifact, while employing a high-precision artifact yields improved accuracy, with a registration error as low as 0.068 mm.
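The point-to-point ICP step that both proposed approaches build on can be sketched in isolation (the bilevel optimization and the joint-compliance model are omitted); the artifact cloud and rigid motion below are synthetic:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch/SVD: least-squares R, t such that dst ~= src @ R.T + t."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dc - sc @ R.T

def icp(src, dst, iters=20):
    """Plain point-to-point ICP with brute-force nearest neighbors."""
    cur = src.copy()
    for _ in range(iters):
        # nearest-neighbor correspondences (O(N^2), fine for small clouds)
        idx = np.argmin(np.linalg.norm(cur[:, None] - dst[None], axis=2), axis=1)
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

# Synthetic "scan": a reference cloud under a small rigid motion.
rng = np.random.default_rng(2)
ref = rng.uniform(-1, 1, size=(200, 3))
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
scan = ref @ Rz.T + np.array([0.05, -0.02, 0.01])

aligned = icp(scan, ref)
err = float(np.linalg.norm(aligned - ref, axis=1).mean())
print(f"mean registration error after ICP: {err:.6f}")
```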
Citations: 0
Region-aware robotic polishing profiles generation of large complex components with Gaussian processes and differentiation
IF 10.4 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2026-02-07 | DOI: 10.1016/j.rcim.2026.103260
Zhongjun He, Hongmin Wu, Jia Pan, Zhaoyang Liao, Zhihao Xu, Xuefeng Zhou
Citations: 0
Synergy of giants and specialists: A large-and-small model integration framework driven by generative AI for smart manufacturing
IF 10.4 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2026-02-05 | DOI: 10.1016/j.rcim.2026.103254
Qingfeng Xu, Chao Zhang, Dongxu Ma, Yan Cao, Guanghui Zhou
Citations: 0
A review of applications of collaborative robot in welding and additive manufacturing
IF 10.4 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2026-02-05 | DOI: 10.1016/j.rcim.2026.103256
Mohammad Arjomandi, Tuhin Mukherjee
Citations: 0
Decentralised task planning and motion coordination for scalable multi-robot collaborative manufacturing
IF 10.4 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2026-02-04 | DOI: 10.1016/j.rcim.2026.103255
Hang Yang, Wenjun Xu, Duc Truong Pham, Lei Qi, Mengyuan Ba
Citations: 0
Intelligent assembly conformance verification for complex products: A rotationally invariant multi-view visual framework
IF 10.4 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2026-02-02 | DOI: 10.1016/j.rcim.2026.103247
Shengjie Jiang, Qijia Qian, Jianhong Liu, Pan Wang, Xiao Zhuang, Di Zhou, Weifang Sun, Jiawei Xiang
Conformance verification in assembly processes is crucial for ensuring manufacturing quality, yet it is often challenged in real production environments by viewpoint variations and individual differences in operator behavior. This paper presents a rotation-invariant conformance verification framework for intelligent assembly, adopting a hybrid modeling paradigm that synergizes data-driven learning with geometric priors. By jointly integrating action recognition, temporal logic validation, and spatial path evaluation, the framework enables fine-grained assessment of deviations from standard operating procedures. The research develops a spatio-temporal-semantic triple-attention network to achieve adaptive, high-accuracy procedural-level action recognition in a data-driven manner. Then, a dynamic state-transition model is introduced to capture temporal violations by online updating of operation transition probabilities. By combining differential chain codes with cyclic shift normalization, the proposed geometry-guided trajectory representation method enables rotation-robust quantification of path deviations in critical assembly processes without requiring multi-view training data. Experiments on our WZU complex product assembly process dataset show that the proposed framework achieves 96.17% accuracy in violation detection, significantly outperforming CNN-LSTM (+10.39%), I3D (+1.02%), and MobileNetV3 (+1.24%), with an end-to-end inference latency under 50 ms, making it suitable for edge deployment. This work provides an efficient, interpretable, and viewpoint-invariant vision-based solution for assembly process monitoring in industrial applications.
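The differential chain code with cyclic-shift normalization can be illustrated in a few lines. The closed loop below is a hypothetical trajectory, rotated by 90 degrees (a multiple of the 45-degree quantization step, for which the differencing cancels the rotation exactly) and re-started at a different sample:

```python
import numpy as np

def diff_chain_code(path, bins=8):
    """8-direction chain code of a closed 2D path, then cyclic first
    differences mod 8. Differencing cancels any rotation that is a
    multiple of the 45-degree quantization step."""
    closed = np.vstack([path, path[:1]])          # close the loop
    seg = np.diff(closed, axis=0)
    ang = np.arctan2(seg[:, 1], seg[:, 0])
    code = np.round(ang / (2 * np.pi / bins)).astype(int) % bins
    return (np.roll(code, -1) - code) % bins      # cyclic differences

def cyclic_min_distance(a, b):
    """Normalize start-point ambiguity: best match over cyclic shifts."""
    return min(int(np.sum(a != np.roll(b, s))) for s in range(len(b)))

# Hypothetical closed trajectory, then a rotated copy with a new start.
t = np.linspace(0, 2 * np.pi, 41, endpoint=False)
loop = np.c_[np.cos(t), np.sin(t)]
R90 = np.array([[0.0, -1.0], [1.0, 0.0]])         # rotate by 90 degrees
rotated = np.roll(loop @ R90.T, 7, axis=0)        # same loop, new start

d = cyclic_min_distance(diff_chain_code(loop), diff_chain_code(rotated))
print(f"cyclic chain-code distance: {d}")
```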
Citations: 0
Adaptive task planning and coordination in multi-agent manufacturing systems using large language models
IF 11.4 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2026-01-31 | DOI: 10.1016/j.rcim.2026.103245
Jonghan Lim , Jiabao Zhao , Ezekiel Hernandez , Ilya Kovalenko
As the demand for personalized products increases, manufacturing processes are becoming more complex due to greater variety and uncertainty in product requirements. Traditional manufacturing systems face challenges in adapting to product changes without manual interventions, leading to an increase in product delays and operational costs. Multi-agent manufacturing control systems, a decentralized framework consisting of collaborative agents, have been employed to enhance flexibility and adaptability in manufacturing. However, existing multi-agent system approaches are often initialized with predefined capabilities, limiting their ability to handle new requirements that were not modeled in advance. To address this challenge, this work proposes a large language model-enabled multi-agent framework that enables adaptive matching, translating new product requirements to manufacturing process control at runtime. A product agent, which is a decision-maker for a product, interprets unforeseen product requirements and matches with manufacturing capabilities by dynamically retrieving manufacturing knowledge during runtime. Communication strategies and a decision-making method are also introduced to facilitate adaptive task planning and coordination. The proposed framework was evaluated using an assembly task board testbed across three case studies of increasing complexity. Results demonstrate that the framework can process unforeseen product requirements into executable operations, dynamically discover manufacturing capabilities, and improve resource utilization.
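The runtime requirement-to-capability matching can be caricatured with token overlap standing in for the large language model; every resource name and capability string below is hypothetical:

```python
# Toy stand-in for the framework's runtime requirement-to-capability
# matching; the real system queries an LLM over dynamically retrieved
# manufacturing knowledge. All names and strings here are hypothetical.
capabilities = {
    "robot_arm_1": "pick place screw fasten small parts",
    "cnc_mill_2": "mill drill aluminum steel precision",
    "agv_3": "transport pallet material between stations",
}

def match(requirement):
    """Pick the resource whose capability description shares the most
    tokens with the product requirement (a crude proxy for LLM matching)."""
    req = set(requirement.lower().split())
    return max(capabilities, key=lambda r: len(req & set(capabilities[r].split())))

print(match("drill precision hole in aluminum plate"))  # → cnc_mill_2
```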
Citations: 0
Integrating smart glasses and smart gloves in hybrid assembly/disassembly systems: an STPA-driven semi-automated risk management tool
IF 11.4 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2026-01-30 | DOI: 10.1016/j.rcim.2026.103253
Ali Karevan, Sylvie Nadeau
With the rise of Industry 5.0, wearables have become increasingly common in manufacturing, making effective risk management more critical than ever. Despite this trend, there remains a significant gap in research regarding the risks associated with the simultaneous use of multiple wearables, particularly in complex hybrid systems involving human operators. This study addresses this gap by using an improved Systems-Theoretic Process Analysis combined with Particle Swarm Optimization (STPA-PSO) methodology. Moreover, it introduces a circular, semi-automated methodology (incorporating mitigation measures) that can systematically identify, analyze, quantify, and mitigate risks, including those arising from human error, in the integration of multiple wearables. Three case studies (two assembly lines and one disassembly line) were used to verify the effectiveness of this method. The findings indicate that increased interactions among system components can lead to elevated risk levels, and that hazardous-area marking, calibration regulation, and worker training are the high-risk control-action scenarios most in need of mitigation. This methodology can provide a safer and more efficient integration of wearable technologies in human-centered manufacturing environments.
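The PSO half of STPA-PSO is a standard particle swarm optimizer; below is a minimal sketch minimizing a purely illustrative quadratic "risk" surface, not the paper's risk model:

```python
import numpy as np

def pso(f, dim=2, n=30, iters=100, seed=3):
    """Minimal global-best particle swarm optimizer."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))              # particle positions
    v = np.zeros((n, dim))                        # particle velocities
    pbest = x.copy()                              # personal bests
    pbest_f = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_f)].copy()      # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        # inertia 0.7, cognitive and social coefficients 1.5 (textbook values)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(f(gbest))

# Purely illustrative "risk" surface with known minimum at (1, -2).
risk = lambda z: (z[0] - 1.0) ** 2 + (z[1] + 2.0) ** 2
best, best_risk = pso(risk)
print(f"best point: {best}, residual risk: {best_risk:.6f}")
```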