A scalable multi-agent deep reinforcement learning in thermoforming: An experimental evaluation of thermal control by infrared camera-based feedback

IF 6.1 · Tier 1, Engineering & Technology · Q1 ENGINEERING, MANUFACTURING · Journal of Manufacturing Processes · Pub Date: 2024-09-16 · DOI: 10.1016/j.jmapro.2024.09.019
{"title":"热成型中的可扩展多代理深度强化学习:基于红外摄像机反馈的热控制实验评估","authors":"","doi":"10.1016/j.jmapro.2024.09.019","DOIUrl":null,"url":null,"abstract":"<div><p>This manuscript presents the development of multi-agent Deep Reinforcement Learning (DRL) for radiation thermal control in thermoforming processes involving multiple heaters. The complexity of such control systems is characterized by significant action and state spaces, where the actions of all actuators collectively influence the system's output. This complexity introduces substantial challenges regarding the computational demands for offline training of learning-based algorithms and the online computational costs associated with a real-world controller deployment. The study presents a novel approach to training an adaptive and robust DRL agent system that can control a single heating element on the thermoplastic sheet while dynamically considering interactive effects from nearby heaters. Results demonstrated that upon deploying the pre-trained agent for each heater within the heater bank, the group of agents could then regulate the temperature of the sheet to any physically feasible output temperature profile. In contrast to the conventional DRL approach, where a single agent manages all heaters, the multi-agent DRL method boasted that an offline training process was 110 times faster, coupled with an 8 times reduction in the final error margin on the simulator. The experimental data, conducted on a laboratory-scale setup, confirmed the performance of the proposed model, with a final absolute error under 4 <span><math><msup><mrow></mrow><mo>°</mo></msup><mi>C</mi></math></span>. Regardless of the number of heaters, the multi-agent DRL approach exhibited accurate and robust performance. Its advantage was that it incurred no significant offline and online computational burden when the number of heating elements increased, deemed a promising notion for industrial-scale applications.</p></div>","PeriodicalId":16148,"journal":{"name":"Journal of Manufacturing Processes","volume":null,"pages":null},"PeriodicalIF":6.1000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A scalable multi-agent deep reinforcement learning in thermoforming: An experimental evaluation of thermal control by infrared camera-based feedback\",\"authors\":\"\",\"doi\":\"10.1016/j.jmapro.2024.09.019\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>This manuscript presents the development of multi-agent Deep Reinforcement Learning (DRL) for radiation thermal control in thermoforming processes involving multiple heaters. The complexity of such control systems is characterized by significant action and state spaces, where the actions of all actuators collectively influence the system's output. This complexity introduces substantial challenges regarding the computational demands for offline training of learning-based algorithms and the online computational costs associated with a real-world controller deployment. The study presents a novel approach to training an adaptive and robust DRL agent system that can control a single heating element on the thermoplastic sheet while dynamically considering interactive effects from nearby heaters. Results demonstrated that upon deploying the pre-trained agent for each heater within the heater bank, the group of agents could then regulate the temperature of the sheet to any physically feasible output temperature profile. 
In contrast to the conventional DRL approach, where a single agent manages all heaters, the multi-agent DRL method boasted that an offline training process was 110 times faster, coupled with an 8 times reduction in the final error margin on the simulator. The experimental data, conducted on a laboratory-scale setup, confirmed the performance of the proposed model, with a final absolute error under 4 <span><math><msup><mrow></mrow><mo>°</mo></msup><mi>C</mi></math></span>. Regardless of the number of heaters, the multi-agent DRL approach exhibited accurate and robust performance. Its advantage was that it incurred no significant offline and online computational burden when the number of heating elements increased, deemed a promising notion for industrial-scale applications.</p></div>\",\"PeriodicalId\":16148,\"journal\":{\"name\":\"Journal of Manufacturing Processes\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":6.1000,\"publicationDate\":\"2024-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Manufacturing Processes\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1526612524009241\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, MANUFACTURING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Manufacturing Processes","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1526612524009241","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MANUFACTURING","Score":null,"Total":0}
Citations: 0

Abstract

This manuscript presents the development of multi-agent Deep Reinforcement Learning (DRL) for radiative thermal control in thermoforming processes involving multiple heaters. The complexity of such control systems stems from their large action and state spaces, in which the actions of all actuators collectively influence the system's output. This complexity poses substantial challenges for the computational demands of offline training of learning-based algorithms and for the online computational costs of real-world controller deployment. The study presents a novel approach to training an adaptive and robust DRL agent that controls a single heating element acting on the thermoplastic sheet while dynamically accounting for interactive effects from nearby heaters. Results demonstrated that, once the pre-trained agent was deployed for each heater in the heater bank, the group of agents could regulate the temperature of the sheet to any physically feasible output temperature profile. In contrast to the conventional DRL approach, in which a single agent manages all heaters, the multi-agent DRL method achieved offline training that was 110 times faster, together with an 8-fold reduction in the final error on the simulator. Experiments conducted on a laboratory-scale setup confirmed the performance of the proposed model, with a final absolute error under 4 °C. Regardless of the number of heaters, the multi-agent DRL approach exhibited accurate and robust performance, and it incurred no significant additional offline or online computational burden as the number of heating elements increased, a promising property for industrial-scale applications.
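As a rough illustration of the control architecture described in the abstract, the sketch below shows how one pre-trained policy could be replicated across a bank of heaters, with each copy acting only on a local observation: the sheet temperatures it influences, the corresponding targets, and the latest commands of neighbouring heaters. This is a minimal sketch under stated assumptions, not the authors' implementation; all names (`HeaterAgent`, `LocalObservation`, `control_step`, `dummy_policy`) are hypothetical.

```python
# Minimal sketch (not the paper's code) of the per-heater multi-agent idea:
# one pre-trained policy is copied to every heater, and each copy maps a
# local observation to a power command for its own heating element.
from dataclasses import dataclass
from typing import Callable, List

import numpy as np


@dataclass
class LocalObservation:
    """What a single heater's agent sees at each control step."""
    zone_temps: np.ndarray       # IR-camera temperatures of the sheet zone above this heater
    target_temps: np.ndarray     # desired temperatures for that same zone
    neighbor_powers: np.ndarray  # most recent power commands of adjacent heaters


class HeaterAgent:
    """One copy of the pre-trained policy, controlling a single heating element."""

    def __init__(self, policy: Callable[[np.ndarray], float]):
        self.policy = policy  # e.g. a small neural network trained offline

    def act(self, obs: LocalObservation) -> float:
        features = np.concatenate([
            obs.target_temps - obs.zone_temps,  # local tracking error
            obs.neighbor_powers,                # coupling with nearby heaters
        ])
        return float(self.policy(features))     # power command for this heater


def control_step(agents: List[HeaterAgent],
                 observations: List[LocalObservation]) -> np.ndarray:
    """One closed-loop step: every heater is driven by its own agent copy.

    Adding heaters only adds more copies of the same policy, which is why the
    online cost grows gently with the number of heating elements.
    """
    return np.array([agent.act(obs) for agent, obs in zip(agents, observations)])


if __name__ == "__main__":
    # Example: six heaters sharing one (dummy) pre-trained policy.
    def dummy_policy(x: np.ndarray) -> float:
        return float(np.clip(0.5 + 0.01 * x.sum(), 0.0, 1.0))

    agents = [HeaterAgent(dummy_policy) for _ in range(6)]
    obs = [LocalObservation(zone_temps=np.full(4, 120.0),
                            target_temps=np.full(4, 160.0),
                            neighbor_powers=np.zeros(2))
           for _ in range(6)]
    print(control_step(agents, obs))  # one power command per heater
```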

Source journal: Journal of Manufacturing Processes (Engineering, Manufacturing)
CiteScore: 10.20
Self-citation rate: 11.30%
Articles published: 833
Review time: 50 days
About the journal: The aim of the Journal of Manufacturing Processes (JMP) is to exchange current and future directions of manufacturing processes research, development and implementation, and to publish archival scholarly literature with a view to advancing state-of-the-art manufacturing processes and encouraging innovation for developing new and efficient processes. The journal will also publish from other research communities for rapid communication of innovative new concepts. Special-topic issues on emerging technologies and invited papers will also be published.
Latest articles in this journal:
- Achieving high thermal conductivity joining of Cf/C and Haynes 230 by using Cu-Mo30Cu-Ti composite foil as thermal interface material
- Examining the impact of tool taper angle in Al-Si tube manufacturing by friction stir extrusion
- A theoretical calculation method for asymmetric active counter-roller spinning force by combining strain electrical measurement and simulation
- Laser powder bed fusion processing of plasma atomized AlSi10Mg powder: Surface roughness and mechanical properties modification
- Control of hole rolling on 3D Servo Presses