Deep reinforcement learning-assisted extended state observer for run-to-run control in the semiconductor manufacturing process

Zhu Ma, Tianhong Pan
{"title":"Deep reinforcement learning-assisted extended state observer for run-to-run control in the semiconductor manufacturing process","authors":"Zhu Ma, Tianhong Pan","doi":"10.1177/01423312241229492","DOIUrl":null,"url":null,"abstract":"In the semiconductor manufacturing process, extended state observer (ESO)-based run-to-run (RtR) control is an intriguing solution. Although an ESO-RtR control strategy can effectively compensate for the lumped disturbance, appropriate gains are required. In this article, a cutting-edge deep reinforcement learning (DRL) technique is integrated into ESO-RtR, and a composite control framework of DRL-ESO-RtR is developed. In particular, the well-trained DRL agent serves as an assisted controller, which produces appropriate gains of ESO. The optimized ESO then presents a preferable control recipe for the manufacturing process. Under the RtR framework, the gain adjustment problem of ESO is formulated as a Markov decision process. An efficient state space and reward function are wisely designed using the system’s observable information. Correspondingly, the gain of the ESO is adaptively adjusted to cope with changing environmental disturbances. Finally, a twin-delayed deep deterministic policy gradient algorithm is employed to implement the suggested scheme. The feasibility and superiority of the developed method are validated in a deep reactive ion etching process. Comparative results demonstrate that the presented scheme outperforms the ordinary ESO-RtR controller in terms of disturbance rejection.","PeriodicalId":507087,"journal":{"name":"Transactions of the Institute of Measurement and Control","volume":"42 36","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transactions of the Institute of Measurement and Control","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/01423312241229492","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In semiconductor manufacturing, extended state observer (ESO)-based run-to-run (RtR) control is an attractive solution. Although an ESO-RtR strategy can effectively compensate for the lumped disturbance, it requires appropriate observer gains. In this article, a deep reinforcement learning (DRL) technique is integrated into ESO-RtR, yielding a composite DRL-ESO-RtR control framework. The trained DRL agent serves as an assisting controller that produces suitable ESO gains; the resulting ESO then delivers an improved control recipe for the manufacturing process. Under the RtR framework, the gain-adjustment problem of the ESO is formulated as a Markov decision process, with the state space and reward function designed from the system's observable information. The ESO gains are thereby adapted run by run to cope with changing environmental disturbances. Finally, a twin-delayed deep deterministic policy gradient (TD3) algorithm is employed to implement the suggested scheme. The feasibility and superiority of the method are validated on a deep reactive ion etching process, where comparative results show that the proposed scheme outperforms an ordinary ESO-RtR controller in disturbance rejection.
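To make the ESO-RtR idea concrete, below is a minimal sketch of an observer-based run-to-run loop for a first-order batch process y_k = b·u_k + d_k, where d_k is the lumped disturbance. The model structure, gain values (l1, l2), target, and drift profile are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

class ESORtR:
    """Minimal ESO-based run-to-run controller sketch (assumed process model)."""

    def __init__(self, b, l1, l2, target):
        self.b = b        # nominal process gain (assumed known)
        self.l1 = l1      # observer gain on the output estimate
        self.l2 = l2      # observer gain on the disturbance estimate
        self.r = target   # desired output (recipe target)
        self.z1 = 0.0     # estimate of the output y_k
        self.z2 = 0.0     # estimate of the lumped disturbance d_k

    def update(self, y_meas, u_prev):
        """Correct the observer states with the latest run's measurement."""
        e = y_meas - self.z1
        self.z1 = self.b * u_prev + self.z2 + self.l1 * e
        self.z2 = self.z2 + self.l2 * e

    def recipe(self):
        """Next-run input: invert the nominal model and cancel the estimated disturbance."""
        return (self.r - self.z2) / self.b

# One simulated campaign with a drifting, noisy disturbance (illustrative)
rng = np.random.default_rng(0)
ctrl = ESORtR(b=1.2, l1=0.6, l2=0.3, target=100.0)
u = ctrl.recipe()
for k in range(50):
    d = 0.5 * k + rng.normal(0.0, 0.5)  # drift plus noise
    y = 1.2 * u + d                     # run the true process once
    ctrl.update(y, u)
    u = ctrl.recipe()
```

As the disturbance estimate z2 converges toward d_k, the applied recipe y = b·u + d approaches the target r; how fast and how robustly this happens is governed by the gains l1 and l2, which is exactly what motivates tuning them with a DRL agent.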
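The gain-adjustment MDP can likewise be sketched as a `gymnasium` environment trained with the TD3 implementation from `stable-baselines3`: the action is the gain pair (l1, l2), the observation collects recent tracking information, and the reward penalizes the squared run-to-run error. The observation layout, reward shape, and disturbance model below are assumptions for illustration; the paper's exact state-space and reward design are not reproduced here.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import TD3  # assumes stable-baselines3 >= 2.0 is installed

class ESOGainEnv(gym.Env):
    """Gain-tuning MDP sketch: action = ESO gains (l1, l2), reward = -error^2."""

    def __init__(self, b=1.2, target=100.0, horizon=50):
        super().__init__()
        self.b, self.r, self.horizon = b, target, horizon
        self.action_space = spaces.Box(0.01, 1.0, shape=(2,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(3,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.k, self.z1, self.z2 = 0, 0.0, 0.0
        self.u = self.r / self.b   # nominal first recipe
        self.e_prev = 0.0
        return self._obs(), {}

    def _obs(self):
        # Assumed state: last tracking error, disturbance estimate, progress
        return np.array([self.e_prev, self.z2, self.k / self.horizon], dtype=np.float32)

    def step(self, action):
        l1, l2 = float(action[0]), float(action[1])
        d = 0.5 * self.k + self.np_random.normal(0.0, 0.5)  # drifting disturbance
        y = self.b * self.u + d                             # one process run
        e_obs = y - self.z1
        self.z1 = self.b * self.u + self.z2 + l1 * e_obs    # ESO correction
        self.z2 = self.z2 + l2 * e_obs
        self.u = (self.r - self.z2) / self.b                # next recipe
        self.e_prev = y - self.r
        self.k += 1
        reward = -self.e_prev ** 2
        return self._obs(), reward, False, self.k >= self.horizon, {}

env = ESOGainEnv()
model = TD3("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=20_000)
```

After training, `model.predict(obs)` returns the gain pair for the current run, so the agent plays the "assisting controller" role described above while the ESO-RtR loop keeps generating the actual recipe.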