Process control of mAb production using multi-actor proximal policy optimization

Digital Chemical Engineering | IF 3.0 | Q2 (Engineering, Chemical) | Pub Date: 2023-09-01 | DOI: 10.1016/j.dche.2023.100108
Nikita Gupta, Shikhar Anand, Tanuja Joshi, Deepak Kumar, Manojkumar Ramteke, Hariprasad Kodamana
{"title":"基于多因素近端策略优化的单克隆抗体生产过程控制","authors":"Nikita Gupta ,&nbsp;Shikhar Anand ,&nbsp;Tanuja Joshi ,&nbsp;Deepak Kumar ,&nbsp;Manojkumar Ramteke ,&nbsp;Hariprasad Kodamana","doi":"10.1016/j.dche.2023.100108","DOIUrl":null,"url":null,"abstract":"<div><p>Monoclonal antibodies (mAb) are biopharmaceutical products that improve human immunity. In this work, we propose a multi-actor proximal policy optimization-based reinforcement learning (RL) for the control of mAb production. Here, manipulated variable is flowrate and the control variable is mAb concentration. Based on root mean square error (RMSE) values and convergence performance, it has been observed that multi-actor PPO has performed better as compared to other RL algorithms. It is observed that PPO predicts a 40 % reduction in the number of days to reach the desired concentration. Moreover, the performance of PPO is improved as the number of actors increases. PPO agent shows the best performance with three actors, but on further increasing, its performance deteriorated. These results are verified based on three case studies, namely, (i) for nominal conditions, (ii) in the presence of noise in raw materials and measurements, and (iii) in the presence of stochastic disturbance in temperature and noise in measurements. The results indicate that the proposed approach outperforms the deep deterministic policy gradient (DDPG), twin delayed deep deterministic policy gradient (TD3), and proximal policy optimization (PPO) algorithms for the control of the bioreactor system.</p></div>","PeriodicalId":72815,"journal":{"name":"Digital Chemical Engineering","volume":"8 ","pages":"Article 100108"},"PeriodicalIF":3.0000,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Process control of mAb production using multi-actor proximal policy optimization\",\"authors\":\"Nikita Gupta ,&nbsp;Shikhar Anand ,&nbsp;Tanuja Joshi ,&nbsp;Deepak Kumar ,&nbsp;Manojkumar Ramteke ,&nbsp;Hariprasad Kodamana\",\"doi\":\"10.1016/j.dche.2023.100108\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Monoclonal antibodies (mAb) are biopharmaceutical products that improve human immunity. In this work, we propose a multi-actor proximal policy optimization-based reinforcement learning (RL) for the control of mAb production. Here, manipulated variable is flowrate and the control variable is mAb concentration. Based on root mean square error (RMSE) values and convergence performance, it has been observed that multi-actor PPO has performed better as compared to other RL algorithms. It is observed that PPO predicts a 40 % reduction in the number of days to reach the desired concentration. Moreover, the performance of PPO is improved as the number of actors increases. PPO agent shows the best performance with three actors, but on further increasing, its performance deteriorated. These results are verified based on three case studies, namely, (i) for nominal conditions, (ii) in the presence of noise in raw materials and measurements, and (iii) in the presence of stochastic disturbance in temperature and noise in measurements. 
The results indicate that the proposed approach outperforms the deep deterministic policy gradient (DDPG), twin delayed deep deterministic policy gradient (TD3), and proximal policy optimization (PPO) algorithms for the control of the bioreactor system.</p></div>\",\"PeriodicalId\":72815,\"journal\":{\"name\":\"Digital Chemical Engineering\",\"volume\":\"8 \",\"pages\":\"Article 100108\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2023-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Digital Chemical Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2772508123000261\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, CHEMICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital Chemical Engineering","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772508123000261","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, CHEMICAL","Score":null,"Total":0}
Citations: 0

Abstract

Monoclonal antibodies (mAb) are biopharmaceutical products that improve human immunity. In this work, we propose a multi-actor proximal policy optimization (PPO)-based reinforcement learning (RL) approach for the control of mAb production. Here, the manipulated variable is the flowrate and the controlled variable is the mAb concentration. Based on root mean square error (RMSE) values and convergence performance, the multi-actor PPO performs better than the other RL algorithms considered. The PPO controller is also predicted to reduce the number of days needed to reach the desired concentration by 40%. Moreover, the performance of PPO improves as the number of actors increases: the agent performs best with three actors, while performance deteriorates when the number is increased further. These results are verified in three case studies, namely (i) nominal conditions, (ii) noise in raw materials and measurements, and (iii) stochastic disturbance in temperature together with measurement noise. The results indicate that the proposed approach outperforms the deep deterministic policy gradient (DDPG), twin delayed deep deterministic policy gradient (TD3), and single-actor proximal policy optimization (PPO) algorithms for control of the bioreactor system.
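The abstract frames the task as setpoint tracking: the flowrate is manipulated so that the mAb concentration reaches a desired value, with RMSE as the reported metric. The sketch below illustrates that formulation as a minimal RL environment; the one-state dynamics, parameter values, and the `MabBioreactorEnv` name are illustrative assumptions, not the bioreactor model used in the paper.

```python
import numpy as np

class MabBioreactorEnv:
    """Minimal setpoint-tracking environment: the action is the flowrate and
    the reward penalizes deviation of the mAb concentration from its setpoint."""

    def __init__(self, setpoint=5.0, dt=1.0, horizon=50, meas_noise_std=0.0):
        self.setpoint = setpoint              # target mAb concentration (illustrative units)
        self.dt = dt                          # time step, e.g. one day
        self.horizon = horizon                # episode length in steps
        self.meas_noise_std = meas_noise_std  # measurement noise (case studies ii and iii)
        self.rng = np.random.default_rng(0)

    def reset(self):
        self.t = 0
        self.conc = 0.1                       # initial mAb concentration
        return self._observe()

    def _observe(self):
        measured = self.conc + self.rng.normal(0.0, self.meas_noise_std)
        # Observation: measured concentration and current tracking error.
        return np.array([measured, self.setpoint - measured], dtype=np.float32)

    def step(self, flowrate):
        # Hypothetical first-order dynamics: the flowrate drives mAb production,
        # while a loss term pulls the concentration back down.
        u = float(np.clip(flowrate, 0.0, 1.0))
        self.conc += self.dt * (0.6 * u - 0.05 * self.conc)
        self.t += 1
        reward = -(self.conc - self.setpoint) ** 2   # squared tracking error
        done = self.t >= self.horizon
        return self._observe(), reward, done
```

With these illustrative constants the steady-state concentration is 12 times the (clipped) flowrate, so the setpoint of 5.0 is reachable and an RL agent has a well-posed tracking task; the RMSE reported in the paper would correspond to the root mean square of the tracking error accumulated over such episodes.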
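For the algorithm itself, the abstract only states that several PPO actors are used and that three actors work best. The sketch below shows one plausible reading: multiple independently initialized actor networks sharing a single critic and optimizer, trained with the standard clipped PPO surrogate. Class and method names are hypothetical, and the paper's actual multi-actor architecture may differ.

```python
import torch
import torch.nn as nn

def make_actor(obs_dim, act_dim):
    # Small Gaussian-mean policy network; each actor gets its own copy.
    return nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))

class MultiActorPPO:
    """Several PPO actors sharing one critic (an assumed reading of 'multi-actor')."""

    def __init__(self, obs_dim, act_dim, n_actors=3, clip=0.2, lr=3e-4):
        self.actors = [make_actor(obs_dim, act_dim) for _ in range(n_actors)]
        self.log_std = torch.zeros(act_dim, requires_grad=True)   # shared exploration noise
        self.critic = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))
        params = [p for a in self.actors for p in a.parameters()]
        params += [self.log_std, *self.critic.parameters()]
        self.opt = torch.optim.Adam(params, lr=lr)
        self.clip = clip

    def _dist(self, actor, obs):
        return torch.distributions.Normal(actor(obs), self.log_std.exp())

    def act(self, actor_idx, obs):
        # obs is a torch tensor of shape (obs_dim,).
        with torch.no_grad():
            dist = self._dist(self.actors[actor_idx], obs)
            action = dist.sample()
            return action, dist.log_prob(action).sum(-1)

    def update(self, actor_idx, obs, actions, old_logp, returns):
        """One clipped-surrogate step on a rollout batch collected by one actor."""
        dist = self._dist(self.actors[actor_idx], obs)
        logp = dist.log_prob(actions).sum(-1)
        values = self.critic(obs).squeeze(-1)
        adv = returns - values.detach()
        adv = (adv - adv.mean()) / (adv.std() + 1e-8)     # normalized advantages
        ratio = (logp - old_logp).exp()
        clipped = torch.clamp(ratio, 1 - self.clip, 1 + self.clip)
        policy_loss = -torch.min(ratio * adv, clipped * adv).mean()
        value_loss = (returns - values).pow(2).mean()
        loss = policy_loss + 0.5 * value_loss
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        return loss.item()
```

In a training loop, each of the three actors would collect rollouts from an environment such as the one sketched above (converting observations with `torch.as_tensor`) and call `update` on its own batch, so the actors explore differently while the shared critic sees all of the data; this is one way the reported gain from adding actors, up to three, could arise.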
