Pub Date : 2025-09-24 | DOI: 10.1016/j.compchemeng.2025.109348
Zhouchang Li, Runze Lin, Hongye Su, Lei Xie
In the era of smart manufacturing and Industry 4.0, the refining industry is evolving towards large-scale integration and flexible production systems. In response to these new demands, this paper presents a novel optimization framework for plant-wide refinery planning, integrating model decomposition with deep reinforcement learning. The approach decomposes the complex large-scale refinery optimization problem into manageable submodels, improving computational efficiency while preserving accuracy. A reinforcement learning-based pricing mechanism is introduced to generate pricing strategies for intermediate products, facilitating better coordination between submodels and enabling rapid responses to market changes. Two industrial case studies, covering both single-period and multi-period refinery planning, demonstrate significant improvements in computational efficiency while ensuring refinery profitability.
Title: Reinforcement learning-driven plant-wide refinery planning using model decomposition (Computers & Chemical Engineering, vol. 204, Article 109348)
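The coordination idea described above — pricing an intermediate product so that decomposed submodels agree — can be sketched in miniature. This is a hypothetical illustration, not the paper's method: the paper learns a pricing policy with deep reinforcement learning, while here a simple subgradient price update stands in for it, with made-up quadratic cost and value functions.

```python
# Minimal sketch of price-based coordination between two refinery submodels.
# An upstream unit sells an intermediate product to a downstream unit; a
# coordinator adjusts the transfer price until supply matches demand.

def upstream_supply(price, marginal_cost_slope=1.0):
    """Profit-maximizing supply for a quadratic production cost 0.5*a*q^2."""
    return price / marginal_cost_slope

def downstream_demand(price, product_value=10.0, slope=1.0):
    """Profit-maximizing demand for a concave value v*q - 0.5*b*q^2."""
    return max(0.0, (product_value - price) / slope)

def coordinate(price=0.0, step=0.1, iters=200):
    """Raise the price when demand exceeds supply, lower it otherwise."""
    for _ in range(iters):
        gap = downstream_demand(price) - upstream_supply(price)
        price += step * gap
    return price

price = coordinate()
# With these parameters the market-clearing price is 5.0, where both
# submodels agree on a transfer quantity of 5.0.
```

An RL pricing agent replaces the fixed-step update with a learned policy, which is what lets the coordination react quickly when market prices shift.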
Pub Date : 2025-09-23 | DOI: 10.1016/j.compchemeng.2025.109407
Hritu Raj, Gargi Srivastava
Gas leak detection is a critical task for environmental and industrial safety, often facilitated through imaging techniques such as Mask R-CNN. However, accurately segmenting gas plumes remains challenging due to their dynamic nature and complex background. In this study, we propose a novel approach to improve gas leak plume segmentation accuracy by combining Mask R-CNN with augmented bit plane images. Initially trained on a dataset of 1000 gas leak images, our model, utilizing a ResNet101 backbone, achieved a commendable F1-Score of 95.6%, outperforming MobileNetV2 and DenseNet169. Through the incorporation of a novel bit plane image augmentation strategy, specifically focusing on the XOR combination of bit planes 4 and 5, the ResNet101 model’s F1-Score significantly improved to 98.7%, showcasing the effectiveness of our approach in enriching the training data and enhancing the model’s ability to generalize to unseen instances. This bit plane augmentation method also demonstrated superior performance compared to other mainstream image enhancement techniques like CLAHE and Gamma correction. These findings suggest promising implications for improving gas leak detection systems, thereby contributing to enhanced safety measures in various industrial and environmental settings, with considerations for real-time industrial deployment.
Title: A novel data augmentation strategy for gas leak detection and segmentation using Mask R-CNN and bit plane slicing in chemical process environments (Computers & Chemical Engineering, vol. 204, Article 109407)
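The core augmentation operation — XOR of bit planes 4 and 5 of an 8-bit image — can be sketched as follows. This is an illustrative reading of the technique, assuming bit 0 denotes the least significant bit; the paper's full pipeline (Mask R-CNN training on the augmented set) is not reproduced here.

```python
# Sketch of the bit plane XOR augmentation: for an 8-bit grayscale image,
# extract bit planes 4 and 5, XOR them, and rescale the binary result to a
# full-range 0/255 image that can enrich the training set.
import numpy as np

def bitplane_xor(image: np.ndarray, plane_a: int = 4, plane_b: int = 5) -> np.ndarray:
    """Return the XOR of two bit planes, scaled back to 0/255."""
    a = (image >> plane_a) & 1
    b = (image >> plane_b) & 1
    return ((a ^ b) * 255).astype(np.uint8)

img = np.array([[0, 16], [32, 48]], dtype=np.uint8)
aug = bitplane_xor(img)
# 16 has only bit 4 set and 32 has only bit 5 set, so both map to 255;
# 0 has neither bit and 48 has both, so both map to 0.
```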
Pub Date : 2025-09-23 | DOI: 10.1016/j.compchemeng.2025.109416
Jean-Marc Commenge, Andres Piña-Martinez
Process synthesis using evolutionary methods, based on the iterative application of mutation operators, requires initializing the method with one flowsheet or a set of process flowsheets. Appropriate initialization can reduce computation times by providing first proposals that decrease the number of mutations needed to reach optimal structures, in terms of units and connectivity. This work illustrates how to identify, from a given database of flowsheets, those that might play a pivotal role in the subsequent evolutionary synthesis. An in-house database of over 2000 flowsheets, digitized from 800 recent scientific publications, is used, exhibiting the variety of possible structures from single distillation columns to biorefinery layouts. Selection of initialization flowsheets should ensure diversity in structures and units while minimizing the number of mutations needed to evolve to any other process flowsheet. A distance function is defined as the minimum number of mutations required to transform one flowsheet into another, and is computed for all pairs of flowsheets in the database, enabling comparison of their topologies and a quantitative analysis of the population. Four sampling strategies are compared, considering centrality criteria, sampling flowsheets in groups of similar structures, random sampling, and k-medoids clustering. For each strategy, the distribution of distances from the selected structures to the database population and their diversity are compared. Centrality-based selection minimizes the required number of mutations but shows poor unit diversity. Selection from distinct groups of similar structures improves performance only for distant flowsheets. Random sampling ensures diversity but performs poorly in reducing required mutations. Conversely, k-medoids sampling performs well in both the number of required mutations and the diversity of selected flowsheets, making it a balanced method for flowsheet sampling.
The initialization strategies are applied to a case study of benzene chlorination, and their fitness and diversity are monitored across the generations of the evolutionary synthesis.
Title: Data-driven initialization of evolutionary methods for process synthesis considering centrality and diversity criteria (Computers & Chemical Engineering, vol. 204, Article 109416)
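The mutation-distance and k-medoids ideas above can be sketched on a toy scale. The encoding here is an assumption for illustration (a flowsheet as a set of units plus a set of directed connections, distance as the count of single add/remove mutations, i.e. the symmetric-difference size), not the paper's exact representation, and the medoid search is exhaustive, which only works for tiny databases.

```python
# Illustrative flowsheet distance and k-medoids selection on a 3-entry
# hypothetical database: reactor only, reactor + column, reactor + 2 columns.
from itertools import combinations

def distance(fs_a, fs_b):
    """Add/remove mutations needed to turn one flowsheet into the other."""
    units_a, conns_a = fs_a
    units_b, conns_b = fs_b
    return len(units_a ^ units_b) + len(conns_a ^ conns_b)

def k_medoids_exhaustive(flowsheets, k):
    """Choose the k flowsheets minimizing total distance to nearest medoid."""
    best, best_cost = None, float("inf")
    for medoids in combinations(range(len(flowsheets)), k):
        cost = sum(min(distance(fs, flowsheets[m]) for m in medoids)
                   for fs in flowsheets)
        if cost < best_cost:
            best, best_cost = medoids, cost
    return best, best_cost

db = [
    (frozenset({"R1"}), frozenset()),
    (frozenset({"R1", "C1"}), frozenset({("R1", "C1")})),
    (frozenset({"R1", "C1", "C2"}), frozenset({("R1", "C1"), ("C1", "C2")})),
]
# The middle flowsheet is the single medoid: it is 2 mutations from each
# neighbor, while the end flowsheets are 4 mutations apart.
```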
Pub Date : 2025-09-22 | DOI: 10.1016/j.compchemeng.2025.109388
Anita L. Ziegler , Marc-Daniel Stumm , Tim Prömper , Thomas Steimann , Jørgen Magnus , Alexander Mitsos
When developing a biotechnological process, the microorganism is first designed, e.g., using metabolic engineering. Then, the optimum fermentation parameters are determined on a laboratory scale, and lastly, they are transferred to the bioreactor scale. However, this step-by-step approach is costly and time-consuming and may result in suboptimal configurations. Herein, we present the bilevel optimization formulation SimulKnockReactor, which connects bioreactor design with microbial strain design, an extension of our previous formulation, SimulKnock (Ziegler et al., 2024). At the upper (bioreactor) level, we minimize investment and operation costs for agitation, aeration, and pH control by determining the size and operating conditions of a continuous stirred-tank reactor—without selecting specific devices like the stirrer type. The lower (cellular) level is based on flux balance analysis and implements optimal reaction knockouts predicted by the upper level. Our results with a core and a genome-scale metabolic model of Escherichia coli show that the substrate is the largest cost factor. Our simultaneous approach outperforms a sequential approach using OptKnock. Namely, the knockouts proposed by OptKnock cannot guarantee the required production capacity in all cases considered. SimulKnockReactor, on the other hand, provides solutions in all cases considered, highlighting the advantage of combining cellular and bioreactor levels. This work is a further step towards a fully integrated bioprocess design.
Title: Simultaneous design of microbe and bioreactor (Computers & Chemical Engineering, vol. 204, Article 109388)
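The lower-level building block above, flux balance analysis with reaction knockouts, can be sketched on a toy network far smaller than the E. coli models used in the paper. This is not SimulKnockReactor; it is a three-reaction linear pathway (uptake v1, conversion v2, biomass v3) with made-up bounds, solved as a linear program.

```python
# Toy flux balance analysis: maximize the biomass flux v3 subject to the
# steady-state constraint S v = 0, then "knock out" the conversion reaction
# v2 by closing its flux bounds.
import numpy as np
from scipy.optimize import linprog

S = np.array([[1, -1, 0],    # metabolite A: made by v1, consumed by v2
              [0, 1, -1]])   # metabolite B: made by v2, consumed by v3

def fba(bounds):
    """Return the optimal biomass flux for the given reaction bounds."""
    res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
    return res.x[2]

wild_type = fba([(0, 10), (0, 100), (0, 100)])   # uptake capped at 10
knockout = fba([(0, 10), (0, 0), (0, 100)])      # v2 knocked out
# wild_type -> 10.0 (growth limited by uptake); knockout -> 0.0
```

A bilevel formulation like the one in the paper would choose which bounds to close at the upper level while this inner problem predicts the resulting fluxes.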
Pub Date : 2025-09-22 | DOI: 10.1016/j.compchemeng.2025.109405
Oguzhan Dogru, Mahmut Berat Tatlici, Biao Huang
In the process industry, smart automation of complex operations has great potential for efficient and safe operation, making it a key component for unlocking economic and sustainable large-scale production. However, real-world process units such as primary separation vessels (PSVs) pose numerous challenges, such as sensory uncertainty, nonlinear dynamics, and operational variability. This study introduces a novel autonomous control framework integrating model predictive control (MPC), reinforcement learning (RL), and state estimation techniques for building an adaptive, optimal, and safe control strategy. The proposed framework is demonstrated in a real-world scenario using a bench-scale experimental setup of the PSV that mimics the actual process. The implemented closed-loop control system accurately predicted a crucial process variable, optimized the operating point in real time, and achieved robust set-point tracking performance by tuning the controller for real process conditions. The results indicate that incorporating adaptive and data-driven techniques such as reinforcement learning into feedback control approaches is promising for building robust autonomous control strategies that maximize efficiency while respecting physical constraints, paving the way for autonomous control systems that are deployable in complex real-world scenarios.
Title: Reinforcement learning-based autonomous control of bench-scale primary separation vessel (Computers & Chemical Engineering, vol. 204, Article 109405)
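The receding-horizon ingredient of such a framework can be sketched on a deliberately simple surrogate. This is only a stand-in for the paper's MPC/RL/state-estimation stack: a scalar first-order model x+ = a*x + b*u plays the role of the interface level, and the one-step quadratic objective has a closed-form minimizer.

```python
# Minimal receding-horizon control sketch: at each step, pick the input u
# minimizing (a*x + b*u - sp)^2 + r*u^2, whose closed-form solution is
# u = b*(sp - a*x) / (b*b + r), then apply it to the plant and repeat.

def control_step(x, sp, a=0.9, b=0.5, r=0.01):
    """One-step-ahead optimal input for the scalar plant x+ = a*x + b*u."""
    return b * (sp - a * x) / (b * b + r)

def simulate(x0=0.0, sp=1.0, steps=50, a=0.9, b=0.5):
    x = x0
    for _ in range(steps):
        u = control_step(x, sp, a=a, b=b)
        x = a * x + b * u   # noise-free plant update in this sketch
    return x

final_level = simulate()
# The level settles near the set point 1.0 (a small offset remains because
# the input penalty r is nonzero).
```

An RL layer, as in the paper, would adapt quantities such as the set point or the penalty weights from data rather than fixing them offline.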
Pub Date : 2025-09-22 | DOI: 10.1016/j.compchemeng.2025.109397
Xuekun Wang , Zhaozhuang Guo , Ying Liu
The intensification of global energy shortages and the continuous expansion of municipal solid waste call for effective optimization of the waste-to-energy supply chain (WtESC). When the distribution information of uncertain parameters is only partially known, WtESC planning faces complex ambiguity challenges. To address this, we construct data-driven inner and outer ambiguity sets based on real data and employ a globalized distributionally robust (GDR) optimization framework to handle uncertainty. Compared with classical distributionally robust optimization, it allows controllable violations of the constraints in the outer ambiguity set. A data-driven globalized distributionally robust WtESC (GDR-WtESC) model is developed and transformed into an equivalent mixed-integer linear programming model using duality theory. The computational results of a real case indicate that (i) there is a conflict between the economic and environmental objectives, and decision-makers can prioritize them based on their own preferences; (ii) the tolerance level for constraint violation has a positive impact on the total cost (increasing it from 0.1 to 0.9 reduces the optimal cost by 1.07%); and (iii) the optimal decision of the GDR-WtESC model has strong stability and high quality. Compared with the sample average approximation (SAA) model, the variance of the objective value in out-of-sample experiments decreases by 88.28% on average, and the average cost decreases by 0.55%. The SAA method can address the uncertainty but cannot handle constraint violations in realistic settings. Thus, for decision-makers who are sensitive to distributional ambiguity, the GDR method is recommended for the WtESC problem, because it enhances robustness and reduces conservatism.
Title: Data-driven globalized distributionally robust multi-period location-routing-scheduling model for waste-to-energy supply chain under emissions ambiguity (Computers & Chemical Engineering, vol. 204, Article 109397)
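The max-min flavor of distributionally robust optimization can be conveyed with a toy example. This is not the paper's GDR-WtESC model (a full location-routing MILP with data-driven ambiguity sets); it is a newsvendor with a two-distribution ambiguity set and made-up prices, chosen only to show the worst-case-over-distributions reasoning.

```python
# Toy distributionally robust decision: pick an order quantity maximizing
# the worst-case expected profit over an ambiguity set of two candidate
# demand distributions (each uniform over its listed samples).

PRICE, COST = 2.0, 1.0

def expected_profit(q, demands):
    return -COST * q + PRICE * sum(min(q, d) for d in demands) / len(demands)

ambiguity_set = [
    [1, 2, 3, 4],   # candidate demand distribution 1
    [2, 3, 4, 5],   # candidate demand distribution 2
]

def robust_order(quantities=range(6)):
    """Max over orders of the min over distributions of expected profit."""
    return max(quantities,
               key=lambda q: min(expected_profit(q, d) for d in ambiguity_set))

q_star = robust_order()
worst_case = min(expected_profit(q_star, d) for d in ambiguity_set)
# q_star -> 2, with worst-case expected profit 1.5
```

A globalized variant, as in the paper, additionally tolerates bounded constraint violations outside the inner ambiguity set instead of hedging against them fully, which is what reduces conservatism.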
Pub Date : 2025-09-22 | DOI: 10.1016/j.compchemeng.2025.109412
Hamed Darouni, Farnaz Barzinpour, Amin Reza Kalantari Khalil Abad
Agricultural supply chains face substantial challenges in ensuring food security and sustainability, particularly due to the impacts of climate change, including global warming. To optimize resource use and minimize waste, it is essential to manage these supply chains effectively, especially in the face of uncertainty. This research addresses the crucial challenge of designing a sustainable closed-loop agricultural supply chain network, with a specific focus on jujube products in the context of temperature-yield uncertainty. The model enhances economic sustainability by minimizing costs, social sustainability through job creation requirements, and environmental sustainability by implementing carbon emission caps, while taking into account decisions regarding facility locations, inter-facility flows, inventory, and shortage management. Our main contribution is a distributionally robust optimization approach that integrates a K-means clustering machine learning algorithm to generate scenarios from historical data patterns, addressing the dynamic and interrelated uncertainties in temperature-yield data. The framework incorporates closed-loop principles through thermochemical conversion processes that transform agricultural waste into value-added biochar products. A comprehensive case study of the jujube industry in South Khorasan Province, Iran, validates the model's effectiveness. Results demonstrate that moderate conservatism levels (ω between 0.8 and 1.2) achieve an 88% reduction in operational risk variability while incurring only a 3% cost increase. A comparative analysis reveals that the proposed approach achieves a 0.95 risk-adjusted performance score, outperforming traditional stochastic programming and robust optimization alternatives. This research provides agricultural supply chain managers with practical guidelines for managing temperature-yield uncertainty.
Title: Integrating machine learning and distributionally robust optimization for sustainable agricultural supply chains under global warming uncertainty (Computers & Chemical Engineering, vol. 204, Article 109412)
Pub Date : 2025-09-20 | DOI: 10.1016/j.compchemeng.2025.109403
Devavrat Thosar, Abhijit Bhakte, Zukui Li, Rajagopalan Srinivasan, Vinay Prasad
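The clustering step above — compressing historical observations into a small scenario set with probabilities — can be sketched with a tiny one-dimensional Lloyd's algorithm. The numbers are made up; the paper clusters multivariate temperature-yield data rather than the scalar series used here.

```python
# Sketch of K-means scenario generation: cluster historical data, then use
# each cluster center as a scenario value and each cluster's share of the
# data as its probability.

def kmeans_1d(data, centers, iters=20):
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for x in data:
            nearest = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
            clusters[nearest].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    probs = [len(c) / len(data) for c in clusters]
    return centers, probs

history = [0.9, 1.1, 1.0, 9.8, 10.2, 10.0]      # hypothetical yield records
scenarios, probabilities = kmeans_1d(history, centers=[0.0, 5.0])
# scenarios -> roughly [1.0, 10.0], probabilities -> [0.5, 0.5]
```

The resulting (scenario, probability) pairs then feed the distributionally robust model in place of the raw history.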
Machine learning (ML)-based digital twins for chemical processes are gaining popularity with the advent of Industry 4.0. These digital twins are often developed under the assumption of constant process parameters. However, in most chemical engineering processes, parameters change during operation. To ensure optimal performance under such evolving conditions, models are needed that can adapt to these changes. In this work, we propose a framework for developing a physics-informed neural network (PINN)-based digital twin that is sensitive to parameter variations. The proposed framework also monitors the process in real time using physics-based residual equations, identifies the parameters undergoing changes using sensitivity matrices, and re-estimates them to maintain the performance of the PINN model. We demonstrate the utility of the framework through a case study involving a continuous stirred tank reactor experiencing changes in activation energy and the overall heat transfer coefficient. The results show that the proposed framework improves the predictive accuracy of the PINN by approximately 84% for ramp changes and 12% for step changes in parameters. The framework is further applied to more realistic case studies, including a polymethyl methacrylate polymerization reactor and a pressure swing adsorption process, highlighting its applicability to high-dimensional nonlinear systems and cyclic separation processes.
{"title":"Online parameter estimation and model maintenance using parameter-aware physics-informed neural network","authors":"Devavrat Thosar , Abhijit Bhakte , Zukui Li , Rajagopalan Srinivasan , Vinay Prasad","doi":"10.1016/j.compchemeng.2025.109403","DOIUrl":"10.1016/j.compchemeng.2025.109403","url":null,"abstract":"<div><div>Machine learning-based (ML) digital twins for chemical processes are gaining popularity with the advent of Industry 4.0. These digital twins are often developed under the assumption of constant process parameters. However, in most chemical engineering processes, parameters often change during operations. To ensure optimal performance under such evolving conditions, there is a need for models that can adapt to these changes. In this work, we propose a framework for developing a PINN-based (Physics-Informed Neural Network) digital twin that is sensitive to parameter variations. The proposed framework also monitors the process in real-time using physics-based residual equations, identifies the parameters undergoing changes using sensitivity matrices, and re-estimates them to maintain the performance of the PINN model. We demonstrate the utility of the framework through a case study involving a continuous stirred tank reactor experiencing changes in activation energy and the overall heat transfer coefficient. The results show that the proposed framework improves the predictive accuracy of the PINN by approximately 84% for ramp changes and 12% for step changes in parameters. The framework is further applied to more realistic case studies, including a polymethyl methacrylate polymerization reactor and a pressure swing adsorption process, highlighting its applicability to high-dimensional nonlinear systems and cyclic separation processes. 
These findings indicate that the performance of digital twins can be significantly enhanced in the presence of varying process parameters by employing a PINN architecture that incorporates parameters as inputs and solves real-time inverse problems to estimate parameter values.</div></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"204 ","pages":"Article 109403"},"PeriodicalIF":3.9,"publicationDate":"2025-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145105346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
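The monitor/identify/re-estimate loop described in the abstract above can be illustrated with a minimal numerical sketch. This is not the authors' PINN implementation: a first-order cooling process with an analytic solution stands in for the trained parameter-aware network, and all names and coefficients are illustrative.

```python
import numpy as np

# Hypothetical first-order cooling process: dT/dt = -k (T - T_env).
# A parameter-aware model takes the parameter k as an input, so one
# model covers a range of k values; the analytic solution below stands
# in for a trained PINN.

T0, T_env = 350.0, 300.0

def model(t, k):
    """Parameter-aware 'digital twin' prediction T(t; k)."""
    return T_env + (T0 - T_env) * np.exp(-k * t)

def physics_residual(t, T, k):
    """Residual of dT/dt + k (T - T_env) = 0, evaluated on measured data.
    A persistently nonzero residual flags a drifted parameter."""
    dTdt = np.gradient(T, t)
    return dTdt + k * (T - T_env)

def reestimate_k(t, T):
    """Closed-form least-squares inverse problem for k
    (stand-in for the real-time parameter re-estimation step)."""
    dTdt = np.gradient(T, t)
    dev = T - T_env
    return -np.sum(dTdt * dev) / np.sum(dev**2)

t = np.linspace(0.0, 5.0, 200)
k_true = 0.8               # plant parameter has drifted from nominal
T_meas = model(t, k_true)  # noiseless 'measurements' for illustration

k_nominal = 0.5
drift = np.max(np.abs(physics_residual(t, T_meas, k_nominal)))  # large
k_hat = reestimate_k(t, T_meas)                                 # ~0.8
```

Feeding `k_hat` back into `model` restores the twin's accuracy, which is the role the re-estimation step plays in the framework above.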
Pub Date : 2025-09-19DOI: 10.1016/j.compchemeng.2025.109406
Sanghoon Shin, Dabin Jeong, Yeonsoo Kim
With the increasing adoption of electric vehicles (EVs), effective battery thermal management is crucial to maintain safety and optimize performance. This study proposes a deep reinforcement learning (DRL)-based approach for battery thermal management, employing the Deep Deterministic Policy Gradient (DDPG) algorithm to regulate coolant flow rate and temperature. The objective is to maintain the battery temperature within the desirable operating range while minimizing energy consumption. A tailored reward function is formulated to jointly account for energy consumption minimization and thermal management. The effectiveness of the proposed DRL-based controller is evaluated by comparing its results with those of a zone model predictive controller (MPC). Simulation results demonstrate that the DRL-based controller achieves comparable performance to the MPC in battery temperature regulation, while reducing overall energy consumption and maintaining thermal stability. These findings highlight the potential of DRL-based control strategies as a viable alternative to MPC, offering improved energy efficiency for battery thermal management systems without requiring an explicit system model.
{"title":"Deep reinforcement learning-based thermal management of battery subpack in electric vehicle","authors":"Sanghoon Shin, Dabin Jeong, Yeonsoo Kim","doi":"10.1016/j.compchemeng.2025.109406","DOIUrl":"10.1016/j.compchemeng.2025.109406","url":null,"abstract":"<div><div>With the increasing adoption of electric vehicles (EVs), effective battery thermal management is crucial to maintain safety and optimize performance. This study proposes a deep reinforcement learning (DRL)- based approach for battery thermal management, employing the Deep Deterministic Policy Gradient (DDPG) algorithm to regulate coolant flow rate and temperature. The objective is to maintain the battery temperature within the desirable operating range while minimizing energy consumption. A tailored reward function is formulated to consider the energy consumption minimization and thermal management. The effectiveness of the proposed DRL-based controller is evaluated by comparing the results with those of the zone model predictive controller (MPC). Simulation results demonstrate that the DRL-based controller achieves comparable performance to the MPC in battery temperature regulation, while reducing overall energy consumption and maintaining thermal stability. 
These findings highlight the potential of DRL-based control strategies as a viable alternative to MPC, offering improved energy efficiency for battery thermal management systems without requiring an explicit system model.</div></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"204 ","pages":"Article 109406"},"PeriodicalIF":3.9,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145118170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
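The kind of tailored reward described in this abstract, which trades off a temperature band against coolant effort, can be sketched alongside a toy lumped thermal model. All coefficients, the operating band, and the model form are illustrative assumptions, not values from the paper.

```python
# Toy lumped battery thermal model plus a band-tracking reward of the
# kind a DDPG agent would maximize; numbers are illustrative only.

T_LOW, T_HIGH = 298.0, 308.0   # assumed desirable operating band [K]

def battery_step(T, q_gen, m_dot, T_cool, dt=1.0, C=800.0, UA=5.0):
    """One Euler step of C dT/dt = q_gen - UA * m_dot * (T - T_cool):
    heat generation vs. coolant-flow-dependent removal."""
    return T + dt / C * (q_gen - UA * m_dot * (T - T_cool))

def reward(T, m_dot, w_temp=1.0, w_energy=0.1):
    """Penalize leaving the temperature band and coolant pumping effort;
    the weights set the thermal-vs-energy trade-off."""
    band_violation = max(0.0, T - T_HIGH) + max(0.0, T_LOW - T)
    return -w_temp * band_violation - w_energy * m_dot

T = 310.0                                 # battery above the band
r_hot = reward(T, m_dot=1.0)              # violation + pumping penalty
T_next = battery_step(T, q_gen=50.0, m_dot=1.0, T_cool=295.0)  # cools
```

In a full DDPG setup the actor would output `m_dot` (and coolant temperature) given the thermal state, and the critic would be trained against this reward signal.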
Pub Date : 2025-09-17DOI: 10.1016/j.compchemeng.2025.109395
Raymoon Hwang, Jae Hyun Cho, Il Moon, Min Oh
Hybrid modelling offers a powerful means of combining mechanistic principles with data-driven learning for complex chemical processes. However, most existing approaches rely on structural coupling without a principled basis for integrating distinct modes of reasoning or enabling modular reuse. This work introduces a unified layered hybrid modelling architecture grounded in three epistemic layers: deductive, inductive, and abductive. These layers respectively enforce physical laws, learn unknown dynamics, and infer latent states. The formulation is expressed in operator-theoretic terms. Results demonstrate improved accuracy, interpretability, and adaptability, highlighting the framework’s potential as a transparent and generalizable strategy for hybrid modelling under uncertainty in chemical process systems, while also supporting compositional reasoning and layer-wise retraining.
The first case study considers a single-unit non-isothermal batch polymerization reactor with unknown reaction kinetics and partial temperature observability. The deductive layer encodes mass and energy balances, the inductive layer learns kinetics via a neural network, and the abductive layer reconstructs latent temperature states. The second case study examines a multi-unit fed-batch bioreactor flowsheet, representative of typical chemical process configurations. Here, the deductive layer models feed-flow dynamics (unit #1), the inductive layer predicts biomass growth (unit #2), and the abductive layer estimates latent physiological states such as oxygen uptake rate and pH (unit #3). These examples demonstrate that the framework can integrate multiple inference modes within a single unit or distribute them across a flowsheet, enabling application to a wide range of hybrid modelling scenarios. The approach is general and suited for scalable, transparent modelling under uncertainty.
{"title":"Hybrid modelling of chemical processes: a unified framework based on deductive, inductive, and abductive inference","authors":"Raymoon Hwang, Jae Hyun Cho, Il Moon, Min Oh","doi":"10.1016/j.compchemeng.2025.109395","DOIUrl":"10.1016/j.compchemeng.2025.109395","url":null,"abstract":"<div><div>Hybrid modelling offers a powerful means of combining mechanistic principles with data-driven learning for complex chemical processes. However, most existing approaches rely on structural coupling without a principled basis for integrating distinct modes of reasoning or enabling modular reuse. This work introduces a unified layered hybrid modelling architecture grounded in three epistemic layers: deductive, inductive, and abductive. Roles of each layer are: enforcing physical laws, learning unknown dynamics, and inferring latent states. The formulation is expressed in operator-theoretic terms. Results demonstrate improved accuracy, interpretability, and adaptability, highlighting the framework’s potential as a transparent and generalizable strategy for hybrid modelling under uncertainty in chemical process systems, while also supporting compositional reasoning and layer-wise retraining.</div><div>The first case study considers a single-unit non-isothermal batch polymerization reactor with unknown reaction kinetics and partial temperature observability. The deductive layer encodes mass and energy balances, the inductive layer learns kinetics via a neural network, and the abductive layer reconstructs latent temperature states. The second case study examines a multi-unit fed-batch bioreactor flowsheet, representative of typical chemical process configurations. Here, the deductive layer models feed-flow dynamics (unit #1), the inductive layer predicts biomass growth (unit #2), and the abductive layer estimates latent physiological states such as oxygen uptake rate and pH (unit #3). 
These examples demonstrate that the framework can integrate multiple inference modes within a single unit or distribute them across a flowsheet, enabling application to a wide range of hybrid modelling scenarios. The approach is general and suited for scalable, transparent modelling under uncertainty.</div></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"205 ","pages":"Article 109395"},"PeriodicalIF":3.9,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145322890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
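The three-layer composition described in this abstract can be sketched for a toy batch reaction A → B. This is an illustrative stand-in, not the authors' operator-theoretic formulation: a fitted power law plays the role of the inductive (learned) layer, Euler-integrated balances play the deductive layer, and an integrated energy balance plays the abductive layer that reconstructs an unmeasured temperature.

```python
import numpy as np

# Minimal sketch of the three epistemic layers for a batch reactor
# A -> B; all numbers and functional forms are illustrative.

def inductive_rate(cA, theta):
    """Inductive layer: learned rate law (a fitted power law stands in
    for the neural network here)."""
    k, n = theta
    return k * cA**n

def deductive_step(cA, T, rate, dt=0.1, dH=-50.0, rhoCp=4.0):
    """Deductive layer: mass and energy balances (hard physics)."""
    cA_next = cA - rate * dt
    T_next = T - dH * rate * dt / rhoCp   # exothermic: T rises
    return cA_next, T_next

def abductive_temperature(cA_traj, T0, dH=-50.0, rhoCp=4.0):
    """Abductive layer: infer the unmeasured temperature from the
    concentration trajectory via the integrated energy balance
    T = T0 - dH * (cA0 - cA) / (rho * Cp)."""
    cA_traj = np.asarray(cA_traj)
    return T0 - dH * (cA_traj[0] - cA_traj) / rhoCp

theta = (0.3, 1.0)       # 'learned' kinetic parameters
cA, T = 1.0, 300.0
traj = [cA]
for _ in range(10):
    cA, T = deductive_step(cA, T, inductive_rate(traj[-1], theta))
    traj.append(cA)

T_latent = abductive_temperature(traj, 300.0)  # matches simulated T
```

The same pattern extends to the flowsheet case: each unit can host a different layer, with the operators composed across the flowsheet rather than stacked inside one unit.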