Pub Date: 2024-08-30 | DOI: 10.1016/j.compchemeng.2024.108856
Jing Liu, Junxian Wang, Jianye Xia, Fengfeng Lv, Dawei Wu
Biofermentation faces challenges in obtaining real-time quality variables, making it necessary to predict these variables. However, fermentation process data vary in length, and labeled data are too scarce for model establishment. To solve this problem, this study introduces a framework named RL-SSR (Representation Learning-based Semi-Supervised Regression). First, a data rotation mechanism is designed to address the issue of non-equal-length data. Second, representation learning pre-tasks comprising contrastive learning and data reconstruction are implemented to introduce a priori knowledge and numeric features. Finally, the pre-trained model is fine-tuned with limited labeled data. Experimental results on an industrial-scale penicillin fermentation dataset reveal that RL-SSR outperforms other baseline models, particularly with a small number of labels, confirming the robustness and effectiveness of RL-SSR for the real-time prediction of quality variables in fermentation processes.
{"title":"Semi-supervised regression based on Representation Learning for fermentation processes","authors":"Jing Liu , Junxian Wang , Jianye Xia , Fengfeng Lv , Dawei Wu","doi":"10.1016/j.compchemeng.2024.108856","DOIUrl":"10.1016/j.compchemeng.2024.108856","url":null,"abstract":"<div><p>Biofermentation faces challenges in obtaining real-time quality variables, making it necessary to predict these variables. However, the fermentation process data vary in length and lack sufficient labeled data for model establishment. To solve this problem, this study introduces a framework named RL-SSR(Representation Learning-based Semi-Supervised Regression). First, a data rotation mechanism is designed to address the issue of non-equal-length data. Second, representation learning pre-tasks containing contrastive learning and data reconstruction tasks are implemented to introduce a priori knowledge and numeric features. Finally, the pre-trained model will be fine-tuned with limited labeled data. Experimental results using an industrial-scale penicillin fermentation dataset reveal that RL-SSR outperforms other baseline models, particularly with a small number of labels, confirming the robustness and effectiveness of RL-SSR in the real-time prediction of quality variables in fermentation processes.</p></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"191 ","pages":"Article 108856"},"PeriodicalIF":3.9,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142128800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-28 | DOI: 10.1016/j.compchemeng.2024.108839
Dominique Bonvin, Gabriele Pannocchia
The real-time optimization scheme "modifier adaptation" (MA) has been developed to enforce steady-state plant optimality in the presence of model uncertainty. The key feature of MA is its ability to locally modify the model by adding bias and gradient correction terms to the cost and constraint functions or, alternatively, to the outputs. Since these correction terms are static in nature, their computation may require a significant amount of time, especially with slow processes. This paper presents two ways of speeding up MA schemes for real-time optimization. The first approach estimates the modifiers from steady-state data via a tailored recursive least-squares scheme. The second approach investigates the estimation of static correction terms during transient operation. The idea is to first develop a calibration model that expresses the static plant-model mismatch as a function of the inputs only. This calibration model can be generated via a single MA run that successively visits various steady states before reaching plant optimality. In addition, to account for process differences between calibration and subsequent operation, bias terms are estimated online from output measurements. Implementation and performance aspects are compared on two pedagogical examples, namely an unconstrained nonlinear SISO plant and a constrained multivariable CSTR example.
{"title":"On speeding-up modifier-adaptation schemes for real-time optimization","authors":"Dominique Bonvin , Gabriele Pannocchia","doi":"10.1016/j.compchemeng.2024.108839","DOIUrl":"10.1016/j.compchemeng.2024.108839","url":null,"abstract":"<div><p>The real-time optimization scheme “modifier adaptation” (MA) has been developed to enforce steady-state plant optimality in the presence of model uncertainty. The key feature of MA is its ability to locally modify the model by adding bias and gradient correction terms to the cost and constraint functions or, alternatively, to the outputs. Since these correction terms are static in nature, their computation may require a significant amount of time, especially with slow processes. This paper presents two ways of speeding-up MA schemes for real-time optimization. The first approach proposes to estimate the modifiers from steady-state data via a tailored recursive least-squares scheme. The second approach investigates the estimation of static correction terms during transient operation. The idea is to first develop a calibration model to express the static plant-model mismatch as a function of inputs only. This calibration model can be generated via a single MA run that successively visits various steady states before reaching plant optimality. In addition, to account for process differences between calibration and subsequent operation, bias terms are estimated online from output measurements. Implementation and performance aspects are compared on two pedagogical examples, namely, an unconstrained nonlinear SISO plant and a constrained multivariable CSTR example.</p></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"191 ","pages":"Article 108839"},"PeriodicalIF":3.9,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0098135424002576/pdfft?md5=5dfae87f49b8531f4dc17c403f0a1dab&pid=1-s2.0-S0098135424002576-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142088990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-24 | DOI: 10.1016/j.compchemeng.2024.108854
Zhaoyang Li, Minghao Han, Dat-Nguyen Vo, Xunyuan Yin
Koopman-based modeling and model predictive control have emerged as a promising alternative for the optimal control of nonlinear processes. Good Koopman modeling performance depends significantly on an appropriate nonlinear mapping from the original state space to a lifted state space. In this work, we propose an input-augmented Koopman modeling and model predictive control approach. Both the states and the known inputs are lifted using two deep neural networks (DNNs), and a Koopman model with nonlinearity in the inputs is trained within the higher-dimensional state space. A Koopman-based model predictive control problem is formulated. To bypass the non-convex optimization induced by the nonlinearity in the Koopman model, we further present an iterative implementation algorithm that approximates the optimal control input by solving a convex optimization problem iteratively. The proposed method is applied to a chemical process and a biological water treatment process via simulations. The efficacy and advantages of the proposed modeling and control approach are demonstrated.
{"title":"Machine learning-based input-augmented Koopman modeling and predictive control of nonlinear processes","authors":"Zhaoyang Li , Minghao Han , Dat-Nguyen Vo , Xunyuan Yin","doi":"10.1016/j.compchemeng.2024.108854","DOIUrl":"10.1016/j.compchemeng.2024.108854","url":null,"abstract":"<div><p>Koopman-based modeling and model predictive control have been a promising alternative for optimal control of nonlinear processes. Good Koopman modeling performance significantly depends on an appropriate nonlinear mapping from the original state-space to a lifted state space. In this work, we propose an input-augmented Koopman modeling and model predictive control approach. Both the states and the known inputs are lifted using two deep neural networks (DNNs), and a Koopman model with nonlinearity in inputs is trained within the higher-dimensional state space. A Koopman-based model predictive control problem is formulated. To bypass non-convex optimization induced by the nonlinearity in the Koopman model, we further present an iterative implementation algorithm, which approximates the optimal control input via solving a convex optimization problem iteratively. The proposed method is applied to a chemical process and a biological water treatment process via simulations. The efficacy and advantages of the proposed modeling and control approach are demonstrated.</p></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"191 ","pages":"Article 108854"},"PeriodicalIF":3.9,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142075985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-24 | DOI: 10.1016/j.compchemeng.2024.108849
Kinga Szatmári, Gergely Horváth, Sándor Németh, Wenshuai Bai, Alex Kummer
For future applications of artificial intelligence, namely reinforcement learning (RL), we develop a resilience-based explainable RL agent to make decisions about the activation of mitigation systems. The applied reinforcement learning algorithm is Deep Q-learning, and the reward function is resilience. We investigate two explainable reinforcement learning methods: the decision tree as a policy-explaining method and the Shapley value as a state-explaining method.
The policy can be visualized in the agent's state space using a decision tree for better understanding. We compare the agent's decision boundary with the runaway boundaries defined by runaway criteria, namely the divergence criterion and the modified dynamic condition. The Shapley value explains the contribution of the state variables to the behavior of the agent over time. The results show that the decisions of the artificial agent in a resilience-based mitigation system can be explained and presented in a transparent way.
{"title":"Resilience-based explainable reinforcement learning in chemical process safety","authors":"Kinga Szatmári , Gergely Horváth , Sándor Németh , Wenshuai Bai , Alex Kummer","doi":"10.1016/j.compchemeng.2024.108849","DOIUrl":"10.1016/j.compchemeng.2024.108849","url":null,"abstract":"<div><p>For future applications of artificial intelligence, namely reinforcement learning (RL), we develop a resilience-based explainable RL agent to make decisions about the activation of mitigation systems. The applied reinforcement learning algorithm is Deep Q-learning and the reward function is resilience. We investigate two explainable reinforcement learning methods, which are the decision tree, as a policy-explaining method, and the Shapley value as a state-explaining method.</p><p>The policy can be visualized in the agent’s state space using a decision tree for better understanding. We compare the agent’s decision boundary with the runaway boundaries defined by runaway criteria, namely the divergence criterion and modified dynamic condition. Shapley value explains the contribution of the state variables on the behavior of the agent over time. The results show that the decisions of the artificial agent in a resilience-based mitigation system can be explained and can be presented in a transparent way.</p></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"191 ","pages":"Article 108849"},"PeriodicalIF":3.9,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142083670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-23 | DOI: 10.1016/j.compchemeng.2024.108855
Yiming Bai, Huawei Ye, Jinsong Zhao
With the digitization and automation of industrial production, process monitoring has become an indispensable technique for realizing the safe and efficient operation of chemical processes. Accurate prediction of process variables in chemical processes can indicate possible system changes and thus reduce the probability of abnormal conditions. Currently popular deep learning prediction methods trained with MSE or its variants may exhibit limitations in extracting the shape features of chemical process data. In this paper, we propose an efficient prediction method incorporating trends and shapes features (EPMITS) for chemical process variables. Specifically, we introduce a novel differentiable loss function, Efficient Shape Error (ESE), to quantify shape differences between two time series of equal length in chemical process data. We then train deep learning models with MSE and ESE as loss functions in a two-step training stage to effectively acquire both the trend and shape features of chemical process data. The proposed method was evaluated on the Tennessee Eastman process datasets and a real fluid catalytic cracking dataset from a petrochemical company. The results indicate that EPMITS models exhibit high prediction accuracy and short model training time across various time scales. These findings demonstrate the considerable feasibility and significant potential of EPMITS for future fault prognosis applications.
{"title":"EPMITS: An efficient prediction method incorporating trends and shapes features for chemical process variables","authors":"Yiming Bai , Huawei Ye , Jinsong Zhao","doi":"10.1016/j.compchemeng.2024.108855","DOIUrl":"10.1016/j.compchemeng.2024.108855","url":null,"abstract":"<div><p>With the transformation of industrial production digitization and automation, process monitoring has been an indispensable technical method to realize the safe and efficient production of chemical process. Accurate prediction of process variables in chemical process can indicate the possible system change to reduce the probability of abnormal conditions. Current popular deep learning prediction methods trained with MSE or its variants may exhibit limitations in extracting shape features of chemical process data. In this paper, we proposed an efficient prediction method incorporating trends and shapes features (EPMITS) for chemical process variables. Specifically, we introduced a novel differentiable loss function Efficient Shape Error (ESE) to quantify shape differences between two time series of equal length in chemical process data. Then we trained deep learning models with MSE and ESE as loss function by two steps in training stage, to effectively acquire both trend and shape features of chemical process data. The proposed method was evaluated by the Tennessee Eastman process datasets and a real fluid catalytic cracking dataset from a petrochemical company. The results indicate that EPMITS models exhibit high prediction accuracy and short model training time across various time scales. These findings demonstrate the considerable feasibility and significant potential of EPMITS for future fault prognosis applications.</p></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"191 ","pages":"Article 108855"},"PeriodicalIF":3.9,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142099313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-23 | DOI: 10.1016/j.compchemeng.2024.108852
Yushi Deng, Mario Eden, Selen Cremaschi
In Gaussian process models, feature importance is inversely proportional to the corresponding length scale when the Automatic Relevance Determination (ARD) structured kernel function is applied. Features can therefore be selected by ranking them according to their importance. Among ARD-based feature selection methods, no uniform score exists for quantifying the output variation explained by feature subsets. This study proposes two feature selection approaches using two cumulative feature importance scores, one titled derivative decomposition ratio and the other normalized sensitivity, to determine the optimal feature subset. The performance of the approaches is assessed to test whether irrelevant features are accurately identified and whether the feature rankings are correct. The approaches are applied to identify relevant dimensionless inputs for a hybrid model estimating the liquid entrainment fraction in two-phase flow. The results reveal that the proposed methods can identify the optimal feature subset for the hybrid model without significantly worsening its Root Mean Squared Error.
{"title":"A Gaussian process embedded feature selection method based on automatic relevance determination","authors":"Yushi Deng, Mario Eden, Selen Cremaschi","doi":"10.1016/j.compchemeng.2024.108852","DOIUrl":"10.1016/j.compchemeng.2024.108852","url":null,"abstract":"<div><p>In Gaussian Process, feature importance is inversely proportional to the corresponding length scale when applying the Automatic Relevance Determination (ARD) structured kernel function. Features can be selected by ranking them according to their importance. Among the ARD-based feature selection methods, no uniform score exists for quantifying the output variation explained by feature subsets. This study proposes two feature selection approaches using two cumulative feature importance scores, one titled derivative decomposition ratio and the other normalized sensitivity, to determine the optimal feature subset. The performance of the approaches is assessed to test if irrelevant features are accurately identified and if the feature rankings are correct. The approaches are applied to identify relevant dimensionless inputs for a hybrid model estimating liquid entrainment fraction in two-phase flow. The results reveal that the proposed methods can identify the optimal feature subset for the hybrid model without significantly worsening its Root Mean Squared Error.</p></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"191 ","pages":"Article 108852"},"PeriodicalIF":3.9,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142099531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-23 | DOI: 10.1016/j.compchemeng.2024.108853
Abdolvahhab Fetanat, Mohsen Tayebi
Mitigating the impacts of thermal pollution caused by the oil and natural gas (O&G) industry by applying the appropriate cooling tower technology brings environmental, economic, and health benefits. We aim to implement an intelligent decision support system (DSS). The DSS involves the Delphi and criteria importance through intercriteria correlation (CRITIC) integrated method (DEACRIM) and the ranking of alternatives through functional mapping of criterion sub-intervals into a single interval (RAFSI) model under the linear Diophantine fuzzy set (LDFS). Ten criteria based on the water-energy nexus and circularity policies, and four cooling tower technologies (natural draft, induced draft, crossflow, and forced draft) were chosen for evaluation. The evaluation results reveal that natural draft cooling tower technology is the most suitable option for Iran's O&G energy system facilities for mitigating thermal pollution.
{"title":"A decision support system for cooling tower technologies evaluation in the oil and gas industry","authors":"Abdolvahhab Fetanat , Mohsen Tayebi","doi":"10.1016/j.compchemeng.2024.108853","DOIUrl":"10.1016/j.compchemeng.2024.108853","url":null,"abstract":"<div><p>Mitigating the impacts of thermal pollution caused by the oil and natural gas (O&G) industry by applying the appropriate cooling tower technology has advantages for environmental, economic, and health goals. We aim at implementing an intelligent decision support system (DSS). The DSS involves the Delphi and criteria importance through intercriteria correlation (CRITIC) integrated method (DEACRIM) and ranking of alternatives through functional mapping of criterion sub-intervals into a single interval (RAFSI) model under the linear Diophantine fuzzy set (LDFS). Ten criteria based on water-energy nexus and circularity policies and four cooling tower technologies including Natural draft cooling tower technology, Induced draft cooling tower technology, Crossflow cooling tower technology, and Forced draft cooling tower technology have been chosen for evaluation. The evaluation results reveal that the Natural draft cooling tower technology is the most suitable scenario for Iran's O&G energy system facilities in order to mitigate thermal pollution.</p></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"191 ","pages":"Article 108853"},"PeriodicalIF":3.9,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142088991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-22 | DOI: 10.1016/j.compchemeng.2024.108840
Chenxi Li, Nilay Shah, Zheng Li, Pei Liu
The energy system requires meticulous planning to achieve low-carbon development goals cost-effectively. However, optimizing large-scale energy systems with high spatial-temporal resolution and a rich variety of technologies has always been a challenge due to limited computational resources. Therefore, this study proposes a soft-linkage framework to decompose large-scale energy system optimization models by sector while enforcing the total carbon emission limit and the electricity supply-demand balance. Using China's energy system as a case study, the impact of uncertainty in emission reduction targets is analyzed. A long-term emission target curve is described only by the total carbon budget and its temporal distribution. Results show that different carbon budget time series can lead to total transition cost variations of up to nearly 100 trillion yuan. Moreover, although a lower carbon budget would increase the total cumulative transition cost quadratically, excessively high carbon budgets raise national natural gas demand, threatening energy security.
{"title":"Decoupling framework for large-scale energy systems simultaneously addressing carbon emissions and energy flow relationships through sector units: A case study on uncertainty in China's carbon emission targets","authors":"Chenxi Li , Nilay Shah , Zheng Li , Pei Liu","doi":"10.1016/j.compchemeng.2024.108840","DOIUrl":"10.1016/j.compchemeng.2024.108840","url":null,"abstract":"<div><p>The energy system requires meticulous planning to achieve low-carbon development goals cost-effectively. However, optimizing large-scale energy systems with high spatial-temporal resolution and a rich variety of technologies has always been a challenge due to limited computational resources. Therefore, this study proposes a soft-linkage framework to deconstruct large-scale energy system optimization models based on sectors while ensuring the total carbon emission limit and the electricity supply-demand balance. Using China's energy system as a case study, the impact of uncertainty on emission reduction targets is analyzed. A long-term emission target curve is only described by the total carbon budget and its temporal distribution. Results show that different carbon budget time series can lead to total transition cost variations of up to nearly 100 trillion yuan. Moreover, although a lower carbon budget would increase the total cumulative transition cost quadratically, excessively high carbon budgets raise national natural gas demand, threatening energy security.</p></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"191 ","pages":"Article 108840"},"PeriodicalIF":3.9,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142049599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-22 | DOI: 10.1016/j.compchemeng.2024.108841
Xiang C. Ma, Chang He, Qing L. Chen, Bing J. Zhang
To address the modeling and optimization challenges of the complex reaction system in the continuous catalytic reforming process, a new integrated simulation and optimization framework is presented. First, a detailed mechanism model is established based on a reaction network involving 32 components and 50 reactions, coupled with mass transfer, heat transfer, pressure drop, and catalyst deactivation equations. Then, to solve the differential-algebraic equations in the mechanism model, a multi-objective hybrid optimization method with an adaptive infill strategy is introduced. GAMS and MATLAB are integrated to perform a joint iterative solution. Finally, two case studies are conducted with the proposed algorithm. Results show that the mechanism model's calculated deviations in reactor temperature, pressure, and composition distribution are below 4 %, and the Pareto front of various production plans is obtained. Accurate simulation and rapid trade-off optimization among the key goals can thus be achieved, providing scientific decision support for enterprise production.
{"title":"Modeling and optimization for the continuous catalytic reforming process based on the hybrid surrogate optimization model","authors":"Xiang C. Ma, Chang He, Qing L. Chen, Bing J. Zhang","doi":"10.1016/j.compchemeng.2024.108841","DOIUrl":"10.1016/j.compchemeng.2024.108841","url":null,"abstract":"<div><p>To address the modeling and optimization challenges of the complex reaction system in the continuous catalytic reforming process, a new integrated simulation and optimization framework is presented. First, a detailed mechanism model is established based on a reaction network involving 32 components and 50 reactions, coupled with mass transfer, heat transfer, pressure drop, and catalyst deactivation equations. Then, to solve the differential-algebraic equations in the mechanism model, a multi-objective hybrid optimization method with the adaptive infill strategy is introduced. GAMS and MATLAB are integrated to perform a joint iterative solution. Finally, two cases are conducted with the proposed algorithm. Results show that the mechanism model calculation deviations are below 4 % of reactor temperature, pressure, and composition distribution, and the Pareto front of various production plans is obtained. The accurate simulation and rapid trade-off optimization among the key goals can be achieved to provide scientific decision support for enterprise production.</p></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"191 ","pages":"Article 108841"},"PeriodicalIF":3.9,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142058428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-22 | DOI: 10.1016/j.compchemeng.2024.108836
Ching-Mei Wen, Marianthi Ierapetritou
This study presents a techno-economic and life cycle analysis of bio-based isopropanol (IPA) production from sugar beet, utilizing a Geographical Information System (GIS)-enabled framework. By focusing on an innovative IPA production technology, the research demonstrates the economic and environmental feasibility of converting first-generation biomass into sustainable chemicals through optimization of the sugar beet-to-isopropanol supply chain. Findings highlight a cost-optimal production capacity of 55,800 mt/year with significant potential for reducing emissions and operational costs. The production cost of bio-IPA is potentially 70 % lower than the price of fossil-derived IPA. Additionally, the potential profits from bio-based IPA are estimated to be nearly double the market price of its primary raw material, sugar, demonstrating the economic feasibility of converting first-generation biomass for sustainable IPA production. The study also explores the impact of facility clustering on transportation emissions and costs, revealing strategic approaches to expanding plant capacities in response to increasing demand. This research provides insights for designing sustainable industrial practices using first-generation biomass in the chemical industry.
{"title":"Optimization of sustainable supply chain for bio-based isopropanol production from sugar beet using techno-economic and life cycle analysis","authors":"Ching-Mei Wen, Marianthi Ierapetritou","doi":"10.1016/j.compchemeng.2024.108836","DOIUrl":"10.1016/j.compchemeng.2024.108836","url":null,"abstract":"<div><p>This study examines the techno-economic and life cycle analysis of bio-based isopropanol (IPA) production from sugar beet, utilizing a Geographical Information System (GIS)-enabled framework. By focusing on the innovative IPA production technology, the research demonstrates the economic and environmental feasibility of converting first-generation biomass into sustainable chemicals through the optimization of the Sugar Beet-to-Isopropanol supply chain. Findings highlight a cost-optimal production capacity of 55,800 mt/year with significant potential for reducing emissions and operational costs. The production cost of bio-IPA is potentially 70 % less than the fossil-derived IPA price. Additionally, the potential profits from bio-based IPA are estimated to be nearly double the market price of its primary raw material, sugar, demonstrating the economic feasibility of converting the first-generation biomass for sustainable IPA production. The study also explores the impact of facility clustering on transportation emissions and costs, revealing strategic approaches to expanding plant capacities in response to increasing demand. This research provides insights for designing sustainable industrial practices using first-generation biomass in the chemical industry.</p></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"191 ","pages":"Article 108836"},"PeriodicalIF":3.9,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142075986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}