Success-Based Optimization Algorithm (SBOA): Development and enhancement of a metaheuristic optimizer
Pub Date: 2024-12-30 | DOI: 10.1016/j.compchemeng.2024.108987
Oscar Daniel Lara-Montaño, Fernando Israel Gómez-Castro, Claudia Gutiérrez-Antonio, Elena Niculina Dragoi
This paper presents the development of the Success-Based Optimization Algorithm (SBOA), a novel metaheuristic inspired by success attribution theory and designed to address complex, high-dimensional optimization problems. SBOA balances exploration and exploitation by using both high-performing solutions and average-performing candidates to guide the search, adjusting dynamically based on solution quality. The algorithm is evaluated against seven well-established optimization methods on the CEC 2017 benchmark functions in 10, 30, and 50 dimensions, and is also applied to a real-world engineering problem: the optimal design of shell-and-tube heat exchangers (STHEs). The results demonstrate that SBOA consistently surpasses most competing algorithms, especially in higher-dimensional cases, achieving lower objective values and faster convergence. Statistical analyses, including the Wilcoxon signed-rank test, confirm SBOA's significant advantages in benchmark performance and its cost-effectiveness in the practical engineering application. These findings position SBOA as a highly adaptable and efficient optimization tool for complex tasks.
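As a concrete illustration of the statistical test named above: the Wilcoxon signed-rank test compares paired per-benchmark results of two optimizers. The scores below are invented for illustration (the abstract does not report raw data); a minimal sketch with SciPy:

```python
# Compare two optimizers on paired benchmark results with the
# Wilcoxon signed-rank test. Scores are invented for illustration.
from scipy.stats import wilcoxon

sboa_scores  = [100.3, 412.7, 689.1, 1207.4, 3310.8, 511.2]
rival_scores = [118.9, 430.2, 702.5, 1295.0, 3498.3, 509.7]

# Two-sided test on the paired differences: is one algorithm
# systematically better across the benchmark set?
stat, p_value = wilcoxon(sboa_scores, rival_scores)
print(f"W = {stat:.1f}, p = {p_value:.4f}")
# p < 0.05 would indicate a statistically significant difference.
```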
{"title":"Success-Based Optimization Algorithm (SBOA): Development and enhancement of a metaheuristic optimizer","authors":"Oscar Daniel Lara-Montaño , Fernando Israel Gómez-Castro , Claudia Gutiérrez-Antonio , Elena Niculina Dragoi","doi":"10.1016/j.compchemeng.2024.108987","DOIUrl":"10.1016/j.compchemeng.2024.108987","url":null,"abstract":"<div><div>This paper presents the development of the Success-Based Optimization Algorithm (SBOA), a novel metaheuristic inspired by success attribution theory, designed to address complex, high-dimensional optimization problems. SBOA balances exploration and exploitation by utilizing high-performing solutions and average-performing candidates to guide the search process, dynamically adjusting based on solution quality. The algorithm is evaluated against seven well-established optimization methods using CEC 2017 benchmark functions in 10, 30, and 50 dimensions. It is applied to a real-world engineering problem involving the optimal design of shell-and-tube heat exchangers (STHEs). The results demonstrate that SBOA consistently surpasses most competing algorithms, especially in higher-dimensional cases, achieving lower objective values and faster convergence. Statistical analyses, including the Wilcoxon signed-rank test, confirm the significant advantages of SBOA in benchmark performance and cost-effectiveness in practical engineering applications. These findings position SBOA as a highly adaptable and efficient optimization tool for addressing complex tasks.</div></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"194 ","pages":"Article 108987"},"PeriodicalIF":3.9,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143136594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing arsenic removal in water supply: A mathematical approach for plant location, technology selection, and network synthesis
Pub Date: 2024-12-28 | DOI: 10.1016/j.compchemeng.2024.108994
Angel Alfaro-Bernardino, César Ramírez-Márquez, José M. Ponce-Ortega, Fabricio Nápoles-Rivera
Arsenic contamination in groundwater presents significant health risks, demanding effective treatment solutions. This study introduces a mathematical programming method to determine the optimal locations for arsenic treatment plants, select the appropriate treatment technology, and design large-scale water distribution networks. The work focuses on minimizing the costs associated with pumping, piping, plant installation, and operation while complying with regulatory limits on arsenic in drinking water. The approach involves a mixed-integer nonlinear programming model coupled with a detailed solution procedure. Beyond identifying the best strategies for reducing arsenic in affected wells to safe levels, the model is used to design an efficient water distribution network. An analysis of areas containing wells with arsenic concentrations above permissible levels demonstrates how the proposed solutions can lower arsenic levels to meet safety standards while optimizing the water supply system. The findings highlight the potential for significantly improving water quality and public health through strategic infrastructure planning and technology deployment.
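To make the model structure concrete, here is a minimal Pyomo sketch of the plant-location and flow-assignment core that such formulations share: binary installation decisions plus routed flows under capacity. This linear toy uses invented data and is not the authors' formulation, whose full model is nonlinear:

```python
# Toy plant-location model: choose where to install treatment plants
# and how to route well water to them, minimizing installation plus
# pumping cost. All sets, costs, and capacities are invented.
import pyomo.environ as pyo

wells, sites = ["w1", "w2", "w3"], ["s1", "s2"]
install_cost = {"s1": 100.0, "s2": 80.0}            # fixed cost per site
pump_cost = {("w1", "s1"): 1.0, ("w1", "s2"): 1.8,  # cost per unit flow
             ("w2", "s1"): 1.4, ("w2", "s2"): 1.1,
             ("w3", "s1"): 2.0, ("w3", "s2"): 1.3}
supply = {"w1": 10.0, "w2": 15.0, "w3": 8.0}        # flow from each well
capacity = {"s1": 25.0, "s2": 20.0}

m = pyo.ConcreteModel()
m.y = pyo.Var(sites, domain=pyo.Binary)             # 1 if plant built
m.x = pyo.Var(wells, sites, domain=pyo.NonNegativeReals)  # routed flow

# Every well's contaminated flow must be treated somewhere.
m.balance = pyo.Constraint(wells, rule=lambda m, w:
                           sum(m.x[w, s] for s in sites) == supply[w])
# A site receives flow only if built, and only up to its capacity.
m.cap = pyo.Constraint(sites, rule=lambda m, s:
                       sum(m.x[w, s] for w in wells) <= capacity[s] * m.y[s])

m.cost = pyo.Objective(
    expr=sum(install_cost[s] * m.y[s] for s in sites)
       + sum(pump_cost[w, s] * m.x[w, s] for w in wells for s in sites),
    sense=pyo.minimize)

# pyo.SolverFactory("cbc").solve(m)   # any MILP solver handles this toy
```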
{"title":"Optimizing arsenic removal in water supply: A mathematical approach for plant location, technology selection, and network synthesis","authors":"Angel Alfaro-Bernardino, César Ramírez-Márquez, José M. Ponce-Ortega, Fabricio Nápoles-Rivera","doi":"10.1016/j.compchemeng.2024.108994","DOIUrl":"10.1016/j.compchemeng.2024.108994","url":null,"abstract":"<div><div>Arsenic contamination in groundwater presents significant health risks, demanding effective treatment solutions. This study introduces a mathematical programming method to determine the optimal location to place arsenic treatment plants, select the appropriate technology, and design large-scale water distribution networks. This work focuses on minimizing costs associated with pumping, piping, plant installation, and operation while complying with the regulations of arsenic levels in drinking water. The approach involves a nonlinear mixed-integer mathematical programming model coupled with a detailed procedure to find solutions. In the implementation of this model, the study not only explores the best strategies to reduce the arsenic found in drinking water to safer levels in affected wells, but it also works to design an efficient water network. An analysis of areas with wells that show a concentration of arsenic above permissible levels demonstrates how the proposed solutions can effectively lower arsenic levels to meet safety standards and optimize water supply systems. The findings highlight the potential of significantly improving water quality and public health through strategic infrastructure, planning, and technological application.</div></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"194 ","pages":"Article 108994"},"PeriodicalIF":3.9,"publicationDate":"2024-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143136596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computer-aided molecular design by aligning generative diffusion models: Perspectives and challenges
Pub Date: 2024-12-27 | DOI: 10.1016/j.compchemeng.2024.108989
Akshay Ajagekar, Benjamin Decardi-Nelson, Chao Shang, Fengqi You
Deep generative models such as diffusion models have generated significant interest in computer-aided molecular design (CAMD) by enabling the automated generation of novel molecular structures. This manuscript highlights the potential of diffusion models in CAMD while addressing key limitations in their practical implementation. Diffusion models trained for a specific molecular design problem can perform poorly on design tasks with different desired property requirements. To address this challenge, we provide perspectives on integrating generative diffusion models with optimization methods for CAMD. We examine how pretrained equivariant diffusion models can be effectively aligned with text-guided molecular generation through optimization in the latent space. Computational experiments targeting drug design demonstrate the framework's capability to generate valid molecular structures that satisfy multiple objectives. This work underscores the potential of combining pretrained generative models with gradient-free optimization methods, such as genetic algorithms, to enhance molecular design precision without incurring the significant computational cost of fine-tuning diffusion models. Beyond highlighting the practical utility of diffusion models in CAMD, we identify key challenges encountered in adopting these models and propose future research directions to address them, providing a comprehensive roadmap for advancing the field of computational molecular design.
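A schematic sketch of the gradient-free latent-space idea: a simple genetic algorithm evolves latent vectors and keeps those whose decoded molecules score best. Here `decode_and_score` is a hypothetical stand-in for the pretrained diffusion decoder plus property evaluator:

```python
# Schematic genetic search over a diffusion model's latent space.
# decode_and_score is a dummy stand-in for "decode latent -> molecule,
# then score the desired properties" with a pretrained model.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, POP, GENS, N_ELITE = 64, 32, 50, 8

def decode_and_score(z: np.ndarray) -> float:
    # Placeholder objective; a real system would decode z and evaluate
    # properties such as binding affinity or drug-likeness.
    return -float(np.sum((z - 0.5) ** 2))

population = rng.normal(size=(POP, LATENT_DIM))
for _ in range(GENS):
    scores = np.array([decode_and_score(z) for z in population])
    elite = population[np.argsort(scores)[-N_ELITE:]]   # best candidates
    # Uniform crossover between random elite parents, then mutation.
    parents = elite[rng.integers(N_ELITE, size=(POP, 2))]
    mask = rng.random((POP, LATENT_DIM)) < 0.5
    children = np.where(mask, parents[:, 0], parents[:, 1])
    population = children + 0.1 * rng.normal(size=children.shape)

best = max(population, key=decode_and_score)            # top latent vector
```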
{"title":"Computer-aided molecular design by aligning generative diffusion models: Perspectives and challenges","authors":"Akshay Ajagekar , Benjamin Decardi-Nelson , Chao Shang , Fengqi You","doi":"10.1016/j.compchemeng.2024.108989","DOIUrl":"10.1016/j.compchemeng.2024.108989","url":null,"abstract":"<div><div>Deep generative models like diffusion models have generated significant interest in computer-aided molecular design by enabling the automated generation of novel molecular structures. This manuscript aims to highlight the potential of diffusion models in computer-aided molecular design (CAMD) while addressing key limitations in their practical implementation. Diffusion models trained for specific molecular design problems can suffer for design tasks with alternate desired property requirements. To address this challenge, we provide perspectives on the integration of generative diffusion models with optimization methods for CAMD. We examine how pretrained equivariant diffusion models can be effectively aligned with text-guided molecular generation through optimization in the latent space. Computational experiments targeting drug design demonstrate the framework's capability of generating valid molecular structures that satisfy multiple objectives. This work underscores the potential of combining pretrained generative models with gradient-free optimization methods like genetic algorithms to enhance molecular design precision without incurring significant computational costs associated with finetuning diffusion models. Beyond highlighting the practical utility of diffusion models in CAMD, we identify key challenges encountered while adopting these models and propose future research directions to address them, providing a comprehensive roadmap for advancing the field of computational molecular design.</div></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"194 ","pages":"Article 108989"},"PeriodicalIF":3.9,"publicationDate":"2024-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143136356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparative modeling and assessment of renewable hydrogen production and utilization in remote communities
Pub Date: 2024-12-27 | DOI: 10.1016/j.compchemeng.2024.108995
Muhammed Iberia Aydin, Ibrahim Dincer
This study explores renewable energy transitions in remote communities by addressing the environmental and health impacts of fossil fuel dependency. Remote communities face unique challenges in economic, social, and cultural development because of their geographical isolation and limited access to infrastructure, resources, and services. Taking the Sandy Lake First Nation community in Ontario, Canada as a case study, a comprehensive life cycle assessment is conducted to evaluate the environmental outcomes of integrating hydrogen-based renewable systems into the community's infrastructure. Life cycle impact assessment studies are then carried out to compare the environmental impacts of different energy production methods. The results for Global Warming Potential (GWP) show 1.88 kg CO₂ eq./kWh for the diesel-only scenario, while the renewable-integrated scenarios range from 0.08 to 0.37 kg CO₂ eq./kWh. The results further show that renewable-integrated scenarios reduce GWP by up to 98.7 % compared to diesel-only systems. While renewable energy significantly lowers most environmental indicators, the manufacturing of renewable and hydrogen technologies makes some contribution to ecotoxicity. The findings emphasize the need for sustainable manufacturing, strategic policymaking, and incentives to accelerate renewable adoption in isolated settlements.
{"title":"Comparative modeling and assessment of renewable hydrogen production and utilization in remote communities","authors":"Muhammed Iberia Aydin , Ibrahim Dincer","doi":"10.1016/j.compchemeng.2024.108995","DOIUrl":"10.1016/j.compchemeng.2024.108995","url":null,"abstract":"<div><div>This study explores renewable energy transitions in remote communities by addressing the environmental and health impacts of fossil fuel dependency. Remote communities face unique challenges in terms of economic, social and cultural development because of their geographical isolation and limited access to infrastructure, resources and services. Considering Sandy Lake First Nation community in Ontario, Canada as a case study, a life cycle assessment investigation is comprehensively conducted to evaluate the environmental outcomes of implementing hydrogen-based renewable systems into community's infrastructure. The respective life cycle impact assessment studies are then carried out to compare the environmental impacts of different energy production methods. The results for Global Warming Potential (GWP) show 1.88 kg CO₂ eq./kWh for the diesel-only scenario, while the renewable-integrated scenarios result in ranges from 0.08 to 0.37 kg CO₂ eq./kWh. The results further show that renewable-integrated scenarios reduce global warming potential (GWP) by up to 98.7 %, compared to diesel-only systems. While renewable energy significantly lowers the most environmental indicators, the manufacturing of renewable and hydrogen technologies makes some contributions to ecotoxicity. The study findings emphasize the need for sustainable manufacturing, strategic policymaking, and incentives to accelerate renewable adoption in isolated settlements.</div></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"194 ","pages":"Article 108995"},"PeriodicalIF":3.9,"publicationDate":"2024-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143136357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Closed-loop identification of MIMO systems: An excitation-free approach
Pub Date: 2024-12-26 | DOI: 10.1016/j.compchemeng.2024.108990
Zhi-Qiang Zhang, Chun-Qing Huang
In general, external excitation is indispensable for the closed-loop identification of SISO and MIMO systems. In this paper, an excitation-free approach for the closed-loop identification of multi-delay MIMO systems is proposed that uses only routine operating closed-loop data. Both identifiability and consistency of the plant model estimate are achieved when the basic assumptions are met. The proposed approach provides an effective way to handle closed-loop identification of MIMO systems, a task that is non-trivial for conventional identification methods, and for subspace identification in particular, when prior knowledge of the process is lacking. The effectiveness of the proposed approach is demonstrated on a 4×4 industrial example, the Alatiqi column.
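For orientation, the textbook baseline that excitation-free methods improve upon is the direct least-squares fit on closed-loop data. The sketch below adds a small external dither to keep the regressors informative; without it, pure proportional feedback makes the regressors collinear, which is precisely the identifiability problem an excitation-free method must resolve through its structural assumptions instead. This is a generic illustration, not the authors' method:

```python
# Direct least-squares ARX fit on simulated closed-loop data.
# The small dither keeps y[k-1] and u[k-1] from being collinear;
# a truly excitation-free method must achieve identifiability
# under structural assumptions rather than added excitation.
import numpy as np

rng = np.random.default_rng(1)
N, a_true, b_true, kp = 5000, 0.8, 0.5, 0.4
y, u = np.zeros(N), np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.05 * rng.normal()
    u[k] = -kp * y[k] + 0.1 * rng.normal()   # feedback + small dither

# Regress y[k] on [y[k-1], u[k-1]] to estimate (a, b).
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated (a, b):", theta)            # close to (0.8, 0.5)
```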
{"title":"Closed-loop identification of MIMO systems: An excitation-free approach","authors":"Zhi-Qiang Zhang, Chun-Qing Huang","doi":"10.1016/j.compchemeng.2024.108990","DOIUrl":"10.1016/j.compchemeng.2024.108990","url":null,"abstract":"<div><div>In general, the external excitation is indispensable for closed-loop identification of SISO and MIMO systems. In this paper, an excitation-free approach for closed-loop identification of multi-delay MIMO systems is proposed by using the routine operating closed-loop data. Both identifiability and consistency of the plant model estimation are achieved when the basic assumptions are met. The proposed approach provides an effective way to handle closed-loop identification of MIMO systems, while it becomes of a non-trivial task for the conventional identification methods and especially subspace identification method in lack of prior knowledge on the process. The effectiveness of the proposed approach is demonstrated by a <span><math><mrow><mn>4</mn><mo>×</mo><mn>4</mn></mrow></math></span> industrial example viz. the Alatiqi column.</div></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"194 ","pages":"Article 108990"},"PeriodicalIF":3.9,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143136354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards a machine learning operations (MLOps) soft sensor for real-time predictions in industrial-scale fed-batch fermentation
Pub Date: 2024-12-26 | DOI: 10.1016/j.compchemeng.2024.108991
Brett Metcalfe, Juan Camilo Acosta-Pavas, Carlos Eduardo Robles-Rodriguez, George K. Georgakilas, Theodore Dalamagas, Cesar Arturo Aceves-Lara, Fayza Daboussi, Jasper J Koehorst, David Camilo Corrales
Real-time predictions in fermentation processes are crucial because they enable continuous monitoring and control of bioprocessing. However, online measurements are limited by the availability and feasibility of sensing technology. Soft sensors - software sensors that convert available measurements into measurements of interest (product yield, quality, etc.) - have the potential to improve efficiency and product quality. Machine learning (ML) based soft sensors have gained popularity over the years because they can incorporate variables measured in real time and exploit the intricate patterns embedded in voluminous datasets. However, an ML-based soft sensor requires more than a classical ML learner evaluated on an unseen test set. When an ML model is deployed in production, its performance can deteriorate rapidly, leading to an unanticipated decline in the quality of its outputs and predictions. Here, a proof of concept of Machine Learning Operations (MLOps) is proposed to automate the end-to-end soft sensor lifecycle in industrial-scale fed-batch fermentation, from development and deployment to maintenance and monitoring. The industrial-scale penicillin fermentation (IndPenSim) dataset, which includes 100 fermentation batches, is used to build a soft sensor based on Long Short-Term Memory (LSTM) networks for penicillin concentration prediction. The batches containing process deviations (batches 91–100) were used to assess concept drift of the LSTM soft sensor. Concept drift is detected when the soft sensor's performance falls below a set threshold based on the Population Stability Index (PSI), which automatically triggers an alert to run the retraining pipeline.
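The Population Stability Index itself has a standard form: the sum over histogram bins of (live_frac - ref_frac) * ln(live_frac / ref_frac). A minimal NumPy sketch of a PSI-based drift check of the kind described; the bin count, threshold, and data below are illustrative:

```python
# Standard Population Stability Index between a reference window and
# a live window of (for example) soft-sensor predictions. A common
# rule of thumb: PSI < 0.1 stable, 0.1-0.2 moderate shift, > 0.2 drift.
import numpy as np

def psi(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    live_frac = np.histogram(live, bins=edges)[0] / len(live)
    # Clip to avoid log(0); live values outside the reference range
    # are ignored in this sketch.
    ref_frac = np.clip(ref_frac, 1e-6, None)
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5000)     # e.g. training-time predictions
drifted = rng.normal(0.5, 1.2, 5000)       # shifted live predictions
if psi(reference, drifted) > 0.2:
    print("PSI above threshold: trigger the retraining pipeline")
```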
{"title":"Towards a machine learning operations (MLOps) soft sensor for real-time predictions in industrial-scale fed-batch fermentation","authors":"Brett Metcalfe , Juan Camilo Acosta-Pavas , Carlos Eduardo Robles-Rodriguez , George K. Georgakilas , Theodore Dalamagas , Cesar Arturo Aceves-Lara , Fayza Daboussi , Jasper J Koehorst , David Camilo Corrales","doi":"10.1016/j.compchemeng.2024.108991","DOIUrl":"10.1016/j.compchemeng.2024.108991","url":null,"abstract":"<div><div>Real-time predictions in fermentation processes are crucial because they enable continuous monitoring and control of bioprocessing. However, the availability of online measurements is limited by the availability and feasibility of sensing technology. Soft sensors - or software sensors that convert available measurements into measurements of interest (product yield, quality, etc.) - have the potential to improve efficiency and product quality. Machine learning (ML) based soft sensors have gained increased popularity over the years since they can incorporate variables that are measured in real-time, and exploit the intricate patterns embedded in such voluminous datasets. However, ML-based soft sensor requires more than just a classical ML learner with an unseen test set to evaluate the quality prediction of the model. When a ML model is deployed in production, its performance can deteriorate rapidly leading to an unanticipated decline in the quality of the output and predictions. Here a proof concept of Machine Learning Operations (MLOps) to automate the end-to-end soft sensor lifecycle in industrial scale fed-batch fermentation, from development and deployment to maintenance and monitoring is proposed. Using the industrial-scale penicillin fermentation (<em>IndPenSim)</em> dataset that includes 100 fermentation batches, to build a soft sensor based on Long Short Term Memory (LSTM) for penicillin concentration prediction. The batches containing deviations in the processes (91–100) were used to assess concept drift of the LSTM soft sensor. The evaluation of concept drift is evidenced by the soft sensor performance falling below the set threshold based on the Population Stability Index (PSI), which automatically triggers an alert to run the retraining pipeline.</div></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"194 ","pages":"Article 108991"},"PeriodicalIF":3.9,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143136593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An integrated reinforcement learning framework for simultaneous generation, design, and control of chemical process flowsheets
Pub Date: 2024-12-21 | DOI: 10.1016/j.compchemeng.2024.108988
Simone Reynoso-Donzelli, Luis A. Ricardez-Sandoval
This study introduces a Reinforcement Learning (RL) approach for synthesis, design, and control of chemical process flowsheets (CPFs). The proposed RL framework makes use of an inlet stream and a set of unit operations (UOs) available in the RL environment to build, evaluate and test CPFs. Moreover, the framework harnesses the power of surrogate models, specifically Neural Networks (NNs), to expedite the learning process of the RL agent and avoid reliance on mechanistic dynamic models embedded within the RL environment. These surrogate models approximate key process variables and descriptive closed-loop performance metrics for complex dynamic UO models. The proposed framework is evaluated through case studies, including a system where more than one type of UO is considered for simultaneous synthesis, design and control. The results show that the RL agent effectively learns to maintain the dynamic operability of the UOs under disturbances, adhere to equipment design and operational constraints, and generate viable and economically attractive CPFs.
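As a cartoon of the sequential decision problem involved, the toy below runs epsilon-greedy Q-learning over a hypothetical "append the next unit operation" action space. The environment, rewards, and target flowsheet are invented placeholders; a real environment would evaluate surrogate unit-operation models and closed-loop metrics as the abstract describes:

```python
# Toy epsilon-greedy Q-learning agent that appends unit operations to a
# flowsheet. The environment and rewards are invented: it pays off only
# for the sequence reactor -> column, then stop. A real environment
# would score economics and closed-loop performance via surrogates.
import random
from collections import defaultdict

ACTIONS = ["reactor", "column", "heat_exchanger", "stop"]
EPS, ALPHA, GAMMA = 0.1, 0.5, 0.95
Q = defaultdict(float)                       # Q[(state, action)]

def step(state, action):
    if action == "stop":                     # terminal: score the flowsheet
        return None, 10.0 if state == ("reactor", "column") else -5.0
    return state + (action,), -1.0           # each added unit has a cost

for _ in range(2000):                        # training episodes
    state = ()
    while state is not None and len(state) < 4:
        if random.random() < EPS:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        target = reward if nxt is None else (
            reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS))
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt

print(max(ACTIONS, key=lambda a: Q[((), a)]))   # learned first action
```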
{"title":"An integrated reinforcement learning framework for simultaneous generation, design, and control of chemical process flowsheets","authors":"Simone Reynoso-Donzelli, Luis A. Ricardez-Sandoval","doi":"10.1016/j.compchemeng.2024.108988","DOIUrl":"10.1016/j.compchemeng.2024.108988","url":null,"abstract":"<div><div>This study introduces a Reinforcement Learning (RL) approach for synthesis, design, and control of chemical process flowsheets (CPFs). The proposed RL framework makes use of an inlet stream and a set of unit operations (UOs) available in the RL environment to build, evaluate and test CPFs. Moreover, the framework harnesses the power of surrogate models, specifically Neural Networks (NNs), to expedite the learning process of the RL agent and avoid reliance on mechanistic dynamic models embedded within the RL environment. These surrogate models approximate key process variables and descriptive closed-loop performance metrics for complex dynamic UO models. The proposed framework is evaluated through case studies, including a system where more than one type of UO is considered for simultaneous synthesis, design and control. The results show that the RL agent effectively learns to maintain the dynamic operability of the UOs under disturbances, adhere to equipment design and operational constraints, and generate viable and economically attractive CPFs.</div></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"194 ","pages":"Article 108988"},"PeriodicalIF":3.9,"publicationDate":"2024-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143136353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comprehensive life cycle assessment study on potential power supply options for a chlor alkali production plant
Pub Date: 2024-12-19 | DOI: 10.1016/j.compchemeng.2024.108986
Sumeyya Ayca, Ibrahim Dincer
This research study presents a comparative analysis, using a life cycle methodology, of the emission rates of toxic gases from various power sources suitable for meeting the energy demand of a chlor-alkali plant producing hydrogen (H2). The Greenhouse Gases, Regulated Emissions and Energy Use in Transport (GREET) software program is employed to analyze the power sources. Emission data from eight different energy sources considered for the power generation segment of the proposed facility are analyzed comparatively. The pollutants considered include carbon dioxide (CO2), methane (CH4), nitrogen oxides (NOx), sulfur oxides (SOx), volatile organic compounds (VOC), particulate matter pollutants (PM10), fine particles (PM2.5), nitrous oxide (N2O) and volatile organic carbon (POC). According to the life cycle assessment results, among the energy sources considered, wind power used for hydrogen production (Pathway 1) appears to be the most environmentally benign option with the lowest emission rates, while oil-fired power generation (Pathway 8) is the most harmful option with the highest emission rates. The emission values obtained for Pathway 1, where the electricity demand of a chlor-alkali production facility producing 1 kg of hydrogen is met, are as follows: CO2 1.65 kg, CH4 0.0032 kg, NOx 0.005 kg, SOx 0.0031 kg, VOC 0.00039 kg, PM10 0.00045 kg, PM2.5 0.0047 kg, N2O 0.000029 kg and POC 0.00014 kg. When hydrogen transportation by tube trailers is excluded from the emission totals, the highest emission for Pathway 1 as an energy source in the chlor-alkali facility producing 1 kg of hydrogen is CO2 at 1.31 kg, and the lowest is N2O at 0.027 kg.
{"title":"A comprehensive life cycle assessment study on potential power supply options for a chlor alkali production plant","authors":"Sumeyya Ayca , Ibrahim Dincer","doi":"10.1016/j.compchemeng.2024.108986","DOIUrl":"10.1016/j.compchemeng.2024.108986","url":null,"abstract":"<div><div>This research study aims to present a comparative analysis of the emission rates of toxic gases from various power sources suitable to meet the energy demand of a chlor-alkali plant producing hydrogen (H<sub>2</sub>), using a life cycle methodology. The emissions are then assessed using this methodology. The Greenhouse Gases, Regulated Emissions and Energy Use in Transport (GREET) software program is employed to analyze the power sources. The emission data from eight different energy sources that are part of the power generation segment at the proposed facility are analyzed comparatively. The subject matter data include carbon dioxide (CO<sub>2</sub>), methane (CH<sub>4</sub>), nitrogen oxides (NO<sub>x</sub>), sulfur oxides (SO<sub>x</sub>), volatile organic compounds (VOC), particulate matter pollutants (PM10), fine particles (PM2.5), nitrous oxide (N<sub>2</sub>O) and volatile organic carbon (POC). According to the life cycle assessment results, among the energy sources considered, wind power generated for hydrogen production using Pathway 1 appears to be the most environmentally benign option with the lowest emission rates while the oil-fired power generation option through Pathway 8 is the most harmful option with the highest emission rates. The emission values obtained in Pathway 1, where the electricity demand is met in a chlor-alkali production facility where 1 kg of hydrogen is produced, are as follows: CO<sub>2</sub> 1.65 kg, CH<sub>4</sub> 0.0032 kg, NO<sub>x</sub> 0.005 kg, SO<sub>x</sub> 0.0031 kg, VOC 0.00039 kg, PM10 0.00045 kg, PM2.5 0.0047 kg, N<sub>2</sub>O 0.000029 kg and POC 0.00014 kg. Excluding hydrogen transportation from the tube-trailers emission value, the highest emission from Pathway 1 as an energy source in the chlor-alkali production facility producing 1 kg of hydrogen is CO<sub>2</sub> gas with 1.31 kg, and the lowest emission is N<sub>2</sub>O gas with 0.027 kg.</div></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"194 ","pages":"Article 108986"},"PeriodicalIF":3.9,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143136355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian optimization of gray-box process models using a modified upper confidence bound acquisition function
Pub Date: 2024-12-19 | DOI: 10.1016/j.compchemeng.2024.108976
Joschka Winz, Florian Fromme, Sebastian Engell
Optimizing complex process models can be challenging due to the computation time required to solve the model equations. A popular technique is to replace difficult-to-evaluate submodels with surrogate models, creating a gray-box process model. Bayesian optimization (BO) is effective for global optimization with few function evaluations. However, existing extensions of BO to gray-box models rely on Monte Carlo (MC) sampling, which requires preselecting the number of MC samples and adds complexity. In this paper, we present a novel BO approach for gray-box process models that uses sensitivities instead of MC sampling and can exploit decoupled problems, in which multiple submodels can be evaluated independently. The new approach is successfully applied to six benchmark test problems and to a realistic chemical process design problem. The proposed methodology is shown to be more efficient than other methods, and exploiting the decoupled case further reduces the number of required submodel evaluations.
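For reference, the standard UCB acquisition that the paper modifies has the form α(x) = μ(x) + κσ(x). A minimal scikit-learn sketch of one BO iteration with plain UCB follows; the modified acquisition and the sensitivity-based gray-box handling are not reproduced here:

```python
# One Bayesian-optimization iteration with the plain GP-UCB acquisition
# alpha(x) = mu(x) + kappa * sigma(x), maximized over a candidate grid.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):                                    # expensive black-box stand-in
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

X = np.array([[-0.9], [0.3], [1.1]])         # points evaluated so far
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
gp.fit(X, y)

grid = np.linspace(-2.0, 2.0, 400).reshape(-1, 1)
mu, sigma = gp.predict(grid, return_std=True)
kappa = 2.0                                  # exploration weight
x_next = grid[np.argmax(mu + kappa * sigma)] # next point to evaluate
print("next evaluation at x =", float(x_next[0]))
```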
{"title":"Bayesian optimization of gray-box process models using a modified upper confidence bound acquisition function","authors":"Joschka Winz, Florian Fromme, Sebastian Engell","doi":"10.1016/j.compchemeng.2024.108976","DOIUrl":"10.1016/j.compchemeng.2024.108976","url":null,"abstract":"<div><div>Optimizing complex process models can be challenging due to the computation time required to solve the model equations. A popular technique is to replace difficult-to-evaluate submodels with surrogate models, creating a gray-box process model. Bayesian optimization (BO) is effective for global optimization with minimal function evaluations. However, existing extensions of BO to gray-box models rely on Monte Carlo (MC) sampling, which requires preselecting the number of MC samples, adding complexity. In this paper, we present a novel BO approach for gray-box process models that uses sensitivities instead of MC and can be used to exploit decoupled problems, where multiple submodels can be evaluated independently. The new approach is successfully applied to six benchmark test problems and to a realistic chemical process design problem. It is shown that the proposed methodology is more efficient than other methods and that exploiting the decoupled case additionally reduces the number of required submodel evaluations.</div></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"194 ","pages":"Article 108976"},"PeriodicalIF":3.9,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143136627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparative framework of representative weeks selection methods for the optimization of power systems
Pub Date: 2024-12-17 | DOI: 10.1016/j.compchemeng.2024.108985
Alma Yunuen Raya-Tapia, Francisco Javier López-Flores, Javier Tovar-Facio, José María Ponce-Ortega
To provide the reliability and flexibility needed to supply future energy demand, power grid planning models comprise mathematical formulations that represent investments in the installation and operation of generation and storage systems so as to reduce costs and environmental impacts. However, these models can become computationally intractable when many time periods are considered. Hence, in this paper, three methods for obtaining representative weeks are compared in terms of their accuracy in representing the net load duration curve (NLDC) of the five regions that compose the Mexican peninsular electric system and in the objective-function domain of a proposed model. The selection methods used were k-means with the Euclidean metric, k-means with the dynamic time warping (DTW) metric, and a combinatorial method. The combinatorial method achieved a root mean square error (RMSE) of 2.80 in representing the NLDC, followed by k-means with the DTW metric at 3.21 and k-means with the Euclidean metric at 5.49. K-means with the DTW metric requires about 17 and 70 times more computational time than the combinatorial method and k-means with the Euclidean metric, respectively, because no restrictions were placed on the amount of warping allowed. In terms of the objective function, the combinatorial method yielded the highest total system cost at $4.4274 × 10¹⁰, while costs were 0.1 % and 0.2 % lower with k-means with the DTW and Euclidean metrics, respectively. These lower costs stem from underestimation of the system cost, as those methods do not adequately reflect operational situations and generate less expensive scenarios than is actually the case.
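A sketch of the Euclidean k-means variant on synthetic data: hourly net load is reshaped into week-long vectors, clustered, and the real week nearest each centroid is kept as representative. The DTW variant would substitute a time-series clustering library; all data below are synthetic:

```python
# Select representative weeks from an hourly net-load series using
# Euclidean k-means: cluster week-long vectors, then keep the actual
# week nearest each centroid. All data below are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
HOURS_PER_WEEK, N_WEEKS, K = 168, 52, 4
t = np.arange(HOURS_PER_WEEK * N_WEEKS)
net_load = (50 + 10 * np.sin(2 * np.pi * t / 24)        # daily cycle
            + 5 * np.sin(2 * np.pi * t / (24 * 365))    # seasonal drift
            + rng.normal(0, 2, t.size))

weeks = net_load.reshape(N_WEEKS, HOURS_PER_WEEK)       # one row per week
km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(weeks)

# Representative week = real week closest to each centroid; cluster
# sizes give the weights used when scaling results to a full year.
reps = []
for c in range(K):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(weeks[members] - km.cluster_centers_[c], axis=1)
    reps.append(int(members[np.argmin(dists)]))
print("representative week indices:", reps,
      "weights:", np.bincount(km.labels_, minlength=K))
```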
{"title":"Comparative framework of representative weeks selection methods for the optimization of power systems","authors":"Alma Yunuen Raya-Tapia , Francisco Javier López-Flores , Javier Tovar-Facio , José María Ponce-Ortega","doi":"10.1016/j.compchemeng.2024.108985","DOIUrl":"10.1016/j.compchemeng.2024.108985","url":null,"abstract":"<div><div>Considering the reliability and flexibility to supply future energy demand, power grid planning models are composed of mathematical formulations that represent investments in the installation and operation of generation and storage systems to reduce costs and environmental impacts. However, these can be computationally intractable to solve for many periods. Hence, in this paper, three methods are compared to obtain representative weeks in terms of their accuracy in representing the net load duration curve (NLDC) of the 5 regions that compose the Mexican peninsular electric system and in the objective function domain of a proposed model. The selection methods used for representative weeks were k-means with Euclidean metric, k-means with dynamic time warping (DTW) metric and a combinatorial method. It was observed that the combinatorial method obtained a root mean square error (RMSE) in the representation of 2.80, followed by k-means with DTW metric with 3.21 and finally k-means with Euclidean metric with 5.49. K-means with DTW metric requires about 17 and 70 times more computational time than the combinatorial method and k-means with Euclidean metric, because it had no restrictions on the amount of deformation allowed. In terms of the objective function, the combinatorial method had higher total system costs with $ 4.4274 × 10<sup>10</sup>, while they were 0.1 % and 0.2 % lower in k-means with DTW and k-means with Euclidean metric, respectively. These lower costs are due to underestimation of the system cost, as the methods do not adequately reflect operational situations and generate less expensive scenarios than are actually the case.</div></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"194 ","pages":"Article 108985"},"PeriodicalIF":3.9,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143136626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}