The trust region filter strategy: Survey of a rigorous approach for optimization with surrogate models
Pub Date: 2024-11-14 | DOI: 10.1016/j.dche.2024.100197
Lorenz T. Biegler
Recent developments in efficient, large-scale nonlinear optimization strategies have had a significant impact on the design and operation of engineering systems with equation-oriented (EO) models. On the other hand, rigorous first-principle procedural (i.e., black-box 'truth') models may be difficult to incorporate directly within this optimization framework. Instead, black-box models are often replaced with lower-fidelity surrogate models that may compromise the optimal solution. To overcome these challenges, Trust Region Filter (TRF) methods have been developed, which combine surrogate model optimization with intermittent sampling of truth models. The TRF approach combines efficient solution strategies with minimal recourse to truth models and leads to guaranteed convergence to the truth model optimum. This survey paper provides a perspective on the conceptual development and evolution of the TRF method, along with a review of applications that demonstrate its effectiveness. In particular, three case studies are presented on flowsheet optimization with embedded CFD models for advanced power plants and CO2 capture processes, as well as synthesis of heat exchanger networks with detailed finite-element equipment models.
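As a concrete, hedged illustration of the TRF idea (not the paper's algorithm), the Python sketch below alternates cheap surrogate steps with truth-model samples inside an adaptive trust region. The toy `truth_f`, the linear finite-difference surrogate, and the simple ratio-based acceptance test (standing in for the filter) are all assumptions for illustration.

```python
import numpy as np

def truth_f(x):
    # Hypothetical expensive black-box "truth" model (stand-in for, e.g., a CFD unit).
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 0.5) ** 2

def grad_fd(f, x, h=1e-6):
    # Finite-difference gradient used to build a linear surrogate;
    # each evaluation counts as a sample of the truth model.
    fx = f(x)
    g = np.array([(f(x + h * e) - fx) / h for e in np.eye(x.size)])
    return g, fx

def trf_minimize(x0, radius=1.0, max_iter=200, gtol=1e-6):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, fx = grad_fd(truth_f, x)
        if np.linalg.norm(g) < gtol:
            break
        # Step minimizing the linear surrogate m(s) = fx + g.s within ||s|| <= radius.
        step = -radius * g / np.linalg.norm(g)
        predicted = -(g @ step)               # reduction promised by the surrogate
        actual = fx - truth_f(x + step)       # reduction delivered by the truth model
        rho = actual / predicted
        if rho > 0.1:                         # accept (simple stand-in for the filter test)
            x = x + step
        radius *= 2.0 if rho > 0.75 else 0.5  # adapt the trust region
    return x

print(trf_minimize([5.0, 5.0]))  # approaches the truth optimum [1.0, -0.5]
```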
{"title":"The trust region filter strategy: Survey of a rigorous approach for optimization with surrogate models","authors":"Lorenz T. Biegler","doi":"10.1016/j.dche.2024.100197","DOIUrl":"10.1016/j.dche.2024.100197","url":null,"abstract":"<div><div>Recent developments in efficient, large-scale nonlinear optimization strategies have had significants impact on the design and operation of engineering systems with equation-oriented (EO) models. On the other hand, rigorous first-principle procedural (i.e., black-box ’truth’) models may be difficult to incorporate directly within this optimization framework. Instead, black-box models are often substituted by lower fidelity surrogate models that may compromise the optimal solution. To overcome these challenges, Trust Region Filter (TRF) methods have been developed, which combine surrogate models optimization with intermittent sampling of truth models. The TRF approach combines efficient solution strategies with minimal recourse to truth models, and leads to guaranteed convergence to the truth model optimum. This survey paper provides a perspective on the conceptual development and evolution of the TRF method along with a review of applications that demonstrate the effectiveness of the TRF approach. In particular, three cases studies are presented on flowsheet optimization with embedded CFD models for advanced power plants and CO2 capture processes, as well as synthesis of heat exchanger networks with detailed finite-element equipment models.</div></div>","PeriodicalId":72815,"journal":{"name":"Digital Chemical Engineering","volume":"13 ","pages":"Article 100197"},"PeriodicalIF":3.0,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142656114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-agent distributed control of integrated process networks using an adaptive community detection approach
Pub Date: 2024-10-24 | DOI: 10.1016/j.dche.2024.100196
AmirMohammad Ebrahimi, Davood B. Pourkargar
This paper focuses on developing an adaptive system decomposition approach for multi-agent distributed model predictive control (DMPC) of integrated process networks. The proposed system decomposition employs a refined spectral community detection method to construct an optimal distributed control framework based on the weighted graph representation of the state space process model. The resulting distributed architecture assigns controlled outputs and manipulated inputs to controller agents and delineates their interactions. The decomposition evolves as the process network undergoes various operating conditions, enabling adjustments in the distributed architecture and DMPC design. This adaptive architecture enhances the closed-loop performance and robustness of DMPC systems. The effectiveness of the multi-agent distributed control approach is investigated for a benchmark benzene alkylation process under two distinct operating conditions characterized by medium and low recycle ratios. Simulation results demonstrate that adaptive decompositions derived through spectral community detection, utilizing weighted graph representations, outperform the commonly employed unweighted hierarchical community detection-based system decompositions in terms of closed-loop performance and computational efficiency.
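To make the decomposition step concrete, the sketch below applies generic normalized-Laplacian spectral clustering to a weighted variable-interaction graph and returns an agent label per variable. The weight matrix `W` and the two-agent split are invented for illustration; the paper's refined community detection method is more elaborate.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_decompose(W, n_agents=2):
    """Split a weighted variable-interaction graph (adjacency W) into
    controller agents via normalized spectral clustering."""
    d = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L_sym = np.eye(len(W)) - d_inv_sqrt @ W @ d_inv_sqrt   # normalized Laplacian
    _, vecs = eigh(L_sym)                                  # eigenvalues ascending
    U = vecs[:, :n_agents].copy()                          # first k eigenvectors
    U /= np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=n_agents, n_init=10, random_state=0).fit_predict(U)

# Toy interaction weights among 6 inputs/outputs of a process network.
W = np.array([[0, 5, 4, 0, 0, 1],
              [5, 0, 6, 1, 0, 0],
              [4, 6, 0, 0, 1, 0],
              [0, 1, 0, 0, 7, 5],
              [0, 0, 1, 7, 0, 6],
              [1, 0, 0, 5, 6, 0]], dtype=float)
print(spectral_decompose(W))   # e.g. [0 0 0 1 1 1]: two DMPC agent groups
```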
{"title":"Multi-agent distributed control of integrated process networks using an adaptive community detection approach","authors":"AmirMohammad Ebrahimi, Davood B. Pourkargar","doi":"10.1016/j.dche.2024.100196","DOIUrl":"10.1016/j.dche.2024.100196","url":null,"abstract":"<div><div>This paper focuses on developing an adaptive system decomposition approach for multi-agent distributed model predictive control (DMPC) of integrated process networks. The proposed system decomposition employs a refined spectral community detection method to construct an optimal distributed control framework based on the weighted graph representation of the state space process model. The resulting distributed architecture assigns controlled outputs and manipulated inputs to controller agents and delineates their interactions. The decomposition evolves as the process network undergoes various operating conditions, enabling adjustments in the distributed architecture and DMPC design. This adaptive architecture enhances the closed-loop performance and robustness of DMPC systems. The effectiveness of the multi-agent distributed control approach is investigated for a benchmark benzene alkylation process under two distinct operating conditions characterized by medium and low recycle ratios. Simulation results demonstrate that adaptive decompositions derived through spectral community detection, utilizing weighted graph representations, outperform the commonly employed unweighted hierarchical community detection-based system decompositions in terms of closed-loop performance and computational efficiency.</div></div>","PeriodicalId":72815,"journal":{"name":"Digital Chemical Engineering","volume":"13 ","pages":"Article 100196"},"PeriodicalIF":3.0,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142561376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Industrial data-driven machine learning soft sensing for optimal operation of etching tools
Pub Date: 2024-10-22 | DOI: 10.1016/j.dche.2024.100195
Feiyang Ou , Henrik Wang , Chao Zhang , Matthew Tom , Sthitie Bom , James F. Davis , Panagiotis D. Christofides
<div><div>Smart Manufacturing, or Industry 4.0, has gained significant attention in recent decades with the integration of Internet of Things (IoT) and Information Technologies (IT). As modern production methods continue to increase in complexity, there is a greater need to consider what variables can be physically measured. This advancement necessitates the use of physical sensors to comprehensively and directly gather measurable data on industrial processes; specifically, these sensors gather data that can be recontextualized into new process information. For example, artificial intelligence (AI) machine learning-based soft sensors can increase operational productivity and machine tool performance while still ensuring that critical product specifications are met. One industry that has a high volume of labor-intensive, time-consuming, and expensive processes is the semiconductor industry. AI machine learning methods can meet these challenges by taking in operational data and extracting process-specific information needed to meet the high product specifications of the industry. However, a key challenge is the availability of high quality data that covers the full operating range, including the day-to-day variance. This paper examines the applicability of soft sensing methods to the operational data of five industrial etching machines. Data is collected from readily accessible and cost-effective physical sensors installed on the tools that manage and control the operating conditions of the tool. The operational data are then used in an intelligent data aggregation approach that increases the scope and robustness for soft sensors in general by creating larger training datasets comprised of high value data with greater operational ranges and process variation. The generalized soft sensor can then be fine-tuned and validated for a particular machine. In this paper, we test the effects of data aggregation for high performing Feedforward Neural Network (FNN) models that are constructed in two ways: first as a classifier to estimate product PASS/FAIL outcomes and second as a regressor to quantitatively estimate oxide thickness. For PASS/FAIL classification, a data aggregation method is developed to enhance model predictive performance with larger training datasets. A statistical analysis method involving point-biserial correlation and the Mean Absolute Error (MAE) difference score is introduced to select the optimal candidate datasets for aggregation, further improving the effectiveness of data aggregation. For large datasets with high quality data that enable model training for more complex tasks, regression models that predict the oxide thickness of the product are also developed. Two types of models with different loss functions are tested to compare the effects of the Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE) loss functions on model performance. Both the classification and regression models can be applied in industrial setti
{"title":"Industrial data-driven machine learning soft sensing for optimal operation of etching tools","authors":"Feiyang Ou , Henrik Wang , Chao Zhang , Matthew Tom , Sthitie Bom , James F. Davis , Panagiotis D. Christofides","doi":"10.1016/j.dche.2024.100195","DOIUrl":"10.1016/j.dche.2024.100195","url":null,"abstract":"<div><div>Smart Manufacturing, or Industry 4.0, has gained significant attention in recent decades with the integration of Internet of Things (IoT) and Information Technologies (IT). As modern production methods continue to increase in complexity, there is a greater need to consider what variables can be physically measured. This advancement necessitates the use of physical sensors to comprehensively and directly gather measurable data on industrial processes; specifically, these sensors gather data that can be recontextualized into new process information. For example, artificial intelligence (AI) machine learning-based soft sensors can increase operational productivity and machine tool performance while still ensuring that critical product specifications are met. One industry that has a high volume of labor-intensive, time-consuming, and expensive processes is the semiconductor industry. AI machine learning methods can meet these challenges by taking in operational data and extracting process-specific information needed to meet the high product specifications of the industry. However, a key challenge is the availability of high quality data that covers the full operating range, including the day-to-day variance. This paper examines the applicability of soft sensing methods to the operational data of five industrial etching machines. Data is collected from readily accessible and cost-effective physical sensors installed on the tools that manage and control the operating conditions of the tool. The operational data are then used in an intelligent data aggregation approach that increases the scope and robustness for soft sensors in general by creating larger training datasets comprised of high value data with greater operational ranges and process variation. The generalized soft sensor can then be fine-tuned and validated for a particular machine. In this paper, we test the effects of data aggregation for high performing Feedforward Neural Network (FNN) models that are constructed in two ways: first as a classifier to estimate product PASS/FAIL outcomes and second as a regressor to quantitatively estimate oxide thickness. For PASS/FAIL classification, a data aggregation method is developed to enhance model predictive performance with larger training datasets. A statistical analysis method involving point-biserial correlation and the Mean Absolute Error (MAE) difference score is introduced to select the optimal candidate datasets for aggregation, further improving the effectiveness of data aggregation. For large datasets with high quality data that enable model training for more complex tasks, regression models that predict the oxide thickness of the product are also developed. Two types of models with different loss functions are tested to compare the effects of the Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE) loss functions on model performance. 
Both the classification and regression models can be applied in industrial setti","PeriodicalId":72815,"journal":{"name":"Digital Chemical Engineering","volume":"13 ","pages":"Article 100195"},"PeriodicalIF":3.0,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142531729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Process integration technique for targeting carbon credit price subsidy
Pub Date: 2024-10-20 | DOI: 10.1016/j.dche.2024.100192
Maria Victoria Migo-Sumagang , Kathleen B. Aviso , Raymond R. Tan , Xiaoping Jia , Zhiwei Li , Dominic C.Y. Foo
Mitigating climate change requires a portfolio of strategies, and the use of carbon dioxide removal techniques or negative emissions technologies (NETs) will be necessary to achieve this goal. However, the high implementation costs of advanced NETs lead to expensive carbon credits, hindering their broad acceptance and use. One potential solution involves governmental support through subsidies, aiming to boost the availability of NET-derived carbon credits. This research uses a graphical technique based on an extension of pinch analysis to identify the ideal subsidy level for carbon dioxide removal, taking into account factors such as carbon pricing, supply, and demand. The proposed approach modifies the limiting composite curve (LCC) methodology to accurately determine the optimal subsidy and establish the baseline amount of subsidized carbon dioxide removal needed. The approach enables the convenient and efficient construction of the LCC using a composite table algorithm. To illustrate the proposed methodology, two case studies involving different NETs and demand sectors are investigated. The results show the most advantageous subsidy levels for these technologies, providing valuable insights to guide policymakers and investors in their decarbonization efforts. This work contributes to the development of effective governance and investment strategies by optimizing NET subsidy allocation. Such optimization is crucial for facilitating the widespread implementation of these technologies, in line with global efforts to mitigate climate change.
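A schematic version of the cascade behind a composite-table calculation, with invented numbers: rank NET sources by credit cost, fill demand in merit order, and let the subsidy cover the gap between each source's cost and the market carbon price. This is a simplification for intuition, not the paper's LCC construction.

```python
# Each NET source: (credit cost, $/t CO2; removal capacity, Mt/y), sorted by cost.
sources = [(80.0, 2.0), (150.0, 1.5), (300.0, 1.0)]
demand_price, demand_qty = 120.0, 3.5      # market carbon price, demand in Mt/y

supplied, subsidy_budget = 0.0, 0.0
for cost, cap in sources:
    take = min(cap, demand_qty - supplied)  # fill remaining demand in merit order
    if take <= 0:
        break
    # Subsidy closes the gap between the NET credit cost and the market price.
    subsidy_budget += max(cost - demand_price, 0.0) * take
    supplied += take

print(f"removal supplied: {supplied} Mt/y, subsidy needed: ${subsidy_budget} M/y")
# -> removal supplied: 3.5 Mt/y, subsidy needed: $45.0 M/y
```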
{"title":"Process integration technique for targeting carbon credit price subsidy","authors":"Maria Victoria Migo-Sumagang , Kathleen B. Aviso , Raymond R. Tan , Xiaoping Jia , Zhiwei Li , Dominic C.Y. Foo","doi":"10.1016/j.dche.2024.100192","DOIUrl":"10.1016/j.dche.2024.100192","url":null,"abstract":"<div><div>Mitigating climate change requires a portfolio of strategies and the use of <em>carbon dioxide removal</em> techniques or <em>negative emissions technologies</em> (NETs) will be necessary to achieve this goal. However, the high implementation costs of advanced NETs lead to expensive carbon credits, hindering their broad acceptance and use. One potential solution involves governmental support through subsidies, aiming to boost the availability of NET-derived carbon credits. This research uses a graphical technique based on an extension of <em>pinch analysis</em> to identify the ideal subsidy level for carbon dioxide removal, taking into account factors such as carbon pricing, supply, and demand. The proposed approach modifies the <em>limiting composite curve</em> (LCC) methodology to accurately determine the optimal subsidy and establish the baseline amount of subsidized carbon dioxide removal needed. The approach enables the convenient and efficient construction of the LCC using a composite table algorithm. To illustrate the proposed methodology, two case studies composed of different NETs and demand sectors are investigated. The results show the most advantageous subsidy levels for these technologies, providing valuable insights to guide policymakers and investors in their decarbonization efforts. This work contributes to the development of effective governance and investment strategies by optimizing NET subsidy allocation. Such optimization is crucial for facilitating the widespread implementation of these technologies, which are in-line with the global efforts to mitigate climate change.</div></div>","PeriodicalId":72815,"journal":{"name":"Digital Chemical Engineering","volume":"13 ","pages":"Article 100192"},"PeriodicalIF":3.0,"publicationDate":"2024-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142656115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust simulation and technical evaluation of large-scale gas oil hydrocracking process via extended water-energy-product (E-WEP) analysis
Pub Date: 2024-10-18 | DOI: 10.1016/j.dche.2024.100193
Sofía García-Maza, Ángel Darío González-Delgado
Implementing techniques that improve the quality of refining products, such as gas oil hydrocracking, requires rigorous analysis of the system's operating conditions, mainly because at the plant level it is difficult to modify processes without considering the possible economic, environmental, and social impacts. Specialized computational tools are therefore needed to predict the behavior of such processes and optimize their stages. This work presents the modeling, simulation, and extended Water-Energy-Product (E-WEP) technical evaluation of the gas oil hydrocracking process on an industrial scale, considering the general conditions of the system and the extended development of the material and energy balances, using the Aspen HYSYS® simulator. The results showed that a feed of 487,545 lb/h of gas oil with 145,708 lb/h of hydrogen achieved a Production Yield of 95.77 %. Finally, 12 technical indicators related to raw materials, products, water, and energy were calculated; efficiency was highest for the Total Cost of Energy (TCE) indicator at 98.96 % and lowest for the Wastewater Production Ratio (WPR) at 22.39 %, the latter showing that the process supports mass integration of water effluents.
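A quick sanity check on the reported figures, assuming Production Yield is defined as product mass flow over the gas oil feed (our assumption, not a definition stated in the abstract):

```python
# Back-of-envelope check on the reported Production Yield.
feed_gasoil = 487_545.0          # lb/h of gas oil fed
production_yield = 0.9577        # reported 95.77 %
product_flow = production_yield * feed_gasoil
print(f"implied product flow: {product_flow:,.0f} lb/h")   # about 466,922 lb/h
```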
{"title":"Robust simulation and technical evaluation of large-scale gas oil hydrocracking process via extended water-energy-product (E-WEP) analysis","authors":"Sofía García-Maza, Ángel Darío González-Delgado","doi":"10.1016/j.dche.2024.100193","DOIUrl":"10.1016/j.dche.2024.100193","url":null,"abstract":"<div><div>Currently, the implementation of techniques to improve the quality of refining products such as hydrocracking of gas oil requires a rigorous analysis of the operating conditions of the system, mainly because at the plant operation level it is difficult to make relevant modifications in the processes without considering the possible economic, environmental, and social impacts that may be generated. For this reason, the need has arisen to use specialized computational tools that allow predicting the behavior of various processes to optimize their stages. This work presents the modeling, simulation, and extended Water-Energy-Product (E-WEP) technical evaluation of the gas oil hydrocracking process on an industrial scale considering the general conditions of the system and the extended development of the material and energy balance, using the Aspen HYSYS® simulator. The results showed that for a load capacity of 487,545 lb/h of gas oil with 145,708 lb/h of hydrogen a Production Yield of 95.77 % was obtained. Finally, 12 technical indicators related to raw materials, products, water, and energy were calculated, where the efficiency of these parameters was determined, reaching the maximum efficiency in the Total Cost of Energy (TCE) indicator with a value of 98.96 %, and the minimum in Wastewater Production Ratio (WPR) with a value of 22.39 %, the latter shows that the process supports mass integration of water effluents.</div></div>","PeriodicalId":72815,"journal":{"name":"Digital Chemical Engineering","volume":"13 ","pages":"Article 100193"},"PeriodicalIF":3.0,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142531727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A risk-based model for human-artificial intelligence conflict resolution in process systems
Pub Date: 2024-10-12 | DOI: 10.1016/j.dche.2024.100194
He Wen , Faisal Khan
The conflicts stemming from discrepancies between human and artificial intelligence (AI) in observation, interpretation, and action have gained attention. Recent publications highlight the seriousness of these conflicts and propose models to identify and assess conflict risk, but no work has systematically studied how to resolve human-AI conflicts. This paper presents a novel approach to model resolution strategies for human-AI conflicts. The approach reinterprets conventional human conflict-resolution mechanisms within AI. The study proposes a unique mathematical model to quantify conflict risks and delineate effective resolution strategies that minimize conflict risk. The proposed approach and model are applied to control a two-phase separator system, a major component of a processing facility. The approach promotes the development of robust AI systems with enhanced real-time responses to human inputs, and it provides a platform to foster human-AI collaborative engagement and a mechanism for intelligence augmentation.
{"title":"A risk-based model for human-artificial intelligence conflict resolution in process systems","authors":"He Wen , Faisal Khan","doi":"10.1016/j.dche.2024.100194","DOIUrl":"10.1016/j.dche.2024.100194","url":null,"abstract":"<div><div>The conflicts stemming from discrepancies between human and artificial intelligence (AI) in observation, interpretation, and action have gained attention. Recent publications highlight the seriousness of the concern stemming from conflict and models to identify and assess the conflict risk. No work has been reported on systematically studying how to resolve human and artificial intelligence conflicts. This paper presents a novel approach to model the resolution strategies of human-AI conflicts. This approach reinterprets the conventional human conflict resolution mechanisms within AI. The study proposes a unique mathematical model to quantify conflict risks and delineate effective resolution strategies to minimize conflict risk. The proposed approach and mode are applied to control a two-phase separator system, a major component of a processing facility. The proposed approach promotes the development of robust AI systems with enhanced real-time responses to human inputs. It provides a platform to foster human-AI collaborative engagement and a mechanism of intelligence augmentation.</div></div>","PeriodicalId":72815,"journal":{"name":"Digital Chemical Engineering","volume":"13 ","pages":"Article 100194"},"PeriodicalIF":3.0,"publicationDate":"2024-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142445263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
First-principle modeling of parallel-flow regenerative kilns and their optimization with genetic algorithm and gradient-based method
Pub Date: 2024-10-09 | DOI: 10.1016/j.dche.2024.100190
Michael Kreitmeir, Bruno Villela Pedras Lago, Ladislaus Schoenfeld, Sebastian Rehfeldt, Harald Klein
We present a one-dimensional first-principle model for parallel-flow regenerative kilns that accounts for the most important effects, including the kinetics and thermal effects of limestone decomposition as well as the heat transfer between the gaseous and solid phases. The model consists of two coupled equation systems for the upper and lower parts of the kiln. The model results are validated qualitatively and are used to train an artificial neural network that predicts the conversion and the temperature in the crossover channel. The artificial neural network performs very well, with root mean squared errors two to three orders of magnitude lower than the range covered by the data. Finally, we use a genetic algorithm to optimize the feed mass flows such that conversion and fuel efficiency are improved in a Pareto-optimal manner. The results are compared with those of a gradient-based optimization method, which shows the usefulness and validity of the genetic algorithm approach.
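The Pareto step can be illustrated without a full genetic algorithm: sample candidate feed mass flows, evaluate two toy objectives standing in for conversion and fuel efficiency, and keep the non-dominated points. The dominance filter over random samples below deliberately replaces the GA for brevity, and the objective functions are invented (the real model would be the kiln's ANN surrogate).

```python
import numpy as np

def objectives(flows):
    # Toy stand-ins for (conversion, fuel efficiency) vs. the two feed flows.
    fuel, stone = flows[:, 0], flows[:, 1]
    conversion = 1.0 - np.exp(-2.0 * fuel / stone)   # more fuel -> higher conversion
    efficiency = stone / (fuel + 0.1)                # less fuel per unit of product
    return np.column_stack([conversion, efficiency])

def pareto_mask(F):
    # Keep points not dominated by any other (both objectives maximized).
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominated = np.all(F >= F[i], axis=1) & np.any(F > F[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

rng = np.random.default_rng(1)
flows = rng.uniform([0.5, 5.0], [3.0, 15.0], size=(500, 2))  # candidate feeds
front = flows[pareto_mask(objectives(flows))]
print(f"{len(front)} Pareto-optimal feed settings out of 500 samples")
```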
{"title":"First-principle modeling of parallel-flow regenerative kilns and their optimization with genetic algorithm and gradient-based method","authors":"Michael Kreitmeir, Bruno Villela Pedras Lago, Ladislaus Schoenfeld, Sebastian Rehfeldt, Harald Klein","doi":"10.1016/j.dche.2024.100190","DOIUrl":"10.1016/j.dche.2024.100190","url":null,"abstract":"<div><div>We present a one-dimensional first-principle model for parallel-flow regenerative kilns that accounts for the most important effects. These include the kinetics and thermal effects of the limestone decomposition as well as the heat transfer between the gaseous and solid phases. The model consists of two coupled equation systems for the upper and lower part of the kiln. The results of the model are validated qualitatively and are used to train an artificial neural network that predicts the conversion and the temperature in the crossover channel. The artificial neural network performs very well with values of the root mean squared error that are two to three orders of magnitudes lower than the range covered within the data. Finally, we use a genetic algorithm to optimize the feed mass flows such that the conversion and the fuel efficiency are improved in a Pareto-optimal manner. The results are compared to those of a gradient-based optimization method, which shows the usefulness and validity of the approach with the genetic algorithm.</div></div>","PeriodicalId":72815,"journal":{"name":"Digital Chemical Engineering","volume":"13 ","pages":"Article 100190"},"PeriodicalIF":3.0,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142422769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic feed scheduling for optimised anaerobic digestion: An optimisation approach for better decision-making to enhance revenue and environmental benefits
Pub Date: 2024-10-09 | DOI: 10.1016/j.dche.2024.100191
Meshkat Dolat , Rohit Murali , Mohammadamin Zarei , Ruosi Zhang , Tararag Pincam , Yong-Qiang Liu , Jhuma Sadhukhan , Angela Bywater , Michael Short
Anaerobic digestion (AD) offers a sustainable solution for clean energy production, with the potential for significant revenue gains through better decision-making. However, the complexity and limited flexibility of AD systems pose challenges for developing reliable optimisation methods. Changing feeding strategies provides opportunities for efficient feedstock utilisation and optimal gas production, especially in volatile gas markets.
To provide better decision-making tools in AD for energy production, we propose an integrated site model for the dynamic behaviour of the AD process in a biomethane-to-grid system and optimise production based on predicted gas prices. The model includes methods for optimal feed co-digestion strategies and integrates these results into a scheduling model to identify the optimal feedstock acquisition, feeding pattern, and potential gas storage operation considering feedstock availability, properties, sustainability, and fluctuating gas demand under different pricing variations.
The methodology was tested on a 150 tonnes per day farm-scale AD plant in the UK, processing energy crops and manure and considering both environmental (global warming potential) and economic objectives. The results showed strong adaptability of the proposed feeding schedule to the general trend of gas prices over time. To address immediate price peaks, which are typically unattainable because of the system's sluggish behaviour and high retention times, the impacts of on-site storage were explored, leading to annual revenue increases of 2 % to 7.4 %, depending on the pricing scheme.
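A deliberately stripped-down sketch of the scheduling layer as a linear program: choose daily feed tonnages to chase predicted gas prices under capacity and availability limits. It ignores digester dynamics, co-digestion blending, and storage, and every number is invented.

```python
import numpy as np
from scipy.optimize import linprog

price = np.array([60, 55, 80, 95, 70, 65, 90.0])   # predicted gas price per MWh-equiv.
yield_per_t = 2.0                                   # MWh of biomethane per tonne fed
cap_daily, total_feed = 150.0, 800.0                # t/day feeding limit, t available

c = -(price * yield_per_t)                          # linprog minimizes, so negate revenue
A_ub = np.ones((1, 7))                              # total feedstock availability
b_ub = [total_feed]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, cap_daily)] * 7)
print(res.x)   # feed concentrates on the highest-price days, up to the daily cap
```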
{"title":"Dynamic feed scheduling for optimised anaerobic digestion: An optimisation approach for better decision-making to enhance revenue and environmental benefits","authors":"Meshkat Dolat , Rohit Murali , Mohammadamin Zarei , Ruosi Zhang , Tararag Pincam , Yong-Qiang Liu , Jhuma Sadhukhan , Angela Bywater , Michael Short","doi":"10.1016/j.dche.2024.100191","DOIUrl":"10.1016/j.dche.2024.100191","url":null,"abstract":"<div><div>Anaerobic digestion (AD) offers a sustainable solution for clean energy production, with the potential for significant revenue enhancement through enhanced decision-making. However, the complexity and limited flexibility of AD systems pose challenges in developing reliable optimisation methods. Changing feeding strategies provides opportunities for efficient feedstock utilisation and optimal gas production, especially in volatile gas markets.</div><div>To provide better decision-making tools in AD for energy production, we propose an integrated site model for the dynamic behaviour of the AD process in a biomethane-to-grid system and optimise production based on predicted gas prices. The model includes methods for optimal feed co-digestion strategies and integrates these results into a scheduling model to identify the optimal feedstock acquisition, feeding pattern, and potential gas storage operation considering feedstock availability, properties, sustainability, and fluctuating gas demand under different pricing variations.</div><div>The methodology was tested on a 150 tonnes per day farm-scale AD plant in the UK, processing energy crops and manure considering both environmental (global warming potential) and economic objectives. The results showed strong adaptability of the proposed feeding schedule to the general trend of gas prices over time. To address the challenge of immediate price peaks, typically unattainable due to the system's sluggish behaviour and high retention times, the impacts of on-site storage were explored, leading to annual revenue increases ranging from 2 % to 7.4 %, depending on the pricing scheme, which translates to a significant boost in terms of revenue.</div></div>","PeriodicalId":72815,"journal":{"name":"Digital Chemical Engineering","volume":"13 ","pages":"Article 100191"},"PeriodicalIF":3.0,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142441016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An MINLP-based decision-making tool to help microbreweries improve energy efficiency and reduce carbon footprint through retrofits
Pub Date: 2024-10-03 | DOI: 10.1016/j.dche.2024.100189
Veit Schagon, Rohit Murali, Ruosi Zhang, Melis Duyar, Michael Short
Microbreweries have greater production costs per litre of beer than large breweries, as well as higher carbon footprints. Given the range of available retrofit technologies and the different capacities and configurations of microbreweries, it is not always clear which retrofits will improve operations. Therefore, this work proposes a novel mixed-integer nonlinear programming (MINLP) decision-making tool, usable by any microbrewery, that determines the technoeconomic feasibility and sizing of energy-efficiency-improving retrofits, including solar and wind power, battery storage, anaerobic digestion, boiler type selection, heat integration via heat storage, and carbon capture via dual-function materials. The model was demonstrated on a real UK microbrewery case study. It gave an optimal configuration of a 10 m³ anaerobic digester, 30 solar panels outputting 380 W each, an 800 W wind turbine, and a 2.3 m³ heat storage tank, reducing annual operating costs by 62.9 % and carbon dioxide emissions by 77.1 % with a payback period of 8 years. The tool is designed to be flexible for use by any microbrewery in any location with any brewing recipe, allowing the owner(s) to develop more profitable and sustainable microbreweries.
Tweetable abstract
Microbreweries can implement mathematically optimised renewable energy, heat integration and anaerobic digestion to reduce operating costs by 62.9 % and carbon emissions by 77.1 %.
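The retrofit-selection core can be caricatured as a binary knapsack problem: each retrofit has a capital cost and an annual saving, and the solver picks the subset maximizing savings within a budget. The MILP below (SciPy >= 1.9) is a linear toy with invented figures, not the paper's MINLP with continuous sizing decisions.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Illustrative retrofit options (names, annual savings in k GBP, capex in k GBP).
names   = ["solar PV", "wind turbine", "anaerobic digester", "heat storage"]
savings = np.array([4.0, 1.5, 12.0, 6.0])
capex   = np.array([15.0, 6.0, 60.0, 20.0])
budget  = 70.0

res = milp(c=-savings,                                    # milp minimizes, so negate
           constraints=LinearConstraint(capex, 0, budget),
           integrality=np.ones(4),                        # binary pick per retrofit
           bounds=Bounds(0, 1))
chosen = [n for n, x in zip(names, res.x) if x > 0.5]
payback = capex[res.x > 0.5].sum() / savings[res.x > 0.5].sum()
print(chosen, f"simple payback = {payback:.1f} years")
```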
{"title":"An MINLP-based decision-making tool to help microbreweries improve energy efficiency and reduce carbon footprint through retrofits","authors":"Veit Schagon, Rohit Murali, Ruosi Zhang, Melis Duyar, Michael Short","doi":"10.1016/j.dche.2024.100189","DOIUrl":"10.1016/j.dche.2024.100189","url":null,"abstract":"<div><div>Microbreweries have greater production costs per litre of beer compared to large breweries, as well as higher carbon footprints. Due to the range of different retrofit technologies available and the different capacities and configurations of microbreweries, it is not always clear what retrofits will improve operations. Therefore, this work proposes a novel mixed-integer nonlinear programming decision-making tool to be used by any microbrewery, that determines the technoeconomic feasibility and sizing of energy efficiency-improving retrofits, including solar and wind power, battery storage, anaerobic digestion, boiler type selection, heat integration by heat storage, and carbon capture via dual-function materials. The model was demonstrated on a real UK microbrewery case study. The model gave an optimal configuration of a 10 m<sup>3</sup> anaerobic digester, 30 solar panels outputting 380 W each, an 800 W wind turbine and a 2.3 m<sup>3</sup> heat storage tank, reducing annual operating costs by 62.9 % and carbon dioxide emissions by 77.1 % with a payback period of 8 years. The tool is designed to be flexible for use by any microbrewery in any location with any brewing recipe and allow the owner(s) to develop more profitable and sustainable microbreweries.</div><div>Tweetable abstract</div><div>Microbreweries can implement mathematically optimised renewable energy, heat integration and anaerobic digestion to reduce operating costs by 62.9 % and carbon emissions by 77.1 %.</div></div>","PeriodicalId":72815,"journal":{"name":"Digital Chemical Engineering","volume":"13 ","pages":"Article 100189"},"PeriodicalIF":3.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142441015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A hybrid BOA-SVR approach for predicting aerobic organic and nitrogen removal in a gas-liquid-solid circulating fluidized bed bioreactor
Pub Date: 2024-09-24 | DOI: 10.1016/j.dche.2024.100188
Shaikh Abdur Razzak, Nahid Sultana, S.M. Zakir Hossain, Muhammad Muhitur Rahman, Yue Yuan, Mohammad Mozahar Hossain, Jesse Zhu
This study introduces a hybrid Bayesian optimization algorithm and support vector regression (BOA-SVR) model to predict the removal of aerobic organics (total chemical oxygen demand, TCOD) and nitrogen compounds such as total Kjeldahl nitrogen (TKN), ammonium nitrogen (NH4-N), and nitrate nitrogen (NO3-N) from municipal wastewater in a gas-liquid-solid circulating fluidized bed (GLSCFB) bioreactor. GLSCFB bioreactors treat wastewater by removing nutrients biologically. The downer of a GLSCFB bioreactor provided experimental data on TKN, NH4-N, NO3-N, and TCOD removal. The hybrid intelligent algorithm improves model accuracy by combining BOA and SVR. The coefficient of determination (R2), residuals, mean absolute error (MAE), root mean square error (RMSE), and fractional bias (FB) were used to analyze BOA-SVR model performance. The models match experimental data from four operational stages well, with R2 or adjusted R2 values above 0.99 for all responses. The model's accuracy was confirmed by relative deviations and residual plots showing dispersion around the zero-reference line. The BOA-SVR model consistently predicted the dependent variables with low RMSE and MAE values (≤ 2.24 and 2.21, respectively) and near-zero FB. Computational efficiency was demonstrated by optimizing the TCOD, TKN, NH4-N, and NO3-N models in 70.61, 72.89, 74.37, and 54.07 s, respectively. A rigorous test on unseen data with different noise levels confirmed the model's stability. Furthermore, BOA-SVR performs better than traditional multiple linear regression (MLR). Overall, the BOA-SVR model predicts biological nutrient removal in municipal wastewater in a GLSCFB bioreactor quickly, accurately, and efficiently, reducing experimental burden and resource use.
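A generic BOA-SVR loop, assuming scikit-optimize (`skopt`) is installed, looks like the sketch below: Bayesian optimization searches the SVR hyperparameters (C, gamma, epsilon) by cross-validation. The synthetic data generator and search ranges are placeholders, not the study's.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from skopt import BayesSearchCV          # scikit-optimize
from skopt.space import Real

# Placeholder data: 4 influent/operating variables -> a removal percentage.
rng = np.random.default_rng(0)
X = rng.uniform(size=(120, 4))
y = 90 - 30 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 1, 120)   # e.g. TCOD removal %

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
opt = BayesSearchCV(
    SVR(kernel="rbf"),
    {"C": Real(1e-2, 1e3, prior="log-uniform"),
     "gamma": Real(1e-4, 1e1, prior="log-uniform"),
     "epsilon": Real(1e-3, 1.0, prior="log-uniform")},
    n_iter=30, cv=5, random_state=0)
opt.fit(X_tr, y_tr)                      # Bayesian search over SVR hyperparameters
print(opt.best_params_, "R2 on held-out data:", opt.score(X_te, y_te))
```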
{"title":"A hybrid BOA-SVR approach for predicting aerobic organic and nitrogen removal in a gas-liquid-solid circulating fluidized bed bioreactor","authors":"Shaikh Abdur Razzak , Nahid Sultana , S.M. Zakir Hossain , Muhammad Muhitur Rahman , Yue Yuan , Mohammad Mozahar Hossain , Jesse Zhu","doi":"10.1016/j.dche.2024.100188","DOIUrl":"10.1016/j.dche.2024.100188","url":null,"abstract":"<div><div>This study introduces the hybrid of the Bayesian optimization algorithm and support vector regression (BOA-SVR) models to predict the removal of aerobic organic (total chemical oxygen demand, COD) and nitrogen compounds such as total Kjeldahl Nitrogen (TKN), ammonium nitrogen (NH<sub>4</sub>-N), and nitrate nitrogen (NO<sub>3</sub>-N) from municipal wastewater in a gas-liquid-solid circulating fluidized bed (GLSCFB) bioreactor. GLSCFB bioreactors treat wastewater by removing nutrients biologically. The downer of a GLSCFB bioreactor provided experimental data on TKN, NH<sub>4</sub>-N, NO<sub>3</sub>-N, and TCOD removal. The hybrid optimal intelligence algorithm (BOA-SVR) has improved model accuracy across multiple domains by combining BOA and SVR. The coefficient of determination (R<sup>2</sup>), residual, mean absolute error (MAE), root mean square error (RMSE), and fractional bias (FB) were used to analyze BOA-SVR model performance. The models match experimental data from four operational stages well, with R<sup>2</sup> or adj R<sup>2</sup> values above 0.99 for all responses. The model's accuracy was confirmed by relative deviations and residual plots showing dispersion around the zero-reference line. The BOA-SVR model consistently predicted dependent variables with low RMSE and MAE values (≤ 2.24 and 2.21, respectively) and near-zero FB. Computing efficiency was shown by optimizing TCOD, TKN, NH4-N, and NO3-N models in 70.61, 72.89, 74.37, and 54.07 s. A rigorous test on unseen data with different noise levels confirmed the model's stability. Furthermore, BOA-SVR performs better than traditional multiple linear regression (MLR). Overall, the BOA-SVR model predicts biological nutrient removal in municipal wastewater utilizing a GLSCFB bioreactor quickly, correctly, and efficiently, reducing experimental stress and resource use.</div></div>","PeriodicalId":72815,"journal":{"name":"Digital Chemical Engineering","volume":"13 ","pages":"Article 100188"},"PeriodicalIF":3.0,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142326665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}