Pub Date: 2025-12-30 | DOI: 10.1016/j.compchemeng.2025.109546
Xue Xu , Wei Zhao , Dong Lv , Yuanjian Fu , Chaomin Luo , Chengyi Xia
Due to load changes, unit aging, and other causes, industrial processes generally operate under time-varying conditions and exhibit nonstationarity, which challenges conventional monitoring methods. A manifold-aware stationary subspace and divergence analysis (MSSDA) is proposed for monitoring nonstationary processes, aiming to capture the underlying low-dimensional representations of data from both geometric and statistical perspectives. Specifically, an across-epoch similarity term induced by the Gromov-Wasserstein distance is developed to align the manifold structures of different epochs, so that MSSDA faithfully captures the intrinsic geometric characteristics of the data. An adaptive neighbor strategy is designed to learn the neighborhood relationships among samples and tailor an appropriate set of neighbors to each sample according to the local data density. A maximizing-minimizing divergence analysis is then investigated to match the intra-epoch and inter-epoch statistical information. In this way, the learned reduced-dimensional representations provide an in-depth view of the operating process and enhance monitoring capability. To demonstrate its effectiveness, the MSSDA approach is applied to two complex industrial processes: a wastewater treatment process and a real-world fluid catalytic cracking process.
Title: "Manifold-aware stationary subspace and divergence analysis for nonstationary process monitoring". Computers & Chemical Engineering, vol. 207, Article 109546.
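The across-epoch similarity term rests on the Gromov-Wasserstein distance, which compares the intra-epoch distance structures of two epochs rather than the samples themselves. A minimal numpy sketch of the GW objective under a fixed uniform coupling (the method optimizes the coupling; the uniform one here is only illustrative):

```python
import numpy as np

def intra_distances(X):
    # pairwise Euclidean distance matrix within one epoch
    return np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

def gw_discrepancy_uniform(X, Y):
    # Gromov-Wasserstein-style objective evaluated at a fixed uniform
    # coupling T (illustrative only; the real GW distance minimizes
    # this objective over all couplings)
    Dx, Dy = intra_distances(X), intra_distances(Y)
    n, m = len(X), len(Y)
    T = np.full((n, m), 1.0 / (n * m))
    # diff[i, j, k, l] = Dx[i, k] - Dy[j, l]
    diff = Dx[:, None, :, None] - Dy[None, :, None, :]
    w = T[:, :, None, None] * T[None, None, :, :]
    return float(np.sum(diff ** 2 * w))
```

Because the objective only sees intra-epoch distances, it is invariant to rigid motions of either epoch, which is what lets it align manifold structure across epochs.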
Pub Date: 2025-12-29 | DOI: 10.1016/j.compchemeng.2025.109545
Jeremy Pantet , Ludovic Montastruc , Pierre Thiriet
Biomass has emerged as a pivotal resource that could alleviate dependence on fossil resources and support the ecological transition while benefiting local communities. The literature on the subject has been expanding for the past two decades, focusing primarily on the organization and optimization of the biomass supply chain (BSC), the key component in delivering profitable and sustainable valorized goods from biomass. The aim of this paper is to evaluate the present state of known research gaps, identify gaps in BSC design related to economic considerations, and propose new, more multidisciplinary research orientations. We found three main understudied gaps. First, the majority of papers still consider only the strategic and tactical decision levels, excluding the operational level, so there is still room to improve currently accepted BSC designs. Second, demand, as part of the supply chain, appears understudied: in the reviewed literature it is treated as a parameter that production meets perfectly, without consideration of pricing, surplus, or shortage. Third, most of the models in this review describe a BSC in autarky; few account for imports of additional biomass or bioproducts, or for the potential export of surplus. Closing these gaps in BSC design and optimization would facilitate the integration of BSC modeling into broader economic models.
Title: "What’s new in biomass supply chain optimization? Current trends and insights". Computers & Chemical Engineering, vol. 207, Article 109545.
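The demand gap can be made concrete with a toy model: instead of forcing production to meet demand exactly, a shortage variable with a penalty cost lets the optimum trade delivery against unmet demand. A minimal scipy sketch (all numbers hypothetical):

```python
from scipy.optimize import linprog

# One biomass source, one demand node. Unlike the "demand as a hard
# parameter" models discussed above, demand here may be partially
# unmet at a shortage penalty.
supply_cap = 80.0            # tonnes of biomass available
demand = 100.0               # tonnes requested downstream
c_prod, c_short = 5.0, 20.0  # production cost vs shortage penalty per tonne

# decision variables: x = tonnes delivered, s = tonnes short
res = linprog(
    c=[c_prod, c_short],
    A_eq=[[1.0, 1.0]], b_eq=[demand],     # delivered + shortage = demand
    bounds=[(0.0, supply_cap), (0.0, None)],
)
x, s = res.x
```

With these numbers the optimum delivers the full 80 t of capacity and absorbs a 20 t shortage, something an equality-constrained demand model cannot even represent.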
Pub Date: 2025-12-26 | DOI: 10.1016/j.compchemeng.2025.109538
Artemis Tsochatzidi , Georgios I. Liapis , Francesca Cenci , Magdalini Aroniada , Lazaros G. Papageorgiou
Modern industries rely on advanced modelling techniques to enhance process efficiency, yet the computational complexity of these models often limits their direct use in optimisation. Surrogate-based approaches to optimising manufacturing flowsheets can tackle this issue. In this work, we introduce a multi-objective tree regression approach for surrogate-based optimisation, integrating a multi-target tree regression model to approximate complex process dynamics. The approach can be extended and formulated as a strategic decision-making problem to reveal optimal trade-offs between conflicting objectives such as yield, process mass intensity, and purity in pharmaceutical manufacturing. By combining Pareto fronts with game-theoretic and/or compromise solutions, the methodology offers a systematic way to delimit the feasible space and identify optimal operational strategies in the absence of decision-making preferences. The proposed approach enhances interpretability, computational efficiency, and practical applicability, offering a powerful tool for decision-making in pharmaceutical manufacturing and beyond.
Title: "Surrogate-based multi-objective optimisation via tree regression". Computers & Chemical Engineering, vol. 207, Article 109538.
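The preference-free selection step (combining Pareto fronts with compromise solutions) can be sketched for two minimised objectives: keep the non-dominated points, then pick the one closest to the ideal point. A simplified numpy illustration, not the paper's exact formulation:

```python
import numpy as np

def pareto_front(F):
    # indices of non-dominated rows of F (n, 2); both objectives minimized
    n = len(F)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                keep[i] = False   # point i is dominated by point j
                break
    return np.where(keep)[0]

def compromise(F):
    # one common preference-free rule: the front point closest
    # (Euclidean) to the ideal point formed by per-objective minima
    idx = pareto_front(F)
    ideal = F[idx].min(axis=0)
    d = np.linalg.norm(F[idx] - ideal, axis=1)
    return idx[np.argmin(d)]
```

In practice F would hold surrogate predictions (e.g. negated yield vs process mass intensity) over candidate operating points; here it is just a small synthetic array.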
Pub Date: 2025-12-24 | DOI: 10.1016/j.compchemeng.2025.109543
Yang Cao , Chunjie Yang , Siwei Lou , Yuelin Yang
The blast furnace iron-making process (BFIP), the core of modern steel production, presents formidable diagnostic challenges due to its inherent nonlinear dynamics and pronounced nonstationary characteristics. To address these challenges, we introduce the Stationary Combined Features Support Vector Machine (SCF-SVM), a hybrid diagnostic framework that combines a Differential Dynamic Feature (DDF) extraction module, which decouples the process's complex temporal dynamics, with a Stationary Support Vector Machine (SSVM) classifier engineered to handle nonstationary process behavior. In this framework, the DDF component captures transitional process states while the SSVM ensures robust classification under nonstationary conditions. Comprehensive validation on real-world BFIP operational data demonstrates significant advances over existing methods: a 3.0% reduction in false alarms and a 9.5% improvement in detection accuracy. Furthermore, we implement a dual-mode diagnostic system with seamless offline training-to-online deployment, confirming both the methodological merit and the practical viability of the approach for industrial applications.
Title: "Stationarity fusion with SVM: A stationary combined features support vector machine approach for blast furnace iron-making process fault diagnosis". Computers & Chemical Engineering, vol. 206, Article 109543.
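A crude stand-in for the differential dynamic feature idea is lag differencing, which strips slow nonstationary trends from a measurement before classification; the actual DDF module is more elaborate, so this numpy sketch is only a first approximation:

```python
import numpy as np

def differential_dynamic_features(x, lag=1):
    # lag differencing: removes slow drift from a nonstationary
    # measurement so a downstream classifier sees stationary features
    return x[lag:] - x[:-lag]

# synthetic drifting measurement: linear trend plus noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200) + 0.1 * rng.normal(size=200)
d = differential_dynamic_features(x)
```

The raw signal's mean shifts markedly between its first and second halves, while the differenced signal's does not, which is the stationarity property the SSVM classifier relies on.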
Pub Date: 2025-12-24 | DOI: 10.1016/j.compchemeng.2025.109535
Alex Durkin , Jasper Stolte , Matthew Jones , Raghuraman Pitchumani , Bei Li , Christian Michler , Mehmet Mercangöz
Offline reinforcement learning (offline RL) offers a promising framework for developing control strategies in chemical process systems using historical data, without the risks or costs of online experimentation. This work investigates the application of offline RL to the safe and efficient control of an exothermic polymerisation continuous stirred-tank reactor. We introduce a Gymnasium-compatible simulation environment that captures the reactor’s nonlinear dynamics, including reaction kinetics, energy balances, and operational constraints. The environment supports three industrially relevant scenarios: startup, grade change down, and grade change up. It also includes reproducible offline datasets generated from proportional–integral controllers with randomised tunings, providing a benchmark for evaluating offline RL algorithms in realistic process control tasks.
We assess behaviour cloning and implicit Q-learning as baseline algorithms, highlighting the challenges offline agents face, including steady-state offsets and degraded performance near setpoints. To address these issues, we propose a novel deployment-time safety layer that performs gradient-based action correction using partially input convex neural networks (PICNNs) as learned cost models. The PICNN enables real-time, differentiable correction of policy actions by descending a convex, state-conditioned cost surface, without requiring retraining or environment interaction.
Experimental results show that offline RL, particularly when combined with convex action correction, can outperform traditional control approaches and maintain stability across all scenarios. These findings demonstrate the feasibility of integrating offline RL with interpretable and safety-aware corrections for high-stakes chemical process control, and lay the groundwork for more reliable data-driven automation in industrial systems.
Title: "Safe deployment of offline reinforcement learning via input convex action correction". Computers & Chemical Engineering, vol. 206, Article 109535.
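The deployment-time safety layer corrects a policy's action by gradient descent on a convex, state-conditioned cost. Replacing the PICNN with an explicit convex quadratic that pulls actions toward a hypothetical known-safe action `a_safe` gives a minimal sketch of the mechanism:

```python
import numpy as np

def correct_action(a0, grad_cost, lr=0.1, steps=50):
    # deployment-time action correction: descend a convex cost surface
    # starting from the policy's proposed action; convexity guarantees
    # the descent cannot get stuck in a spurious local minimum
    a = np.asarray(a0, dtype=float)
    for _ in range(steps):
        a = a - lr * grad_cost(a)
    return a

# stand-in for the PICNN cost: squared distance to a hypothetical
# known-safe action (illustrative, not a learned model)
a_safe = np.array([0.2, -0.5])
grad = lambda a: 2.0 * (a - a_safe)
```

In the paper's setting the gradient would come from differentiating a trained PICNN with respect to the action; the correction loop itself is unchanged.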
Pub Date: 2025-12-24 | DOI: 10.1016/j.compchemeng.2025.109540
Daniel Mayfrank , Kayra Dernek , Laura Lang , Alexander Mitsos , Manuel Dahmen
With our recently proposed method based on reinforcement learning (Mayfrank et al., 2024), Koopman surrogate models can be trained for optimal performance in specific (economic) nonlinear model predictive control ((e)NMPC) applications. So far, our method has exclusively been demonstrated on a small-scale case study. Herein, we show that our method scales well to a more challenging demand response case study built on a large-scale model of a single-product (nitrogen) air separation unit. Across all numerical experiments, we assume observability of only a few realistically measurable plant variables. Compared to a purely system identification-based Koopman eNMPC, which generates small economic savings but frequently violates constraints, our method delivers similar economic performance while avoiding constraint violations.
Title: "End-to-end reinforcement learning of Koopman models for eNMPC of an air separation unit". Computers & Chemical Engineering, vol. 207, Article 109540.
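Koopman surrogates replace nonlinear dynamics with linear dynamics in a lifted observable space. The textbook polynomial system below is exactly linear after lifting (the constants a, b, c are arbitrary illustrative values; the air separation unit surrogate in the paper is learned, not hand-derived):

```python
import numpy as np

# Nonlinear dynamics: x1+ = a*x1,  x2+ = b*x2 + c*x1**2
a, b, c = 0.9, 0.5, 1.0

def step(x):
    return np.array([a * x[0], b * x[1] + c * x[0] ** 2])

def lift(x):
    # observables z = (x1, x2, x1^2): the extra coordinate x1^2
    # absorbs the nonlinearity
    return np.array([x[0], x[1], x[0] ** 2])

# exact linear dynamics in the lifted space: z+ = K z
K = np.array([[a, 0.0, 0.0],
              [0.0, b, c],
              [0.0, 0.0, a ** 2]])
```

Because z+ = K z reproduces the nonlinear trajectory exactly here, a linear MPC over z is equivalent to controlling the original system; for real plants the lifting is only approximate, which is why the paper trains it end-to-end for the control task.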
Pub Date: 2025-12-24 | DOI: 10.1016/j.compchemeng.2025.109539
Romina Lasry Testa , Fernando D. Ramos , Matías Ramos , Vanina Estrada , Maria Soledad Diaz
In this work, we propose a mixed-integer nonlinear multiobjective optimization framework to determine minimal target indices for the sustainable design of an integrated cyanobacteria-based biorefinery and its heat exchanger network (HEN). The potential production of phycocyanin and zeaxanthin, poly(3-hydroxybutyrate) (PHB), fourth-generation bioethanol, biogas, hydrogen, and diethyl ether is analyzed. The main objective is to determine the minimal target indices (productivity, yield, titer) that Synechocystis sp. must reach for a sustainable biorefinery design, by imposing lower bounds on an economic metric (Net Present Value) and a multi-criteria sustainability metric (Sustainability Net Present Value).
For in silico strains, a bilevel optimization framework identifies gene knockouts in a genome-scale metabolic model of Synechocystis sp. PCC 6803 that couple product synthesis to growth. The resulting strain-specific performance indices are compared with the minimum feasible targets obtained from the process-level optimization. This integrated approach extends previous studies by combining genome-scale metabolic modelling with techno-economic and sustainability analysis in a unified optimization framework. Numerical results indicate that the minimal target indices are largely surpassed by two of the in silico strains, the wild type and the growth-coupled ethanol producer, as well as by the in vivo strains. The proposed framework provides a quantitative basis for assessing the feasibility of cyanobacteria-based biorefineries and for guiding future metabolic engineering and process design strategies.
Title: "Determination of minimal target indices for cyanobacteria-based biorefineries and optimal design of the metabolic network". Computers & Chemical Engineering, vol. 206, Article 109539.
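The economic lower bound above is imposed on the Net Present Value, which discounts each year's cash flow back to year zero. A one-line reminder of the quantity being bounded (the cash flows and rate below are hypothetical):

```python
def npv(cashflows, rate):
    # Net Present Value: discounted sum of yearly cash flows,
    # with the first entry taken as year 0
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# hypothetical biorefinery: 100 upfront, 60 per year for two years, 10% rate
value = npv([-100.0, 60.0, 60.0], 0.10)
```

In the framework, a constraint of the form npv(...) >= NPV_min (and its sustainability analogue) carves out the region of designs whose target indices are then minimized.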
Pub Date: 2025-12-24 | DOI: 10.1016/j.compchemeng.2025.109532
Maximilian F. Theisen, Gabrie M.H. Meesters, Artur M. Schweidtmann
Soft sensors use mathematical models and available sensor data to estimate process variables that are difficult or impossible to measure directly, such as product concentrations. Machine learning-based approaches have become popular for soft sensing: they offer automatic modeling from historical process data but ignore basic process information, such as the process topology. This can lead to (1) modeling correlations instead of causation between process measurements, (2) model deterioration in deployment due to unseen process scenarios, and (3) large data requirements. To overcome these shortcomings, we propose a novel ML modeling approach that incorporates the process topology into soft sensor models for improved spatio-temporal modeling: process topology-aware graph neural networks. We combine process topology and sensor data by representing process data as a directed graph and use these process graphs to train graph neural networks. Our method demonstrates enhanced model robustness, reduced data requirements, and more intuitive data representations compared to standard black-box machine learning approaches. Overall, this work introduces a new paradigm for soft sensing by directly embedding process information into the data, paving the way for more efficient and reliable digital twin applications.
Title: "Graph neural networks for soft sensors: Learning from process topology and operational data". Computers & Chemical Engineering, vol. 206, Article 109532.
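The core of a topology-aware soft sensor is message passing over the process graph: each unit combines its own sensor reading with readings propagated from upstream units. A minimal numpy sketch of one such step (topology and weights are toy values, not a trained model):

```python
import numpy as np

# directed process graph: A[i, j] = 1 if unit j feeds unit i
A = np.array([[0, 0, 0],     # unit 0 has no upstream units
              [1, 0, 0],     # unit 1 is fed by unit 0
              [1, 1, 0]])    # unit 2 is fed by units 0 and 1
X = np.array([[1.0], [2.0], [3.0]])   # one sensor feature per unit
W_self, W_in = 0.5, 0.25              # illustrative mixing weights

# one message-passing step: own signal plus aggregated upstream signals
H = W_self * X + W_in * (A @ X)
```

Stacking such steps (with learned weight matrices and nonlinearities) lets information flow along the flowsheet, which is how the graph neural network exploits topology that a black-box model never sees.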
Pub Date: 2025-12-24 | DOI: 10.1016/j.compchemeng.2025.109541
Chunyu Du , Wenbo Ma
Energy security, investment choices, and risk management all depend on accurate crude oil price forecasting, which remains challenging because oil markets are nonlinear, volatile, and non-stationary. This work proposes a hybrid forecasting framework, the Multivariate Empirical Mode Decomposition-Barnacles Mating Optimizer-Interpretable Feature Temporal Self-Attention Transformer (MEMD-BMO-IFTT) model. Multivariate Empirical Mode Decomposition extracts multi-scale, informative features; the Barnacles Mating Optimizer performs parameter optimization; and the Interpretable Feature Temporal Self-Attention Transformer captures long-term temporal dependencies. The model was tested against prevalent algorithms, including Extreme Gradient Boosting with Random Forest, Extended Long Short-Term Memory, a standalone IFTT, and MEMD-FA-IFTT, on daily technical indicators, fundamental indicators, and trading volume data from June 2014 to October 2023. MEMD-BMO-IFTT achieved the best predictive performance, with a test coefficient of determination of 0.9937. Out-of-sample testing validated its generalization to unseen data, while real-world backtesting demonstrated its usefulness, producing higher cumulative returns, lower drawdowns, and stronger risk-adjusted performance than a buy-and-hold strategy. The MEMD-BMO-IFTT framework thus provides a reliable, interpretable, and practically useful solution for forecasting crude oil prices, and the hybrid model may also prove a valuable tool for researchers and investors in other financial markets.
{"title":"Improving the precision of crude oil prices using hybrid modeling methods","authors":"Chunyu Du , Wenbo Ma","doi":"10.1016/j.compchemeng.2025.109541","DOIUrl":"10.1016/j.compchemeng.2025.109541","url":null,"abstract":"<div><div>Energy security, investment choices, and risk management all depend on accurate crude oil price forecasting, which remains challenging because oil markets are nonlinear, volatile, and non-stationary. This work proposes a hybrid forecasting framework, the Multivariate Empirical Mode Decomposition-Barnacles Mating Optimizer-Interpretable Feature Temporal Self-Attention Transformer (MEMD-BMO-IFTT) model. The Interpretable Feature Temporal Self-Attention Transformer captures long-term temporal dependencies, the Barnacles Mating Optimizer performs efficient parameter optimization, and Multivariate Empirical Mode Decomposition extracts multi-scale, informative features. The model was tested against prevalent algorithms, including Extreme Gradient Boosting with Random Forest, Extended Long Short-Term Memory, a standalone IFTT, and MEMD-FA-IFTT, on daily technical indicators, fundamental indicators, and trading-volume data from June 2014 to October 2023. MEMD-BMO-IFTT proved the most accurate, with a test coefficient of determination of 0.9937. Out-of-sample testing validated its generalization to unseen data, while real-world backtesting demonstrated its practical value through higher cumulative returns, lower drawdowns, and stronger risk-adjusted performance than a buy-and-hold strategy. The MEMD-BMO-IFTT framework thus provides a reliable, interpretable, and practically useful solution for forecasting crude oil prices, and this hybrid model can serve as a valuable tool for researchers and investors in forecasting other financial markets.</div></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"206 ","pages":"Article 109541"},"PeriodicalIF":3.9,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145880237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Batch distillation, a widely applied separation technology, offers significant advantages in small-scale, multi-component, and composition-variable separation tasks. To enhance its overall performance, reduce energy consumption, and shorten separation time, current research focuses on two directions: development of novel process configurations and operational optimization. Among novel configurations, the dividing wall batch distillation column (DWBDC) markedly improves the separation efficiency of conventional batch distillation through its process-intensified dividing-wall structure. However, as a highly nonlinear dynamic system, the DWBDC poses considerable challenges for rigorous simulation and optimization, which merits further investigation. In recent years, Bayesian optimization algorithms have been widely adopted in chemical engineering owing to their efficient global search capability. This study applies Bayesian optimization to batch distillation processes, establishing an operational optimization framework based on the algorithm and implementing it in the optimal design of several batch distillation configurations. To systematically evaluate the DWBDC's advantages, rigorous dynamic simulations were conducted on three configurations: the conventional batch distillation column (BDC), the middle vessel batch distillation column (MVBDC), and the DWBDC, each with a corresponding concentration control system. The separation performance of these configurations was examined for industrial waste solvents with different feed compositions (light-component-dominant, intermediate-component-dominant, and balanced light/intermediate-component feeds). Bayesian optimization was then employed to tune the operational parameters of all three configurations, followed by a comparative performance analysis under optimal conditions. Results demonstrate that the DWBDC is superior in both operating time and economic efficiency, providing theoretical foundations and practical guidance for the industrial application of dividing-wall batch distillation technology.
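The sequential logic of such surrogate-driven operational optimization can be sketched compactly. This is a simplified illustration, not the paper's method: a nearest-neighbor surrogate with a distance-based exploration bonus stands in for the Gaussian process and acquisition function of full Bayesian optimization, and a cheap analytic cost function stands in for the expensive rigorous DWBDC dynamic simulation; all names, bounds, and constants are illustrative.

```python
import random

# Minimal surrogate-based optimization loop in the spirit of Bayesian
# optimization. A nearest-neighbor surrogate plus a distance-based
# exploration bonus replaces the Gaussian-process acquisition, and a cheap
# analytic cost stands in for the rigorous column simulation. Illustrative only.

def column_cost(reflux_ratio):
    """Toy stand-in for an expensive batch-time/energy objective (minimize)."""
    return (reflux_ratio - 3.2) ** 2 + 0.1 * reflux_ratio

def surrogate(x, evaluated, kappa=0.5):
    """Predicted cost at x: value of the nearest evaluated point, minus an
    exploration bonus that grows with distance from the known data."""
    xn, yn = min(evaluated, key=lambda p: abs(p[0] - x))
    return yn - kappa * abs(x - xn)

def optimize(bounds=(1.0, 6.0), n_init=3, n_iter=12, seed=0):
    rng = random.Random(seed)
    # Initial design: a few random evaluations of the expensive objective.
    evaluated = [(x, column_cost(x))
                 for x in (rng.uniform(*bounds) for _ in range(n_init))]
    for _ in range(n_iter):
        # Acquisition step: among random candidates, evaluate the one the
        # surrogate rates most promising (low predicted cost or unexplored).
        candidates = [rng.uniform(*bounds) for _ in range(200)]
        x_next = min(candidates, key=lambda x: surrogate(x, evaluated))
        evaluated.append((x_next, column_cost(x_next)))
    return min(evaluated, key=lambda p: p[1])  # best (parameter, cost) found

best_x, best_cost = optimize()
print(best_x, best_cost)
```

The key property mirrored here is sample efficiency: the expensive objective is called only `n_init + n_iter` times, while the cheap surrogate absorbs the search effort, which is exactly why surrogate-based methods suit objectives that each require a full rigorous dynamic simulation.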
{"title":"Dynamic simulation and operational optimization of dividing wall batch distillation columns based on rigorous models","authors":"Xing Qian , Jiayi Du , Gonghan Guo , Shengkun Jia , Chao Zhang","doi":"10.1016/j.compchemeng.2025.109542","DOIUrl":"10.1016/j.compchemeng.2025.109542","url":null,"abstract":"<div><div>Batch distillation, a widely applied separation technology, offers significant advantages in small-scale, multi-component, and composition-variable separation tasks. To enhance its overall performance, reduce energy consumption, and shorten separation time, current research focuses on two directions: development of novel process configurations and operational optimization. Among novel configurations, the dividing wall batch distillation column (DWBDC) markedly improves the separation efficiency of conventional batch distillation through its process-intensified dividing-wall structure. However, as a highly nonlinear dynamic system, the DWBDC poses considerable challenges for rigorous simulation and optimization, which merits further investigation. In recent years, Bayesian optimization algorithms have been widely adopted in chemical engineering owing to their efficient global search capability. This study applies Bayesian optimization to batch distillation processes, establishing an operational optimization framework based on the algorithm and implementing it in the optimal design of several batch distillation configurations. To systematically evaluate the DWBDC's advantages, rigorous dynamic simulations were conducted on three configurations: the conventional batch distillation column (BDC), the middle vessel batch distillation column (MVBDC), and the DWBDC, each with a corresponding concentration control system. The separation performance of these configurations was examined for industrial waste solvents with different feed compositions (light-component-dominant, intermediate-component-dominant, and balanced light/intermediate-component feeds). Bayesian optimization was then employed to tune the operational parameters of all three configurations, followed by a comparative performance analysis under optimal conditions. Results demonstrate that the DWBDC is superior in both operating time and economic efficiency, providing theoretical foundations and practical guidance for the industrial application of dividing-wall batch distillation technology.</div></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"206 ","pages":"Article 109542"},"PeriodicalIF":3.9,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145880239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}