Pub Date: 2026-01-14 | DOI: 10.1016/j.compchemeng.2026.109567
Xiaofan Zhou, Li Feng, Aihua Zhu, Haoxu Shi
In global supply chain management, optimizing joint inventory-transportation decisions remains a critical challenge. Existing approaches often rely on deterministic assumptions or oversimplified stochastic models, which fail to adequately capture the dynamic uncertainties and multimodal variability inherent in replenishment lead times. This limitation severely restricts the robustness and coordination efficiency of decision policies in real-world complex environments. To address these issues, this paper proposes an uncertainty-aware decision framework, termed Diffusion model with Entropy-guided Multi-Agent Proximal Policy Optimization (DE-MAPPO). Our method employs a diffusion model to generate probabilistic lead-time forecasts, leverages Monte Carlo sampling to quantify uncertainty, and introduces an entropy-guided adaptive strategy that enables agents to dynamically adjust inventory and transportation decisions based on forecast confidence. The effectiveness of the proposed framework is validated through experiments conducted in a simulated global chemical supply chain environment. The experimental results demonstrate that the DE-MAPPO framework significantly outperforms the baseline methods across key performance metrics.
Title: Uncertainty-aware joint inventory-transportation decisions in supply chain: A diffusion model-based multi-agent reinforcement learning approach with lead times estimation
Computers & Chemical Engineering, vol. 207, Article 109567
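The entropy-guided mechanism can be sketched in a few lines of numpy. This is an illustrative toy, not the paper's DE-MAPPO: a two-mode Gaussian mixture stands in for the diffusion model's Monte Carlo lead-time samples, histogram entropy quantifies forecast confidence, and the safety-stock buffer widens as confidence drops. All function names and parameter values here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_lead_times(n=2000):
    # Stand-in for the diffusion forecaster: a bimodal mixture of
    # lead times (e.g. sea vs. air freight), sampled Monte Carlo style.
    mode = rng.random(n) < 0.7
    return np.where(mode, rng.normal(7.0, 1.0, n), rng.normal(21.0, 3.0, n))

def forecast_entropy(samples, bins=30):
    # Shannon entropy of the histogram of sampled lead times.
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def safety_stock(samples, daily_demand=100.0, z=1.65, bins=30):
    # Entropy-guided adjustment: the buffer grows when the forecast
    # distribution is diffuse (low confidence), shrinks when peaked.
    h = forecast_entropy(samples, bins)
    confidence = 1.0 - h / np.log(bins)   # 1 = fully confident
    return z * (2.0 - confidence) * samples.std() * daily_demand

lt = sample_lead_times()
print(round(forecast_entropy(lt), 2), round(safety_stock(lt), 1))
```

Since confidence stays strictly between 0 and 1, the buffer always lies between one and two times the nominal z-score buffer.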
Pub Date: 2026-01-13 | DOI: 10.1016/j.compchemeng.2026.109564
E N Pistikopoulos, Rafiqul Gani
Process Systems Engineering (PSE) is the scientific discipline of integrating scales and components describing the behavior of various systems via mathematical modeling, data analytics, synthesis, design, optimization, monitoring, control, and related methods. The emergence of Artificial Intelligence (AI) has provided an opportunity to re-assess the role of data, models, and algorithms in the context of the evolving role of PSE. This article provides a critical guide to understanding and unlocking the potential opportunities and synergies that AI can offer, empowering the next generation of PSE developments towards truly Augmented Intelligence-driven methods and tools.
Title: Data, models, algorithms, AI and the role of PSE – the generation next
Computers & Chemical Engineering, vol. 207, Article 109564
Pub Date: 2026-01-09 | DOI: 10.1016/j.compchemeng.2026.109563
Xianming Lang, Yibing Wang, Jiangtao Cao, Qiang Liu, Edith C.H. Ngai
Urban water distribution networks face significant challenges from pipeline leakage, which leads to water loss and operational inefficiencies. Existing data-driven detection methods often neglect inherent hydraulic principles, resulting in poor model generalizability and a lack of quantitative leakage severity assessment. To address these issues, this paper proposes a physics-informed graph transformer fusion (PI-GTF) framework that integrates hydraulic mechanisms with deep learning for leakage detection and grading. The model embeds hydraulic governing equations and signal propagation rules into a graph convolutional network (GCN) and a transformer to capture spatial pipeline topology and long-term temporal dependencies of leakage signals. A novel physics-aware hierarchical adversarial gating attention (PHAGA) module is designed to align and fuse these heterogeneous features effectively. Furthermore, a five-level leakage grading system is established by combining hydraulic model outputs with sensor-based features such as pressure fluctuations and abnormal flow durations. Experimental results on a high-fidelity simulation model of Shenyang’s water network show that PI-GTF outperforms existing methods in terms of accuracy, precision, and F1 score, with zero cross-level misclassification. Migration tests on real residential networks demonstrate strong generalizability, with performance degradation within 2%. This study provides a reliable dual-driven framework for end-to-end leakage management and supports intelligent decision-making in water network maintenance.
Title: Physics-informed graph transformer fusion for leakage detection and grading in water distribution networks
Computers & Chemical Engineering, vol. 207, Article 109563
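A single graph-convolution step of the kind the PI-GTF abstract describes (spatial aggregation over the pipe topology) can be sketched with numpy. The four-node line network, its pressure/flow features, and the random weights below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def gcn_layer(adj, x, w):
    # One graph-convolution step over the pipe network: add self-loops,
    # symmetrically normalize the adjacency, aggregate neighbor
    # features, then apply a ReLU nonlinearity.
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum((d_inv_sqrt @ a_hat @ d_inv_sqrt) @ x @ w, 0.0)

# Toy 4-node pipeline 0-1-2-3 in a line; node features = (pressure, flow).
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.array([[1.00, 0.9],
              [0.95, 0.9],
              [0.60, 0.5],   # pressure dip at node 2: candidate leak
              [0.58, 0.5]])
rng = np.random.default_rng(0)
h = gcn_layer(adj, x, rng.normal(size=(2, 8)))
print(h.shape)
```

Each node's embedding now mixes its own signal with its neighbors', which is what lets a downstream classifier localize the anomaly spatially.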
The adoption of continuous pharmaceutical manufacturing has driven increased use of modeling, simulation, and advanced process control strategies. Artificial intelligence (AI) model-based approaches, like neural network predictive control (NNPC), offer advantages in providing insights, predictions, and process adjustments. However, evaluating the credibility of such models and accurately quantifying their impact on product quality remains challenging. In this study, a digital twin model of a continuous direct compression (CDC) line was developed based on residence time distribution theory. A two-layer neural network model was trained using data from the digital twin to predict system outputs. The NNPC model combined the trained neural network with an optimization block to adjust control signals and minimize tracking error and control effort. A proportional-integral-derivative (PID) controller was also developed for comparison. The developed neural network model accurately represented the dynamics of the nonlinear system. The tuned NNPC outperformed PID in setpoint tracking (zero overshoot, shorter settling times) and disturbance rejection (≤1.6% peak deviation, settling time of zero) for ±20% and ±50% changes. In conclusion, the NNPC model demonstrated remarkable performance in setpoint tracking and disturbance rejection for the simulated CDC line, underscoring the potential of AI-based control strategies in enhancing product quality and regulatory assessment.
Title: Advanced control of continuous pharmaceutical manufacturing processes: A case study on the application of artificial neural network for predictive control of a CDC line
Authors: Jianan Zhao, Geng Tian, Wei Yang, Das Jayanti, Abdollah Koolivand, Xiaoming Xu
Pub Date: 2026-01-09 | DOI: 10.1016/j.compchemeng.2026.109560
Computers & Chemical Engineering, vol. 207, Article 109560
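The NNPC loop described above (a learned forward model plus an optimization block trading off tracking error against control effort) can be illustrated with a one-step-ahead sketch. The tanh plant below is a hypothetical stand-in for the trained two-layer network, and a simple grid search replaces the paper's optimization block.

```python
import numpy as np

def plant(y, u):
    # Toy surrogate standing in for the trained neural network:
    # first-order dynamics with a mild input nonlinearity.
    return 0.8 * y + 0.2 * np.tanh(u)

def nnpc_step(y, setpoint, u_grid, lam=0.001):
    # One-step-ahead predictive control: choose the input minimizing
    # predicted tracking error plus a control-effort penalty.
    cost = (plant(y, u_grid) - setpoint) ** 2 + lam * u_grid ** 2
    return u_grid[int(np.argmin(cost))]

u_grid = np.linspace(-3.0, 3.0, 301)
y, sp = 0.0, 0.5
for _ in range(30):
    y = plant(y, nnpc_step(y, sp, u_grid))
print(round(y, 3))
```

The effort penalty `lam` is the knob that trades steady-state offset against aggressive actuation; set it too high and the controller undershoots the setpoint.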
Pub Date: 2026-01-09 | DOI: 10.1016/j.compchemeng.2026.109559
Maaz Ahmad, Iftekhar A Karimi
Global optimization of large-scale, complex systems such as multi-physics black-box simulations and real-world industrial systems is important but challenging. This work presents a novel Surrogate-Based Optimization framework based on Clustering (SBOC) for global optimization of such systems, which can be used with any surrogate modeling technique. At each iteration, it uses a single surrogate model for the entire domain, employs k-means clustering to identify unexplored regions of the domain, and exploits a local region around the surrogate’s optimum, potentially adding three new sample points to the domain. SBOC has been tested against sixteen promising benchmark algorithms using 52 analytical test functions of varying input dimensionalities and shape profiles. It successfully identified a global minimum for most test functions with substantially lower computational effort than the other algorithms. It worked especially well on test functions with four or more input variables. It was also among the top six algorithms in approaching a global minimum closely. Overall, SBOC is a robust, reliable, and efficient algorithm for global optimization of box-constrained systems.
Title: Surrogate-based optimization via clustering for box-constrained problems
Computers & Chemical Engineering, vol. 207, Article 109559
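The per-iteration recipe (one global surrogate, k-means clustering to locate an unexplored region, a local point near the surrogate optimum, three new samples) can be sketched in one dimension. This is a simplified illustration under assumed settings (quadratic surrogate, k = 3, fixed perturbation scales), not the SBOC implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(pts, k, iters=50):
    # Tiny 1-D k-means used to locate sparsely sampled regions.
    centers = pts[rng.choice(len(pts), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.abs(pts[:, None] - centers[None, :]), axis=1)
        centers = np.array([pts[labels == j].mean() if (labels == j).any()
                            else centers[j] for j in range(k)])
    return centers, labels

def sboc_iteration(f, x, lo=-5.0, hi=5.0):
    # One SBOC-style iteration: fit a global quadratic surrogate, then
    # propose (1) the surrogate minimizer, (2) a point near the center
    # of the least-populated k-means cluster (exploration), and
    # (3) a local perturbation around the current best (exploitation).
    coeffs = np.polyfit(x, f(x), 2)
    grid = np.linspace(lo, hi, 1001)
    x_surr = grid[np.argmin(np.polyval(coeffs, grid))]
    centers, labels = kmeans(x, k=3)
    sparse = centers[np.argmin(np.bincount(labels, minlength=3))]
    x_explore = sparse + rng.normal(0.0, 0.5)
    x_local = x[np.argmin(f(x))] + rng.normal(0.0, 0.1)
    return np.clip([x_surr, x_explore, x_local], lo, hi)

f = lambda x: (x - 1.0) ** 2 + 0.5 * np.sin(5 * x)
samples = rng.uniform(-5, 5, 12)
for _ in range(10):
    samples = np.append(samples, sboc_iteration(f, samples))
best = samples[np.argmin(f(samples))]
print(round(float(best), 2))
```

Ten iterations add thirty points; the surrogate minimizer pulls samples toward the basin while the clustering term keeps probing the empty parts of the box.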
Pub Date: 2026-01-07 | DOI: 10.1016/j.compchemeng.2026.109550
Jie Zhu, Weifeng Chen, Lorenz T. Biegler
Estimating reaction kinetic parameters from spectral measurement data remains a critical yet unresolved challenge. Although singular value decomposition (SVD) is commonly used for spectra-based kinetic parameter estimation, the effectiveness of the estimation formulation using reduced data is not well understood. In this work, the rationale behind this formulation is supported by its derivation within a maximum likelihood framework. To address the large-scale kinetic parameter estimation problem under multiple initial conditions, an SVD-based simultaneous approach is introduced, which, in contrast to the traditional simultaneous method, avoids the direct manipulation of large-scale spectral matrices. While the specific systems of ordinary differential equations governing the reaction process vary with experimental conditions, an underlying mathematical structure is common to all. Hence, proper orthogonal decomposition (POD) is introduced to compress the model, yielding a reduced-order model for kinetic estimation. The intrinsic properties of POD make the SVD-POD simultaneous approach effective for handling weakly nonlinear reaction systems.
Title: Data compression and model reduction based approach for kinetic parameter estimation with multiple spectra
Computers & Chemical Engineering, vol. 207, Article 109550
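The data-compression step is standard enough to demonstrate directly: for a two-species first-order reaction, the Beer-Lambert spectral matrix D = C Sᵀ is effectively rank 2, so SVD truncation retains the kinetic information while discarding most of the raw data. The reaction, spectra, and noise level below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic first-order reaction A -> B (k = 0.5): concentration
# profiles C(t) times pure-component spectra S give the measured
# spectral matrix D = C @ S.T plus instrument noise.
t = np.linspace(0.0, 10.0, 200)
C = np.column_stack([np.exp(-0.5 * t), 1.0 - np.exp(-0.5 * t)])
S = rng.random((100, 2))                   # 100 wavelengths, 2 species
D = C @ S.T + 1e-3 * rng.normal(size=(200, 100))

# SVD-based compression: two singular values dominate, so parameter
# estimation can work with a rank-2 reduced matrix instead of the
# full 200 x 100 spectra.
U, sv, Vt = np.linalg.svd(D, full_matrices=False)
D2 = (U[:, :2] * sv[:2]) @ Vt[:2]
rel_err = np.linalg.norm(D - D2) / np.linalg.norm(D)
print(sv[:3].round(3), rel_err)
```

The gap between the second and third singular values is what justifies fitting kinetics against the reduced scores rather than the full spectra.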
Pub Date: 2026-01-06 | DOI: 10.1016/j.compchemeng.2026.109558
Jinqiu Hu, Mingjun Ma, Laibin Zhang
For pipeline corrosion-rate prediction in refinery units characterized by scarce high-corrosion-rate samples, numerous operating variables, and strong temporal perturbations in process parameters, this study proposes a hybrid framework that integrates structural diagnosis, feature selection, and improved ensemble learning. First, kernel principal component analysis (KPCA) is employed to identify nonlinear and redundant structures in the data, and a subset of operating-condition features with high relevance and low redundancy is constructed using mutual information–minimum redundancy maximum relevance (MI–mRMR). Then, Dropout meets Multiple Additive Regression Trees (DART) is incorporated into XGBoost to mitigate overfitting, while a hybrid dynamic perturbation strategy grey wolf optimizer (HDPSGWO) is used to perform global optimization of the hyperparameters. Using multi-loop data from the purification section of a sulfuric acid alkylation unit as a case study, the proposed model achieves RMSE=0.005876, MAE=0.004282, and R²=0.9648 on the test set, and maintains the best performance in a systematic comparison against five benchmark models. Based on TreeSHAP, the model interpretation further reveals the dominant factors driving corrosion-rate variations as well as the interval effects between operating parameters and corrosion rate. Reproduction of an engineering corrosion event verifies the early-warning capability of the proposed model. The results demonstrate that the hybrid framework can provide reliable corrosion-rate prediction under complex, non-stationary operating conditions, offering quantitative support for corrosion management and maintenance decision-making in refinery and petrochemical units.
Title: Application and interpretability of a hybrid-enhanced XGBoost model for corrosion-rate prediction in alkylation unit piping
Computers & Chemical Engineering, vol. 207, Article 109558
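The MI-mRMR selection step can be illustrated with a greedy relevance-minus-redundancy loop. Absolute Pearson correlation is used below as a simple stand-in for the paper's mutual information estimates, and the three synthetic operating-condition features are hypothetical.

```python
import numpy as np

def mrmr_select(X, y, k):
    # mRMR-style greedy selection: pick the most relevant feature,
    # then repeatedly add the feature maximizing relevance to y minus
    # mean redundancy with the already-selected set. Absolute Pearson
    # correlation stands in for mutual information here.
    n_feat = X.shape[1]
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    selected = [int(np.argmax(rel))]
    while len(selected) < k:
        best_j, best_score = -1, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                           for s in selected])
            if rel[j] - red > best_score:
                best_j, best_score = j, rel[j] - red
        selected.append(best_j)
    return selected

rng = np.random.default_rng(0)
n = 500
x0 = rng.normal(size=n)                  # e.g. temperature
x1 = x0 + 0.3 * rng.normal(size=n)       # redundant near-copy of x0
x2 = rng.normal(size=n)                  # independent driver, e.g. flow
X = np.column_stack([x0, x1, x2])
y = x0 + x2 + 0.1 * rng.normal(size=n)   # corrosion-rate proxy
sel = mrmr_select(X, y, 2)
print(sel)
```

Plain relevance ranking would pick the redundant near-copy second; the redundancy penalty is what pushes the selection to the two independent drivers.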
This study introduces a unified data-driven feedforward–feedback control framework for a four-column natural gas liquids (NGL) separation system. A soft sensor estimates upstream feed composition and flow disturbances, while predictive neural networks forecast the required control-action adjustments one step ahead, enabling early compensation of disturbances as they propagate through the column train. Unlike conventional approaches, the framework captures disturbance propagation effects through data-driven intercolumn relationships, without relying on state estimation or rigorous process models. The hybrid controller, implemented in an Aspen Dynamics–Simulink environment, combines predictive compensation with local PI feedback for regulatory stability. Simulation results demonstrate significant performance improvements, reducing integral absolute error (IAE) by over 50 % and integral time absolute error (ITAE) by up to 67 % across the distillation train. The proposed framework provides a generalizable and computationally efficient strategy for coordinated control of multicolumn and other cascade-type process systems.
Title: Data-driven hybrid control for coordinated operation of multicolumn NGL separation systems
Authors: Sahar Shahriari, Norollah Kasiri, Javad Ivakpour
Pub Date: 2026-01-01 | DOI: 10.1016/j.compchemeng.2025.109548
Computers & Chemical Engineering, vol. 207, Article 109548
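The benefit of adding measured-disturbance feedforward to PI feedback can be shown on a toy first-order plant. The 80%-accurate disturbance estimate below is an assumed stand-in for the paper's soft sensor, and because this loop is linear the IAE shrinks by exactly the factor of residual uncompensated disturbance.

```python
import numpy as np

def simulate(feedforward, steps=200, dt=1.0):
    # First-order plant y[k+1] = a*y[k] + b*(u[k] + d[k]) with a step
    # feed disturbance d entering at k = 50; PI regulates y to zero.
    a, b = 0.9, 0.1
    y, integ, iae = 0.0, 0.0, 0.0
    for k in range(steps):
        d = 1.0 if k >= 50 else 0.0
        e = 0.0 - y
        integ += e * dt
        u = 2.0 * e + 0.1 * integ        # PI feedback
        if feedforward:
            u -= 0.8 * d                 # imperfect (80%) soft-sensor
                                         # estimate of the disturbance
        y = a * y + b * (u + d)
        iae += abs(e) * dt
    return iae

iae_pid, iae_ff = simulate(False), simulate(True)
print(round(iae_pid, 3), round(iae_ff, 3))
```

Even a partial feedforward estimate cuts the integral absolute error well beyond the roughly 50% improvement cited for the full framework, since the feedback loop only has to clean up the estimation residual.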
Pub Date: 2025-12-30 | DOI: 10.1016/j.compchemeng.2025.109544
Ellis R. Crabtree, Dimitris G. Giovanis, Nikolaos Evangelou, Juan M. Bello-Rivas, Ioannis G. Kevrekidis
In dynamical systems characterized by separation of time scales, the approximation of so-called “slow manifolds”, on which the long-term dynamics lie, is a useful step for model reduction. Initializing on such slow manifolds circumvents fast transients and is crucial in multiscale algorithms (like the equation-free approach) that alternate between fine-scale (fast) and coarser-scale (slow) simulations. In a similar spirit, when one studies the infinite-time dynamics of systems depending on parameters, the system attractors (e.g., its steady states) lie on bifurcation diagrams: curves for one-parameter continuation and, more generally, manifolds in state × parameter space. Sampling these manifolds gives us representative attractors (here, steady states of ODEs or PDEs) at different parameter values. Algorithms for the systematic construction of these manifolds (slow manifolds, bifurcation diagrams) are required parts of the “traditional” numerical nonlinear dynamics toolkit.
In more recent years, as the field of Machine Learning develops, conditional score-based generative models (cSGMs) have been demonstrated to exhibit remarkable capabilities in generating plausible data from target distributions that are conditioned on some given label. It is tempting to exploit such generative models to produce samples of data distributions (points on a slow manifold, steady states on a bifurcation surface) conditioned on (consistent with) some quantity of interest (QoI, observable). In this work, we present a framework for using cSGMs to quickly (a) initialize on a low-dimensional (reduced-order) slow manifold of a multi-time-scale system consistent with desired value(s) of a QoI (a “label”) on the manifold, and (b) approximate steady states in a bifurcation diagram consistent with a (new, out-of-sample) parameter value. This conditional sampling can help uncover the geometry of the reduced slow-manifold and/or approximately “fill in” missing segments of steady states in a bifurcation diagram. The quantity of interest, which determines how the sampling is conditioned, is either known a priori or identified using manifold learning-based dimensionality reduction techniques applied to the training data.
Title: Generative learning for slow manifolds and bifurcation diagrams
Computers & Chemical Engineering, vol. 207, Article 109544
Pub Date : 2025-12-30 DOI: 10.1016/j.compchemeng.2025.109547
Guoxi He, Jing Tian, Dezhi Tang, Fei Zhao, Shuhua Li, Chao Li, Kexi Liao, XiaoFei Chen, Wen Yang
Accurate prediction of corrosion rates is of great significance for ensuring pipeline integrity and operational safety. This study proposes a novel hybrid prediction model, GAN-QPSO-XGBoost, which integrates a Generative Adversarial Network (GAN), Quantum-behaved Particle Swarm Optimization (QPSO), and the XGBoost algorithm. A GAN was used to augment 100 field samples with 50 high-quality synthetic samples, forming an enhanced dataset of 150 samples. The Kolmogorov-Smirnov test showed p > 0.05 and MAPE around 5%, confirming the synthetic data’s statistical consistency and numerical reliability. By introducing quantum behavior mechanisms, QPSO effectively overcomes the local optima and premature convergence commonly found in traditional optimization algorithms, further improving the predictive performance of XGBoost. To comprehensively evaluate model performance, this study adopts multiple standard metrics for validation and introduces the SHAP (Shapley Additive exPlanations) method to enhance model interpretability. Experimental results demonstrate that the GAN-QPSO-XGBoost hybrid model significantly outperforms existing benchmark models in corrosion rate prediction, with evaluation metrics of R² = 0.922, MAPE = 1.24%, MAE = 0.036, MSE = 0.0018, and RMSE = 0.042, reflecting its strong predictive accuracy and stability. SHAP analysis further reveals that temperature, liquid holdup, flow velocity, CO2 partial pressure, gas-wall shear stress, and liquid-wall shear stress are the most significant factors influencing the corrosion rate. In conclusion, the GAN-QPSO-XGBoost hybrid model not only significantly improves the accuracy and reliability of corrosion rate prediction but also provides a scientific basis and operational guidance for pipeline maintenance, safety assessment, and protection strategy formulation in practical engineering.
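The two-sample Kolmogorov-Smirnov test used to validate the GAN-augmented data reduces to the largest gap between the two empirical CDFs. A minimal sketch (with Gaussian stand-ins for the 100 field measurements and 50 GAN samples; not the paper's data):

```python
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    gap between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    pts = sorted(set(a) | set(b))
    d = 0.0
    for x in pts:
        fa = sum(1 for v in a if v <= x) / len(a)   # empirical CDF of a at x
        fb = sum(1 for v in b if v <= x) / len(b)   # empirical CDF of b at x
        d = max(d, abs(fa - fb))
    return d

rng = random.Random(42)
field = [rng.gauss(0.5, 0.1) for _ in range(100)]     # stand-in: 100 field corrosion rates
synthetic = [rng.gauss(0.5, 0.1) for _ in range(50)]  # stand-in: 50 GAN samples
d = ks_statistic(field, synthetic)
```

A small statistic d (below the critical value for the chosen significance level, i.e. a p-value above 0.05) is what supports treating the synthetic samples as drawn from the same distribution as the field data; in practice `scipy.stats.ks_2samp` returns the statistic and p-value directly.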
{"title":"Research on natural gas pipeline corrosion prediction by integrating extreme gradient boosting and generative adversarial network","authors":"Guoxi He , Jing Tian , Dezhi Tang , Fei Zhao , Shuhua Li , Chao Li , Kexi Liao , XiaoFei Chen , Wen Yang","doi":"10.1016/j.compchemeng.2025.109547","DOIUrl":"10.1016/j.compchemeng.2025.109547","url":null,"abstract":"<div><div>Accurate prediction of corrosion rates is of great significance for ensuring pipeline integrity and operational safety. This study proposes a novel hybrid prediction model—GAN-QPSO-XGBoost—which integrates a Generative Adversarial Network (GAN), Quantum-behaved Particle Swarm Optimization (QPSO), and the XGBoost algorithm. This study used GAN to augment 100 field data sets with 50 high-quality synthetic samples, forming an enhanced dataset of 150. The Kolmogorov-Smirnov test showed p greater than 0.05 and MAPE around 5%, confirming the synthetic data’s statistical consistency and numerical reliability. QPSO, by introducing quantum behavior mechanisms, effectively overcomes the issues of local optima and premature convergence commonly found in traditional optimization algorithms, further optimizing the predictive performance of XGBoost. To comprehensively evaluate model performance, this study adopts multiple standard metrics for validation and introduces the SHAP (Shapley Additive exPlanations) method to enhance model interpretability. Experimental results demonstrate that the GAN-QPSO-XGBoost hybrid model significantly outperforms existing benchmark models in corrosion rate prediction, with the following evaluation metrics: R² = 0.922, MAPE = 1.24%, MAE = 0.036, MSE = 0.0018, and RMSE = 0.042, fully proving its excellent predictive accuracy and stability. 
SHAP analysis further reveals that temperature, liquid holdup, flow velocity, CO<sub>2</sub> partial pressure, gas-wall shear stress, and liquid-wall shear stress are the most significant factors influencing corrosion rate. In conclusion, the GAN-QPSO-XGBoost hybrid model not only significantly improves the accuracy and reliability of corrosion rate prediction but also provides a scientific basis and operational guidance for pipeline maintenance, safety assessment, and protection strategy formulation in practical engineering.</div></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"207 ","pages":"Article 109547"},"PeriodicalIF":3.9,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145974183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
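The SHAP attributions reported above are Shapley values from cooperative game theory: each feature's marginal contribution to the prediction, averaged over all feature orderings. The toy below computes exact Shapley values for a hypothetical additive "corrosion model" (feature names and effect sizes are illustrative only, not the paper's fitted model); for an additive model, each feature's Shapley value equals its own effect:

```python
from itertools import permutations

def shapley_values(value, features):
    """Exact Shapley attribution: for each feature, average its marginal
    contribution value(S + {f}) - value(S) over all feature orderings.
    SHAP approximates this efficiently for large tree/boosting models."""
    phi = {f: 0.0 for f in features}
    perms = list(permutations(features))
    for order in perms:
        seen = set()
        for f in order:
            before = value(frozenset(seen))
            seen.add(f)
            phi[f] += value(frozenset(seen)) - before
    return {f: phi[f] / len(perms) for f in features}

# Hypothetical additive model: each feature present adds a fixed effect.
effects = {"temperature": 0.30, "flow_velocity": 0.15, "co2_pressure": 0.05}
def model(present):
    return sum(effects[f] for f in present)

phi = shapley_values(model, list(effects))
```

The attributions sum to the difference between the full-model output and the empty baseline (the efficiency property), which is what makes SHAP rankings such as "temperature most influential" internally consistent.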