
Latest Articles in Computers & Chemical Engineering

Uncertainty-aware joint inventory-transportation decisions in supply chain: A diffusion model-based multi-agent reinforcement learning approach with lead times estimation
IF 3.9 | Zone 2 (Engineering & Technology) | Q2 (Computer Science, Interdisciplinary Applications) | Pub Date: 2026-01-14 | DOI: 10.1016/j.compchemeng.2026.109567
Xiaofan Zhou, Li Feng, Aihua Zhu, Haoxu Shi
In global supply chain management, optimizing joint inventory-transportation decisions remains a critical challenge. Existing approaches often rely on deterministic assumptions or oversimplified stochastic models, which fail to adequately capture the dynamic uncertainties and multimodal variability inherent in replenishment lead times. This limitation severely restricts the robustness and coordination efficiency of decision policies in complex real-world environments. To address these issues, this paper proposes an uncertainty-aware decision framework, termed Diffusion model with Entropy-guided Multi-Agent Proximal Policy Optimization (DE-MAPPO). Our method employs a diffusion model to generate probabilistic lead time forecasts, leverages Monte Carlo sampling to quantify uncertainty, and introduces an entropy-guided adaptive strategy that enables agents to dynamically adjust inventory and transportation decisions based on forecast confidence. The effectiveness of the proposed framework is validated through experiments conducted in a simulated global chemical supply chain environment. The experimental results demonstrate that the DE-MAPPO framework significantly outperforms the baseline methods across key performance metrics.
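To make the uncertainty-handling step concrete, here is a hedged Python sketch (illustrative only, not the authors' implementation): Monte Carlo lead-time samples, drawn from a placeholder bimodal distribution standing in for the diffusion model, are summarized by an empirical entropy that scales a safety-stock decision. All names (sample_lead_times, entropy_guided_safety_stock, daily_demand, z_base) are hypothetical.

```python
# Illustrative sketch only (not the authors' code): Monte Carlo lead-time
# samples, from a placeholder distribution standing in for the diffusion
# model, yield an entropy-based confidence signal that scales safety stock.
import numpy as np

def sample_lead_times(n_samples, rng):
    """Stand-in for the diffusion model's probabilistic lead-time forecast:
    a bimodal mixture mimicking multimodal replenishment lead times (days)."""
    modes = rng.choice([7.0, 14.0], size=n_samples, p=[0.7, 0.3])
    return np.maximum(1.0, rng.normal(loc=modes, scale=1.5))

def forecast_entropy(samples, n_bins=20):
    """Shannon entropy (nats) of the empirical lead-time histogram."""
    hist, _ = np.histogram(samples, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def entropy_guided_safety_stock(samples, daily_demand, z_base=1.65):
    """Scale the safety factor by normalized forecast entropy: low confidence
    (high entropy) keeps a larger buffer, high confidence runs leaner."""
    h = forecast_entropy(samples)
    h_max = np.log(20)                      # entropy of a uniform 20-bin histogram
    z = z_base * (1.0 + h / h_max)          # simple entropy-guided adaptation
    return z * daily_demand * samples.std()

rng = np.random.default_rng(0)
lt_samples = sample_lead_times(2000, rng)
print(f"mean lead time {lt_samples.mean():.1f} d, "
      f"safety stock {entropy_guided_safety_stock(lt_samples, daily_demand=100):.0f} units")
```

In the paper the confidence signal modulates the multi-agent policy rather than a closed-form safety stock; the sketch only shows how entropy can serve as the adaptation knob.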
Citations: 0
Data, models, algorithms, AI and the role of PSE – the generation next
IF 3.9 | Zone 2 (Engineering & Technology) | Q2 (Computer Science, Interdisciplinary Applications) | Pub Date: 2026-01-13 | DOI: 10.1016/j.compchemeng.2026.109564
E N Pistikopoulos, Rafiqul Gani
Process Systems Engineering (PSE) is the scientific discipline of integrating scales and components describing the behavior of various systems via mathematical modeling, data analytics, synthesis, design, optimization, monitoring, control, and more. The emergence of Artificial Intelligence (AI) has provided an opportunity to re-assess the role of data, models and algorithms in the context of the evolving role of PSE. This article provides a critical guide to understanding and unlocking the potential opportunities and synergies that AI can offer, empowering the next generation of PSE developments towards truly Augmented-Intelligence-driven methods and tools.
Citations: 0
Physics-informed graph transformer fusion for leakage detection and grading in water distribution networks
IF 3.9 | Zone 2 (Engineering & Technology) | Q2 (Computer Science, Interdisciplinary Applications) | Pub Date: 2026-01-09 | DOI: 10.1016/j.compchemeng.2026.109563
Xianming Lang, Yibing Wang, Jiangtao Cao, Qiang Liu, Edith C.H. Ngai
Urban water distribution networks face significant challenges from pipeline leakage, which leads to water loss and operational inefficiencies. Existing data-driven detection methods often neglect inherent hydraulic principles, resulting in poor model generalizability and a lack of quantitative leakage severity assessment. To address these issues, this paper proposes a physics-informed graph transformer fusion (PI-GTF) framework that integrates hydraulic mechanisms with deep learning for leakage detection and grading. The model embeds hydraulic governing equations and signal propagation rules into a graph convolutional network (GCN) and a transformer to capture spatial pipeline topology and long-term temporal dependencies of leakage signals. A novel physics-aware hierarchical adversarial gating attention (PHAGA) module is designed to align and fuse these heterogeneous features effectively. Furthermore, a five-level leakage grading system is established by combining hydraulic model outputs with sensor-based features such as pressure fluctuations and abnormal flow durations. Experiments on a high-fidelity simulation model of Shenyang’s water network show that PI-GTF outperforms existing methods in terms of accuracy, precision, and F1 score, with zero cross-level misclassification. Migration tests on real residential networks demonstrate strong generalizability, with performance degradation within 2%. This study provides a reliable dual-driven framework for end-to-end leakage management and supports intelligent decision-making in water network maintenance.
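As a rough illustration of the physics-informed idea rather than the PI-GTF architecture itself, the sketch below (Python, assuming PyTorch) combines a data-fit term on node pressures with the residual of a linearized nodal mass balance A q = d; the incidence matrix, toy network, and weighting lam are assumptions for demonstration.

```python
# Minimal sketch (assumptions, not the paper's implementation): a composite
# physics-informed loss combining a data-fit term on node pressures with the
# residual of a linearized nodal mass balance A @ q = d (A: node-pipe
# incidence matrix, q: pipe flows, d: nodal demands).
import torch

def physics_informed_loss(pred_pressure, meas_pressure, pred_flow,
                          incidence, demand, lam=0.1):
    data_term = torch.mean((pred_pressure - meas_pressure) ** 2)
    residual = incidence @ pred_flow - demand        # mass-balance violation per node
    physics_term = torch.mean(residual ** 2)
    return data_term + lam * physics_term

# Toy 3-node / 2-pipe network: node1 -> node2 -> node3
A = torch.tensor([[-1.0,  0.0],
                  [ 1.0, -1.0],
                  [ 0.0,  1.0]])
demand = torch.tensor([-2.0, 1.0, 1.0])              # net withdrawal at each node
pred_q = torch.tensor([2.0, 1.0], requires_grad=True)
pred_p = torch.tensor([50.0, 48.0, 47.0], requires_grad=True)
meas_p = torch.tensor([50.2, 47.9, 46.8])

loss = physics_informed_loss(pred_p, meas_p, pred_q, A, demand)
loss.backward()                                      # gradients reach both loss terms
print(float(loss))
```

In the full model the pressures and flows would come from the GCN-transformer predictor; here they are fixed tensors so the loss composition is the only point being shown.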
Citations: 0
Advanced control of continuous pharmaceutical manufacturing processes: A case study on the application of artificial neural network for predictive control of a CDC line
IF 3.9 | Zone 2 (Engineering & Technology) | Q2 (Computer Science, Interdisciplinary Applications) | Pub Date: 2026-01-09 | DOI: 10.1016/j.compchemeng.2026.109560
Jianan Zhao, Geng Tian, Wei Yang, Das Jayanti, Abdollah Koolivand, Xiaoming Xu
The adoption of continuous pharmaceutical manufacturing has driven increased use of modeling, simulation, and advanced process control strategies. Artificial intelligence (AI) model-based approaches, like neural network predictive control (NNPC), offer advantages in providing insights, predictions, and process adjustments. However, evaluating the credibility of such models and accurately quantifying their impact on product quality remains challenging. In this study, a digital twin model of a continuous direct compression (CDC) line was developed based on residence time distribution theory. A two-layer neural network model was trained using data from the digital twin to predict system outputs. The NNPC model combined the trained neural network with an optimization block to adjust control signals and minimize tracking error and control effort. A proportional-integral-derivative (PID) controller was also developed for comparison. The developed neural network model accurately represented the dynamics of the nonlinear system. The tuned NNPC outperformed PID in setpoint tracking (zero overshoot, shorter settling times) and disturbance rejection (≤1.6% peak deviation, settling time of zero) for ±20% and ±50% changes. In conclusion, the NNPC model demonstrated remarkable performance in setpoint tracking and disturbance rejection for the simulated CDC line, underscoring the potential of AI-based control strategies in enhancing product quality and regulatory assessment.
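A minimal sketch of the NNPC structure under stated assumptions follows: a stand-in two-layer network with placeholder weights, rather than the digital-twin-trained model, predicts the next output, and a generic optimizer selects a short control sequence minimizing tracking error plus control effort; in receding-horizon operation only the first move would be applied.

```python
# Hedged sketch of the NNPC idea, not the authors' controller: a stand-in
# two-layer network predicts the next output, and an optimizer picks the
# control sequence minimizing tracking error plus control effort.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
W1, b1 = 0.5 * rng.normal(size=(8, 2)), np.zeros(8)   # placeholder "trained" hidden layer
W2, b2 = 0.5 * rng.normal(size=(1, 8)), np.zeros(1)   # placeholder "trained" output layer

def nn_predict(y, u):
    """Two-layer tanh network: y_{k+1} = f(y_k, u_k)."""
    h = np.tanh(W1 @ np.array([y, u]) + b1)
    return float(W2 @ h + b2)

def npc_cost(u_seq, y0, setpoint, q=1.0, r=0.05):
    """Squared tracking error plus penalized control moves over the horizon."""
    y, cost, u_prev = y0, 0.0, 0.0
    for u in u_seq:
        y = nn_predict(y, u)
        cost += q * (y - setpoint) ** 2 + r * (u - u_prev) ** 2
        u_prev = u
    return cost

horizon = 5
res = minimize(npc_cost, x0=np.zeros(horizon), args=(0.0, 0.8),
               method="L-BFGS-B", bounds=[(-2.0, 2.0)] * horizon)
print("first control move to apply:", res.x[0])   # receding horizon: apply u_0, re-solve
```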
Citations: 0
Surrogate-based optimization via clustering for box-constrained problems
IF 3.9 | Zone 2 (Engineering & Technology) | Q2 (Computer Science, Interdisciplinary Applications) | Pub Date: 2026-01-09 | DOI: 10.1016/j.compchemeng.2026.109559
Maaz Ahmad, Iftekhar A Karimi
Global optimization of large-scale, complex systems such as multi-physics black-box simulations and real-world industrial systems is important but challenging. This work presents a novel Surrogate-Based Optimization framework based on Clustering (SBOC) for global optimization of such systems, which can be used with any surrogate modeling technique. At each iteration, it uses a single surrogate model for the entire domain, employs k-means clustering to identify unexplored regions of the domain, and exploits a local region around the surrogate’s optimum to potentially add three new sample points in the domain. SBOC has been tested against sixteen promising benchmarking algorithms using 52 analytical test functions of varying input dimensionalities and shape profiles. It successfully identified a global minimum for most test functions with substantially lower computational effort than other algorithms. It worked especially well on test functions with four or more input variables. It was also among the top six algorithms in approaching a global minimum closely. Overall, SBOC is a robust, reliable, and efficient algorithm for global optimization of box-constrained systems.
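The sketch below walks through one plausible SBOC-style iteration under stated assumptions (the paper's exact sampling rules may differ): a single Gaussian-process surrogate is fit over the box, k-means on random candidate points flags an under-explored region, and three points are added, namely an exploratory cluster center, the surrogate optimum over the candidates, and a local perturbation around it.

```python
# One plausible SBOC-style iteration, sketched under stated assumptions;
# the black-box function, candidate pool size, and cluster count are toys.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor

def black_box(x):                                    # placeholder expensive simulation
    return np.sum((x - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(2)
dim, lb, ub = 2, np.zeros(2), np.ones(2)
X = rng.uniform(lb, ub, size=(10, dim))              # initial design
y = black_box(X)

gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)   # single global surrogate

cands = rng.uniform(lb, ub, size=(500, dim))
centers = KMeans(n_clusters=8, n_init=10, random_state=0).fit(cands).cluster_centers_
dist_to_data = np.min(np.linalg.norm(centers[:, None, :] - X[None, :, :], axis=-1), axis=1)
x_explore = centers[np.argmax(dist_to_data)]         # center of the least-explored cluster

x_opt = cands[np.argmin(gp.predict(cands))]          # surrogate optimum over the candidates
x_local = np.clip(x_opt + 0.05 * rng.normal(size=dim), lb, ub)   # local refinement point

for x_new in (x_explore, x_opt, x_local):            # evaluate and augment the design
    X = np.vstack([X, x_new])
    y = np.append(y, black_box(x_new))
print("best value so far:", y.min())
```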
Citations: 0
Data compression and model reduction based approach for kinetic parameter estimation with multiple spectra
IF 3.9 | Zone 2 (Engineering & Technology) | Q2 (Computer Science, Interdisciplinary Applications) | Pub Date: 2026-01-07 | DOI: 10.1016/j.compchemeng.2026.109550
Jie Zhu, Weifeng Chen, Lorenz T. Biegler
Estimating reaction kinetic parameters from spectral measurement data remains a critical yet unresolved challenge. Although singular value decomposition (SVD) is commonly used for spectra-based kinetic parameter estimation, the effectiveness of the estimation formulation using reduced data is not well understood. In this work, the rationale behind this formulation is supported by its derivation within a maximum likelihood framework. To address the large-scale kinetic parameter estimation problem under multiple initial conditions, an SVD-based simultaneous approach is introduced, which, in contrast to the traditional simultaneous method, avoids the direct manipulation of large-scale spectral matrices. While the specific systems of ordinary differential equations governing the reaction process vary with experimental conditions, an underlying mathematical structure is common to all. Hence, proper orthogonal decomposition (POD) is introduced to compress the model, yielding a reduced-order model for kinetic estimation. The intrinsic properties of POD make the SVD-POD simultaneous approach effective for handling weakly nonlinear reaction systems. Numerical results show that the proposed approach substantially lowers computational demands while preserving the accuracy of reaction kinetic parameter estimation from multiple spectral data.
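To make the compression step concrete, the hedged numpy sketch below (toy first-order kinetics and an illustrative rank choice, not the paper's estimator) truncates the spectral matrix D by SVD and scores candidate rate constants against the small reduced score matrix instead of the full spectra.

```python
# Hedged sketch of SVD-based data compression for spectral kinetics; the
# toy A -> B system, noise level, and grid search are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 5.0, 200)                       # time grid
k_true = 0.8                                         # A -> B rate constant
C = np.column_stack([np.exp(-k_true * t), 1.0 - np.exp(-k_true * t)])
S = rng.uniform(0.0, 1.0, size=(2, 300))             # pure-component spectra, 300 wavelengths
D = C @ S + 0.01 * rng.normal(size=(200, 300))       # measured spectra with noise

U, s, Vt = np.linalg.svd(D, full_matrices=False)
r = 2                                                # number of absorbing species
D_reduced = U[:, :r] * s[:r]                         # scores: 200 x r instead of 200 x 300

def reduced_objective(k):
    """Least-squares misfit in the reduced space for a candidate rate constant."""
    C_model = np.column_stack([np.exp(-k * t), 1.0 - np.exp(-k * t)])
    coeffs, *_ = np.linalg.lstsq(C_model, D_reduced, rcond=None)
    return float(np.linalg.norm(D_reduced - C_model @ coeffs) ** 2)

ks = np.linspace(0.2, 1.5, 131)
print("estimated k =", ks[np.argmin([reduced_objective(k) for k in ks])])
```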
Citations: 0
Application and interpretability of a hybrid-enhanced XGBoost model for corrosion-rate prediction in alkylation unit piping
IF 3.9 | Zone 2 (Engineering & Technology) | Q2 (Computer Science, Interdisciplinary Applications) | Pub Date: 2026-01-06 | DOI: 10.1016/j.compchemeng.2026.109558
Jinqiu Hu, Mingjun Ma, Laibin Zhang
For pipeline corrosion-rate prediction in refinery units characterized by scarce high-corrosion-rate samples, numerous operating variables, and strong temporal perturbations in process parameters, this study proposes a hybrid framework that integrates structural diagnosis, feature selection, and improved ensemble learning. First, kernel principal component analysis (KPCA) is employed to identify nonlinear and redundant structures in the data, and a subset of operating-condition features with high relevance and low redundancy is constructed using mutual information–minimum redundancy maximum relevance (MI–mRMR). Then, DART (Dropouts meet Multiple Additive Regression Trees) is incorporated into XGBoost to mitigate overfitting, while a hybrid dynamic perturbation strategy grey wolf optimizer (HDPSGWO) is used to perform global optimization of the hyperparameters. Using multi-loop data from the purification section of a sulfuric acid alkylation unit as a case study, the proposed model achieves RMSE=0.005876, MAE=0.004282, and R²=0.9648 on the test set, and maintains the best performance in a systematic comparison against five benchmark models. Based on TreeSHAP, the model interpretation further reveals the dominant factors driving corrosion-rate variations as well as the interval effects between operating parameters and corrosion rate. Reproduction of an engineering corrosion event verifies the early-warning capability of the proposed model. The results demonstrate that the hybrid framework can provide reliable corrosion-rate prediction under complex, non-stationary operating conditions, offering quantitative support for corrosion management and maintenance decision-making in refinery and petrochemical units.
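A simplified, hedged sketch of the modeling pipeline follows (assuming the xgboost and scikit-learn packages): a plain mutual-information ranking stands in for the MI–mRMR step, the KPCA diagnosis is omitted, fixed placeholder hyperparameters replace the HDPSGWO search, and the DART booster is enabled through XGBoost's booster option.

```python
# Illustrative pipeline, not the authors' exact workflow: MI ranking stands
# in for MI-mRMR and fixed hyperparameters stand in for the HDPSGWO search.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 12))                                   # operating variables
y = 0.2 * X[:, 0] - 0.1 * X[:, 3] + 0.05 * rng.normal(size=400)  # synthetic corrosion rate

mi = mutual_info_regression(X, y, random_state=0)
top = np.argsort(mi)[::-1][:6]                                   # keep the 6 most informative
X_sel = X[:, top]

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.25, random_state=0)
model = XGBRegressor(booster="dart", rate_drop=0.1, skip_drop=0.5,
                     n_estimators=300, learning_rate=0.05, max_depth=4)
model.fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))
```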
Citations: 0
Data-driven hybrid control for coordinated operation of multicolumn NGL separation systems
IF 3.9 | Zone 2 (Engineering & Technology) | Q2 (Computer Science, Interdisciplinary Applications) | Pub Date: 2026-01-01 | DOI: 10.1016/j.compchemeng.2025.109548
Sahar Shahriari, Norollah Kasiri, Javad Ivakpour
This study introduces a unified data-driven feedforward–feedback control framework for a four-column natural gas liquids (NGL) separation system. A soft sensor estimates upstream feed composition and flow disturbances, while predictive neural networks forecast the required control-action adjustments one step ahead, enabling early compensation of disturbances as they propagate through the column train. Unlike conventional approaches, the framework captures disturbance propagation effects through data-driven intercolumn relationships, without relying on state estimation or rigorous process models. The hybrid controller, implemented in an Aspen Dynamics–Simulink environment, combines predictive compensation with local PI feedback for regulatory stability. Simulation results demonstrate significant performance improvements, reducing integral absolute error (IAE) by over 50 % and integral time absolute error (ITAE) by up to 67 % across the distillation train. The proposed framework provides a generalizable and computationally efficient strategy for coordinated control of multicolumn and other cascade-type process systems.
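Structurally, the hybrid law can be pictured with the minimal sketch below (placeholder gains, a toy first-order plant, and a static map standing in for the trained predictive network): the applied input is the local PI feedback plus a feedforward correction computed from the soft sensor's disturbance estimate.

```python
# Minimal structural sketch of the hybrid feedforward-feedback law; all
# gains, the plant, and the feedforward map are placeholders, not the
# paper's identified models.
import numpy as np

def feedforward_correction(d_hat):
    """Stand-in for the predictive network: maps the estimated feed
    disturbance to the control adjustment needed one step ahead."""
    return -1.0 * d_hat                        # cancels the toy plant's disturbance path

def simulate(n_steps=60, setpoint=1.0):
    Kp, Ki, dt = 1.2, 0.4, 1.0                 # illustrative PI tuning
    y, integral = 0.0, 0.0
    for k in range(n_steps):
        d = 0.5 if k >= 20 else 0.0            # step feed disturbance at k = 20
        e = setpoint - y
        integral += e * dt
        u_fb = Kp * e + Ki * integral          # local PI feedback for regulatory stability
        u_ff = feedforward_correction(d)       # early compensation from the soft sensor
        u = u_fb + u_ff
        y = 0.8 * y + 0.2 * u + 0.2 * d        # toy first-order plant with disturbance
    return y

print("output after disturbance rejection:", round(simulate(), 3))
```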
Citations: 0
Generative learning for slow manifolds and bifurcation diagrams
IF 3.9 | Zone 2 (Engineering & Technology) | Q2 (Computer Science, Interdisciplinary Applications) | Pub Date: 2025-12-30 | DOI: 10.1016/j.compchemeng.2025.109544
Ellis R. Crabtree, Dimitris G. Giovanis, Nikolaos Evangelou, Juan M. Bello-Rivas, Ioannis G. Kevrekidis
In dynamical systems characterized by separation of time scales, the approximation of so-called “slow manifolds”, on which the long-term dynamics lie, is a useful step for model reduction. Initializing on such slow manifolds is a useful step in modeling, since it circumvents fast transients, and is crucial in multiscale algorithms (like the equation-free approach) alternating between fine scale (fast) and coarser scale (slow) simulations. In a similar spirit, when one studies the infinite time dynamics of systems depending on parameters, the system attractors (e.g., steady states) lie on bifurcation diagrams (curves for one-parameter continuation) and, more generally, on manifolds in state × parameter space. Sampling these manifolds gives us representative attractors (here, steady states of ODEs or PDEs) at different parameter values. Algorithms for the systematic construction of these manifolds (slow manifolds, bifurcation diagrams) are required parts of the “traditional” numerical nonlinear dynamics toolkit.
In more recent years, as the field of Machine Learning develops, conditional score-based generative models (cSGMs) have been demonstrated to exhibit remarkable capabilities in generating plausible data from target distributions that are conditioned on some given label. It is tempting to exploit such generative models to produce samples of data distributions (points on a slow manifold, steady states on a bifurcation surface) conditioned on (consistent with) some quantity of interest (QoI, observable). In this work, we present a framework for using cSGMs to quickly (a) initialize on a low-dimensional (reduced-order) slow manifold of a multi-time-scale system consistent with desired value(s) of a QoI (a “label”) on the manifold, and (b) approximate steady states in a bifurcation diagram consistent with a (new, out-of-sample) parameter value. This conditional sampling can help uncover the geometry of the reduced slow-manifold and/or approximately “fill in” missing segments of steady states in a bifurcation diagram. The quantity of interest, which determines how the sampling is conditioned, is either known a priori or identified using manifold learning-based dimensionality reduction techniques applied to the training data.
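For readers unfamiliar with score-based sampling, the compact sketch below illustrates conditional annealed Langevin sampling in the same spirit; the trained conditional score network is replaced by the analytic score of a toy conditional Gaussian so the example runs standalone, and the target distribution and noise schedule are assumptions rather than the authors' setup.

```python
# Compact, self-contained sketch of conditional annealed Langevin sampling;
# the analytic score of a toy conditional Gaussian replaces a trained cSGM.
import numpy as np

DATA_STD = 0.1

def conditional_score(x, sigma, label):
    """Score of the sigma-perturbed toy conditional p(x | label) = N(label, DATA_STD^2)."""
    return -(x - label) / (DATA_STD ** 2 + sigma ** 2)

def annealed_langevin_sample(label, n_samples=1000, steps_per_level=100,
                             eps0=1e-4, seed=0):
    """Draw samples conditioned on `label` by annealing over decreasing noise levels."""
    sigmas = np.geomspace(1.0, 0.01, 10)
    rng = np.random.default_rng(seed)
    x = rng.normal(scale=sigmas[0], size=n_samples)      # start from broad noise
    for sigma in sigmas:
        step = eps0 * (sigma / sigmas[-1]) ** 2          # standard step-size scaling
        for _ in range(steps_per_level):
            z = rng.normal(size=n_samples)
            x = x + step * conditional_score(x, sigma, label) + np.sqrt(2 * step) * z
    return x

samples = annealed_langevin_sample(label=2.5)            # condition on a QoI value of 2.5
print("sample mean / std:", samples.mean().round(3), samples.std().round(3))
```

In the paper's setting, the label would be the quantity of interest on the slow manifold or the continuation parameter, and the score would come from a trained conditional network rather than a closed-form Gaussian.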
Citations: 0
Research on natural gas pipeline corrosion prediction by integrating extreme gradient boosting and generative adversarial network
IF 3.9 | Zone 2 (Engineering & Technology) | Q2 (Computer Science, Interdisciplinary Applications) | Pub Date: 2025-12-30 | DOI: 10.1016/j.compchemeng.2025.109547
Guoxi He, Jing Tian, Dezhi Tang, Fei Zhao, Shuhua Li, Chao Li, Kexi Liao, XiaoFei Chen, Wen Yang
Accurate prediction of corrosion rates is of great significance for ensuring pipeline integrity and operational safety. This study proposes a novel hybrid prediction model, GAN-QPSO-XGBoost, which integrates a Generative Adversarial Network (GAN), Quantum-behaved Particle Swarm Optimization (QPSO), and the XGBoost algorithm. This study used a GAN to augment 100 field samples with 50 high-quality synthetic samples, forming an enhanced dataset of 150 samples. The Kolmogorov-Smirnov test showed p greater than 0.05 and MAPE around 5%, confirming the synthetic data’s statistical consistency and numerical reliability. QPSO, by introducing quantum behavior mechanisms, effectively overcomes the issues of local optima and premature convergence commonly found in traditional optimization algorithms, further optimizing the predictive performance of XGBoost. To comprehensively evaluate model performance, this study adopts multiple standard metrics for validation and introduces the SHAP (Shapley Additive exPlanations) method to enhance model interpretability. Experimental results demonstrate that the GAN-QPSO-XGBoost hybrid model significantly outperforms existing benchmark models in corrosion rate prediction, with the following evaluation metrics: R² = 0.922, MAPE = 1.24%, MAE = 0.036, MSE = 0.0018, and RMSE = 0.042, fully proving its excellent predictive accuracy and stability. SHAP analysis further reveals that temperature, liquid holdup, flow velocity, CO2 partial pressure, gas-wall shear stress, and liquid-wall shear stress are the most significant factors influencing corrosion rate. In conclusion, the GAN-QPSO-XGBoost hybrid model not only significantly improves the accuracy and reliability of corrosion rate prediction but also provides a scientific basis and operational guidance for pipeline maintenance, safety assessment, and protection strategy formulation in practical engineering.
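The augment, validate, and train structure described above can be sketched as follows (hedged: a noise-perturbed resampler stands in for the trained GAN, generic XGBoost hyperparameters stand in for the QPSO-tuned ones, and the per-feature KS check at alpha = 0.05 is illustrative).

```python
# Sketch of the augment / validate / train structure; the "generator" and
# the model hyperparameters are placeholders, not the paper's trained GAN
# or QPSO-optimized settings.
import numpy as np
from scipy.stats import ks_2samp
from xgboost import XGBRegressor

rng = np.random.default_rng(5)
X_real = rng.normal(size=(100, 6))                       # 100 field records, 6 features
y_real = 0.3 * X_real[:, 0] + 0.05 * rng.normal(size=100)

def fake_generator(n):
    """Placeholder for the GAN: resample real rows and add small noise."""
    idx = rng.integers(0, len(X_real), size=n)
    return X_real[idx] + 0.05 * rng.normal(size=(n, 6)), y_real[idx]

X_syn, y_syn = fake_generator(50)                        # 50 synthetic samples

# Keep the synthetic data only if no feature distribution differs significantly.
p_values = [ks_2samp(X_real[:, j], X_syn[:, j]).pvalue for j in range(X_real.shape[1])]
if min(p_values) > 0.05:
    X_aug = np.vstack([X_real, X_syn])
    y_aug = np.concatenate([y_real, y_syn])
else:
    X_aug, y_aug = X_real, y_real                        # fall back to field data only

model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_aug, y_aug)
print("training samples used:", len(X_aug), "| min KS p-value:", round(min(p_values), 3))
```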
Citations: 0