This paper develops sparse hybrid Gaussian Radial Basis Neural Networks (GRAB-NNs) for data-driven modeling. The proposed architectures are hidden-layered networks combining Gaussian and sigmoid hidden nodes. Efficient training algorithms are developed for solving the resulting mixed-integer nonlinear programming problem: the optimal number of radial basis function (RBF) centers is obtained by a bidirectional branch-and-bound algorithm, followed by optimal estimation of the center coordinates, widths, and connection weights by minimizing the corrected Akaike Information Criterion (AICc). Algorithmic approaches are developed for exactly satisfying mass constraints during both training and simulation. Sequential decomposition-based training approaches are developed by exploiting the structure of the hybrid model, which allows a different training algorithm for each sublayer and thus faster computation. The performance of the proposed network structures and training algorithms, in the presence and absence of constraints, is evaluated on two nonlinear dynamic chemical systems.
"Mass-Constrained hybrid Gaussian radial basis neural networks: Development, training, and applications to modeling nonlinear dynamic noisy chemical processes." Angan Mukherjee, Dipendu Gupta, Debangsu Bhattacharyya. Computers & Chemical Engineering, vol. 197, Article 109080. Published 2025-02-27. DOI: 10.1016/j.compchemeng.2025.109080.
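The corrected Akaike Information Criterion (AICc) minimized during center selection has a closed form in the residual error and parameter count; a minimal sketch, where the SSE values, sample size, and the `2 * k_c + 1` parameter-count rule are illustrative and not taken from the paper:

```python
import math

def aicc(sse: float, n: int, k: int) -> float:
    """Corrected Akaike Information Criterion for a model with k
    parameters fit to n samples, given the sum of squared errors."""
    if n - k - 1 <= 0:
        raise ValueError("AICc requires n > k + 1")
    aic = n * math.log(sse / n) + 2 * k
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# Among candidate RBF-center counts, pick the one minimizing AICc.
# Mapping centers -> SSE and the parameter count 2*k_c + 1 (centers,
# widths, weights) are invented for illustration.
candidates = {3: 12.0, 5: 6.0, 8: 5.5}
n = 50
best = min(candidates, key=lambda k_c: aicc(candidates[k_c], n, 2 * k_c + 1))
```

The small-sample correction term penalizes larger networks more sharply than plain AIC, which is what drives the sparsity of the selected structure.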
Field activities performed by human operators are indispensable in process industries despite the prevalence of automation. To ensure safe and efficient plant operations, periodic training and performance assessment of field operators (FOPs) is essential. While numerous studies have focused on control room operators, relatively little attention has been directed to FOPs. Conventional training and assessment techniques for FOPs are action-based and ignore the cognitive aspects. Here, we seek to address this crucial gap in the performance assessment of FOPs. Specifically, we use eye gaze movements of FOPs to gain insights into their information acquisition patterns, a key component of cognitive behavior. As the FOPs are mobile and visit different sections of the plant, we use head-mounted eye-trackers. A major challenge in analyzing gaze information obtained from head-mounted eye trackers is that the operators’ Field of View (FoV) varies continuously as they perform different activities. Traditionally, the challenge posed by the variations in the FoV is tackled through manual annotation of the gaze on Areas of Interest (AOIs), which is knowledge- and time-intensive. Here, we propose a methodology based on the Scale-Invariant Feature Transform (SIFT) to automate the AOI identification. We demonstrate our methodology with a case study involving human subjects operating a lab-scale heat exchanger setup. Our automated approach shows high accuracy (99.6%) in gaze-AOI mapping and requires a fraction of the time compared to manual, frame-by-frame annotation. It therefore offers a practical approach for performing eye tracking on FOPs, enabling quantification of their skills and expertise as well as operator-specific training.
"Performance monitoring of chemical plant field operators through eye gaze tracking." Rohit Suresh, Babji Srinivasan, Rajagopalan Srinivasan. Computers & Chemical Engineering, vol. 198, Article 109079. Published 2025-02-27. DOI: 10.1016/j.compchemeng.2025.109079.
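The core gaze-to-AOI mapping step can be sketched as a homography projection: in a SIFT-based pipeline the 3x3 transform would be estimated from feature matches between the scene frame and a reference image, whereas here it is simply given. The AOI names, rectangles, and identity homography below are illustrative, not from the paper:

```python
import numpy as np

def map_gaze(gaze_xy, H):
    """Project a gaze point from the operator's scene-camera frame into
    the reference frame using homography H (in a SIFT-based pipeline,
    H would be estimated from keypoint matches; here it is given)."""
    x, y = gaze_xy
    p = H @ np.array([x, y, 1.0])
    return p[:2] / p[2]          # de-homogenize

def gaze_aoi(gaze_xy, H, aois):
    """Return the name of the AOI rectangle containing the mapped gaze
    point, or None. `aois` maps name -> (xmin, ymin, xmax, ymax)."""
    u, v = map_gaze(gaze_xy, H)
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= u <= x1 and y0 <= v <= y1:
            return name
    return None

# Illustrative: two AOIs on a reference panel image, trivial homography
aois = {"valve": (0, 0, 50, 50), "gauge": (60, 0, 120, 50)}
hit = gaze_aoi((70.0, 20.0), np.eye(3), aois)
```

Automating the homography estimation per frame is what removes the manual, frame-by-frame annotation burden described above.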
Pub Date: 2025-02-25, DOI: 10.1016/j.compchemeng.2025.109076
Lunan Li , Zhimin Wu , Chuan Jin
Integrating renewable sources with existing power plants represents a viable strategy for enhancing feasibility, reducing thermodynamic irreversibility, and lowering air pollution. This study employs a biomass digestion method to produce syngas, which feeds a post-combustion chamber to assist a methane-fueled Brayton cycle. An efficient heat design model is developed using the Engineering Equation Solver (EES), integrating a geothermal-powered trigeneration unit with the upper cycle to produce power, cooling, and potable water. The integrated scheme includes a flash-binary geothermal plant, a separation vessel desalination process, multi-effect desalination, and generator-absorber-heat exchange refrigeration units. Energy, exergy, and economic analyses are conducted to assess the thermodynamic and economic feasibility of the system. A multi-criteria optimization is conducted in two scenarios: power-freshwater and exergy-net present value (NPV), using an integrated Histogram Gradient Boosting Regression (HGBR) and Multi-Objective Particle Swarm Optimization (MOPSO) model. The first scenario showed a 55.37 % increase in net electricity output (2100.28 kW) and a 51.7 % improvement in freshwater generation (36.09 kg/s) compared to the base case. The optimum point revealed an exergy efficiency of 28.36 %, a total NPV of $5.703 M, and a payback period of 4.85 years. In the second scenario, an exergy efficiency of 29.52 %, an NPV of $4.41 M, and a payback period of 5.37 years are achieved. Based on the results, the first scenario demonstrates superior performance.
"Integrating a multigeneration system into a biogas-fueled gas turbine power plant for CO2 emission reduction: An efficient design and exergy-economic assessment." Lunan Li, Zhimin Wu, Chuan Jin. Computers & Chemical Engineering, vol. 197, Article 109076. Published 2025-02-25. DOI: 10.1016/j.compchemeng.2025.109076.
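The NPV and payback figures reported above follow from standard discounted-cash-flow formulas; a minimal sketch, where the cash-flow numbers are illustrative and the simple (undiscounted) payback is a simplification of whatever convention the study uses:

```python
def npv(cash_flows, rate):
    """Net present value of yearly cash flows; cash_flows[0] is the
    year-0 investment (negative), subsequent entries are yearly income."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(invest, annual_income):
    """Simple (undiscounted) payback period in years."""
    return invest / annual_income

# Illustrative: $5 M investment, $1.2 M/yr income over 10 years at 8%
flows = [-5.0] + [1.2] * 10
value = npv(flows, 0.08)
```

A multi-objective search such as the HGBR+MOPSO combination in the paper would evaluate these economic metrics alongside exergy efficiency at each candidate design point.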
Pub Date: 2025-02-23, DOI: 10.1016/j.compchemeng.2025.109066
Subhadra Devi Saripalli , Rajagopalan Srinivasan
The fine chemical industry regularly develops novel products for diverse applications and produces them at scale in multi-purpose, batch processes. These processes often involve highly hazardous chemicals and reactive chemical hazards. If an unacceptable risk is identified after the production route has been finalized, it would necessitate expensive redesigns and result in suboptimal risk management strategies with significant delays in time to market. It is, therefore, desirable to consider inherent safety analysis during route selection. The traditional methods for inherent safety analysis are not directly applicable to the fine chemicals industry, which has unique characteristics; specifically, they require information on a large number of properties of materials and reactions, which are not usually available for novel pathways, especially at the route-selection stage. While safety data could be determined experimentally, this would be time-consuming and expensive, especially if the route were to be rejected later in the process development. In this paper, we propose a practicable methodology that addresses these important challenges unique to the fine chemicals industry. Our methodology leverages chemoinformatic models, which are increasingly becoming available and reliable, to estimate material and reaction properties. Various chemoinformatic models are systematically integrated into the process development workflow so that fire, toxicity, and reactivity hazards can be estimated when necessary, thus enabling inherently safer route selection. The methodology is illustrated using an industrial case study of Boscalid manufacture. Fifty-three safety-critical properties are predicted using various chemoinformatics methods and enable the identification of safety issues at the early stages of the process lifecycle.
"A cheminformatics-based methodology to incorporate safety considerations during accelerated process development." Subhadra Devi Saripalli, Rajagopalan Srinivasan. Computers & Chemical Engineering, vol. 198, Article 109066. Published 2025-02-23. DOI: 10.1016/j.compchemeng.2025.109066.
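A route-screening step of the kind described can be sketched as a weighted aggregation of predicted hazard properties. Everything below — the hazard categories, the 0-4 scale, the weights, and the additive aggregation rule — is invented for illustration and is not the paper's methodology; in practice the per-material scores would come from cheminformatic property predictors:

```python
def route_hazard_index(materials, weights=None):
    """Aggregate per-material hazard scores (flammability, toxicity,
    reactivity on an assumed 0-4 scale) into one index per route.
    Lower is inherently safer under this illustrative scheme."""
    weights = weights or {"flammability": 1.0, "toxicity": 1.0, "reactivity": 1.5}
    total = 0.0
    for props in materials.values():
        total += sum(weights[h] * props.get(h, 0) for h in weights)
    return total

# Two hypothetical synthesis routes with predicted hazard scores
route_a = {"solvent": {"flammability": 3, "toxicity": 1, "reactivity": 0},
           "reagent": {"flammability": 1, "toxicity": 2, "reactivity": 3}}
route_b = {"solvent": {"flammability": 1, "toxicity": 1, "reactivity": 0},
           "reagent": {"flammability": 1, "toxicity": 2, "reactivity": 1}}
safer = min(("A", "B"),
            key=lambda r: route_hazard_index(route_a if r == "A" else route_b))
```

The value of predicted (rather than measured) properties is exactly that such a comparison can be run before any route is committed to experimentally.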
Pub Date: 2025-02-22, DOI: 10.1016/j.compchemeng.2025.109068
Wu Deng , Xiankang Xin , Ruixuan Song , Xinzhou Yang , Weifeng Wang , Gaoming Yu
Oil production forecasting is essential in the petroleum and natural gas sector, providing a fundamental basis for the adjustment of development plans and improving resource utilization efficiency for engineers and decision-makers. However, current deep learning models often struggle with long-term dependencies in long time series and high computational costs, limiting their effectiveness in complex time series forecasting tasks. This paper introduces the Informer model, an enhancement of the Transformer framework, to address these limitations. For evaluation and verification, the Informer model and reference models such as CNN, LSTM, GRU, CNN-GRU, and GRU-LSTM were applied to publicly available time-series datasets, and the optimal hyperparameters of the model were identified using Bayesian optimization and the Hyperband algorithm (BOHB). The experimental results demonstrated that the Informer model outperformed the others in computational speed, resource efficiency, and handling large-scale data, showing potential for practical applications in the future.
"A time series forecasting method for oil production based on Informer optimized by Bayesian optimization and the hyperband algorithm (BOHB)." Wu Deng, Xiankang Xin, Ruixuan Song, Xinzhou Yang, Weifeng Wang, Gaoming Yu. Computers & Chemical Engineering, vol. 197, Article 109068. Published 2025-02-22. DOI: 10.1016/j.compchemeng.2025.109068.
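The successive-halving backbone of Hyperband (which BOHB combines with Bayesian-model-based sampling rather than pure random sampling) is compact enough to sketch; the loss function, learning-rate search space, and bracket parameters below are illustrative:

```python
import random

def successive_halving(configs, evaluate, budget=1, eta=3):
    """One Hyperband-style bracket: evaluate all configs at a small
    budget, keep the best 1/eta, multiply the budget by eta, repeat.
    (BOHB additionally replaces random config sampling with a Bayesian
    model; this sketch shows the halving loop only.)"""
    while len(configs) > 1:
        scores = {c: evaluate(c, budget) for c in configs}
        keep = max(1, len(configs) // eta)
        configs = sorted(configs, key=scores.get)[:keep]  # lower loss is better
        budget *= eta
    return configs[0]

# Illustrative: pretend the loss is distance from an unknown optimum lr=0.01
random.seed(0)
candidates = [10 ** random.uniform(-4, -1) for _ in range(9)]
best_lr = successive_halving(candidates, lambda lr, b: abs(lr - 0.01))
```

In a real run `evaluate` would train the model for `budget` epochs and return validation loss, so cheap low-budget passes weed out poor configurations early.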
Pub Date: 2025-02-22, DOI: 10.1016/j.compchemeng.2025.109061
Dimitrios M. Fardis , Donghyun Oh , Nikolaos V. Sahinidis , Alejandro Garciadiego , Andrew Lee
Critical minerals (CMs) and Rare Earth Elements (REEs) play a vital role in crucial infrastructure technologies such as renewable energy generation and batteries. Recovering them from waste materials has recently been found to significantly reduce environmental impact and supply chain costs related to these materials. In this work, we investigate surrogate modeling techniques aimed at simplifying the modeling, simulation, and optimization of the leaching processes involved in CM and REE recovery flowsheets. As there is currently a lack of systematic studies on this topic, we perform extensive computational testing to ascertain which surrogate models are easier to construct and offer high predictive accuracy. Our results suggest that sparse quadratic models balance predictive accuracy and computational efficiency. Training and using these surrogates for global optimization of the leaching process requires two orders of magnitude fewer measurements and is up to four orders of magnitude faster than optimizing the original simulation using equation-oriented optimization or derivative-free optimization.
"Surrogate modeling and optimization of the leaching process in a rare earth elements recovery plant." Dimitrios M. Fardis, Donghyun Oh, Nikolaos V. Sahinidis, Alejandro Garciadiego, Andrew Lee. Computers & Chemical Engineering, vol. 197, Article 109061. Published 2025-02-22. DOI: 10.1016/j.compchemeng.2025.109061.
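A sparse quadratic surrogate of the kind found effective here can be sketched as least squares on quadratic features followed by coefficient thresholding — a simple stand-in for whatever sparsity-inducing selection the authors use, with an illustrative target function:

```python
import numpy as np

def fit_sparse_quadratic(X, y, tol=1e-2):
    """Fit y ~ c0 + sum(b_i x_i) + sum(q_ij x_i x_j) by least squares,
    then zero out coefficients with magnitude below `tol` to obtain a
    sparse surrogate. Returns the feature matrix and coefficients."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    coef[np.abs(coef) < tol] = 0.0
    return A, coef

# Illustrative: recover the sparse ground truth y = 2*x0^2 + 3*x1
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
y = 2 * X[:, 0] ** 2 + 3 * X[:, 1]
A, coef = fit_sparse_quadratic(X, y)
pred = A @ coef
```

Because the feature basis is fixed, fitting needs only a modest number of simulator samples, which is the source of the measurement savings reported above.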
Pub Date: 2025-02-22, DOI: 10.1016/j.compchemeng.2025.109044
Lucia Balsemão Furtado Logsdon , Virgilio José Martins Ferreira Filho , Paulo Cesar Ribas
Efficient operation sequencing is crucial in industrial processes to minimize delays and optimize resource utilization. This study focuses on the sequencing of operations for the recovery of decommissioned submarine pipelines, aiming to minimize project completion times. Unlike traditional sequencing problems, our approach incorporates unique constraints such as precedence relationships and the composition of trips for pipeline removal. We propose an optimization framework integrating a mathematical model and a hybrid solution that combines metaheuristic algorithms with exact methods for solving large-scale instances. Computational experiments were conducted on 40 instances of 100 pipelines each, randomly drawn from real-world data. The heuristic generated feasible initial solutions in all cases and enabled the mathematical model to find optimal solutions in 42.5% of the instances. However, in 35% of the cases, no feasible solutions were obtained within the time limit. For cases where the solver reached a solution, the average project completion time was 214.07 days, with a median of 0.0 and a standard deviation of 547.35 days. A real-world case study highlighted the practical applicability of the proposed approach. Using the constructive heuristic as the solver’s initial solution achieved the best result within 5000 s, with an objective function value of 9774 days. This work is particularly relevant in Brazil’s Oil and Gas industry, where deep-water flexible pipelines and strict environmental deadlines demand effective optimization models for decommissioning planning.
"Optimization models and heuristics for effective pipeline decommissioning planning in the oil and gas industry." Lucia Balsemão Furtado Logsdon, Virgilio José Martins Ferreira Filho, Paulo Cesar Ribas. Computers & Chemical Engineering, vol. 197, Article 109044. Published 2025-02-22. DOI: 10.1016/j.compchemeng.2025.109044.
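A constructive heuristic that respects precedence constraints, of the kind used to seed the solver above, can be sketched as a greedy dispatch loop. The dispatch rule (longest available operation first) and the data are illustrative, and the paper's trip-composition constraints are omitted:

```python
def greedy_sequence(durations, precedence):
    """Constructive heuristic: repeatedly schedule the longest
    operation whose predecessors are all complete. Returns the
    sequence and the completion time of a single-vessel schedule."""
    done, order, t = set(), [], 0.0
    while len(done) < len(durations):
        ready = [op for op in durations
                 if op not in done
                 and all(p in done for p in precedence.get(op, []))]
        op = max(ready, key=durations.get)   # longest-first dispatch rule
        order.append(op)
        done.add(op)
        t += durations[op]
    return order, t

# Illustrative: four removal operations (durations in days) with
# precedence relationships C after A, D after A and B
durations = {"A": 3.0, "B": 1.0, "C": 2.0, "D": 4.0}
precedence = {"C": ["A"], "D": ["A", "B"]}
order, makespan = greedy_sequence(durations, precedence)
```

Any feasible order produced this way can then be handed to the exact model as a warm start, which is the role the constructive heuristic plays in the reported experiments.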
Industry 4.0 has increased the demand for advanced fault detection and diagnosis (FDD) in complex industrial processes. This research introduces a novel approach to causal discovery and FDD using Variational Graph Autoencoders (VGAEs) enhanced with physics-informed constraints and conformal learning. Our method addresses limitations in conventional techniques, such as Granger causality, which struggle with high-dimensional, nonlinear systems. By integrating Graph Convolutional Networks (GCNs) and an entropy-based dynamic edge sampling method, the framework focuses on high-uncertainty regions of the causal graph. Conformal learning establishes rigorous thresholds for causal inference. Validated through simulation and case studies, including an Australian refinery and the Tennessee Eastman Process, our approach improves causal discovery accuracy, reduces spurious connections, and enhances fault classification. Integrating domain-specific physics information also led to faster convergence and reduced computational demands. This research provides an efficient, statistically robust approach for causal discovery and FDD in complex industrial systems.
"Entropy-enhanced batch sampling and conformal learning in VGAE for physics-informed causal discovery and fault diagnosis." Mohammadhossein Modirrousta, Alireza Memarian, Biao Huang. Computers & Chemical Engineering, vol. 197, Article 109053. Published 2025-02-20. DOI: 10.1016/j.compchemeng.2025.109053.
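The entropy-based edge-sampling idea can be sketched directly from the Bernoulli entropy of learned edge probabilities: the most uncertain entries of the adjacency are the ones worth re-sampling. The VGAE that would produce these probabilities is not shown, and the probability matrix is illustrative:

```python
import numpy as np

def sample_uncertain_edges(edge_probs, k):
    """Select the k edges with the highest Bernoulli entropy, i.e. the
    most uncertain entries of a learned adjacency-probability matrix
    (probabilities near 0.5 are maximally uncertain)."""
    p = np.clip(edge_probs, 1e-9, 1 - 1e-9)           # avoid log(0)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    flat = np.argsort(entropy, axis=None)[::-1][:k]   # highest entropy first
    return [tuple(np.unravel_index(i, edge_probs.shape)) for i in flat]

# Illustrative 2x2 adjacency probabilities from a hypothetical decoder
probs = np.array([[0.99, 0.48],
                  [0.05, 0.93]])
edges = sample_uncertain_edges(probs, 1)
```

Concentrating gradient updates (or conformal calibration) on these high-entropy edges is what lets the method focus effort on the ambiguous parts of the causal graph.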
Pub Date: 2025-02-19, DOI: 10.1016/j.compchemeng.2025.109067
Derrick Adams , Jay H. Lee , Shin Hyuk Kim , Seongmin Heo
This study presents a transformative approach for the real-time monitoring of continuous slug-flow crystallizers in the pharmaceutical and fine chemical industries, marking a shift from traditional batch processing to continuous manufacturing. By leveraging advanced computer vision techniques within inline imaging systems, including single-camera, binocular, and trinocular stereo vision, we offer a novel solution for the multispatial monitoring and analysis of the crystallization process. This methodology facilitates the automatic detection of solution slugs and bulk crystal regions, enabling the estimation of dynamic bulk crystal density, slug volumes, and porosity in real time. The deployment of ResNet18 and Mask R-CNN models underpins the method's efficacy, demonstrating remarkable performance metrics: ResNet18 ensures precise image detection, while Mask R-CNN achieves an average precision (AP) of 96.4%, with 100% at both AP50 and AP75 thresholds for segmentation of bulk crystals and solution slugs. These results validate the models' accuracy and reliability in estimating quality variables essential for continuous slug-flow crystallization. This advancement not only addresses the limitations of existing monitoring methods but also signifies a leap forward in applying computer vision for process monitoring, offering significant implications for enhancing decision-making, optimization, and control in continuous manufacturing operations.
{"title":"Noninvasive inline imaging and computer vision-based quality variable estimation for continuous slug-flow crystallizers","authors":"Derrick Adams , Jay H. Lee , Shin Hyuk Kim , Seongmin Heo","doi":"10.1016/j.compchemeng.2025.109067","DOIUrl":"10.1016/j.compchemeng.2025.109067","url":null,"abstract":"<div><div>This study presents a transformative approach for the real-time monitoring of continuous slug-flow crystallizers in the pharmaceutical and fine chemical industries, marking a shift from traditional batch processing to continuous manufacturing. By leveraging advanced computer vision techniques within inline imaging systems, including single, binocular, and trinocular stereo visions, we offer a novel solution for the multispatial monitoring and analysis of the crystallization process. This methodology facilitates the automatic detection of solution slugs and bulk crystal regions, enabling the estimation of dynamic bulk crystal density, slug volumes, and porosity in real time. The deployment of ResNet18 and Mask R-CNN models underpins the method's efficacy, demonstrating remarkable performance metrics: ResNet18 ensures precise image detection, while Mask R-CNN achieves an average precision (AP) of 96.4%, with 100% at both AP50 and AP75 thresholds for bulk crystals and solution slugs’ segmentation. These results validate the models’ accuracy and reliability in estimating quality variables essential for continuous slug flow crystallization. 
This advancement not only addresses the limitations of existing monitoring methods but also signifies a leap forward in applying computer vision for process monitoring, offering significant implications for enhancing decision-making, optimization, and control in continuous manufacturing operations.</div></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"197 ","pages":"Article 109067"},"PeriodicalIF":3.9,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143471647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-02-19DOI: 10.1016/j.compchemeng.2025.109052
Alexander Smith, Dipanjan Ghosh, Andrew Tan, Xiang Cheng, Prodromos Daoutidis
Deciphering how local interactions drive self-assembly and multi-scale organization is essential for understanding active matter systems, such as self-organizing bacterial colonies. This study combines topological data analysis with causal discovery to capture the complex, hierarchical causality within these dynamic systems. By leveraging the Euler characteristic as a topological descriptor, we reduce high-dimensional, multi-scale data into essential structural representations, enabling efficient, meaningful analysis. Through causal discovery methods applied to the topology of these dynamic, multi-scale structures, we reveal how localized bacterial interactions propagate, guiding global organization in systems with both homogeneous and heterogeneous ordering. The findings indicate that, while ordering patterns may differ, the mechanisms underlying multi-scale self-assembly remain consistent, with information flowing primarily from local, highly ordered structures. This framework enhances understanding of self-organization principles and supports applications requiring scalable causal analysis in complex data environments across natural and synthetic active matter.
{"title":"Multi-scale causality in active matter","authors":"Alexander Smith, Dipanjan Ghosh, Andrew Tan, Xiang Cheng, Prodromos Daoutidis","doi":"10.1016/j.compchemeng.2025.109052","DOIUrl":"10.1016/j.compchemeng.2025.109052","url":null,"abstract":"<div><div>Deciphering how local interactions drive self-assembly and multi-scale organization is essential for understanding active matter systems, such as self-organizing bacterial colonies. This study combines topological data analysis with causal discovery to capture the complex, hierarchical causality within these dynamic systems. By leveraging the Euler characteristic as a topological descriptor, we reduce high-dimensional, multi-scale data into essential structural representations, enabling efficient, meaningful analysis. Through causal discovery methods applied to the topology of these dynamic, multi-scale structures, we reveal how localized bacterial interactions propagate, guiding global organization in systems with both homogeneous and heterogeneous ordering. The findings indicate that, while ordering patterns may differ, the mechanisms underlying multi-scale self-assembly remain consistent, with information flowing primarily from local, highly-ordered structures. 
This framework enhances understanding of self-organization principles and supports applications requiring scalable causal analysis in complex data environments across natural and synthetic active matter.</div></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"197 ","pages":"Article 109052"},"PeriodicalIF":3.9,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143509477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}