Shaohua Li, Bingxin Liu, Jingying Feng, Ruihua Qi, Wei He, Ming Xu, Linxin Yuan, Shiwen Wang
Optimal maintenance decision-making for a sensor network aims to intelligently determine the optimal repair time. The accuracy of the maintenance decision method directly affects the reliability and safety of the sensor network. This paper develops a new optimal maintenance decision method based on a belief rule base considering attribute correlation (BRB-c), designed to address three challenges: scarce observation data, complex system mechanisms, and correlation among characteristics. The method consists of two components: a health state assessment model and a health state prediction model. First, the health state is assessed with a BRB-c-based model that accounts for characteristic correlation. Then, starting from the current health state, a Wiener process is used to predict the future health state of the sensor network. Experts set a minimum health threshold, and the predicted crossing of this threshold determines the optimal maintenance time. To demonstrate that the proposed method is effective, a case study on the wireless sensor network (WSN) of an oil storage tank was conducted, using experimental data collected from an actual storage tank sensor network in Hainan Province, China. The experimental results validate the accuracy of the developed optimal maintenance decision model, confirming its capability to efficiently predict the optimal maintenance time.
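The prediction step above models degradation as a Wiener process and schedules maintenance when the predicted health index crosses an expert-set threshold. A minimal sketch of that idea follows; the parameter names and values (h0, mu, sigma, threshold) are illustrative, not taken from the paper.

```python
# Hypothetical sketch: a Wiener process with drift mu (degradation
# rate) and diffusion sigma simulates the future health index; the
# maintenance time is estimated as the mean first time the index
# falls below the expert-set threshold.
import random

def predict_maintenance_time(h0, mu, sigma, threshold,
                             dt=1.0, n_paths=2000, horizon=500, seed=0):
    """Monte Carlo estimate of the mean first-passage time below threshold."""
    rng = random.Random(seed)
    hits = []
    for _ in range(n_paths):
        h, t = h0, 0.0
        while t < horizon:
            h += mu * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
            t += dt
            if h <= threshold:
                hits.append(t)
                break
    return sum(hits) / len(hits) if hits else None

# With drift -0.01 per period, the index needs roughly
# (h0 - threshold) / |mu| = 35 periods on average to cross.
t_star = predict_maintenance_time(h0=0.95, mu=-0.01, sigma=0.02, threshold=0.6)
```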
Title: Optimal Maintenance Decision Method for a Sensor Network Based on Belief Rule Base considering Attribute Correlation. DOI: 10.1155/2024/6616366. International Journal of Intelligent Systems, published 2024-04-27.
Precisely segmenting the organs at risk (OARs) in computed tomography (CT) plays an important role in radiotherapy treatment planning, aiding in the protection of critical tissues during irradiation. Renowned deep convolutional neural networks (DCNNs) and prevailing transformer-based architectures are widely utilized to accomplish the segmentation task, showcasing advantages in capturing local and contextual characteristics. Graph convolutional networks (GCNs) are another specialized model, designed for processing nongrid data, e.g., citation relationships. DCNNs and GCNs are thus two distinct models, applicable to grid and nongrid data, respectively. Motivated by the recently developed dynamic-channel GCN (DCGCN), which attempts to leverage the graph structure to enhance the features extracted by DCNNs, this paper proposes a novel architecture termed adaptive sparse GCN (ASGCN) to mitigate the inherent limitations of DCGCN in node representation and adjacency matrix construction. For the node representation, the global average pooling used in DCGCN is replaced by a learning mechanism to accommodate the segmentation task. For the adjacency matrix, an adaptive regularization strategy penalizes the coefficients in the adjacency matrix, resulting in a sparse matrix that can better exploit the relationships between nodes. Rigorous experiments on multiple OAR segmentation tasks of the head and neck demonstrate that the proposed ASGCN can effectively improve segmentation accuracy. Comparison between the proposed method and other prevalent architectures further confirms the superiority of the ASGCN.
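The adjacency-sparsification idea can be illustrated with a soft-thresholding operator, the proximal step of an L1 penalty; this is a generic sketch of how penalizing coefficients yields a sparse adjacency matrix, not the paper's exact regularization.

```python
# Generic illustration (not the ASGCN formulation): an L1 penalty on
# off-diagonal adjacency coefficients corresponds to soft thresholding,
# which zeroes weak node affinities so only strong relationships remain.
def soft_threshold_adjacency(A, lam):
    """Shrink each off-diagonal coefficient of A toward zero by lam."""
    n = len(A)
    S = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                S[i][j] = A[i][j]  # leave self-connections untouched
            else:
                a = A[i][j]
                S[i][j] = max(abs(a) - lam, 0.0) * (1.0 if a >= 0 else -1.0)
    return S

A = [[1.0, 0.5, -0.05],
     [0.5, 1.0, 0.2],
     [-0.05, 0.2, 1.0]]
S = soft_threshold_adjacency(A, lam=0.1)  # -0.05 entry is pruned to 0
```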
Title: Incorporating Adaptive Sparse Graph Convolutional Neural Networks for Segmentation of Organs at Risk in Radiotherapy. Authors: Junjie Hu, Chengrong Yu, Shengqian Zhu, Haixian Zhang. DOI: 10.1155/2024/1728801. International Journal of Intelligent Systems, published 2024-04-26.
This study identifies critical inefficiencies within a dual-channel operation model employed by a fast fashion company, particularly the independent operation of three logistics distribution systems. These systems result in high operational costs and low resource utilization, primarily due to redundant vehicle dispatches to meet the distinct demands of retail store replenishment, online customer orders, and customer return demands, as well as random and scattered return requests leading to vehicle underutilization. To address these challenges, we propose a novel integrated logistics distribution system design and management method tailored for dual-channel sales and distribution businesses. The approach consolidates the three distribution systems into one cohesive framework, thus streamlining the delivery process and reducing vehicle trips by combining retail and customer visits. An optimization algorithm is introduced to factor in inventory and distribution distance, aiming to achieve global optimization in pairing retail store inventory with online customer orders and unifying the distribution of replenishment products, online products, and returned products. The paper contributes to the field by introducing a new variation of the Vehicle Routing Problem (VRP) that arises from an integrated distribution system, combining common VRP issues with more complex challenges. A custom Branch-and-Price (B&P) algorithm is developed to efficiently find optimal routes. Furthermore, we demonstrate the benefits of the integrated system over traditional, segregated systems through real-world data analysis and assess various factors including return rates and inventory conditions. The study also enhances the model by allowing inventory transfers between retail stores, improving inventory distribution balance, and offering solutions for scenarios with critically low inventory levels. 
Our findings highlight significant reductions in total operating costs (savings of up to 49.9%) and in vehicle usage when using the integrated distribution system compared to independent two-stage and three-stage systems. The integrated approach exploits vacant vehicle space and dynamically selects and combines tasks, preventing unnecessary mileage and wasted capacity. Notably, inventory sharing among retail stores has proven to be a key factor in generating feasible solutions under tight inventory conditions and in reducing operational costs and vehicle counts, with the benefits amplified in large-scale problem instances.
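The mileage saving from consolidation can be illustrated with a toy example: serving replenishment stops and return pickups in one trip instead of two separate dispatches. Coordinates and the depot below are synthetic; the paper's actual routing is solved by a custom Branch-and-Price algorithm, which is not reproduced here.

```python
# Toy illustration of why merging retail replenishment and customer
# return stops into one route can cut mileage versus two dispatches.
import math

def route_length(stops, depot=(0.0, 0.0)):
    """Total length of a depot -> stops -> depot tour."""
    tour = [depot] + stops + [depot]
    return sum(math.dist(a, b) for a, b in zip(tour, tour[1:]))

retail = [(2, 1), (4, 2)]     # replenishment stops (synthetic)
returns = [(3, 1), (5, 2)]    # return pickups (synthetic)

separate = route_length(retail) + route_length(returns)  # two dispatches
merged = route_length(sorted(retail + returns))          # one combined sweep
```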
Title: A Branch-and-Price Algorithm for an Integrated Online and Offline Retailing Distribution System with Product Return. Authors: Wanchen Jie, Cheng Pei, Jiating Xu, Hong Yan. DOI: 10.1155/2024/8880791. International Journal of Intelligent Systems, published 2024-04-24.
Syslog is a critical data source for analyzing system problems. Converting unstructured log entries into structured log data is necessary for effective log analysis. However, existing log parsing methods demonstrate promising accuracy on limited datasets, while their generalizability and precision are uncertain when applied to diverse log data; enhancements in these areas are necessary. This paper proposes an online log parsing method called DLLog, which combines deep learning with the longest common subsequence. DLLog uses a GRU neural network to mine template words and applies the longest common subsequence to parse log entries in real time. In the offline stage, DLLog combines multiple log features to accurately extract template words, creating a log template set that assists online parsing. In the online stage, DLLog parses each log entry by computing the matching degree between the incoming entry and the templates in the template set. The method also supports incremental updates of the template set to handle new log entries generated by evolving systems. We summarize prior work and validate DLLog on real log data collected from 16 systems. The results demonstrate that DLLog achieves high parsing accuracy, universality, and adaptability.
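The online matching step described above can be sketched with a token-level longest common subsequence: an incoming entry is scored against each stored template and assigned to the best match. The template wording and the 0.7 threshold below are illustrative, not DLLog's actual values.

```python
# Minimal sketch of LCS-based template matching: the template with the
# highest normalized LCS length wins; entries below the threshold are
# treated as candidates for a new template.
def lcs_len(a, b):
    """Length of the longest common subsequence of token lists a and b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def match_template(entry, templates, threshold=0.7):
    """Return the best-matching template, or None if no match is close enough."""
    tokens = entry.split()
    best, best_score = None, 0.0
    for t in templates:
        score = lcs_len(tokens, t.split()) / max(len(tokens), 1)
        if score > best_score:
            best, best_score = t, score
    return best if best_score >= threshold else None

templates = ["connection from <*> closed", "user <*> logged in"]
```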
Title: DLLog: An Online Log Parsing Approach for Large-Scale System. Authors: Hailong Cheng, Shi Ying, Xiaoyu Duan, Wanli Yuan. DOI: 10.1155/2024/5961993. International Journal of Intelligent Systems, published 2024-04-16.
Pretrained Language Models (PLMs) acquire rich prior semantic knowledge during the pretraining phase and utilize it to enhance downstream Natural Language Processing (NLP) tasks. Entity Matching (EM), a fundamental NLP task, aims to determine whether two entity records from different knowledge bases refer to the same real-world entity. This study, for the first time, explores the potential of using a PLM to boost the EM task through two transfer learning techniques, namely, fine-tuning and prompt learning. Our work also represents the first application of the soft prompt to an EM task. Experimental results across eleven EM datasets show that the soft prompt consistently outperforms the other methods in terms of F1 scores on all datasets. Additionally, this study investigates the capability of prompt learning in few-shot learning and observes that the hard prompt achieves the highest F1 scores in both zero-shot and one-shot contexts. These findings underscore the effectiveness of prompt learning paradigms in tackling challenging EM tasks.
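A hard prompt for entity matching, in the spirit described, serializes the two records into a cloze-style template whose masked token the PLM scores. The template wording and the `[attr] value` serialization below are illustrative, not the paper's exact design.

```python
# Hypothetical hard-prompt construction for EM: the PLM would predict
# the masked token (e.g., 'same' vs 'different'). No PLM is called here;
# this only shows the prompt text being assembled.
def serialize(record):
    """Flatten an entity record into '[attr] value' text."""
    return " ".join(f"[{k}] {v}" for k, v in record.items())

def hard_prompt(rec_a, rec_b, mask_token="[MASK]"):
    """Cloze-style template pairing two records around a masked verdict."""
    return (f"{serialize(rec_a)} and {serialize(rec_b)} "
            f"refer to {mask_token} entities.")

p = hard_prompt({"name": "iPhone 13", "brand": "Apple"},
                {"name": "Apple iPhone 13"})
```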
Title: Leveraging Pretrained Language Models for Enhanced Entity Matching: A Comprehensive Study of Fine-Tuning and Prompt Learning Paradigms. Authors: Yu Wang, Luyao Zhou, Yuan Wang, Zhenwan Peng. DOI: 10.1155/2024/1941221. International Journal of Intelligent Systems, published 2024-04-15.
Semi-supervised learning (SSL) is a common approach to learning predictive models using not only labeled, but also unlabeled examples. While SSL for the simple tasks of classification and regression has received much attention from the research community, this is not the case for complex prediction tasks with structurally dependent variables, such as multi-label classification and hierarchical multi-label classification. These tasks may require additional information, possibly coming from the underlying distribution in the descriptive space provided by unlabeled examples, to better face the challenging task of simultaneously predicting multiple class labels. In this paper, we investigate this aspect and propose a (hierarchical) multi-label classification method based on semi-supervised learning of predictive clustering trees, which we also extend towards ensemble learning. Extensive experimental evaluation conducted on 24 datasets shows significant advantages of the proposed method and its extension with respect to their supervised counterparts. Moreover, the method preserves interpretability of classical tree-based models.
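The core idea of letting unlabeled examples shape the tree can be sketched as a blended impurity: target-space variance computed on labeled examples only, plus descriptive-space variance computed on all examples. The weight w and the exact variance formulas below are illustrative simplifications, not the paper's precise criterion.

```python
# Illustrative sketch of a semi-supervised split criterion: unlabeled
# examples (target = None) still contribute through the descriptive
# (feature-space) variance term.
def variance(xs):
    if not xs:
        return 0.0
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def ssl_impurity(targets, features, w=0.5):
    """Blend of labeled target variance and all-example feature variance.

    targets:  numeric target per example, or None if unlabeled
    features: one feature vector per example
    """
    labeled = [t for t in targets if t is not None]
    target_var = variance(labeled)
    n_feat = len(features[0]) if features else 0
    feat_var = sum(variance([f[d] for f in features])
                   for d in range(n_feat)) / max(n_feat, 1)
    return w * target_var + (1 - w) * feat_var

targets = [1.0, 2.0, None, None]          # two labeled, two unlabeled
features = [[0.0], [1.0], [2.0], [3.0]]
imp = ssl_impurity(targets, features, w=0.5)
```

Setting w = 1 recovers a purely supervised criterion, so the supervised tree is a special case.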
Title: Semi-Supervised Predictive Clustering Trees for (Hierarchical) Multi-Label Classification. Authors: Jurica Levatić, Michelangelo Ceci, Dragi Kocev, Sašo Džeroski. DOI: 10.1155/2024/5610291. International Journal of Intelligent Systems, published 2024-04-13.
This paper presents a comparative analysis of bioinspired algorithms applied to a PV system for tracking the global maximum power point (GMPP) under standard test conditions (STC), step changes of irradiance, and partial shading conditions (PSCs). Four techniques are analyzed and compared: artificial bee colony (ABC), particle swarm optimization (PSO), the genetic algorithm (GA), and a new metaheuristic called jellyfish optimization (JFO). These algorithms are well known for tracking the GMPP with high efficiency. The comparison is based on the maximum power extracted from a PV module operating under uniform irradiation (STC), nonuniform irradiation (step changes of irradiance), and partial shading. Two modules are considered, the 1Soltech-1STH-215P and the SolarWorld Industries GmbH Sunmodule Plus SW 245 poly, each forming a panel of four series-connected modules. The comparison covers maximum power tracking, total execution time, and the minimum number of iterations needed to reach the GMPP with high tracking efficiency and minimum error. Minitab software is used to find the regression equation (objective function) for STC, step-changing irradiation, and PSC. The reliability of the data (P-V curves) was measured in terms of p value, R, R2, and VIF; R2 values close to 1 confirm the accuracy of the data. The simulation results show that the new evolutionary jellyfish optimization technique achieves higher tracking efficiency (98 to 99.9%) and reaches the GMPP in less time (0.0386 to 0.1219 s) than ABC, GA, and PSO under all environmental conditions. The RMSE of the proposed JFO method (0.59) is also much lower than that of ABC, GA, and PSO.
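Under partial shading the P-V curve has multiple peaks, which is why population-based search is used instead of plain hill climbing. The sketch below uses a minimal PSO (one of the four compared algorithms; the JFO update rules are not reproduced) on a synthetic two-peak curve whose parameters are made up for illustration.

```python
# Illustration of GMPP tracking on a synthetic double-peak P-V curve:
# a local peak near v = 12 (about 40 W) and the global peak near
# v = 30 (about 70 W). A minimal PSO locates the global peak.
import math
import random

def pv_power(v):
    """Synthetic partial-shading P-V curve (illustrative, not measured)."""
    return (40.0 * math.exp(-((v - 12.0) / 4.0) ** 2)
            + 70.0 * math.exp(-((v - 30.0) / 8.0) ** 2))

def pso_gmpp(n=30, iters=60, vmin=0.0, vmax=40.0, seed=1):
    rng = random.Random(seed)
    pos = [rng.uniform(vmin, vmax) for _ in range(n)]
    vel = [0.0] * n
    pbest = pos[:]
    gbest = max(pbest, key=pv_power)
    for _ in range(iters):
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.6 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], vmin), vmax)
            if pv_power(pos[i]) > pv_power(pbest[i]):
                pbest[i] = pos[i]
        gbest = max(pbest, key=pv_power)
    return gbest, pv_power(gbest)

v_star, p_star = pso_gmpp()  # should settle near the global peak at v ≈ 30
```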
Title: Comparison of Bioinspired Techniques for Tracking Maximum Power under Variable Environmental Conditions. Authors: Dilip Yadav, Nidhi Singh, Nimay Chandra Giri, Vikas Singh Bhadoria, Subrata Kumar Sarker. DOI: 10.1155/2024/6678384. International Journal of Intelligent Systems, published 2024-04-12.
In the dynamic global trade environment, accurately predicting trade values of diverse commodities is challenged by unpredictable economic and political changes. This study introduces the Meta-TFSTL framework, an innovative neural model that integrates Meta-Learning Enhanced Trade Forecasting with efficient multicommodity STL decomposition to adeptly navigate the complexities of forecasting. Our approach begins with STL decomposition to partition trade value sequences into seasonal, trend, and residual elements, identifying a potential 10-month economic cycle through the Ljung–Box test. The model employs a dual-channel spatiotemporal encoder for processing these components, ensuring a comprehensive grasp of temporal correlations. By constructing spatial and temporal graphs leveraging correlation matrices and graph embeddings and introducing fused attention and multitasking strategies at the decoding phase, Meta-TFSTL surpasses benchmark models in performance. Additionally, integrating meta-learning and fine-tuning techniques enhances shared knowledge across import and export trade predictions. Ultimately, our research significantly advances the precision and efficiency of trade forecasting in a volatile global economic scenario.
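The pipeline's first step splits each trade series into trend, seasonal, and residual parts. As a stand-in for STL, the sketch below uses a classical additive decomposition (centered moving-average trend, per-phase seasonal means); the 10-month period follows the cycle reported above, and the data are synthetic.

```python
# Classical additive decomposition as an illustrative stand-in for STL:
# series = trend + seasonal + resid, with a 10-month seasonal period.
def decompose(series, period=10):
    """Centered moving-average trend, per-phase seasonal means, residual."""
    n, half = len(series), period // 2
    trend = [None] * n
    for i in range(half, n - half):
        window = series[i - half:i + half + 1]
        trend[i] = sum(window) / len(window)
    # seasonal component from per-phase means of the detrended series
    sums, counts = [0.0] * period, [0] * period
    for i in range(half, n - half):
        sums[i % period] += series[i] - trend[i]
        counts[i % period] += 1
    means = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    seasonal = [means[i % period] for i in range(n)]
    resid = [series[i] - trend[i] - seasonal[i] if trend[i] is not None else None
             for i in range(n)]
    return trend, seasonal, resid

# Synthetic series: linear trend plus a zero-mean period-10 pattern.
pattern = [0, 1, 0, -1, 0, 0, 1, 0, -1, 0]
series = [0.5 * i + pattern[i % 10] for i in range(60)]
trend, seasonal, resid = decompose(series, period=10)
```

In the paper's pipeline the residual would then be checked for remaining autocorrelation (e.g., a Ljung-Box test) before the components feed the encoder.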
Title: Meta-Learning Enhanced Trade Forecasting: A Neural Framework Leveraging Efficient Multicommodity STL Decomposition. Authors: Bohan Ma, Yushan Xue, Jing Chen, Fangfang Sun. DOI: 10.1155/2024/6176898. International Journal of Intelligent Systems, published 2024-04-03.
To reduce diesel emissions and fuel consumption and improve diesel particulate filter (DPF) regeneration performance, a multiobjective optimization method for DPF regeneration conditions, combining the nondominated sorting genetic algorithm III (NSGA-III) with a back propagation neural network (BPNN) prediction model, is proposed. In NSGA-III, DPF regeneration temperature (T4 and T5), O2, NOx, smoke, and brake-specific fuel consumption (BSFC) are optimized by adjusting the engine injection control parameters. An improved seagull optimization algorithm (ISOA) is proposed to enhance the accuracy of BPNN predictions. The ISOA-BP diesel engine regeneration condition prediction model is established to evaluate fitness. The optimized fuel injection parameters are programmed into the engine’s electronic control unit (ECU) for experimental validation through steady-state testing, DPF active regeneration testing, and WHTC transient cycle testing. The results demonstrate that the introduced ISOA algorithm exhibits faster convergence and improved search abilities, effectively addressing calculation accuracy challenges. A comparison between the SOA-BPNN and ISOA-BPNN models shows the superior accuracy of the latter, with reduced errors and improved R2 values. The optimization method, integrating NSGA-III and ISOA-BPNN, achieves multiobjective calibration for T4 and T5 temperatures. Steady-state testing reveals average increases of 3.14%, 2.07%, and 10.79% in T4, T5, and exhaust oxygen concentrations, while NOx, smoke, and BSFC exhibit average decreases of 8.68%, 12.07%, and 1.03%. Regeneration experiments affirm the efficiency of the proposed method, with DPF regeneration reaching 88.2% and notable improvements in T4, T5, and oxygen concentrations during WHTC transient testing. This research provides a promising and effective solution for calibrating the regeneration temperature of DPF, thus reducing emissions and fuel consumption of diesel engines while ensuring safe and efficient DPF regeneration.
{"title":"Multiobjective Optimization of Diesel Particulate Filter Regeneration Conditions Based on Machine Learning Combined with Intelligent Algorithms","authors":"Yuhua Wang, Jinlong Li, Guiyong Wang, Guisheng Chen, Qianqiao Shen, Boshun Zeng, Shuchao He","doi":"10.1155/2024/7775139","DOIUrl":"https://doi.org/10.1155/2024/7775139","url":null,"abstract":"<p>To reduce diesel emissions and fuel consumption and improve DPF regeneration performance, a multiobjective optimization method for DPF regeneration conditions, combined with nondominated sorting genetic algorithms (NSGA-III) and a back propagation neural network (BPNN) prediction model, is proposed. In NSGA-III, DPF regeneration temperature (T4 and T5), O<sub>2</sub>, NO<sub>x</sub>, smoke, and brake-specific fuel consumption (BSFC) are optimized by adjusting the engine injection control parameters. An improved seagull optimization algorithm (ISOA) is proposed to enhance the accuracy of BPNN predictions. The ISOA-BP diesel engine regeneration condition prediction model is established to evaluate fitness. The optimized fuel injection parameters are programmed into the engine’s electronic control unit (ECU) for experimental validation through steady-state testing, DPF active regeneration testing, and WHTC transient cycle testing. The results demonstrate that the introduced ISOA algorithm exhibits faster convergence and improved search abilities, effectively addressing calculation accuracy challenges. A comparison between the SOA-BPNN and ISOA-BPNN models shows the superior accuracy of the latter, with reduced errors and improved <i>R</i><sup>2</sup> values. The optimization method, integrating NSGA-III and ISOA-BPNN, achieves multiobjective calibration for T4 and T5 temperatures. Steady-state testing reveals average increases of 3.14%, 2.07%, and 10.79% in T4, T5, and exhaust oxygen concentrations, while NO<sub>x</sub>, smoke, and BSFC exhibit average decreases of 8.68%, 12.07%, and 1.03%. 
Regeneration experiments affirm the efficiency of the proposed method, with DPF regeneration reaching 88.2% and notable improvements in T4, T5, and oxygen concentrations during WHTC transient testing. This research provides a promising and effective solution for calibrating the regeneration temperature of DPF, thus reducing emissions and fuel consumption of diesel engines while ensuring safe and efficient DPF regeneration.</p>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141164845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
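The ranking step at the heart of NSGA-III, which layers candidate injection calibrations into Pareto fronts, can be illustrated compactly. This is a sketch of non-dominated sorting only: NSGA-III's reference-point niching and the ISOA-BPNN fitness surrogate from the abstract are omitted, and all objectives (e.g. NOx, smoke, BSFC) are assumed to be minimized.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives to be minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Layer solutions into Pareto fronts; fronts[0] is the non-dominated set."""
    n = len(objs)
    dominated = [[] for _ in range(n)]  # indices each solution dominates
    count = [0] * n                     # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if i != j and dominates(objs[i], objs[j]):
                dominated[i].append(j)
                count[j] += 1
    fronts, current = [], [i for i in range(n) if count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated[i]:
                count[j] -= 1
                if count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts
```

In the paper's setting, each tuple would hold the surrogate-predicted objective values for one injection-parameter calibration, and only the best fronts survive into the next generation.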
Jiasheng Chen, Juan Tang, Ming Yan, Shuai Lai, Kun Liang, Jianguang Lu, Wenqiang Yang
As is well known, differential algebraic equations (DAEs), which are able to describe dynamic changes and underlying constraints, have been widely applied in engineering fields such as fluid dynamics, multi-body dynamics, mechanical systems, and control theory. In practical physical modeling within these domains, the systems often generate high-index DAEs. Classical implicit numerical methods typically suffer varying degrees of order reduction in numerical accuracy when solving high-index systems. Recently, physics-informed neural networks (PINNs) have gained attention for solving DAE systems. However, they face challenges such as the inability to directly solve high-index systems, lower predictive accuracy, and weaker generalization capabilities. In this paper, we propose a PINN computational framework that combines the Radau IIA numerical method with an improved fully connected neural network structure to directly solve high-index DAEs. Furthermore, we employ a domain decomposition strategy to enhance solution accuracy. We conduct numerical experiments with two classical high-index systems as illustrative examples, investigating how different orders and time-step sizes of the Radau IIA method affect the accuracy of neural network solutions. For different time-step sizes, the experimental results indicate that utilizing a 5th-order Radau IIA method in the PINN achieves a high level of system accuracy and stability. Specifically, the absolute errors for all differential variables remain as low as 10−6, and the absolute errors for algebraic variables are maintained at 10−5. Therefore, our method exhibits excellent computational accuracy and strong generalization capabilities, providing a feasible approach for the high-precision solution of larger-scale DAEs with higher indices or challenging high-dimensional partial differential algebraic equation systems.
{"title":"Physics-Informed Neural Networks for Solving High-Index Differential-Algebraic Equation Systems Based on Radau Methods","authors":"Jiasheng Chen, Juan Tang, Ming Yan, Shuai Lai, Kun Liang, Jianguang Lu, Wenqiang Yang","doi":"10.1155/2024/6641674","DOIUrl":"https://doi.org/10.1155/2024/6641674","url":null,"abstract":"<p>As is well known, differential algebraic equations (DAEs), which are able to describe dynamic changes and underlying constraints, have been widely applied in engineering fields such as fluid dynamics, multi-body dynamics, mechanical systems, and control theory. In practical physical modeling within these domains, the systems often generate high-index DAEs. Classical implicit numerical methods typically result in varying order reduction of numerical accuracy when solving high-index systems. Recently, the physics-informed neural networks (PINNs) have gained attention for solving DAE systems. However, it faces challenges like the inability to directly solve high-index systems, lower predictive accuracy, and weaker generalization capabilities. In this paper, we propose a PINN computational framework, combined Radau IIA numerical method with an improved fully connected neural network structure, to directly solve high-index DAEs. Furthermore, we employ a domain decomposition strategy to enhance solution accuracy. We conduct numerical experiments with two classical high-index systems as illustrative examples, investigating how different orders and time-step sizes of the Radau IIA method affect the accuracy of neural network solutions. For different time-step sizes, the experimental results indicate that utilizing a 5th-order Radau IIA method in the PINN achieves a high level of system accuracy and stability. Specifically, the absolute errors for all differential variables remain as low as 10<sup>−6</sup>, and the absolute errors for algebraic variables are maintained at 10<sup>−5</sup>. 
Therefore, our method exhibits excellent computational accuracy and strong generalization capabilities, providing a feasible approach for the high-precision solution of larger-scale DAEs with higher indices or challenging high-dimensional partial differential algebraic equation systems.</p>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141164934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
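The Radau IIA stage equations that the abstract's framework builds into the network can be illustrated on a plain scalar ODE. This sketch is deliberately simplified relative to the paper: it uses the two-stage (third-order) tableau rather than the fifth-order variant, a fixed-point solve instead of Newton, and no neural network or DAE constraints.

```python
import numpy as np

# Butcher tableau of the two-stage (third-order) Radau IIA method;
# the stage structure is the same as in the higher-order variants.
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
b = np.array([3/4, 1/4])
c = np.array([1/3, 1.0])

def radau_iia_step(f, t, y, h, iters=50):
    """One implicit Radau IIA step for y' = f(t, y): solve the stage
    equations k_i = f(t + c_i h, y + h * sum_j A_ij k_j) by fixed-point
    iteration (stiff problems would use a Newton solve instead)."""
    k = np.array([f(t, y), f(t, y)], dtype=float)
    for _ in range(iters):
        k = np.array([f(t + c[i] * h, y + h * (A[i] @ k)) for i in range(2)])
    return y + h * (b @ k)
```

Because the final stage abscissa is c = 1, the last stage collocates exactly at the step endpoint; this stiff accuracy is what makes Radau IIA methods attractive for the high-index systems the paper targets.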