
International Journal of Intelligent Systems: Latest Publications

Leveraging Pretrained Language Models for Enhanced Entity Matching: A Comprehensive Study of Fine-Tuning and Prompt Learning Paradigms
IF 7 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-15 | DOI: 10.1155/2024/1941221
Yu Wang, Luyao Zhou, Yuan Wang, Zhenwan Peng

Pretrained Language Models (PLMs) acquire rich prior semantic knowledge during the pretraining phase and utilize it to enhance downstream Natural Language Processing (NLP) tasks. Entity Matching (EM), a fundamental NLP task, aims to determine whether two entity records from different knowledge bases refer to the same real-world entity. This study, for the first time, explores the potential of using a PLM to boost the EM task through two transfer learning techniques, namely, fine-tuning and prompt learning. Our work also represents the first application of the soft prompt in an EM task. Experimental results on eleven EM datasets show that the soft prompt consistently outperforms the other methods in terms of F1 score on every dataset. Additionally, this study investigates the capability of prompt learning in few-shot learning and observes that the hard prompt achieves the highest F1 scores in both zero-shot and one-shot contexts. These findings underscore the effectiveness of prompt learning paradigms in tackling challenging EM tasks.
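
To make the two paradigms concrete, the sketch below (Python, Hugging Face transformers) shows how an entity-record pair can be fed to a PLM either as a sequence-pair classification input (the fine-tuning paradigm) or wrapped in a cloze-style hard-prompt template. The backbone model, the [COL]/[VAL] serialization, and the template wording are illustrative assumptions, not the exact setup reported in the paper.

```python
# Illustrative sketch only; model name, record fields, and serialization are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def serialize(record: dict) -> str:
    # Flatten a record's attribute/value pairs into a single string.
    return " ".join(f"[COL] {k} [VAL] {v}" for k, v in record.items())

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed backbone
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

left = {"name": "iPhone 13 128GB", "brand": "Apple"}
right = {"name": "Apple iPhone13 (128 GB)", "brand": "Apple"}

# Fine-tuning paradigm: encode the pair jointly; a classifier head predicts match / non-match
# after being trained with cross-entropy on labeled pairs.
inputs = tokenizer(serialize(left), serialize(right), return_tensors="pt", truncation=True)
with torch.no_grad():
    match_logits = model(**inputs).logits

# Hard-prompt paradigm: wrap the same pair in a cloze template; a masked-language-model head
# scores verbalizer words (e.g., "yes"/"no") at the mask position instead of a new classifier.
template = (f"{serialize(left)} and {serialize(right)} refer to the same entity? "
            f"{tokenizer.mask_token}.")
print(match_logits.shape, template[:80])
```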

Citations: 0
Semi-Supervised Predictive Clustering Trees for (Hierarchical) Multi-Label Classification
IF 7 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-13 | DOI: 10.1155/2024/5610291
Jurica Levatić, Michelangelo Ceci, Dragi Kocev, Sašo Džeroski

Semi-supervised learning (SSL) is a common approach to learning predictive models using not only labeled, but also unlabeled examples. While SSL for the simple tasks of classification and regression has received much attention from the research community, this is not the case for complex prediction tasks with structurally dependent variables, such as multi-label classification and hierarchical multi-label classification. These tasks may require additional information, possibly coming from the underlying distribution in the descriptive space provided by unlabeled examples, to better face the challenging task of simultaneously predicting multiple class labels. In this paper, we investigate this aspect and propose a (hierarchical) multi-label classification method based on semi-supervised learning of predictive clustering trees, which we also extend towards ensemble learning. Extensive experimental evaluation conducted on 24 datasets shows significant advantages of the proposed method and its extension with respect to their supervised counterparts. Moreover, the method preserves interpretability of classical tree-based models.
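
As a rough illustration of how unlabeled examples can be folded into multi-label learning, the sketch below performs confidence-based self-training with a random forest. It is a generic stand-in, not the predictive clustering tree (PCT) method of the paper, and the confidence threshold and number of rounds are arbitrary assumptions.

```python
# Generic self-training sketch for multi-label SSL; not the paper's PCT algorithm.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_train_multilabel(X_lab, Y_lab, X_unlab, rounds=3, conf=0.9):
    # Assumes Y_lab is an (n_samples, n_labels) 0/1 indicator matrix and every label
    # column contains both classes in the labeled pool.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    X, Y = X_lab.copy(), Y_lab.copy()
    for _ in range(rounds):
        model.fit(X, Y)
        if len(X_unlab) == 0:
            break
        # Per-label probability of class 1 for each unlabeled example.
        proba = np.column_stack([p[:, 1] for p in model.predict_proba(X_unlab)])
        confident = np.all((proba > conf) | (proba < 1 - conf), axis=1)
        if not confident.any():
            break
        # Pseudo-label the confident examples and move them into the labeled pool.
        X = np.vstack([X, X_unlab[confident]])
        Y = np.vstack([Y, (proba[confident] > 0.5).astype(int)])
        X_unlab = X_unlab[~confident]
    return model
```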

Citations: 0
Comparison of Bioinspired Techniques for Tracking Maximum Power under Variable Environmental Conditions
IF 7 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-12 | DOI: 10.1155/2024/6678384
Dilip Yadav, Nidhi Singh, Nimay Chandra Giri, Vikas Singh Bhadoria, Subrata Kumar Sarker

This paper presents a comparative analysis of bioinspired algorithms employed on a PV system for tracking the global maximum power point (GMPP) under standard conditions, step changes of irradiance, and partial shading. Four techniques are analyzed and compared: artificial bee colony, particle swarm optimization, genetic algorithm, and a new metaheuristic called jellyfish optimization. These existing algorithms are well known for tracking the GMPP with high efficiency. The paper compares them on extracting the GMPP, in terms of maximum power, from a PV module operating under uniform irradiation (STC), nonuniform irradiation (step changes of irradiance), and partial shading conditions (PSCs). For the analysis and comparison, two modules are considered, the 1Soltech-1STH-215P and the SolarWorld Industries GmbH Sunmodule plus SW 245 poly, each forming a panel of four series-connected modules. The comparison covers maximum power tracking, total execution time, and the minimum number of iterations needed to reach the GMPP with high tracking efficiency and minimum error. Minitab software provides the regression equation (objective function) for STC, step-changing irradiation, and PSC. The reliability of the data (P-V curves) was measured in terms of p value, R, R2, and VIF; the R2 value is close to 1, which indicates the accuracy of the fit. The simulation results show that the new evolutionary jellyfish optimization technique achieves higher tracking efficiency (98 to 99.9%) and reaches the GMPP in less time (0.0386 to 0.1219 sec) than ABC, GA, and PSO under all environmental conditions. The RMSE value of the proposed JFO method (0.59) is also much lower than those of ABC, GA, and PSO.
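
The sketch below illustrates the general shape of such a metaheuristic MPPT search: a small swarm (here a plain PSO-style update, standing in for any of the compared algorithms) moves candidate operating voltages toward the point of maximum power on a synthetic P-V curve. The curve model, bounds, and hyperparameters are illustrative assumptions, not the paper's PV models or simulation setup.

```python
# Illustrative swarm-based maximum power point search on a toy P-V curve.
import numpy as np

def pv_power(v):
    # Toy P-V curve with a single global peak (real partial-shading curves have several local peaks).
    return v * np.maximum(0.0, 8.0 - 0.08 * v - 0.002 * v**2)

rng = np.random.default_rng(0)
pos = rng.uniform(0, 45, size=10)          # candidate operating voltages
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), pv_power(pos)
gbest = pbest[np.argmax(pbest_val)]

for _ in range(50):
    r1, r2 = rng.random(10), rng.random(10)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 45)
    val = pv_power(pos)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmax(pbest_val)]

print(f"GMPP estimate: V = {gbest:.2f} V, P = {pv_power(gbest):.1f} W")
```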

Citations: 0
Meta-Learning Enhanced Trade Forecasting: A Neural Framework Leveraging Efficient Multicommodity STL Decomposition
IF 7 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-03 | DOI: 10.1155/2024/6176898
Bohan Ma, Yushan Xue, Jing Chen, Fangfang Sun

In the dynamic global trade environment, accurately predicting trade values of diverse commodities is challenged by unpredictable economic and political changes. This study introduces the Meta-TFSTL framework, an innovative neural model that integrates Meta-Learning Enhanced Trade Forecasting with efficient multicommodity STL decomposition to adeptly navigate the complexities of forecasting. Our approach begins with STL decomposition to partition trade value sequences into seasonal, trend, and residual elements, identifying a potential 10-month economic cycle through the Ljung–Box test. The model employs a dual-channel spatiotemporal encoder for processing these components, ensuring a comprehensive grasp of temporal correlations. By constructing spatial and temporal graphs leveraging correlation matrices and graph embeddings and introducing fused attention and multitasking strategies at the decoding phase, Meta-TFSTL surpasses benchmark models in performance. Additionally, integrating meta-learning and fine-tuning techniques enhances shared knowledge across import and export trade predictions. Ultimately, our research significantly advances the precision and efficiency of trade forecasting in a volatile global economic scenario.
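
The first stage the abstract describes, STL decomposition followed by a Ljung-Box test on the residuals, can be sketched with statsmodels as below; the synthetic monthly series, the period of 12, and the tested lag of 10 are assumptions for illustration only.

```python
# Minimal sketch: STL decomposition of a monthly trade-value series plus a Ljung-Box test.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from statsmodels.stats.diagnostic import acorr_ljungbox

idx = pd.date_range("2014-01-31", periods=120, freq="M")
trade = pd.Series(
    100 + 0.5 * np.arange(120)                       # trend
    + 10 * np.sin(2 * np.pi * np.arange(120) / 12)   # yearly seasonality
    + np.random.default_rng(0).normal(0, 2, 120),    # noise
    index=idx,
)

stl = STL(trade, period=12).fit()
seasonal, trend, resid = stl.seasonal, stl.trend, stl.resid

# Ljung-Box test: significant autocorrelation at lag 10 would hint at a ~10-month cycle.
lb = acorr_ljungbox(resid, lags=[10], return_df=True)
print(lb)   # columns: lb_stat, lb_pvalue
```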

Citations: 0
Multiobjective Optimization of Diesel Particulate Filter Regeneration Conditions Based on Machine Learning Combined with Intelligent Algorithms
IF 7 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-01 | DOI: 10.1155/2024/7775139
Yuhua Wang, Jinlong Li, Guiyong Wang, Guisheng Chen, Qianqiao Shen, Boshun Zeng, Shuchao He

To reduce diesel emissions and fuel consumption and improve DPF regeneration performance, a multiobjective optimization method for DPF regeneration conditions, combined with nondominated sorting genetic algorithms (NSGA-III) and a back propagation neural network (BPNN) prediction model, is proposed. In NSGA-III, DPF regeneration temperature (T4 and T5), O2, NOx, smoke, and brake-specific fuel consumption (BSFC) are optimized by adjusting the engine injection control parameters. An improved seagull optimization algorithm (ISOA) is proposed to enhance the accuracy of BPNN predictions. The ISOA-BP diesel engine regeneration condition prediction model is established to evaluate fitness. The optimized fuel injection parameters are programmed into the engine’s electronic control unit (ECU) for experimental validation through steady-state testing, DPF active regeneration testing, and WHTC transient cycle testing. The results demonstrate that the introduced ISOA algorithm exhibits faster convergence and improved search abilities, effectively addressing calculation accuracy challenges. A comparison between the SOA-BPNN and ISOA-BPNN models shows the superior accuracy of the latter, with reduced errors and improved R2 values. The optimization method, integrating NSGA-III and ISOA-BPNN, achieves multiobjective calibration for T4 and T5 temperatures. Steady-state testing reveals average increases of 3.14%, 2.07%, and 10.79% in T4, T5, and exhaust oxygen concentrations, while NOx, smoke, and BSFC exhibit average decreases of 8.68%, 12.07%, and 1.03%. Regeneration experiments affirm the efficiency of the proposed method, with DPF regeneration reaching 88.2% and notable improvements in T4, T5, and oxygen concentrations during WHTC transient testing. This research provides a promising and effective solution for calibrating the regeneration temperature of DPF, thus reducing emissions and fuel consumption of diesel engines while ensuring safe and efficient DPF regeneration.
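
The surrogate-plus-search idea can be sketched as follows: a back-propagation neural network learned from engine data predicts the responses of candidate injection settings, and a simple non-dominated filter keeps the Pareto-optimal candidates. Everything here, the toy data, the sklearn MLP standing in for the paper's BPNN, and the plain Pareto filter standing in for NSGA-III/ISOA, is an illustrative assumption.

```python
# Surrogate-assisted multiobjective screening sketch (not the paper's NSGA-III/ISOA pipeline).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 4))                 # injection control parameters (scaled)
Y = np.column_stack([                                # toy responses: NOx, smoke, BSFC (all minimized)
    X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.05, 500),
    1 - X[:, 0] + 0.3 * X[:, 2] + rng.normal(0, 0.05, 500),
    0.5 + 0.2 * X[:, 3] + rng.normal(0, 0.05, 500),
])

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X, Y)

candidates = rng.uniform(0, 1, size=(2000, 4))       # candidate calibrations screened cheaply
pred = surrogate.predict(candidates)

def pareto_front(costs):
    # Keep points not dominated by any other point (all objectives to be minimized).
    keep = np.ones(len(costs), dtype=bool)
    for i, c in enumerate(costs):
        dominators = np.all(costs <= c, axis=1) & np.any(costs < c, axis=1)
        keep[i] = not dominators.any()
    return keep

front = candidates[pareto_front(pred)]
print(f"{len(front)} non-dominated candidate calibrations")
```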

Citations: 0
Physics-Informed Neural Networks for Solving High-Index Differential-Algebraic Equation Systems Based on Radau Methods
IF 7 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-29 | DOI: 10.1155/2024/6641674
Jiasheng Chen, Juan Tang, Ming Yan, Shuai Lai, Kun Liang, Jianguang Lu, Wenqiang Yang

As is well known, differential algebraic equations (DAEs), which are able to describe dynamic changes and underlying constraints, have been widely applied in engineering fields such as fluid dynamics, multi-body dynamics, mechanical systems, and control theory. In practical physical modeling within these domains, the systems often generate high-index DAEs. Classical implicit numerical methods typically suffer varying degrees of order reduction in numerical accuracy when solving high-index systems. Recently, physics-informed neural networks (PINNs) have gained attention for solving DAE systems. However, they face challenges such as the inability to directly solve high-index systems, lower predictive accuracy, and weaker generalization capabilities. In this paper, we propose a PINN computational framework that combines the Radau IIA numerical method with an improved fully connected neural network structure to directly solve high-index DAEs. Furthermore, we employ a domain decomposition strategy to enhance solution accuracy. We conduct numerical experiments with two classical high-index systems as illustrative examples, investigating how different orders and time-step sizes of the Radau IIA method affect the accuracy of neural network solutions. For different time-step sizes, the experimental results indicate that utilizing a 5th-order Radau IIA method in the PINN achieves a high level of system accuracy and stability. Specifically, the absolute errors for all differential variables remain as low as 10^−6, and the absolute errors for algebraic variables are maintained at 10^−5. Therefore, our method exhibits excellent computational accuracy and strong generalization capabilities, providing a feasible approach for the high-precision solution of larger-scale DAEs with higher indices or challenging high-dimensional partial differential algebraic equation systems.
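
For intuition, the sketch below trains a tiny PINN on a semi-explicit index-1 DAE, penalizing the differential residual, the algebraic residual, and the initial condition at collocation points. It uses plain autograd residuals rather than the Radau IIA collocation scheme and domain decomposition described in the paper; the toy DAE and network size are assumptions.

```python
# Minimal PINN sketch for the DAE  x'(t) = z(t),  0 = x(t)^2 + z(t)^2 - 1,  x(0) = 0
# (one admissible solution is x = sin t, z = cos t).
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 2),                      # outputs: [x(t), z(t)]
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
t = torch.linspace(0, 3, 100).reshape(-1, 1).requires_grad_(True)

for step in range(5000):
    out = net(t)
    x, z = out[:, :1], out[:, 1:]
    dx_dt = torch.autograd.grad(x, t, torch.ones_like(x), create_graph=True)[0]
    loss_ode = ((dx_dt - z) ** 2).mean()                 # differential equation residual
    loss_alg = ((x ** 2 + z ** 2 - 1) ** 2).mean()       # algebraic constraint residual
    loss_ic = net(torch.zeros(1, 1))[0, 0] ** 2          # initial condition x(0) = 0
    loss = loss_ode + loss_alg + loss_ic
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(loss))  # a small residual indicates the network satisfies both equations
```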

Citations: 0
State Feedback Control for Vehicle Electro-Hydraulic Braking Systems Based on Adaptive Genetic Algorithm Optimization
IF 7 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-27 | DOI: 10.1155/2024/3616505
Jinhua Zhang, Lifeng Ding, Shangbin Long

In traditional state feedback control, the difficulty in determining the coefficient matrix is a significant factor that prevents achieving optimal control. To address this issue, this paper proposes the integration of adaptive genetic algorithms with state feedback control. The effectiveness of the proposed algorithm is validated on an electro-hydraulic braking system. Firstly, a model of the electro-hydraulic braking system is introduced. Next, a state feedback controller optimized by a parameter-adaptive genetic algorithm is designed. Additionally, a penalty term is introduced into the fitness function to suppress overshoot. Finally, simulations are conducted to compare the convergence speed of the parameter-adaptive genetic algorithm with genetic algorithm, ant colony optimization, and particle swarm optimization. Furthermore, the performance of the proposed algorithm, state feedback control, and proportional-integral control are also compared. The comparison results show that the proposed algorithm effectively shortens the settling time of the electro-hydraulic braking system and suppresses overshoot.
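
A toy version of the idea, a genetic search over feedback gains whose fitness adds an overshoot penalty, is sketched below. The discretized double-integrator plant, the GA operators, and the penalty weight are illustrative assumptions, not the paper's electro-hydraulic braking model or its parameter-adaptive operators.

```python
# GA search for state-feedback gains with an overshoot penalty (toy plant, not the paper's model).
import numpy as np

dt = 0.01
A = np.array([[1.0, 0.01], [0.0, 1.0]])              # discretized double integrator
B = np.array([[0.0], [0.01]])
x_ref = np.array([1.0, 0.0])                          # step reference on position

def fitness(K):
    x, cost, peak = np.array([0.0, 0.0]), 0.0, 0.0
    for _ in range(600):                              # 6 s simulation under u = -K(x - x_ref)
        u = -K @ (x - x_ref)
        x = A @ x + (B @ np.atleast_1d(u)).ravel()
        cost += (x[0] - 1.0) ** 2 * dt                # integrated squared tracking error
        peak = max(peak, x[0])
    overshoot = max(0.0, peak - 1.0)
    return cost + 50.0 * overshoot                    # penalty term suppresses overshoot

rng = np.random.default_rng(0)
pop = rng.uniform(0, 50, size=(40, 2))                # candidate gain vectors [k1, k2]
for gen in range(60):
    scores = np.array([fitness(k) for k in pop])
    parents = pop[np.argsort(scores)[:20]]            # truncation selection
    children = parents[rng.integers(0, 20, 20)] + rng.normal(0, 2.0, (20, 2))
    pop = np.vstack([parents, np.clip(children, 0, 50)])

best = pop[np.argmin([fitness(k) for k in pop])]
print("best gains:", best)
```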

Citations: 0
Artificial Intelligence in 6G Wireless Networks: Opportunities, Applications, and Challenges
IF 7 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-25 | DOI: 10.1155/2024/8845070
Abdulraqeb Alhammadi, Ibraheem Shayea, Ayman A. El-Saleh, Marwan Hadri Azmi, Zool Hilmi Ismail, Lida Kouhalvandi, Sawan Ali Saad

Wireless technologies are growing unprecedentedly with the advent and increasing popularity of wireless services worldwide. With the advancement in technology, profound techniques can potentially improve the performance of wireless networks. Besides, the advancement of artificial intelligence (AI) enables systems to make intelligent decisions and provides automation, data analysis, insights, predictive capabilities, learning, and adaptation. Sophisticated AI will be required for next-generation wireless networks to automate information delivery between smart applications simultaneously. AI technologies, such as machine learning and deep learning techniques, have attained tremendous success in many applications in recent years. Hence, researchers in academia and industry have turned their attention to the advanced development of AI-enabled wireless networks. This paper comprehensively surveys AI technologies for different wireless networks with various applications. Moreover, we present various AI-enabled applications that exploit the power of AI to enable the desired evolution of wireless networks. Besides, the challenges of unsolved research in this area, which represent the future research trends of AI-enabled wireless networks, are discussed in detail. We provide several suggestions and solutions that help wireless networks become more intelligent and sophisticated in handling complicated problems. In summary, this paper can help researchers deeply understand up-to-the-minute wireless network designs based on AI technologies and quickly identify interesting unsolved issues to pursue in their research.

Citations: 0
An Adaptive Combined Learning of Grading System for Early Stage Emerging Diseases
IF 7 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-23 | DOI: 10.1155/2024/6619263
Li Wen, Wei Pan, Yongdong Shi, Wulin Pan, Cheng Hu, Wenxuan Kong, Renjie Wang, Wei Zhang, Shujie Liao

Currently, individual artificial intelligence (AI) algorithms face significant challenges in effectively diagnosing and predicting early stage emerging serious diseases. Our investigation indicates that these challenges primarily arise from insufficient clinical treatment data, leading to inadequate model training and substantial disparities among algorithm outcomes. Therefore, this study introduces an adaptive framework aimed at increasing prediction accuracy and mitigating instability by integrating various AI algorithms. In analyzing two cohorts of early cases of the coronavirus disease 2019 (COVID-19) in Wuhan, China, we demonstrate the reliability and precision of the adaptive combined learning algorithm. Employing an adaptive combination with three feature importance methods (Random Forest (RF), Scalable end-to-end Tree Boosting System (XGBoost), and Sparsity Oriented Importance Learning (SOIL)) for two cohorts, we identified 23 clinical features with significant impacts on COVID-19 outcomes. Subsequently, the adaptive combined prediction leveraged and enhanced the advantages of individual methods based on three forecasting algorithms (RF, XGBoost, and Logistic regression). The average accuracy for both cohorts exceeded 0.95, with the area under the receiver operating characteristics curve (AUC) values of 0.983 and 0.988, respectively. We established a severity grading system for COVID-19 based on the combined probability of death. Compared to the original classification, there was a significant decrease in the number of patients in the severe and critical levels, while the levels of mild and moderate showed a substantial increase. This severity grading system provides a more rational grading in clinical treatment. Clinicians can utilize this system for effective and reliable preliminary assessments and examinations of patients with emerging diseases, enabling timely and targeted treatment.
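
The combination step can be sketched as a validation-weighted average of base-model probabilities that is then cut into severity grades. In the illustration below, GradientBoosting stands in for XGBoost, the weights are simple normalized AUCs, and the grade thresholds are arbitrary, so it conveys the spirit of an adaptive combination rather than the paper's exact rule.

```python
# Validation-weighted probability ensemble with a severity grading step (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1500, n_features=23, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

models = [RandomForestClassifier(random_state=0),
          GradientBoostingClassifier(random_state=0),   # stand-in for XGBoost
          LogisticRegression(max_iter=1000)]
probs, weights = [], []
for m in models:
    m.fit(X_tr, y_tr)
    p = m.predict_proba(X_val)[:, 1]
    probs.append(p)
    weights.append(roc_auc_score(y_val, p))             # better validation AUC -> larger weight

weights = np.array(weights) / np.sum(weights)
combined = np.average(np.column_stack(probs), axis=1, weights=weights)

# Map the combined probability of death to severity grades (cut points are assumptions).
grades = np.digitize(combined, bins=[0.25, 0.5, 0.75])  # 0=mild, 1=moderate, 2=severe, 3=critical
print("AUC of combined model:", roc_auc_score(y_val, combined))
```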

Citations: 0
An Efficient Secure Sharing of Electronic Health Records Using IoT-Based Hyperledger Blockchain
IF 7 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-22 | DOI: 10.1155/2024/6995202
Velmurugan S., Prakash M., Neelakandan S., Eric Ofori Martinson

Electronic Health Record (EHR) systems are a valuable and effective tool for exchanging medical information about patients between hospitals and other significant healthcare stakeholders in order to improve patient diagnosis and treatment around the world. Nevertheless, the majority of hospital infrastructures now in place lack the proper security, trusted access control, and management of privacy and confidentiality that current EHR systems are supposed to provide. Goal. For various EHR systems, this research proposes a Blockchain-enabled Hyperledger Fabric Architecture as a solution to this delicate issue. The three steps of the suggested system are authentication, the secure upload phase, and the secure download phase. Patient registration, login, and verification make up the authentication step. After user details are verified, the administrator grants authorization to read, edit, delete, or revoke files. In the secure upload phase, feature extraction is carried out first, and then a hashed access policy is created from the extracted features. Next, the hash value is stored in an IoT-based Hyperledger blockchain. The uploaded EHR files are additionally encrypted before being stored on the cloud server. In the secure download step, the physician uses the hashed access policy to send a request to the cloud and decrypts the corresponding files. The experimental findings demonstrate that the system outperforms cutting-edge techniques; the proposed Modified Key Policy Attribute-Based Encryption (MKP-ABE) performs better for file sizes of 10 to 25 MB. The framework compares MKP-ABE against KP-ABE, ECC, RSA, and AES on efficiency indicators such as encryption time, decryption time, protection-level analysis, encrypted memory use, resource use on decryption, upload time, and transfer time. The proposed IoT device requires 4008 ms for data encryption and 4138 ms for data decryption.
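
The two client-side steps described for the secure upload phase, hashing the access policy for the ledger and encrypting the EHR payload before cloud storage, can be sketched as follows. SHA-256 hashing and Fernet symmetric encryption stand in for the paper's Modified KP-ABE scheme, and the policy format is an assumption.

```python
# Client-side sketch of the secure upload phase (Fernet stands in for the MKP-ABE scheme).
import hashlib
import json
from cryptography.fernet import Fernet

ehr_record = json.dumps({"patient_id": "P-1024", "diagnosis": "hypertension"}).encode()
access_policy = {"roles": ["cardiologist", "admin"], "actions": ["read"], "expires": "2025-01-01"}

# Hashed access policy: only the digest would be written to the Hyperledger channel.
policy_hash = hashlib.sha256(json.dumps(access_policy, sort_keys=True).encode()).hexdigest()

# Symmetric encryption of the EHR payload before it is pushed to the cloud store.
key = Fernet.generate_key()          # in KP-ABE this key material would be policy-bound
ciphertext = Fernet(key).encrypt(ehr_record)

print(policy_hash[:16], len(ciphertext))

# Authorized download: a requester whose policy hash matches retrieves and decrypts the file.
assert Fernet(key).decrypt(ciphertext) == ehr_record
```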

Citations: 0