
International Journal of Intelligent Systems: Latest Publications

MalFSLDF: A Few-Shot Learning-Based Malware Family Detection Framework
IF 5.0 | Zone 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-24 | DOI: 10.1155/int/7390905
Wenjie Guo, Jingfeng Xue, Yuxin Lin, Wenbiao Du, Jingjing Hu, Ning Shi, Weijie Han

The evolution of malware has led to increasingly sophisticated evasion techniques, significantly escalating the challenges researchers face in obtaining and labeling new instances for analysis. Conventional deep learning detection approaches struggle to identify new malware variants when sample availability is limited. Recently, researchers have proposed few-shot detection models to address these issues. However, existing studies predominantly focus on model-level improvements, overlooking the potential of domain adaptation to leverage the unique characteristics of malware. Motivated by these challenges, we propose a few-shot learning-based malware family detection framework (MalFSLDF). We introduce a novel method for malware representation using structural features and a feature fusion strategy. Specifically, our framework employs contrastive learning to capture the unique textural features of malware families, enhancing the identification of novel malware variants. In addition, we integrate entropy graphs (EGs) and gray-level co-occurrence matrices (GLCMs) into the feature fusion strategy to enrich sample representations and mitigate information loss. Furthermore, a domain alignment strategy is proposed to adjust the feature distribution of samples from new classes, enhancing the model’s generalization performance. Finally, comprehensive evaluations on the MaleVis and BIG-2015 datasets show significant performance improvements in both 5-way 1-shot and 5-way 5-shot scenarios, demonstrating the effectiveness of the proposed framework.
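As a rough illustration of the GLCM texture features the abstract mentions, the sketch below converts a raw byte sequence into a grayscale image and computes a normalized co-occurrence matrix plus one texture statistic. All names, sizes, and parameters are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def bytes_to_image(blob: bytes, width: int = 16) -> np.ndarray:
    """Reshape a raw byte sequence into a 2-D grayscale image (zero-padded)."""
    arr = np.frombuffer(blob, dtype=np.uint8)
    rows = -(-len(arr) // width)  # ceiling division
    padded = np.zeros(rows * width, dtype=np.uint8)
    padded[:len(arr)] = arr
    return padded.reshape(rows, width)

def glcm(img: np.ndarray, levels: int = 8, dx: int = 1, dy: int = 0) -> np.ndarray:
    """Normalized gray-level co-occurrence matrix for one (dx, dy) offset."""
    q = (img.astype(np.int32) * levels) // 256  # quantize to `levels` gray levels
    m = np.zeros((levels, levels), dtype=np.float64)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

sample = bytes(range(256))          # stand-in for a malware binary
img = bytes_to_image(sample, width=16)
p = glcm(img)
# A classic GLCM texture statistic usable as one fused feature dimension:
contrast = sum(p[i, j] * (i - j) ** 2 for i in range(8) for j in range(8))
```

In a full pipeline such statistics would be fused with entropy-graph features before the contrastive encoder.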

A Novel Emotion Recognition System for Human–Robot Interaction (HRI) Using Deep Ensemble Classification
IF 5.0 | Zone 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-24 | DOI: 10.1155/int/6611276
Khalid Zaman, Gan Zengkang, Sun Zhaoyun, Sayyed Mudassar Shah, Waqar Riaz, Jiancheng (Charles) Ji, Tariq Hussain, Razaz Waheeb Attar

Human emotion recognition (HER) has rapidly advanced, with applications in intelligent customer service, adaptive system training, human–robot interaction (HRI), and mental health monitoring. HER’s primary goal is to accurately recognize and classify emotions from digital inputs. Emotion recognition (ER) and feature extraction have long been core elements of HER, with deep neural networks (DNNs), particularly convolutional neural networks (CNNs), playing a critical role due to their superior visual feature extraction capabilities. This study proposes improving HER by integrating EfficientNet with transfer learning (TL) to train CNNs. Initially, an efficient R-CNN accurately detects faces in online and offline videos. The ensemble classification model is then trained by combining features from four CNN models using feature pooling. A novel VGG-19 block is used to enhance the Faster R-CNN learning block, boosting face recognition efficiency and accuracy. The model benefits from fully connected mean pooling, dense pooling, and global dropout layers, mitigating the vanishing gradient problem. Tested on CK+, FER-2013, and the custom HER dataset (HERD), the approach shows significant accuracy improvements, reaching 89.23% (CK+), 94.36% (FER-2013), and 97.01% (HERD), demonstrating its robustness and effectiveness.
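The feature-pooling idea behind the ensemble can be sketched numerically: per-model feature vectors are mean-pooled into one fused descriptor, then passed through a classifier head. The dimensions, random weights, and seven-class head below are illustrative assumptions, not the paper's trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for feature vectors emitted by four CNN backbones for one face crop
# (dimensions are illustrative, not those of the paper's networks).
features = [rng.standard_normal(128) for _ in range(4)]

# Feature pooling: stack the per-model vectors and mean-pool across models,
# producing one fused descriptor for the ensemble classifier head.
fused = np.mean(np.stack(features, axis=0), axis=0)

# A linear softmax head over 7 basic-emotion classes (weights random here,
# learned in practice).
W = rng.standard_normal((7, 128))
logits = W @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()
pred = int(np.argmax(probs))
```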

Feature Transformation Reconstruction (FTR) Network for Unsupervised Anomaly Detection
IF 5.0 | Zone 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-23 | DOI: 10.1155/int/1780499
Linna Zhang, Lanyao Zhang, Qi Cao, Shichao Kan, Yigang Cen, Fugui Zhang, Yansen Huang

In the training phase, a feature reconstruction network based on an autoencoder is forced to reconstruct its input features well. Such a network tends to learn an “identity mapping” shortcut, which leads it to output abnormal features unchanged in the inference phase. As a result, abnormal features cannot be distinguished from normal features by reconstruction error, significantly limiting the detection performance of such methods. To address this issue, we propose a feature transformation reconstruction (FTR) network, which avoids the identity mapping problem. Specifically, we use a normalizing flow model as a feature transformation (FT) network to transform input features into other forms. The training goal of the feature reconstruction (FR) network is no longer to reconstruct the input features but to reconstruct the transformed features, effectively avoiding the “identity mapping” shortcut. Furthermore, this paper proposes a masked convolutional attention (MCA) module, which randomly masks the input features in the training phase and reconstructs them in a self-supervised manner. In the testing phase, the MCA effectively suppresses the excessive reconstruction of abnormal features and further improves anomaly detection performance. FTR achieves area under the receiver operating characteristic curve (AUROC) scores of 99.5% and 97.8% on the MVTec AD and BTAD datasets, respectively, outperforming other state-of-the-art methods. Moreover, FTR is faster than existing methods, running at 137 frames per second (FPS) on a 3080 Ti GPU.
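The objective shift at the heart of FTR can be stated in a few lines: the reconstruction target becomes a transformed version of the features, so copying the input through is no longer a valid solution. The stand-in networks below (a shrinkage map and a coordinate reversal) are toy assumptions; the paper uses learned networks and a normalizing flow:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(16)            # an input feature vector

def autoencoder(v):
    """Stand-in reconstruction network (a real one would be learned)."""
    return 0.9 * v

def transform(v):
    """Stand-in for the normalizing-flow FT network: an invertible map
    (here, simply reversing the coordinates)."""
    return v[::-1].copy()

# Plain feature reconstruction: the target is the input itself, so the
# network can learn an identity shortcut and reconstruct anomalies as
# faithfully as normal samples.
loss_identity = np.mean((autoencoder(x) - x) ** 2)

# FTR-style objective: the target is the *transformed* features, so an
# identity mapping no longer minimizes the loss.
loss_ftr = np.mean((autoencoder(x) - transform(x)) ** 2)
```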

IntFedSV: A Novel Participants’ Contribution Evaluation Mechanism for Federated Learning
IF 5.0 | Zone 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-22 | DOI: 10.1155/int/3466867
Tianxu Cui, Ying Shi, Wenge Li, Rijia Ding, Qing Wang

Federated learning (FL), a distributed privacy computing technology, has demonstrated strong capabilities in addressing potential privacy leakage in multisource data fusion and has been widely applied across industries. Existing contribution evaluation mechanisms based on Shapley values uniquely allocate the total utility of a federation according to the marginal contributions of participants. However, in practical engineering applications, participants from different data sources typically exhibit significant differences and uncertainties in their contributions to a federation, making those contributions difficult to represent precisely. To evaluate each participant’s contribution to FL more effectively, we propose a novel interval federated Shapley value (IntFedSV) contribution evaluation mechanism. To improve computational efficiency, we compute the IntFedSV using a matrix semitensor product-based method. Finally, extensive experiments on four public datasets (MNIST, CIFAR10, AG_NEWS, and IMDB) demonstrate its potential in engineering applications. Our proposed mechanism can effectively evaluate the contribution levels of participants. Compared with three advanced baseline methods, the minimum and maximum improvement rates in standard deviation for our mechanism are 11.83% and 99.00%, respectively, demonstrating greater stability and fault tolerance. This study contributes positively to promoting engineering applications of FL.
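For context, the classical (crisp) Shapley value that IntFedSV extends with intervals can be computed exactly for a small federation by averaging marginal contributions over all join orders. The three-participant utility function below is a toy assumption, not the paper's experimental setup:

```python
from itertools import permutations

def shapley(players, v):
    """Exact Shapley values: average each player's marginal contribution
    v(S ∪ {p}) − v(S) over all join orders. `v` maps a frozenset to utility."""
    perms = list(permutations(players))
    phi = {p: 0.0 for p in players}
    for order in perms:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: s / len(perms) for p, s in phi.items()}

# Toy 3-participant federation whose utility depends only on coalition size
# and grows superadditively (values are illustrative, not from the paper).
util = {0: 0.0, 1: 1.0, 2: 3.0, 3: 6.0}
v = lambda S: util[len(S)]
phi = shapley([0, 1, 2], v)
```

By the efficiency axiom the values sum to the grand-coalition utility, and symmetric players (as here) receive equal shares. The interval variant replaces each crisp utility with an interval to capture contribution uncertainty.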

Ethical Principles of Integrating ChatGPT Into IoT–Based Software Wearables: A Fuzzy-TOPSIS Ranking and Analysis Approach
IF 5.0 | Zone 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-22 | DOI: 10.1155/int/6660868
Maseeh Ullah Khan, Muhammad Farhat Ullah, Sabeeh Ullah Khan, Weiqiang Kong

The rapid development of the internet of things (IoT) prompts organizations and developers to seek innovative approaches for future IoT device development and research. Leveraging advanced artificial intelligence (AI) models such as ChatGPT holds promise for reshaping the conceptualization, development, and commercialization of IoT devices. Through real-world data utilization, AI enhances the effectiveness, adaptability, and intelligence of IoT devices and wearables, expediting their production process from ideation to deployment and customer assistance. However, integrating ChatGPT into IoT–based devices and wearables raises ethical concerns including data ownership, security, privacy, accessibility, bias, accountability, cost, design, quality, storage, model training, explainability, consistency, fairness, safety, transparency, trust, and generalizability. Addressing these ethical principles necessitates a comprehensive review of the literature to identify and classify relevant principles. The authors identified 14 ethical principles from the literature using a systematic literature review (SLR) with a criterion of frequency ≥ 50% based on similarities. Four categories emerge from the identified principles, culminating in the application of Fuzzy-TOPSIS for analyzing, categorizing, ranking, and prioritizing them. According to the Fuzzy-TOPSIS results, data security and privacy is the highest-ranked ethical principle for IoT–based software wearable devices, with a consistency coefficient index of 0.925. This method, well established in computer science, effectively navigates fuzzy and uncertain decision-making scenarios.
The pioneering outcomes of this study provide valuable, taxonomy-based insights for software manufacturers, facilitating the analysis, ranking, categorization, and prioritization of ethical principles amid the integration of ChatGPT into IoT–based devices and wearables’ research and development.
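The ranking machinery can be sketched with the crisp TOPSIS procedure that Fuzzy-TOPSIS generalizes: normalize and weight a decision matrix, then rank alternatives by closeness to the ideal solution. The principles, criteria, scores, and weights below are illustrative assumptions; the paper uses fuzzy (e.g., triangular) ratings in place of crisp entries:

```python
import numpy as np

# Decision matrix: rows = ethical principles, columns = evaluation criteria.
X = np.array([
    [9.0, 8.0, 7.0],   # data security & privacy
    [7.0, 6.0, 8.0],   # accountability
    [5.0, 7.0, 6.0],   # accessibility
])
w = np.array([0.5, 0.3, 0.2])                # criterion weights (assumed)

R = X / np.linalg.norm(X, axis=0)            # vector-normalize each criterion
V = R * w                                    # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)   # all criteria treated as benefits
d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)     # distance to anti-ideal solution
cc = d_neg / (d_pos + d_neg)                 # closeness coefficient, higher = better
ranking = np.argsort(-cc)                    # indices from best to worst
```

With these toy numbers, the first row (data security & privacy) comes out on top, mirroring the paper's headline finding in structure if not in the actual fuzzy computation.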

Facial Expression Recognition Method Based on Octonion Orthogonal Feature Extraction and Octonion Vision Transformer
IF 5.0 | Zone 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-21 | DOI: 10.1155/int/6388642
Yuan Tian, Hang Cai, Huang Yao, Di Chen

In the field of artificial intelligence, facial expression recognition (FER) in natural scenes is a challenging topic. In recent years, vision transformer (ViT) models have been applied to FER tasks, but directly using the original ViT structure consumes substantial computational resources and requires long training times. To overcome these problems, we propose an FER method based on octonion orthogonal feature extraction and an octonion ViT. First, to reduce feature redundancy, we propose an orthogonal feature decomposition method that maps the extracted features onto seven orthogonal sub-features. Then, an octonion orthogonal representation method is introduced to correlate the orthogonal features, maintain the intrinsic dependencies among them, and enhance the model’s feature extraction ability. Finally, an octonion ViT is presented, which reduces the number of parameters to one-eighth that of ViT while improving FER accuracy. Experimental results on three commonly used facial expression datasets show that the proposed method outperforms several state-of-the-art models with a significant reduction in the number of parameters.
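The algebra underlying the octonion representation can be generated from reals by the Cayley–Dickson construction; a minimal sketch (assumed here for illustration, not the paper's layer implementation) multiplies two 8-component coefficient arrays as octonions:

```python
import numpy as np

def conj(a):
    """Cayley–Dickson conjugate: negate the second half, recursively."""
    if len(a) == 1:
        return a
    h = len(a) // 2
    return np.concatenate([conj(a[:h]), -a[h:]])

def cd_mult(a, b):
    """Cayley–Dickson product for coefficient arrays of length 1, 2, 4, 8, …
    Length-8 inputs give octonion multiplication."""
    n = len(a)
    if n == 1:
        return a * b
    h = n // 2
    p, q = a[:h], a[h:]
    r, s = b[:h], b[h:]
    # (p, q)(r, s) = (p·r − s̄·q,  s·p + q·r̄)
    return np.concatenate([
        cd_mult(p, r) - cd_mult(conj(s), q),
        cd_mult(s, p) + cd_mult(q, conj(r)),
    ])

rng = np.random.default_rng(2)
o1, o2 = rng.standard_normal(8), rng.standard_normal(8)
prod = cd_mult(o1, o2)
```

Octonions form a composition algebra, so the norm is multiplicative (|ab| = |a||b|), which is one reason an 8-component representation can bundle eight feature channels into a single algebraic object and cut parameter counts roughly eightfold.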

A Resilience Recovery Method for Complex Traffic Network Security Based on Trend Forecasting
IF 5.0 | Zone 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-21 | DOI: 10.1155/int/3715086
Sheng Hong, Tianyu Yue, Yang You, Zhengnan Lv, Xu Tang, Jing Hu, Hongwei Yin

Due to the rapid development of information technology, a huge and complex traffic network has been established across sectors including aviation, aerospace, vehicles, ships, electric power, and industry. However, because of the complexity and diversity of its structure, this network is vulnerable to attack and faces serious security challenges. This paper therefore proposes a traffic network resilience recovery method based on resilience trend forecasting. A risk value is introduced into the analysis of the network fault propagation process, and a Susceptible, Infectious, Recovered, Dead-Risk (SIRD-R) fault propagation model is established. A resilience model of the traffic network, encompassing real-time resilience and overall resilience, is constructed by integrating network resilience bearing capacity and resilience recovery capacity. The resilience of the network is then forecasted using a long short-term memory network, and a forecasting-based resilience recovery strategy is proposed. Finally, the effectiveness and scalability of the proposed method are demonstrated through experimental analysis on a diverse range of complex traffic networks, affirming its applicability in real-world scenarios.
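A plain S-I-R-D compartment simulation conveys the fault-propagation dynamics the SIRD-R model builds on; the risk-value coupling that distinguishes SIRD-R is omitted, and all rates and sizes below are illustrative assumptions:

```python
def simulate_sird(n=1000, beta=0.3, gamma=0.1, mu=0.02, steps=200, dt=1.0):
    """Forward-Euler simulation of an S-I-R-D compartment model of fault
    propagation: nodes are Susceptible, Infectious (faulty), Recovered,
    or Dead (permanently lost)."""
    s, i, r, d = n - 1.0, 1.0, 0.0, 0.0
    history = []
    for _ in range(steps):
        new_inf = beta * s * i / n * dt   # contact-driven fault spread
        new_rec = gamma * i * dt          # faulty nodes repaired
        new_dead = mu * i * dt            # nodes lost permanently
        s -= new_inf
        i += new_inf - new_rec - new_dead
        r += new_rec
        d += new_dead
        history.append((s, i, r, d))
    return history

hist = simulate_sird()
s_final, i_final, r_final, d_final = hist[-1]
```

The trajectory of the infectious compartment over time is the kind of signal the paper's LSTM forecaster would consume when predicting resilience trends.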

An Efficient Integrated Radio Detection and Identification Deep Learning Architecture
IF 5.0 | Zone 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-21 | DOI: 10.1155/int/4477742
Zhiyong Luo, Yanru Wang, Xiti Wang

The detection and identification of radio signals play a crucial role in cognitive radio, electronic reconnaissance, noncooperative communication, and related fields. Deep neural networks have emerged as a promising approach for electromagnetic signal detection and identification, outperforming traditional methods. Nevertheless, present deep neural networks not only overlook the characteristics of electromagnetic signals but also treat the two tasks as independent components, as conventional methods do. These issues limit overall performance and unnecessarily increase computational cost. In this paper, we design a novel and universally applicable integrated radio detection and identification deep architecture and a corresponding training method, which organically combines the detection and identification networks. Furthermore, we extract signal features using only one-dimensional horizontal convolution, based on how wireless channels affect time-domain signals. Experiments show that the proposed method performs signal detection and identification more efficiently, reducing unnecessary computation while simultaneously improving the accuracy and robustness of both tasks. More specifically, the ability to distinguish different modulated signal categories tends to increase with SNR, and detection accuracy can exceed 95% at SNRs above 0 dB. The proposed method improves signal detection and identification accuracy from 83.44% to 83.56% and from 61.27% to 62.32%, respectively.
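The "one-dimensional horizontal convolution" idea can be sketched directly: each kernel spans both I and Q channels but slides only along the time axis. Frame length, channel count, and kernel bank size below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(3)

# A two-channel (I/Q) time-domain radio frame, 128 samples long.
frame = rng.standard_normal((2, 128))

# A bank of 4 one-dimensional "horizontal" kernels of width 7: each kernel
# covers both channels but slides along time only.
kernels = rng.standard_normal((4, 2, 7))

def conv1d_horizontal(x, k):
    """Valid-mode 1-D convolution along the time axis for a kernel bank.
    x: (channels, time); k: (n_kernels, channels, width)."""
    c_in, t = x.shape
    n_k, _, w = k.shape
    out = np.empty((n_k, t - w + 1))
    for f in range(n_k):
        for j in range(t - w + 1):
            out[f, j] = np.sum(x[:, j:j + w] * k[f])
    return out

fmap = conv1d_horizontal(frame, kernels)  # shape (4, 122)
```

Because the kernel never slides across the channel axis, the per-channel temporal structure that the wireless channel imposes is preserved in each feature map.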

Fine-Grained Dance Style Classification Using an Optimized Hybrid Convolutional Neural Network Architecture for Video Processing Over Multimedia Networks
IF 5 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-04-21 DOI: 10.1155/int/6434673
Na Guo, Ahong Yang, Yan Wang, Elaheh Dastbaravardeh

Dance style recognition through video analysis during university training can significantly benefit both instructors and novice dancers. Employing video analysis in training offers substantial advantages, including the potential to train future dancers using innovative technologies. Over time, intricate dance gestures can be honed, reducing the burden on instructors who would otherwise need to provide repetitive demonstrations. Recognizing dancers’ movements, evaluating and adjusting their gestures, and extracting cognitive functions for efficient evaluation and classification are pivotal aspects of our model. Deep learning currently stands as one of the most effective approaches for achieving these objectives, particularly with short video clips. However, limited research has focused on automated analysis of dance videos for training purposes and assisting instructors. In addition, assessing the quality and accuracy of performance video recordings presents a complex challenge, especially when judges cannot fully focus on the on-stage performance. This paper proposes a video-based alternative to manual dance evaluation. Using short video clips, we conduct dance analysis with techniques such as fine-grained dance style classification over video frames, convolutional neural networks (CNNs) with channel attention mechanisms (CAMs), and autoencoders (AEs). These methods enable accurate evaluation and data gathering, leading to precise conclusions. Furthermore, utilizing cloud space for real-time processing of video frames is essential for timely analysis of dance styles, enhancing the efficiency of information processing. Experimental results demonstrate the effectiveness of our evaluation method in terms of accuracy and F1-score, with accuracy exceeding 97.24% and an F1-score of 97.30%. These findings corroborate the efficacy and precision of our approach to dance evaluation.
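The channel attention mechanism named in this abstract follows the CBAM pattern: pool each channel with both average and max, push the two descriptors through a shared bottleneck MLP, and rescale channels by the sigmoid of the sum. A minimal NumPy sketch is below; the shapes, reduction ratio, and random weights are illustrative assumptions, not trained parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap, w1, w2):
    """CBAM-style channel attention over a (C, H, W) feature map.

    Average- and max-pooled channel descriptors go through a shared
    two-layer MLP (w1: C -> C/r, w2: C/r -> C); the sigmoid of their
    sum gives per-channel weights in (0, 1) used to rescale channels.
    """
    avg = fmap.mean(axis=(1, 2))                  # (C,)
    mx = fmap.max(axis=(1, 2))                    # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)  # shared MLP with ReLU
    scale = sigmoid(mlp(avg) + mlp(mx))           # (C,) attention weights
    return fmap * scale[:, None, None]

rng = np.random.default_rng(0)
C, r = 4, 2                                   # channels, reduction ratio
fmap = rng.standard_normal((C, 8, 8))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = channel_attention(fmap, w1, w2)
print(out.shape)  # (4, 8, 8)
```

Because every attention weight lies strictly in (0, 1), the module can only attenuate channels, which is what lets it emphasize style-relevant channels without changing the feature map's shape.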

MultiResFF-Net: Multilevel Residual Block-Based Lightweight Feature Fused Network With Attention for Gastrointestinal Disease Diagnosis
IF 5 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-04-15 DOI: 10.1155/int/1902285
Sohaib Asif, Yajun Ying, Tingting Qian, Jun Yao, Jinjie Qu, Vicky Yang Wang, Rongbiao Ying, Dong Xu

Accurate detection of gastrointestinal (GI) diseases is crucial due to their high prevalence. Screening is often inefficient with existing methods, and the complexity of medical images challenges single-model approaches. Leveraging diverse model features can improve accuracy and simplify detection. In this study, we introduce a novel deep learning model tailored for the diagnosis of GI diseases through the analysis of endoscopy images. This innovative model, named MultiResFF-Net, employs a multilevel residual block-based feature fusion network. The key strategy involves the integration of features from truncated DenseNet121 and MobileNet architectures. This fusion not only optimizes the model’s diagnostic performance but also strategically minimizes complexity and computational demands, making MultiResFF-Net a valuable tool for efficient and accurate disease diagnosis in GI endoscopy images. A pivotal component enhancing the model’s performance is the introduction of the Modified MultiRes-Block (MMRes-Block) and the Convolutional Block Attention Module (CBAM). The MMRes-Block, a customized residual learning component, optimally handles fused features at the endpoint of both models, fostering richer feature sets without escalating parameters. Simultaneously, the CBAM ensures dynamic recalibration of feature maps, emphasizing relevant channels and spatial locations. This dual incorporation significantly reduces overfitting, augments precision, and refines the feature extraction process. Extensive evaluations on three diverse datasets (endoscopic images, GastroVision data, and histopathological images) demonstrate exceptional accuracy of 99.37%, 97.47%, and 99.80%, respectively. Notably, MultiResFF-Net achieves superior efficiency, requiring only 2.22 MFLOPs and 0.47 million parameters, outperforming state-of-the-art models in both accuracy and cost-effectiveness. These results establish MultiResFF-Net as a robust and practical diagnostic tool for GI disease detection.
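The late-fusion idea in this abstract (concatenate features from two truncated backbones, then refine the fused vector with a residual block) reduces to a few lines. The vector sizes, random weights, and the single-layer residual stand-in for the MMRes-Block below are illustrative assumptions, not the published architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def fuse_and_residual(feat_a, feat_b, w, b):
    """Late fusion of two backbone feature vectors plus residual refinement.

    `feat_a` / `feat_b` stand in for pooled features from two truncated
    backbones; they are concatenated and refined by one residual layer
    (y = x + ReLU(W x + b)), a simplified stand-in for a multilevel
    residual block. The residual path keeps the fused features intact
    while the learned branch adds corrections.
    """
    x = np.concatenate([feat_a, feat_b])   # (Da + Db,) fused features
    return x + relu(w @ x + b)             # residual refinement

rng = np.random.default_rng(0)
feat_a = rng.standard_normal(6)   # e.g. truncated-DenseNet features
feat_b = rng.standard_normal(4)   # e.g. MobileNet features
d = feat_a.size + feat_b.size
w = rng.standard_normal((d, d)) * 0.1
b = np.zeros(d)
fused = fuse_and_residual(feat_a, feat_b, w, b)
print(fused.shape)  # (10,)
```

Keeping the identity path through the fused vector is what lets the refinement add information without discarding either backbone's contribution.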
