Pub Date: 2026-01-12 | DOI: 10.1016/j.eij.2026.100886
Ahmet Okan Arık , Gizem Parlayandemir , Serra Çelik
Political fake news fuels a significant epistemic crisis, yet detection in low-resource languages like Turkish is constrained by data scarcity and class imbalance. This study addresses these challenges by constructing the Turkish Political Fake News Dataset (TPFND) and employing a Turkish LLaMA-3 model to generate synthetic samples for data augmentation. The augmented dataset was used to train an XGBoost classifier, compared against baseline and Random Oversampling methods. Results demonstrate that LLM-based augmentation significantly enhances sensitivity to fake news. While overall accuracy remained high (89–90.5%), the fake news detection rate increased from 91.12% to 97.62%, effectively minimizing false negatives despite a slight precision trade-off. These findings confirm that the methodology provides a robust “safety net” for the Turkish digital ecosystem and a scalable framework for other low-resource languages.
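The Random Oversampling baseline that the LLM-based augmentation is compared against can be sketched in a few lines. This is an illustrative sketch, not the paper's code: the function name and toy data are hypothetical, and real pipelines would typically use a library implementation such as `imblearn.over_sampling.RandomOverSampler`.

```python
import random

def random_oversample(samples, labels, minority_label, seed=42):
    """Duplicate minority-class samples at random until classes balance.

    This is the Random Oversampling baseline: unlike LLM-based
    augmentation, it adds no new lexical variety, only exact copies.
    """
    rng = random.Random(seed)
    minority = [s for s, y in zip(samples, labels) if y == minority_label]
    majority = [s for s, y in zip(samples, labels) if y != minority_label]
    extra_needed = len(majority) - len(minority)
    extra = [rng.choice(minority) for _ in range(extra_needed)]
    balanced_samples = samples + extra
    balanced_labels = labels + [minority_label] * extra_needed
    return balanced_samples, balanced_labels

# toy imbalanced corpus: 4 real vs 2 fake headlines (hypothetical data)
X = ["r1", "r2", "r3", "r4", "f1", "f2"]
y = [0, 0, 0, 0, 1, 1]
Xb, yb = random_oversample(X, y, minority_label=1)
assert yb.count(1) == yb.count(0) == 4
```

The contrast with the paper's approach is that an LLM generates new, distinct fake-news texts rather than duplicating existing ones, which is why sensitivity improves without the overfitting risk of exact copies.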
Title: “LLM-based data augmentation for text classification on imbalanced datasets: A case study on fake news detection” (Egyptian Informatics Journal, Vol. 33, Article 100886)
Pub Date: 2026-01-12 | DOI: 10.1016/j.eij.2026.100885
Ramzi Saifan , Rami Al-zyadat , Mohammed Hawa , Iyad Jafar , Samah Rahamneh
Cognitive radio technology allows opportunistic access to underutilized licensed radio spectrum by dynamically sensing and accessing this scarce resource. Existing cognitive radio network (CRN) protocols may suffer from a limited number of transmission channels, conflicts among secondary users (SUs), and a lack of awareness about potentially superior available channels. This paper introduces a decentralized protocol to enhance CRN performance, allowing seamless acquisition of spectrum bands. Unlike traditional protocols with information exchange overhead, our proposed method does not require communication between SUs, ensuring high performance and minimal interference.
The proposed protocol is called Probabilistic History-based Distributed Sensing Protocol in Cognitive Radio Networks (PHDS-CRN), and it is designed to address the limitations of existing protocols. This protocol offers a fully distributed approach to spectrum sensing and channel allocation, enabling SUs to efficiently utilize available spectrum bands while minimizing interference with primary users (PUs) and other SUs. By categorizing spectrum bands into distinct groups and employing a three-phase decision process, PHDS-CRN optimizes channel access in CRNs. Our experimental evaluation demonstrates the superior performance of PHDS-CRN compared to existing methodologies. Under 100% load conditions, our proposed method achieves a high channel access rate while significantly reducing settling time and interference time.
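The history-based, communication-free selection idea can be illustrated with a toy sketch. This is an assumption about the general mechanism, not the actual three-phase PHDS-CRN algorithm; the class and method names are invented. Each SU weights channels by its own observed idle rate, so no message exchange between SUs is needed.

```python
import random

class HistoryChannelSelector:
    """Toy history-weighted channel selection: favor channels that
    were idle more often in this SU's private sensing history."""

    def __init__(self, n_channels, seed=0):
        self.idle = [1] * n_channels   # Laplace-smoothed idle counts
        self.seen = [2] * n_channels   # total observations per channel
        self.rng = random.Random(seed)

    def record(self, channel, was_idle):
        self.seen[channel] += 1
        if was_idle:
            self.idle[channel] += 1

    def pick(self):
        # sample a channel with probability proportional to its
        # estimated idle rate (probabilistic, so SUs spread out)
        weights = [i / s for i, s in zip(self.idle, self.seen)]
        return self.rng.choices(range(len(weights)), weights=weights)[0]

sel = HistoryChannelSelector(3)
for _ in range(50):
    sel.record(2, was_idle=True)    # channel 2 is almost always free
    sel.record(0, was_idle=False)   # channel 0 is almost always busy
picks = [sel.pick() for _ in range(200)]
assert picks.count(2) > picks.count(0)
```

Sampling (rather than always picking the best channel) is what keeps independent SUs from all converging on the same band and colliding with each other.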
Title: “Probabilistic History-based distributed sensing protocol in cognitive radio networks” (Egyptian Informatics Journal, Vol. 33, Article 100885)
Pub Date: 2026-01-06 | DOI: 10.1016/j.eij.2026.100884
A. Balajee , R. Vinoth , A. Suresh , Mudassir Khan , T.R. Mahesh , Anu Sayal
The Internet of Vehicles (IoV) combines various technologies to provide safe and comfortable transportation. Vehicles can share information about the current status of the locations they are about to travel through. A collision occurs when this abundant information fails to reach the target vehicle within a stipulated time limit. In an IoV environment, collision is closely tied to storage, since a sparse storage system can lead to loss of information. Thus, there is a pressing need for collision avoidance coupled with storage optimization in IoV technologies. In this article, we propose an innovative federated hyper-LSTM model that first manages the storage environment by incorporating federated learners to optimize it. Collisions are predicted simultaneously by the proposed hyper-LSTM model. The entire model is equipped with reinforcement learners to track the current status of storage and collisions, achieving a benchmark accuracy of 97%.
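The federated-learner component could aggregate locally trained vehicle models in the style of FedAvg. The following sketch assumes simple unweighted parameter averaging, which the abstract does not specify; function names and data are illustrative only.

```python
def federated_average(client_weights):
    """FedAvg-style aggregation sketch: element-wise mean of each
    client's parameter vector, so raw data never leaves the vehicle."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# three vehicles each hold a locally trained 4-parameter model
clients = [[0.2, 0.4, 0.1, 0.9],
           [0.4, 0.2, 0.3, 0.7],
           [0.6, 0.6, 0.2, 0.8]]
global_model = federated_average(clients)
assert all(abs(a - b) < 1e-9
           for a, b in zip(global_model, [0.4, 0.4, 0.2, 0.8]))
```

In a real deployment each round would alternate local training on the vehicle with this server-side averaging step.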
Title: “Federated hyper LSTM model for storage optimization and collision prediction in an intelligent IoVT” (Egyptian Informatics Journal, Vol. 33, Article 100884)
Pub Date: 2026-01-05 | DOI: 10.1016/j.eij.2025.100883
Utpal Ghosh , Uttam kr. Mondal , Abdelmoty M. Ahmed , Ahmed A. Elngar
Effective data transmission with minimal resources, a minimal architecture, low power consumption, and improved security makes the proposed lightweight wireless acoustic sensor network (WASN) an appealing solution. This paper addresses the challenges of secure and energy-efficient audio broadcasting in WASNs. A common setup for this application is to send the entire gathered signal from source to recipient over multi-hop communication to a distant server. However, persistent data streaming can sharply deplete sensor energy, shortening the network lifetime and raising concerns about the application’s feasibility. The proposed method is supplemented during the design phase with several techniques for reducing architectural overhead, specifically network resource consumption and development effort. It aims to reduce the energy used by the acoustic origin sensor and free up network bandwidth by carrying less unnecessary data. Secure transfer is guaranteed through an enhanced Elliptic Curve Cryptography (ECC) scheme, which introduces a session key mechanism and a chaos-based private key generation approach to enhance resilience against cryptographic attacks. A novel feature extraction strategy utilizing a variety of extraction characteristics and classifications is also proposed. Experimental results show that the proposed method saves 74.35% of energy and achieves 89% feature extraction accuracy compared to streaming the complete acoustic data to a distant server, while providing superior security against known attacks and reducing computational overhead by over 97%.
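A chaos-based key generator of the kind mentioned can be illustrated with the logistic map. The paper's actual map, parameters, and post-processing are not given, and a bare logistic map is not a cryptographically vetted key derivation function; this sketch only demonstrates the sensitive dependence on the seed that such schemes rely on.

```python
def logistic_key(seed, r=3.99, n_bytes=16, burn_in=100):
    """Illustrative chaos-based key material: iterate the logistic
    map x -> r*x*(1-x) in the chaotic regime and quantize each
    state to a byte. Deterministic for a given seed."""
    x = seed
    for _ in range(burn_in):       # discard the initial transient
        x = r * x * (1 - x)
    out = bytearray()
    for _ in range(n_bytes):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

k1 = logistic_key(0.123456789)
k2 = logistic_key(0.123456790)     # tiny change in the seed...
assert len(k1) == 16 and k1 != k2  # ...yields a very different key
```

The same seed always reproduces the same key, which is what lets two parties who share a secret seed derive matching session keys without transmitting them.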
Title: “Designing lightweight secure and energy-efficient wireless acoustic sensor networks for optimized data transmission and processing” (Egyptian Informatics Journal, Vol. 33, Article 100883)
Pub Date: 2026-01-02 | DOI: 10.1016/j.eij.2025.100882
Hamza H.M. Altarturi , Muntadher Saadoon , Haqi Khalid , Fairuz Amalina , Nor Badrul Anuar
The rapid escalation of inappropriate online content calls for sophisticated and highly accurate methods for content classification and filtering. Previous approaches primarily focused on classifying web pages based on their textual and visual contents, ignoring the subject-oriented aspect, leading to an inaccurate classification of inappropriate content topics. This study proposes a novel Subject-Oriented Filtering (SOF) integration and formalization that couples dynamic URL whitelists/blacklists with HTML topic vectors fed directly to discriminative classifiers for accurate inappropriate-content classification. By exploiting the semantic richness of HTML structure and inputting the topic vectors as features for advanced machine learning classifiers, this methodology noticeably increases the accuracy of webpage filtering and classification. This study performed extensive experiments, which show that SOF achieves an accuracy exceeding 94%, substantially outperforming conventional methods. The methodological innovation of this study establishes a new state-of-the-art baseline in subject-oriented web content classification, representing significant progress over previous studies and contributing to safer online environments.
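The two-tier flow of combining URL whitelists/blacklists with a topic-based classifier might look like the following minimal sketch. The list structures, host parsing, and the classifier stub are hypothetical, not the paper's implementation.

```python
def classify_url(url, whitelist, blacklist, topic_classifier):
    """Two-tier filtering sketch: list lookups decide instantly;
    only unknown hosts fall through to the (slower) topic classifier."""
    host = url.split("/")[2] if "://" in url else url.split("/")[0]
    if host in whitelist:
        return "allow"
    if host in blacklist:
        return "block"
    # topic_classifier returns True when the page topic is inappropriate
    return "block" if topic_classifier(url) else "allow"

wl, bl = {"example.edu"}, {"bad.example"}
stub = lambda url: "casino" in url     # stand-in for the trained model
assert classify_url("https://example.edu/page", wl, bl, stub) == "allow"
assert classify_url("https://bad.example/x", wl, bl, stub) == "block"
assert classify_url("https://new.site/casino", wl, bl, stub) == "block"
```

The design point is that dynamic lists handle the common, already-known hosts cheaply, reserving the HTML topic-vector classifier for genuinely unseen pages.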
Title: “Advanced machine learning for subject-oriented inappropriate content classification: A topic modeling approach” (Egyptian Informatics Journal, Vol. 33, Article 100882)
Pub Date: 2026-01-02 | DOI: 10.1016/j.eij.2025.100874
Khalil Hamdi Ateyeh Al-Shqeerat , Ahmad Hamed Al Abadleh , Sunil Kumar Sharma , Pankaj Kumar , Ghanshyam G. Tejani , David Bassir
Mental health issues are a major global concern that not only affect millions of people but also create significant societal and economic costs. Traditional ways of diagnosing mental health problems usually rely on expert opinion, which prolongs the diagnostic process, causes inconsistencies, and can sometimes lead to wrong conclusions. Existing computational models are not fully equipped to handle feature redundancy, to capture interdependencies between heterogeneous data types, or to scale to real-world use cases. This paper introduces MQAD-Net (Multi-Modal Quantum-Attentive Deep Learning Network), an advanced framework for mental health prediction and personalized therapy recommendation. The proposed method combines multiple kinds of data, such as EEG signals, voice patterns, and behavioral text responses, using Graph Attention Networks (GAT) for temporal-spatial EEG feature extraction and transformer-based embeddings for behavioral text analysis. Feature selection is optimized through Quantum Greylag Multi-Criteria Decision-Making Feature Selection (QGMFS), a combination of Quantum-Based Particle Swarm Optimization (QPSO), Grey Wolf Optimization (GWO), and Multi-Criteria Decision Making (MCDM) that selects the most informative, non-redundant features. Classification is performed by Dense-DualLSTMNet (DDL-Net), which innovatively integrates three methodologies, namely DenseNet, DPN-68, and BiLSTM, for better multi-modal feature learning and sequential modeling. Experimental evaluation shows that MQAD-Net significantly outperforms traditional deep learning models, achieving 95% accuracy, 94% precision, 93% recall, and a 94% F1-score, while also recommending personalized therapy.
These results highlight the framework’s potential to enhance early diagnosis of mental health conditions, facilitate individualized treatment planning, and support clinical decision-making in real-world healthcare settings.
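The redundancy-elimination goal of QGMFS can be illustrated with a far simpler greedy correlation filter. This stands in for, and is not, the quantum/swarm-based QGMFS procedure; the threshold and data are invented for illustration.

```python
from statistics import fmean

def pearson(a, b):
    """Pearson correlation of two equal-length numeric sequences."""
    ma, mb = fmean(a), fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def drop_redundant(features, threshold=0.95):
    """Greedy redundancy filter: keep a feature column only if it is
    not near-duplicated (|r| >= threshold) by an already-kept one."""
    kept = []
    for idx, col in enumerate(features):
        if all(abs(pearson(col, features[k])) < threshold for k in kept):
            kept.append(idx)
    return kept

f0 = [1.0, 2.0, 3.0, 4.0]
f1 = [2.0, 4.0, 6.0, 8.0]   # f0 rescaled: redundant, gets dropped
f2 = [4.0, 1.0, 3.0, 2.0]   # genuinely different signal, kept
assert drop_redundant([f0, f1, f2]) == [0, 2]
```

QGMFS additionally weighs informativeness across multiple criteria, whereas this sketch only removes near-duplicates.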
Title: “MQAD-net: multi-modal quantum-attentive deep learning framework for early mental health detection and personalized therapy recommendation” (Egyptian Informatics Journal, Vol. 33, Article 100874)
Pub Date: 2026-01-01 | DOI: 10.1016/j.eij.2025.100880
Freedom M. Khubisa, Oludayo O. Olugbara
Deep learning has gained significant importance in manifold disciplines such as natural language processing, supply chain optimization, computer vision, financial analysis, mechatronics and robotics, cybersecurity, and healthcare. It offers alternative methods to proactively manage plant diseases to ensure healthy crop yields, minimize economic losses, contribute to global food security, and promote sustainable agricultural practices. Nevertheless, despite a huge volume of publications on plant disease management using deep learning, a gap exists in the methodical evaluation of the contributions, impacts, trends, and exploration of intellectual structures of the publication elements using bibliometric analysis. Therefore, a bibliometric analysis was performed on 4,317 publications indexed in the Scopus database from 2016 to 2025 regarding plant disease management utilizing deep learning methods. Bibliometric performance analysis was based on publication, citation, and citation-and-publication metrics. Science mapping was conducted based on citation analysis, co-authorship analysis, bibliographic coupling, and co-word analysis using Biblioshiny and VOSviewer tools. The bibliometric analysis confirmed that Computers and Electronics in Agriculture and IEEE Access are the most impactful publication sources according to the metrics of h-index and citations. A publication written by Mohanty SP in 2016 was found to be the most globally cited. Five distinctive clusters were identified using bibliographic coupling of publications and co-word analysis of author keywords to provide useful insights into the knowledge structure of plant disease management using deep learning. The analysis findings can provide valuable insights into the broader impact of the extant literature on deep learning applications, offering a footing for progressing artificial intelligence applications in plant disease management and guiding future research directions.
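The h-index metric used above to rank publication sources is straightforward to compute; a small reference implementation with hypothetical citation counts:

```python
def h_index(citations):
    """h-index: the largest h such that at least h publications
    have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

assert h_index([10, 8, 5, 4, 3]) == 4   # four papers with >= 4 citations
assert h_index([25, 8, 5, 3, 3]) == 3   # one big hit does not raise h
assert h_index([0, 0]) == 0
```

Note how the second example shows why h-index rewards sustained impact rather than a single highly cited paper.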
Title: “Bibliometric analysis of deep learning in plant disease management” (Egyptian Informatics Journal, Vol. 33, Article 100880)
Pub Date: 2025-12-29 | DOI: 10.1016/j.eij.2025.100876
Dongliang Zhang , Lei Wang
Real-time visual image identification presents significant challenges due to noise, variations in illumination, and complex backgrounds, frequently resulting in misclassification and heightened processing costs. To mitigate these constraints, we propose a Fuzzy Dependency Model for Image Identification (FDM-II) that explicitly characterizes pixel interdependencies and performs adaptive feature selection. The approach incorporates fuzzification, fuzzy derivative optimization, and defuzzification to dynamically prioritize high-dependency features, minimize duplicate computation, and enhance classification robustness in uncertain settings. On the Open Images dataset, FDM-II attained 11.43% higher detection precision, a 9.84% better correlation rate, and 9.55% higher classification accuracy than established RSS-based, TOPSIS-MADM, and fuzzy VHO methodologies, while decreasing detection error and processing time by 8.77% and 10.06%, respectively. In contrast to conventional fixed-threshold or resource-intensive deep learning models, our methodology employs adaptive correlation-based refinement and dynamic feature ranking, enabling scalable, low-latency, and reliable real-time performance suitable for IoT and embedded applications.
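The fuzzification and defuzzification steps can be illustrated with a standard triangular membership function and centroid defuzzification. This is a generic fuzzy-logic sketch; FDM-II's actual membership functions and its "fuzzy derivative optimization" step are not specified in the abstract.

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def centroid_defuzzify(xs, memberships):
    """Centroid defuzzification: membership-weighted mean of the axis."""
    total = sum(memberships)
    return sum(x * m for x, m in zip(xs, memberships)) / total

# fuzzify a 0..1 "feature dependency" axis, then recover a crisp score
xs = [i / 10 for i in range(11)]
mu = [triangular(x, 0.2, 0.6, 1.0) for x in xs]
crisp = centroid_defuzzify(xs, mu)
assert 0.55 < crisp < 0.65     # crisp score lands near the peak at 0.6
```

The crisp output is what a downstream ranking step could use to prioritize high-dependency features.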
Title: “Quality-Aware Fuzzy-Logic-Based vertical handover decision method for dependable Real-Time visual image identification” (Egyptian Informatics Journal, Vol. 33, Article 100876)
Log anomaly detection is a critical task for ensuring the reliability of complex systems. However, existing methods often suffer from poor adaptability and substantial retraining overhead as log data evolve. This paper introduces a novel framework called KDLog, a knowledge-distillation-based approach that enables accurate and efficient log anomaly detection in dynamic environments. KDLog employs a two-stage selective-distillation mechanism, in which a lightweight student model is trained using the high-confidence outputs generated by a teacher model, effectively preventing negative knowledge transfer. Compared with state-of-the-art methods, KDLog improves overall accuracy by 4.5%, F1-score by 4.3%, and recall by 3.3% on average across real-world datasets (HDFS and BGL). Moreover, it reduces model update time by 60–78% and shrinks model size by up to 50% compared with deep learning baselines such as DeepLog and LogAnomaly. Statistical significance tests confirm the robustness of these improvements. Unlike prior methods, KDLog also demonstrates strong resilience to unseen log patterns, with less than a 4% performance drop under simulated log-template drift. These gains make KDLog a scalable and practical solution for real-time anomaly detection, effectively bridging the gap between high-performance learning and operational efficiency in production environments.
{"title":"KDLog: a selective knowledge distillation approach for sequential log anomaly detection","authors":"Hailong Cheng , Shi Ying , Xiaoyu Duan , Wanli Yuan","doi":"10.1016/j.eij.2025.100879","DOIUrl":"10.1016/j.eij.2025.100879","url":null,"abstract":"<div><div>Log anomaly detection is a critical task for ensuring the reliability of complex systems. However, existing methods often suffer from poor adaptability and substantial retraining overhead as log data evolve. This paper introduces a novel framework called KDLog, a knowledge-distillation-based approach that enables accurate and efficient log anomaly detection in dynamic environments. KDLog employs a two-stage selective-distillation mechanism, in which a lightweight student model is trained using the high-confidence outputs generated by a teacher model, effectively preventing negative knowledge transfer. Compared with state-of-the-art methods, KDLog improves overall accuracy by 4.5%, F1-score by 4.3%, and recall by 3.3% on average across real-world datasets (HDFS and BGL). Moreover, it reduces model update time by 60–78% and shrinks model size by up to 50% compared with deep learning baselines such as DeepLog and LogAnomaly. Statistical significance tests confirm the robustness of these improvements. Unlike prior methods, KDLog also demonstrates strong resilience to unseen log patterns, with less than a 4% performance drop under simulated log-template drift. These gains make KDLog a scalable and practical solution for real-time anomaly detection, effectively bridging the gap between high-performance learning and operational efficiency in production environments.</div></div>","PeriodicalId":56010,"journal":{"name":"Egyptian Informatics Journal","volume":"33 ","pages":"Article 100879"},"PeriodicalIF":4.3,"publicationDate":"2025-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145884767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
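The core idea of selective distillation described in the KDLog abstract, training a lightweight student only on the teacher's high-confidence outputs, can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the synthetic data, the logistic-regression teacher, the decision-tree student, and the confidence threshold of 0.8 are all assumptions made for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Toy vectors standing in for log-sequence features (in practice, parsed
# HDFS/BGL log templates would be encoded into such vectors).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_labeled, y_labeled = X[:200], y[:200]   # history the teacher was trained on
X_new = X[200:]                           # newly arrived, unlabeled logs

teacher = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)

# Stage 1: the teacher labels the new data, and only outputs above a
# confidence threshold are kept (0.8 is an illustrative choice).
proba = teacher.predict_proba(X_new)
confidence = proba.max(axis=1)
mask = confidence >= 0.8
pseudo_labels = proba.argmax(axis=1)[mask]

# Stage 2: a lightweight student trains only on the trusted pseudo-labels,
# which is what prevents negative transfer from uncertain teacher outputs.
student = DecisionTreeClassifier(max_depth=5, random_state=0)
student.fit(X_new[mask], pseudo_labels)
```

Because the student never sees the teacher's low-confidence predictions, noisy or drifting log patterns the teacher is unsure about do not corrupt the update, which is the mechanism the abstract credits for avoiding negative knowledge transfer.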
Pub Date : 2025-12-27 DOI: 10.1016/j.eij.2025.100873
Nevzat Olgun
In this study, a novel method based on Variational Mode Decomposition (VMD) is proposed for lie detection from EEG signals (EEGs). The study was conducted using the LieWaves database, and analyses were performed on 5-channel EEGs obtained from 27 subjects. The EEGs collected from the subjects during truthful and lying situations were divided into 2-second segments based on the moments when visual stimuli were presented, and a total of 1350 EEG signals were obtained. For lie detection, 3 channels were selected; the EEG signals were processed using the VMD technique, and time-domain features were extracted from each mode. Extra Trees, Random Forest, K-Nearest Neighbors and Support Vector Machine classification models were used to classify the data. In the tests, the Extra Trees model achieved the highest performance, reaching 100% classification accuracy, while the other models achieved 99.93%, 99.48% and 64.22% accuracy, respectively. These results show that the VMD-based method provides an effective and efficient solution for EEG-based lie detection and is suitable for real-time applications on portable EEG devices. Moreover, the proposed method is more advantageous than the complex approaches in the literature owing to its small number of channels and low processing time. The results show that this method has great potential for future studies and applications in the detection of deception.
{"title":"A novel method based on variational mode decomposition for lie detection","authors":"Nevzat Olgun","doi":"10.1016/j.eij.2025.100873","DOIUrl":"10.1016/j.eij.2025.100873","url":null,"abstract":"<div><div>In this study, a novel method based on Variational Mode Decomposition (VMD) is proposed for lie detection from EEG signals (EEGs). The study was conducted using the LieWaves database, and analyses were performed on 5-channel EEGs obtained from 27 subjects. The EEGs collected from the subjects during truthful and lying situations were divided into 2-second segments based on the moments when visual stimuli were presented, and a total of 1350 EEG signals were obtained. For lie detection, 3 channels were selected; the EEG signals were processed using the VMD technique, and time-domain features were extracted from each mode. Extra Trees, Random Forest, K-Nearest Neighbors and Support Vector Machine classification models were used to classify the data. In the tests, the Extra Trees model achieved the highest performance, reaching 100% classification accuracy, while the other models achieved 99.93%, 99.48% and 64.22% accuracy, respectively. These results show that the VMD-based method provides an effective and efficient solution for EEG-based lie detection and is suitable for real-time applications on portable EEG devices. Moreover, the proposed method is more advantageous than the complex approaches in the literature owing to its small number of channels and low processing time. The results show that this method has great potential for future studies and applications in the detection of deception.</div></div>","PeriodicalId":56010,"journal":{"name":"Egyptian Informatics Journal","volume":"33 ","pages":"Article 100873"},"PeriodicalIF":4.3,"publicationDate":"2025-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145841260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
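The abstract's pipeline of extracting time-domain features from each decomposed mode and classifying them with Extra Trees can be sketched as follows. The sketch does not perform real VMD: synthetic sinusoidal "modes" stand in for the decomposition output (a VMD implementation would supply these in practice), the class label simply modulates mode amplitude so the toy data are separable, and the segment counts and feature set are hypothetical.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

def time_domain_features(mode):
    """Common time-domain descriptors for one decomposed mode."""
    return [mode.mean(), mode.std(), np.abs(mode).max(),
            float(((mode[:-1] * mode[1:]) < 0).mean()),  # zero-crossing rate
            float(np.sqrt(np.mean(mode ** 2)))]          # RMS energy

rng = np.random.default_rng(0)
n_segments, n_modes, seg_len = 120, 4, 256  # hypothetical sizes
t = np.linspace(0.0, 1.0, seg_len)

# Stand-in for VMD output: each "2-second segment" yields n_modes modes
# whose amplitude depends on the (truth/lie) label, plus noise.
labels = rng.integers(0, 2, n_segments)
X = np.array([
    np.concatenate([
        time_domain_features((lab + 1) * np.sin(2 * np.pi * (m + 1) * t)
                             + 0.3 * rng.standard_normal(seg_len))
        for m in range(n_modes)
    ])
    for lab in labels
])

# Classify the per-mode feature vectors, as the abstract's best model does.
clf = ExtraTreesClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, labels, cv=5).mean()
```

Each segment becomes a compact vector of n_modes × 5 features, which is what keeps the approach lightweight enough for the portable, low-channel-count setting the abstract emphasizes.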