Pub Date : 2024-09-06DOI: 10.1016/j.jnca.2024.104021
Yash Sharma , Anshul Arora
The popularity of the Android operating system has itself become a source of privacy and malware concerns. To deal with such threats, researchers have proposed various detection approaches using static and dynamic features. Static analysis approaches are the most convenient for practical detection. However, several patterns of feature usage turn out to be similar across normal and malware datasets. This high similarity motivates us to rank and select only the distinguishing set of features. Hence, in this study, we present a novel Android malware detection system, termed PHIGrader, for ranking and evaluating the effectiveness of the three most commonly used static feature types, namely permissions, intents, and hardware components, in Android malware detection. To meet our goals, we individually rank the three feature types using frequency-based Multi-Criteria Decision Making (MCDM) techniques, namely TOPSIS and EDAS. The system then applies a novel detection algorithm, involving machine learning and deep learning classifiers, to these rankings and outputs the feature type and feature set that give the highest detection accuracy. The experimental results highlight that our proposed approach can effectively detect Android malware with 99.10% detection accuracy, achieved with the top 46 intents ranked using TOPSIS, outperforming permissions, hardware components, and rankings produced by other popular MCDM techniques. Furthermore, our experiments demonstrate that frequency-based MCDM rankings outperform statistical tests such as mutual information, the Pearson correlation coefficient, and the t-test. In addition, our proposed model outperforms popular feature ranking methods such as Chi-square, Principal Component Analysis (PCA), and Entropy-based Category Coverage Difference (ECCD), as well as other state-of-the-art Android malware detection techniques, in terms of detection accuracy.
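PHIGrader produces its rankings with TOPSIS over frequency criteria. As a rough illustration of the standard TOPSIS step only (the feature matrix, weights, and benefit/cost labels below are invented placeholders, not the paper's criteria), a minimal sketch:

```python
import math

def topsis_rank(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns) with TOPSIS.

    matrix  -- one row per alternative (e.g. per permission or intent)
    weights -- criterion weights, summing to 1
    benefit -- per criterion: True if larger values are better
    """
    m, n = len(matrix), len(matrix[0])
    # 1. Vector-normalise each column, then apply the weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) or 1.0 for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # 2. Ideal (best) and anti-ideal (worst) value per criterion.
    cols = list(zip(*v))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    anti = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    # 3. Closeness coefficient: distance to anti-ideal over total distance.
    scores = [math.dist(r, anti) / (math.dist(r, ideal) + math.dist(r, anti))
              for r in v]
    ranking = sorted(range(m), key=scores.__getitem__, reverse=True)
    return ranking, scores
```

With, say, malware frequency as a benefit criterion and benign frequency as a cost criterion, a feature frequent in malware but rare in benign apps ranks first.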
Title: PHIGrader: Evaluating the effectiveness of Manifest file components in Android malware detection using Multi Criteria Decision Making techniques. Journal of Network and Computer Applications, vol. 232, Article 104021.
Pub Date : 2024-09-05DOI: 10.1016/j.jnca.2024.104018
Muhammad Numan , Fazli Subhan , Mohd Nor Akmal Khalid , Wazir Zada Khan , Hiroyuki Iida
The security of Wireless Sensor Networks (WSNs) is a serious concern due to the lack of hardware protection on sensor nodes. One common attack on WSNs is the cloning attack, where an adversary captures legitimate nodes, creates multiple replicas, and reprograms them for malicious activities. An efficient defense against this attack is therefore essential. Several witness node-based techniques have been developed to solve this issue, but they often suffer from high communication and memory overheads or low detection accuracy, making them less effective. In response to these limitations, we propose a novel approach called Hybrid Random Walk assisted Zone-based (HRWZ) clone node detection for static WSNs. The HRWZ method relies on the random selection of a Zone-Leader (Z_L) to detect clones effectively while maintaining network lifespan. We compared HRWZ with known witness node-based techniques, namely Randomized Multicast (RM), Line Selected Multicast (LSM), Random Walk (RAWL), and Table-assisted RAndom WaLk (TRAWL), under different simulation settings. The simulation results confirmed the improved performance and reliability of the proposed HRWZ technique. Our approach reduces communication costs and provides an effective way of selecting Z_L for high-probability clone node detection.
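A toy sketch of the witness-node idea behind random-walk schemes such as RAWL (the topology, claim format, and walk length below are invented, and this is not the paper's zone-leader protocol): each location claim is forwarded along a random walk, every visited node stores it as a witness, and a clone surfaces when one node ID ends up stored with two different locations.

```python
import random

def random_walk_witnesses(entry, neighbors, steps, rng):
    """Forward a claim along a random walk; every visited node is a witness."""
    path, node = [entry], entry
    for _ in range(steps):
        node = rng.choice(neighbors[node])
        path.append(node)
    return path

def detect_clones(claims, neighbors, steps=8, seed=1):
    """claims: (node_id, claimed_location, entry_node) triples.
    A clone is flagged when some witness already stores the same node_id
    with a different location."""
    rng = random.Random(seed)
    store = {}                       # witness -> {node_id: location}
    conflicts = []
    for node_id, loc, entry in claims:
        for w in random_walk_witnesses(entry, neighbors, steps, rng):
            seen = store.setdefault(w, {})
            if node_id in seen and seen[node_id] != loc:
                conflicts.append((w, node_id))
            seen[node_id] = loc
    return conflicts
```

Two claims for the same ID at different locations whose walks intersect produce a conflict; consistent claims do not.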
Title: Clone node detection in static wireless sensor networks: A hybrid approach. Journal of Network and Computer Applications, vol. 232, Article 104018.
Pub Date : 2024-08-30DOI: 10.1016/j.jnca.2024.104007
Cong Wang , Tong Zhou , Maode Ma , Yuwen Xiong , Xiankun Zhang , Chao Liu
Named Data Networking (NDN) aims to establish an efficient content delivery architecture. In NDN, secure and effective identity authentication schemes ensure secure communication between producers and routers. Currently, there is no feasible solution for authenticating mobile producers in NDNs. Identity authentication schemes from other networks fall short in security or performance, with issues such as privacy leakage, difficulty in establishing cross-domain trust, and long handover delays, and they are not fully adaptable to the security requirements of NDNs. Additionally, producer mobility was not fully considered in the initial design of NDN. This paper first revises the structure of packets and routers to support producer identity authentication and mobility. On this basis, it proposes a secure and efficient certificateless ECC-based producer identity authentication scheme (CL-BPA), comprising initial authentication and re-authentication, which achieves rapid switch authentication and integrates blockchain to avoid single points of failure. Using the Canetti and Krawczyk (CK) adversarial model and informal security analysis, the proposed CL-BPA scheme is shown to resist anonymity attacks, identity forgery attacks, and man-in-the-middle attacks. The performance analysis demonstrates that CL-BPA performs well in terms of computation delay, communication cost, smart contract execution time, average response delay, and throughput.
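CL-BPA itself is a certificateless ECC and blockchain construction, which is not reproduced here. As a much simpler stand-in that only illustrates the re-authentication idea (a new router recomputes a credential instead of rerunning the full initial handshake), here is a toy HMAC token scheme; the key handling and field layout are invented, not the paper's.

```python
import hashlib
import hmac

def issue_token(master_key, producer_id, epoch):
    """Router side: derive a per-epoch re-authentication token for a producer."""
    msg = f"{producer_id}|{epoch}".encode()
    return hmac.new(master_key, msg, hashlib.sha256).digest()

def reauthenticate(master_key, producer_id, epoch, presented):
    """A new router recomputes the expected token and compares in
    constant time, avoiding a full handshake on every handover."""
    expected = issue_token(master_key, producer_id, epoch)
    return hmac.compare_digest(expected, presented)
```

A token issued for one epoch verifies only for that epoch, so a stale or replayed credential is rejected.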
Title: An efficient certificateless blockchain-enabled authentication scheme to secure producer mobility in named data networks. Journal of Network and Computer Applications, vol. 232, Article 104007.
Pub Date : 2024-08-30DOI: 10.1016/j.jnca.2024.104008
Guoqing Tian, Li Pan, Shijun Liu
Cloud gaming (CG), as an emergent computing paradigm, is revolutionizing the gaming industry. Cloud gaming service providers (CGSPs) have begun to integrate edge computing with the cloud to provide services, aiming to maximize gaming service revenue while accounting for the costs incurred and the benefits generated. However, maximizing this revenue is non-trivial: future requests are not known beforehand, and poor resource provisioning may result in exorbitant costs. In addition, the edge resource provisioning (ERP) problem in CG necessitates a trade-off between cost and the inevitable queuing issues in CG systems. To address this, we propose ERPOL (ERP Online), a convenient and efficient approach for CGSPs to formulate ERP strategies without requiring any future information. The performance of ERPOL has been theoretically validated and experimentally evaluated. Experiments driven by real-world traces show that it achieves significant cost savings. The proposed approach has the potential to transform how CGSPs manage their infrastructure.
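ERPOL's actual analysis is not reproduced here, but the flavor of provisioning without future knowledge can be shown with the classic break-even (ski-rental style) rule: rent edge capacity for each shortfall, and commit to a unit once the rent paid for it would cover its purchase price. The costs and demand trace below are illustrative, not the paper's model.

```python
def online_provision(demands, rent_cost, buy_cost):
    """Serve demand online with no future knowledge: rent any shortfall,
    and buy one capacity unit each time cumulative rent paid reaches
    the purchase price (the break-even rule)."""
    owned, spent, rent_paid = 0, 0.0, 0.0
    for d in demands:
        shortfall = max(d - owned, 0)
        cost = shortfall * rent_cost
        spent += cost
        rent_paid += cost
        while rent_paid >= buy_cost:   # break-even reached: commit
            owned += 1
            spent += buy_cost
            rent_paid -= buy_cost
    return owned, spent
```

The rule bounds regret against hindsight: it never pays more than roughly twice what an offline strategy with full knowledge of the trace would.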
Title: An online cost optimization approach for edge resource provisioning in cloud gaming. Journal of Network and Computer Applications, vol. 232, Article 104008.
Pub Date : 2024-08-30DOI: 10.1016/j.jnca.2024.104009
Xiaolei Yang, Zhixin Xia, Junhui Song, Yongshan Liu
To accelerate the convergence and improve the accuracy of a federated shared model, this paper proposes a Federated Transfer Learning method based on frozen network parameters. The experiments freeze two, three, and four network layers across eight experimental tasks and two target users, use homomorphic-encryption-based Federated Transfer Learning for the confidential transfer of parameters, and compare the resulting accuracy, convergence speed, and loss function values. The experiments show that the model with three frozen layers achieves the highest accuracy, averaging 0.9165 and 0.9164 for the two target users, and the fastest convergence, completing within 25 iterations. Its training times for the two users are also the shortest, at 1732.0 s and 1787.3 s, respectively. The lowest loss value is 0.181 for User-II and 0.2061 for User-III. Finally, unlabeled, non-empty enterprise credit data is predicted, with 61.08% of users classified as low-risk. By freezing source-domain network parameters in a shared network, the approach achieves rapid convergence of the target network model while saving computational resources.
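The core trick, keeping source-domain parameters fixed so that only target-side weights are updated, can be shown without any ML framework on a toy two-weight model (the model, data, and learning rate are invented; the paper freezes whole network layers under homomorphic encryption):

```python
def train(xs, ys, w1, w2, lr=0.01, epochs=200, freeze_w1=True):
    """SGD on the toy model y_hat = w2 * w1 * x with squared loss.
    Freezing w1 mimics keeping source-domain layers fixed while only
    the target-side weight w2 is updated."""
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = w2 * w1 * x - y
            w2 -= lr * err * w1 * x        # gradient step on w2
            if not freeze_w1:
                w1 -= lr * err * w2 * x    # skipped when w1 is frozen
    return w1, w2
```

With w1 frozen, training touches only w2; fewer parameters to update is exactly where the computational saving comes from.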
Title: Credit risk prediction for small and micro enterprises based on federated transfer learning frozen network parameters. Journal of Network and Computer Applications, vol. 232, Article 104009.
Pub Date : 2024-08-29DOI: 10.1016/j.jnca.2024.104006
Quang Truong Vu , Phuc Tan Nguyen , Thi Hanh Nguyen , Thi Thanh Binh Huynh , Van Chien Trinh , Mikael Gidlund
In the Internet of Things (IoT) era, wireless sensor networks play a critical role in communication systems. One of the most crucial problems in wireless sensor networks is sensor deployment: placing sensors within the surveillance area so that the two fundamental criteria of coverage and connectivity are guaranteed. In this paper, we solve the multi-objective deployment problem of maximizing area coverage while minimizing the number of nodes used. Since Harmony Search is a simple yet suitable algorithm for this task, we propose a Harmony Search algorithm with several enhancements, including heuristic initialization, random sampling of sensor types, weighted fitness evaluation, and different components in the fitness function, to solve sensor deployment in a heterogeneous wireless sensor network where sensors have different sensing ranges. On top of that, a probabilistic sensing model is used to reflect how sensors work realistically. We also extend our solution to 3D areas and propose a realistic 3D dataset to evaluate it. The simulation results show that the proposed algorithms solve the area coverage problem more efficiently than previous algorithms. In a large-scale evaluation, our best proposal improves the coverage ratio by 10.20% and reduces cost by 27.65% compared to the best baseline.
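For readers unfamiliar with the base metaheuristic: Harmony Search keeps a memory of candidate solutions, improvises a new one by drawing each variable from memory (rate hmcr) with occasional pitch adjustment (rate par, bandwidth bw) or fresh random sampling, and replaces the worst stored solution when the new one is better. A generic sketch follows; the sphere objective, bounds, and parameter values are illustrative stand-ins for the paper's coverage/cost fitness.

```python
import random

def harmony_search(obj, dim, lo, hi, memory_size=10, hmcr=0.9, par=0.3,
                   bw=0.1, iters=2000, seed=0):
    """Minimise obj over [lo, hi]^dim with basic Harmony Search."""
    rng = random.Random(seed)
    mem = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(memory_size)]
    scores = [obj(h) for h in mem]
    for _ in range(iters):
        new = []
        for j in range(dim):
            if rng.random() < hmcr:            # draw note j from memory
                x = rng.choice(mem)[j]
                if rng.random() < par:         # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                              # fresh random note
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        s = obj(new)
        worst = max(range(memory_size), key=scores.__getitem__)
        if s < scores[worst]:                  # keep only improvements
            mem[worst], scores[worst] = new, s
    best = min(range(memory_size), key=scores.__getitem__)
    return mem[best], scores[best]
```

The paper's enhancements (heuristic initialization, per-sensor-type sampling, weighted fitness) would slot into the initialization and improvisation steps above.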
Title: Striking the perfect balance: Multi-objective optimization for minimizing deployment cost and maximizing coverage with Harmony Search. Journal of Network and Computer Applications, vol. 232, Article 104006.
Pub Date : 2024-08-23DOI: 10.1016/j.jnca.2024.104004
Arash Mahboubi , Khanh Luong , Hamed Aboutorab , Hang Thanh Bui , Geoff Jarrad , Mohammed Bahutair , Seyit Camtepe , Ganna Pogrebna , Ejaz Ahmed , Bazara Barry , Hannah Gately
In the rapidly changing cybersecurity landscape, threat hunting has become a critical proactive defense against sophisticated cyber threats. While traditional security measures are essential, their reactive nature often falls short in countering malicious actors’ increasingly advanced tactics. This paper explores the crucial role of threat hunting, a systematic, analyst-driven process aimed at uncovering hidden threats lurking within an organization’s digital infrastructure before they escalate into major incidents. Despite its importance, the cybersecurity community grapples with several challenges, including the lack of standardized methodologies, the need for specialized expertise, and the integration of cutting-edge technologies like artificial intelligence (AI) for predictive threat identification. To tackle these challenges, this survey paper offers a comprehensive overview of current threat hunting practices, emphasizing the integration of AI-driven models for proactive threat prediction. Our research explores critical questions regarding the effectiveness of various threat hunting processes and the incorporation of advanced techniques such as augmented methodologies and machine learning. Our approach involves a systematic review of existing practices, including frameworks from industry leaders like IBM and CrowdStrike. We also explore resources for intelligence ontologies and automation tools. The background section clarifies the distinction between threat hunting and anomaly detection, emphasizing systematic processes crucial for effective threat hunting. We formulate hypotheses based on hidden states and observations, examine the interplay between anomaly detection and threat hunting, and introduce iterative detection methodologies and playbooks for enhanced threat detection. Our review encompasses supervised and unsupervised machine learning approaches, reasoning techniques, graph-based and rule-based methods, as well as other innovative strategies. 
We identify key challenges in the field, including the scarcity of labeled data, imbalanced datasets, the need for integrating multiple data sources, the rapid evolution of adversarial techniques, and the limited availability of human expertise and data intelligence. The discussion highlights the transformative impact of artificial intelligence on both threat hunting and cybercrime, reinforcing the importance of robust hypothesis development. This paper contributes a detailed analysis of the current state and future directions of threat hunting, offering actionable insights for researchers and practitioners to enhance threat detection and mitigation strategies in the ever-evolving cybersecurity landscape.
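Hypothesis-driven hunting, as framed in the survey, reduces to encoding a suspected attacker behaviour as a testable predicate and running it over telemetry, then refining it iteratively. A deliberately minimal sketch (the event fields and threshold are invented, not from any real log schema):

```python
def hunt(events, hypothesis):
    """One hunt iteration: apply a hypothesis (a testable predicate
    encoding suspected attacker behaviour) to telemetry, keep the leads."""
    return [e for e in events if hypothesis(e)]

def bulk_offhours(e):
    """Hypothesis: a large outbound transfer outside business hours."""
    return e["bytes_out"] > 10_000_000 and not 8 <= e["hour"] < 18
```

Leads that survive triage feed the next, narrower hypothesis, which is the iterative loop the playbooks formalise.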
Title: Evolving techniques in cyber threat hunting: A systematic review. Journal of Network and Computer Applications, vol. 232, Article 104004.
Pub Date : 2024-08-22DOI: 10.1016/j.jnca.2024.104005
Baoshan Lu , Junli Fang , Junxiu Liu , Xuemin Hong
In this paper, we address the challenge of minimizing system energy consumption for task offloading within non-line-of-sight (NLoS) mobile edge computing (MEC) environments. Our approach integrates an active reconfigurable intelligent surface (RIS) and employs a hybrid transmission scheme combining time division multiple access (TDMA) and non-orthogonal multiple access (NOMA) to enhance the quality of service (QoS) for user task offloading. The formulation of this problem as a non-convex optimization issue presents significant challenges due to its inherent complexity. To overcome this, we introduce an innovative method termed element refinement-based differential evolution (ERBDE). Initially, through rigorous theoretical analysis, we optimally determine the allocation of local computation resources, computation resources at the base station (BS), and transmit power of users, while maintaining fixed values for the offloading ratio, amplification factor, phase of the reflecting element, and the transmission period. Subsequently, we employ the differential evolution (DE) algorithm to iteratively fine-tune the offloading ratio, amplification factor, phase of the reflecting element, and transmission period towards near-optimal configurations. Our simulation results demonstrate that the implementation of active RIS-supported task offloading utilizing the hybrid TDMA-NOMA scheme results in an average system energy consumption reduction of 80.3%.
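ERBDE builds on differential evolution. A generic DE/rand/1/bin sketch is below, minimising a placeholder objective; in the paper the decision vector would hold the offloading ratio, amplification factor, reflecting-element phase, and transmission period, scored by the system energy model.

```python
import random

def differential_evolution(obj, bounds, pop_size=20, f=0.8, cr=0.9,
                           iters=300, seed=0):
    """DE/rand/1/bin: mutate with a scaled difference vector, cross over
    binomially, and keep the trial only if it scores better."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [obj(p) for p in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)        # force at least one mutated gene
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < cr or j == jrand:
                    x = a[j] + f * (b[j] - c[j])
                else:
                    x = pop[i][j]
                trial.append(min(max(x, lo), hi))
            s = obj(trial)
            if s < scores[i]:                 # greedy one-to-one selection
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]
```

The "element refinement" of ERBDE would replace the plain per-gene mutation loop here; the surrounding evolve-and-select skeleton is the same.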
Energy efficient multi-user task offloading through active RIS with hybrid TDMA-NOMA transmission
Journal of Network and Computer Applications, Volume 232, Article 104005.
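The outer loop the abstract describes — fixing the analytically solvable variables and letting differential evolution search the remaining ones — rests on a standard DE/rand/1/bin iteration. The sketch below is a generic DE minimizer under box constraints, not the authors' ERBDE; in their setting the decision vector would (hypothetically) hold the offloading ratio, amplification factor, reflecting-element phase, and transmission period.

```python
import random

def differential_evolution(objective, bounds, pop_size=20, F=0.5, CR=0.9, iters=200):
    """Minimize `objective` over box constraints with a basic DE/rand/1/bin loop."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [objective(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # pick three distinct individuals other than the current one
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            trial = []
            for d in range(dim):
                if random.random() < CR:
                    # mutate: donor = a + F * (b - c), clipped to the box
                    lo, hi = bounds[d]
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])
                    trial.append(min(max(v, lo), hi))
                else:
                    trial.append(pop[i][d])  # inherit from the current individual
            f = objective(trial)
            if f < fitness[i]:  # greedy one-to-one selection
                pop[i], fitness[i] = trial, f
    best = min(range(pop_size), key=fitness.__getitem__)
    return pop[best], fitness[best]
```

Replacing `objective` with the system energy model (with the analytically optimized variables folded in) recovers the structure of the hybrid scheme's outer search.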
Pub Date : 2024-08-22DOI: 10.1016/j.jnca.2024.104001
Mengjie Lv, Xuanli Liu, Hui Dong, Weibei Fan
With the rapid growth of data volume, the escalating complexity of data businesses, and the increasing reliance on the Internet in daily life and production, the scale of data centers is constantly expanding. The data center network (DCN) is the bridge that connects large-scale servers in data centers for large-scale distributed computing. Building a DCN structure that is flexible and cost-effective while keeping its topological properties unchanged during network expansion remains a challenging problem. In this paper, we propose an expandable and cost-effective DCN, namely HHCube, which is based on the half hypercube structure. We further analyze several characteristics of HHCube, including its connectivity, diameter, and bisection bandwidth. We also design an efficient algorithm to find the shortest path between any two distinct nodes and present a fault-tolerant routing scheme that obtains a fault-tolerant path between any two distinct fault-free nodes in HHCube. In addition, we present two local diagnosis algorithms to determine the status of nodes in HHCube under the PMC model and the MM* model, respectively. Our results demonstrate that even with up to 25% faulty nodes in HHCube, both algorithms achieve a correct diagnosis rate exceeding 90%. Finally, we compare HHCube with state-of-the-art DCNs including Fat-Tree, DCell, BCube, FiConn, and HSDC, and the experimental results indicate that HHCube is an excellent candidate for constructing expandable and cost-effective DCNs.
An expandable and cost-effective data center network
Journal of Network and Computer Applications, Volume 232, Article 104001.
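The shortest-path property the abstract exploits is easiest to see on the full hypercube, where node labels are bit strings and neighbors differ in exactly one bit: flipping the differing bits one at a time yields a path whose length equals the Hamming distance. The sketch below shows this textbook routing on a standard n-cube; HHCube's half-hypercube variant and its fault-tolerant scheme are not reproduced here.

```python
def hypercube_route(src: int, dst: int, dim: int) -> list[int]:
    """Shortest path between two nodes of a `dim`-dimensional hypercube.

    Nodes are labeled 0 .. 2**dim - 1. Each differing bit is corrected in
    turn, so the path length equals the Hamming distance of the labels.
    """
    path = [src]
    cur = src
    diff = src ^ dst  # set bits mark the dimensions that still differ
    for d in range(dim):
        if diff & (1 << d):
            cur ^= 1 << d  # traverse dimension d
            path.append(cur)
    return path
```

For example, routing from node 000 to node 101 in a 3-cube corrects bits 0 and 2, visiting three nodes in total.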
Pub Date : 2024-08-20DOI: 10.1016/j.jnca.2024.104003
Ji Wan , Kai Hu , Jie Li , Yichen Guo , Hao Su , Shenzhang Li , Yafei Ye
The consensus algorithm is the core of a permissioned blockchain: it directly affects the performance and scalability of the system. Performance is limited by the computing power and network bandwidth of a single leader node, and since most existing blockchain systems adopt a mesh or star topology, blockchain performance decreases rapidly as the number of nodes increases. To solve this problem, we first design the n-k cluster tree and a corresponding generation algorithm, which supports rapid reconfiguration of nodes. We then propose Zebra, a cluster tree-based consensus algorithm. Compared with PBFT, it achieves higher throughput and supports more nodes under the same hardware conditions. However, the tree network topology improves scalability at the cost of increased latency among nodes. To reduce transaction latency, we design the Pipeline-Zebra consensus algorithm, which further improves the performance of blockchain systems in a tree network topology through parallel message propagation and block validation. The message complexity of the algorithm is O(n). Experimental results show that the proposed algorithm achieves up to 2.25 times the performance of the PBFT algorithm and supports four times as many nodes under the same hardware.
Zebra: A cluster-aware blockchain consensus algorithm
Journal of Network and Computer Applications, Volume 232, Article 104003.
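The O(n) versus O(n^2) message-complexity gap behind this design, and the latency cost of going deeper in the tree, can be illustrated with back-of-the-envelope counts. The sketch below uses simplified, assumed message counts (pre-prepare plus all-to-all prepare and commit for flat PBFT; one message down and one aggregated vote up per tree edge); it is a rough model, not the exact Zebra or Pipeline-Zebra protocol.

```python
def pbft_messages(n: int) -> int:
    """Per-consensus messages in flat PBFT (simplified count):
    pre-prepare to n-1 replicas, then all-to-all prepare and commit rounds."""
    return (n - 1) + 2 * n * (n - 1)

def tree_messages(n: int) -> int:
    """Tree-structured dissemination: one proposal down and one aggregated
    vote up per tree edge, so the total stays linear in n."""
    return 2 * (n - 1)

def tree_depth(n: int, fanout: int) -> int:
    """Levels of a complete `fanout`-ary tree holding n nodes; latency grows
    with depth even though the total message count stays O(n)."""
    depth, capacity, level = 1, 1, 1
    while capacity < n:
        level *= fanout
        capacity += level
        depth += 1
    return depth
```

At n = 100 the quadratic all-to-all rounds already dominate by two orders of magnitude, which is why pipelining is needed to hide the extra tree depth rather than to reduce message count.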