Enhancing information security through brainprint: A longitudinal study on ERP identity authentication
Pub Date: 2024-12-16 | DOI: 10.1016/j.cose.2024.104281 | Computers & Security, Vol. 150, Article 104281
Yufeng Zhang, Hongxin Zhang, Yijun Wang, Xiaorong Gao, Chen Yang
Reliable identity authentication is indispensable for information security. Brainprint is emerging as a promising biometric modality that authenticates users through their brain signals, offering a glimpse into a more secure future. However, questions surrounding its long-term stability and individual uniqueness require further exploration. To address this, we developed a brainprint authentication system based on rapid presentation of the user's own face to evoke event-related potentials (ERPs). We proposed a novel electroencephalogram (EEG) model to trace ERP source responses, and the ERP source signals were then mapped onto a multivariate Gaussian model derived from registered templates for identity authentication. We recorded the ERP brainprints of 15 participants and authenticated their identities on the 7th, 80th, and 200th day to evaluate the permanence of the brainprint system. Additionally, a total of 551 intrusion attempts were simulated, 380 of which involved premeditated attacks, to verify the individual uniqueness of the ERP. Behavioral tests were introduced to verify that intruders were capable of imitating clients' behaviors. Under the proposed EEG model, we achieved a client login success rate of 81% while rejecting all impostor attempts. These results provide preliminary evidence for the permanence and uniqueness of the brainprint in our system, offering new perspectives on the future of identity authentication for information security.
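The template-matching step can be illustrated with a minimal sketch: ERP source features from a client's registration trials are fitted with a multivariate Gaussian, and a login attempt is accepted when its log-likelihood under that template exceeds a calibrated threshold. The feature values, dimensions, and threshold rule below are illustrative assumptions; the paper's EEG source model and actual feature extraction are not reproduced here.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Illustrative stand-in for ERP source features (n_trials x n_features)
# extracted from a client's registration sessions.
template_trials = rng.normal(loc=0.5, scale=0.2, size=(40, 8))

# Registered template: mean vector and regularized covariance of the client's trials.
mu = template_trials.mean(axis=0)
cov = np.cov(template_trials, rowvar=False) + 1e-6 * np.eye(template_trials.shape[1])
template = multivariate_normal(mean=mu, cov=cov)

# Accept a login attempt if its log-likelihood under the template exceeds a
# threshold calibrated on the registration trials themselves (5th percentile here).
threshold = np.quantile(template.logpdf(template_trials), 0.05)

def authenticate(erp_features: np.ndarray) -> bool:
    """Return True if the attempt is consistent with the registered template."""
    return bool(template.logpdf(erp_features) >= threshold)

genuine_attempt = rng.normal(loc=0.5, scale=0.2, size=8)    # same statistics as the client
impostor_attempt = rng.normal(loc=-0.5, scale=0.2, size=8)  # different ERP statistics
print(authenticate(genuine_attempt), authenticate(impostor_attempt))
```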
{"title":"Enhancing information security through brainprint: A longitudinal study on ERP identity authentication","authors":"Yufeng Zhang , Hongxin Zhang , Yijun Wang , Xiaorong Gao , Chen Yang","doi":"10.1016/j.cose.2024.104281","DOIUrl":"10.1016/j.cose.2024.104281","url":null,"abstract":"<div><div>Reliable identity authentication is indispensable for information security. Brainprint emerges as a promising biometric authentication through brain signal, offering a glimpse into a secure future. However, questions surrounding its long-term stability and individual uniqueness necessitate further exploration. To address this, we developed a brainprint authentication system anchored in presenting self-face rapidly to evoke event related potential (ERP). A novel electroencephalogram model was proposed to trace ERP source responses. Then the ERP source signals were mapped into a multivariate Gaussian model derived from registered templates for identity authentication. We recorded the ERP brainprint of 15 participants and authenticated their identities on the 7th, 80th and 200th day to evaluate the permanence of the brainprint system. Additionally, totally 551 invasion attempts were simulated, with 380 instances involving premeditated attacks to verify individual uniqueness in ERP. Behavioral tests were introduced to verify that intruders are capable of imitating clients’ behaviors. Under the proposed EEG model, we achieved an impressive client login success rate of 81%, successfully warding off all impostor attempts. These results provide preliminary evidence supporting the permanence and uniqueness of brainprint in our system, offering new perspectives for the future information security of identity authentication.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"150 ","pages":"Article 104281"},"PeriodicalIF":4.8,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143142735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unveiling the veiled: An early stage detection of fileless malware
Pub Date: 2024-12-16 | DOI: 10.1016/j.cose.2024.104231 | Computers & Security, Vol. 150, Article 104231
Narendra Singh, Somanath Tripathy
Threat actors continuously evolve their tactics and techniques to evade traditional security solutions. Fileless malware attacks are one such advancement: they operate directly within system memory and leave no footprint on disk, making them challenging to detect. Current state-of-the-art approaches detect fileless attacks only at the final (post-infection) stage, yet detecting attacks at an early stage is crucial to prevent potential damage and data breaches. In this work, we propose Argus, a system for detecting fileless malware at an early stage. Argus extracts key features from memory dumps acquired from suspicious processes in real time and generates explained features. It then correlates the explained features with the MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) framework to identify fileless malware attacks before their operational stage. The experimental results show that Argus successfully identified 4356 of 5026 fileless malware samples by the operational stage: 2978 samples were detected in the pre-operational phase and 1378 in the operational phase.
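The abstract does not specify how explained features are correlated with ATT&CK, so the sketch below only illustrates the general idea: map explained memory-dump features to technique IDs and raise an alert once enough pre-operational (staging) techniques accumulate. All feature names, technique assignments, and thresholds are hypothetical and are not taken from Argus.

```python
# Hypothetical mapping from explained memory-dump features to MITRE ATT&CK technique IDs.
FEATURE_TO_TECHNIQUE = {
    "powershell_encoded_command": "T1059.001",  # Command and Scripting Interpreter: PowerShell
    "reflective_dll_injection":   "T1055.001",  # Process Injection: DLL Injection
    "wmi_event_subscription":     "T1546.003",  # Event Triggered Execution: WMI Subscription
    "registry_run_key":           "T1547.001",  # Boot or Logon Autostart Execution: Run Keys
}

# Techniques treated here as pre-operational (staging) indicators; illustrative only.
PRE_OPERATIONAL = {"T1059.001", "T1055.001", "T1546.003"}

def correlate(explained_features: list[str], alert_threshold: int = 2) -> dict:
    """Map explained features to techniques and flag a process before its operational stage."""
    techniques = {FEATURE_TO_TECHNIQUE[f] for f in explained_features if f in FEATURE_TO_TECHNIQUE}
    staging_hits = techniques & PRE_OPERATIONAL
    return {"techniques": sorted(techniques), "early_alert": len(staging_hits) >= alert_threshold}

print(correlate(["powershell_encoded_command", "reflective_dll_injection"]))
```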
{"title":"Unveiling the veiled: An early stage detection of fileless malware","authors":"Narendra Singh, Somanath Tripathy","doi":"10.1016/j.cose.2024.104231","DOIUrl":"10.1016/j.cose.2024.104231","url":null,"abstract":"<div><div>The threat actors continuously evolve their tactics and techniques in a novel form to evade traditional security solutions. Fileless malware attacks are one such advancement, which operates directly within system memory, leaving no footprint on the disk, so became challenging to detect. Meanwhile, the current state-of-the-art approaches detect fileless attacks at the final (post-infection) stage, although, detecting attacks at an early-stage is crucial to prevent potential damage and data breaches. In this work, we propose an early-stage detection system named <em>Argus</em> to detect fileless malware at early-stage. <em>Argus</em> extracts key features from acquired memory dumps of suspicious processes in real-time and generates explained features. It then correlates the explained features with the MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) framework to identify fileless malware attacks before their operational stage. The experimental results show that <em>Argus</em> could successfully identify, 4356 fileless malware samples (out of 5026 samples) during the operational stage. Specifically, 2978 samples are detected in the pre-operational phase, while 1378 samples are detected in the operational phase.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"150 ","pages":"Article 104231"},"PeriodicalIF":4.8,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143142725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-domains personalized local differential privacy frequency estimation mechanism for utility optimization
Pub Date: 2024-12-15 | DOI: 10.1016/j.cose.2024.104273 | Computers & Security, Vol. 150, Article 104273
Yunfei Li, Xiaodong Fu, Li Liu, Jiaman Ding, Wei Peng, Lianyin Jia
Local Differential Privacy (LDP) has garnered considerable attention in recent years because it does not rely on trusted third parties and offers low interactivity and high operational efficiency. However, current LDP frequency estimation mechanisms aggregate data using different privacy budgets within the same domain of attribute values, overlooking aggregation requirements across different domains of attribute values. This limits the potential for enhancing data utility under fixed privacy budgets and for meeting user preferences across multiple domains of attribute values and privacy budgets. To address this issue, we define a Multi-Domains Personalized Local Differential Privacy (MDPLDP) model that allows users to freely choose domains of attribute values and privacy budgets according to their privacy preferences. Based on the MDPLDP model, we further propose two new frequency estimation mechanisms: MDPLDP-Generalized Randomized Response and MDPLDP-basic Randomized Aggregatable Privacy-Preserving Ordinal Response. These mechanisms support cross-domain data aggregation and optimize data utility by adjusting the domains of attribute values and increasing privacy budgets. Theoretical analysis reveals that the new mechanisms have lower estimation errors than traditional LDP mechanisms. Experiments on real and synthetic datasets demonstrate that the proposed mechanisms effectively reduce estimation errors and enhance the utility of data-frequency estimation.
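The paper's MDPLDP mechanisms are not detailed in the abstract, but the building block they extend, generalized randomized response (GRR) with user-chosen privacy budgets, can be sketched: each user perturbs a value under their own budget, and the aggregator de-biases each budget group separately before combining the estimates. Domain choice and the paper's specific optimizations are omitted; this is a minimal sketch, not the MDPLDP mechanisms themselves.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)

def grr_perturb(value: int, k: int, eps: float) -> int:
    """Generalized randomized response over a domain of size k with budget eps."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p:
        return value                      # report the true value with probability p
    other = rng.integers(0, k - 1)        # otherwise one of the other k-1 values, uniformly
    return other if other < value else other + 1

def estimate_frequencies(reports, k: int) -> np.ndarray:
    """Unbiased frequency estimate, de-biasing each privacy-budget group separately."""
    groups = defaultdict(list)
    for value, eps in reports:            # reports: list of (perturbed value, user's budget)
        groups[eps].append(value)
    estimated_counts = np.zeros(k)
    for eps, values in groups.items():
        n = len(values)
        p = np.exp(eps) / (np.exp(eps) + k - 1)
        q = 1.0 / (np.exp(eps) + k - 1)
        counts = np.bincount(values, minlength=k)
        estimated_counts += (counts - n * q) / (p - q)   # group-level unbiased counts
    return estimated_counts / len(reports)

# Each user picks a personal budget; the true distribution is skewed toward value 0.
k = 4
true_values = rng.choice(k, size=5000, p=[0.5, 0.3, 0.15, 0.05])
budgets = rng.choice([0.5, 1.0, 2.0], size=5000)
reports = [(grr_perturb(v, k, e), e) for v, e in zip(true_values, budgets)]
print(np.round(estimate_frequencies(reports, k), 3))
```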
{"title":"Multi-domains personalized local differential privacy frequency estimation mechanism for utility optimization","authors":"Yunfei Li , Xiaodong Fu , Li Liu , Jiaman Ding , Wei Peng , Lianyin Jia","doi":"10.1016/j.cose.2024.104273","DOIUrl":"10.1016/j.cose.2024.104273","url":null,"abstract":"<div><div>Local Differential Privacy (LDP) has garnered considerable attention in recent years because it does not rely on trusted third parties and has low interactivity and high operational efficiency. However, current LDP frequency estimation mechanisms aggregate data using different privacy budgets within the same domain of attribute values, overlooking the aggregation requirements across different domains of attribute values. This limits the potential for enhancing the data utility under fixed privacy budgets and meeting user preferences in multiple domains of attribute values and privacy budgets. To address this issue, we define a Multi-Domains Personalized Local Differential Privacy (MDPLDP) model that allows users to freely choose domains of attribute values and privacy budgets according to their privacy preferences. Furthermore, based on the MDPLDP model, two new frequency estimation mechanisms are proposed: MDPLDP-Generalized Randomized Response and MDPLDP-basic Randomized Aggregatable Privacy-Preserving Ordinal Response. These mechanisms support cross-domains data aggregation and optimize data utility by adjusting the domains of attribute values and increasing privacy budgets. Theoretical analysis reveals that these new mechanisms have lower estimation errors than the traditional LDP mechanisms. Experiments on real and synthetic datasets demonstrate that the proposed mechanisms effectively reduce estimation errors and enhance the utility of data-frequency estimation.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"150 ","pages":"Article 104273"},"PeriodicalIF":4.8,"publicationDate":"2024-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143142379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social-Hunter: A social heuristics-based approach to early unveiling unknown malicious logins using valid accounts
Pub Date: 2024-12-14 | DOI: 10.1016/j.cose.2024.104269 | Computers & Security, Vol. 150, Article 104269
Mingsheng Tang, Binbin Ge
Using valid accounts has become a prevalent tactic among Advanced Persistent Threat (APT) actors for executing malicious logins. By exploiting stolen credentials, they bypass rule-based and traffic-based detection mechanisms, enabling sustained network infiltration without triggering anomalous-traffic alerts. The scarcity of feature-rich datasets and labeled samples for identifying malicious logins by unknown APT actors presents a significant challenge. To address this, we propose Social-Hunter, an innovative approach for detecting unknown malicious logins without prior knowledge of, or training on, specific APT behaviors. Social-Hunter integrates sociological heuristics and multi-viewpoint modeling to partition groups from social and role-based perspectives. Iterative partitioning assesses whether new login nodes fit within established group contexts, thereby identifying potential malicious intent. A threshold parameter evaluates source-node capability during cross-group logins, flagging insufficient capability as an indicator of malicious behavior. The core algorithm detects deviations from social norms and predefined thresholds. Evaluation on a 58-day dataset of authentication events from a real-world Los Alamos National Laboratory (LANL) network demonstrates Social-Hunter's effectiveness: it achieves a true positive rate (TPR) nearing 90% with a false positive rate (FPR) of only 0.2%. Comparative analysis against state-of-the-art unsupervised methods such as graph learning, Local Outlier Factor (LOF), Isolation Forest (IF), One-Class Support Vector Machine (One-Class SVM), Ensemble Multi-Detector (EMD), and AutoEncoder (AE) shows that Social-Hunter improves TPR by at least 5% and reduces FPR by more than 77%. In practical event auditing for threat hunting, Social-Hunter maintains a minimal false positive rate of 0.00014% with nearly 90% TPR. Over 28 days, it triggered 956 alerts, comprising 672 true positives and only 284 false alarms, averaging about 10 false alarms and 20 valid alerts per day. These findings underscore Social-Hunter's potential for early detection of APT activities in large enterprise networks.
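As a rough, hypothetical illustration of the kind of heuristic the abstract describes (group partitioning plus a capability threshold on cross-group logins), the toy sketch below learns each source's destination groups and a crude capability score from benign history, then flags a login that leaves that context while the source lacks sufficient capability. Hosts, groups, and the scoring rule are invented for illustration and are not the paper's algorithm.

```python
from collections import defaultdict

# Toy authentication history: (source host, destination host).
history = [
    ("U1-PC", "FILESRV-A"), ("U1-PC", "MAIL-A"),
    ("U2-PC", "FILESRV-A"),
    ("ADMIN-PC", "DC-A"), ("ADMIN-PC", "FILESRV-A"), ("ADMIN-PC", "MAIL-A"),
]

# Hypothetical role-based groups standing in for the paper's multi-viewpoint partitioning.
GROUP = {
    "U1-PC": "workstations", "U2-PC": "workstations", "ADMIN-PC": "admin",
    "FILESRV-A": "servers", "MAIL-A": "servers", "DC-A": "critical",
}

# Per-source context learned from history: destination groups reached and a
# crude "capability" score (number of distinct destinations).
reached_groups, capability = defaultdict(set), defaultdict(set)
for src, dst in history:
    reached_groups[src].add(GROUP[dst])
    capability[src].add(dst)

def is_suspicious(src: str, dst: str, min_capability: int = 3) -> bool:
    """Flag a login that leaves the source's established group context
    while the source lacks sufficient capability for a cross-group login."""
    outside_context = GROUP[dst] not in reached_groups[src]
    return outside_context and len(capability[src]) < min_capability

# A workstation with little history suddenly authenticates to a domain controller.
print(is_suspicious("U2-PC", "DC-A"))      # True: alert
print(is_suspicious("ADMIN-PC", "DC-A"))   # False: within ADMIN-PC's context
```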
{"title":"Social-Hunter: A social heuristics-based approach to early unveiling unknown malicious logins using valid accounts","authors":"Mingsheng Tang , Binbin Ge","doi":"10.1016/j.cose.2024.104269","DOIUrl":"10.1016/j.cose.2024.104269","url":null,"abstract":"<div><div>Using valid accounts has become a prevalent tactic among Advanced Persistent Threat (APT) actors for executing malicious logins. By exploiting stolen credentials, they bypass rule-based and traffic-based detection mechanisms, enabling sustained network infiltration without triggering anomalous network traffic alerts. The scarcity of feature-rich datasets and labeled samples for identifying malicious logins by unknown APT actors presents a significant challenge. To address this, we propose Social-Hunter, an innovative approach for detecting unknown malicious logins without prior knowledge or training on specific APT behaviors. Social-Hunter integrates sociological heuristics and multi-viewpoint modeling to partition groups based on social and role-based perspectives. Iterative partitioning assesses whether new login nodes fit within established group contexts, thereby identifying potential malicious intent. A threshold parameter evaluates source node capability during cross-group logins, flagging insufficient capability as indicators of malicious behavior. The core algorithm detects deviations from social norms and predefined thresholds. Evaluation on a 58-day dataset of authentication events from a real-world Los Alamos National Laboratory’s (LANL) network demonstrates Social-Hunter’s effectiveness. It achieves a true positive rate (TPR) nearing 90% with a significantly reduced false positive rate (FPR) of 0.2%. Comparative analysis against state-of-art unsupervised methods such as graph learning, Local Outlier Factor (LOF), Isolation Forest (IF), One-Class Support Vector Machine (One-Class SVM), Ensemble Multi-Detector (EMD), and AutoEncoder (AE) shows Social-Hunter improving TPR by at least 5% and reducing FPR by more than 77%. In practical event auditing for threats hunting, Social-Hunter maintains a minimal false positives rate of 0.00014% with nearly 90% TPR. Over 28 days, it triggered 956 alerts, with 672 true positives and just 284 false alarms. The average daily false alarm rate is around 10, while valid alerts average 20 per day. These findings underscore Social-Hunter’s potential for early detection of APT activities in large enterprise networks.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"150 ","pages":"Article 104269"},"PeriodicalIF":4.8,"publicationDate":"2024-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143142726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An adaptable security-by-design approach for ensuring a secure Over the Air (OTA) update in modern vehicles
Pub Date: 2024-12-14 | DOI: 10.1016/j.cose.2024.104268 | Computers & Security, Vol. 150, Article 104268
Victormills Iyieke, Hesamaldin Jadidbonab, Abdur Rakib, Jeremy Bryans, Don Dhaliwal, Odysseas Kosmas
The rise of Connected and Automated Vehicles (CAVs) and Intelligent Transport Systems (ITSs) introduced by OEMs has increased the demand for sophistication in modern vehicles. This sophistication involves a variety of software capabilities and functionalities embedded in over 100 ECUs per vehicle, which has led to the need for over-the-air (OTA) updates. OTA updates are delivered wirelessly, eliminating the need to bring vehicles to the garage for updates: this is more convenient for owners, reduces costs for OEMs, and lowers greenhouse gas emissions. Automotive OEMs have adopted different OTA update approaches, such as the Uptane framework, the Open Mobile Alliance Device Management (OMA-DM) standard, and the general ISO 24089 standard, including sub-variants of Uptane and OMA-DM. However, the systematic implementation of security-by-design in OTA systems by applying ISO 21434 remains uncommon, and there is a gap in security-by-design practice that the automotive industry could adopt to ensure a systematic approach to securing OTA update technology. OTA update security hinges on identifying vulnerability pathways for potential malicious attacks; identifying and mitigating potential vulnerabilities throughout the OTA update process is therefore critical for robust security. This paper proposes an adaptable security-by-design approach to OTA updates, built on and extending our earlier work (Iyieke et al., 2023). The approach is applied to a prototype OTA update system we developed based on the Uptane framework as implemented by Toradex. Security-by-design is a well-established concept in enterprise systems but is still developing in the cyber-physical domain of automotive cybersecurity. Our approach covers the security engineering lifecycle, a logical layered security concept, and the security architecture. A threat analysis and risk assessment (TARA) is performed based on the international automotive cybersecurity standard ISO/SAE 21434. The highest-rated threats identified in the TARA are formalized, and corresponding mitigation actions are defined according to UNECE WP29. Penetration testing is conducted to verify the approach's ability to reinforce the security of the OTA update system against some of the identified risks and threats. The proposed approach provides a systematic and adaptable way to ensure secure OTA updates in modern vehicles; OEMs and other stakeholders can use it to develop secure OTA systems regardless of the OTA update technology employed.
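For readers unfamiliar with how a TARA turns ratings into priorities, the sketch below shows an ISO/SAE 21434-style risk determination: an impact rating and an attack-feasibility rating are combined through a risk matrix into a risk value used to rank threat scenarios. The matrix values and the OTA threat scenarios listed are illustrative placeholders, not figures from the standard or from this paper.

```python
# Illustrative ISO/SAE 21434-style risk determination: impact x attack feasibility -> risk level.
# The ratings, matrix values, and threats below are examples, not taken from the standard or the paper.
IMPACT = ["negligible", "moderate", "major", "severe"]
FEASIBILITY = ["very low", "low", "medium", "high"]

# RISK_MATRIX[impact_index][feasibility_index] -> risk value (1 = lowest, 5 = highest)
RISK_MATRIX = [
    [1, 1, 1, 1],
    [1, 2, 2, 3],
    [1, 2, 3, 4],
    [2, 3, 4, 5],
]

def risk_value(impact: str, feasibility: str) -> int:
    return RISK_MATRIX[IMPACT.index(impact)][FEASIBILITY.index(feasibility)]

# Hypothetical OTA-update threat scenarios assessed during the TARA.
threats = [
    ("Tampered firmware image accepted by the update client", "severe", "low"),
    ("Rollback to a vulnerable firmware version", "major", "medium"),
    ("Eavesdropping on update metadata", "moderate", "high"),
]

# Rank scenarios by risk value so the highest-rated threats are treated first.
for name, impact, feasibility in sorted(threats, key=lambda t: -risk_value(t[1], t[2])):
    print(f"risk={risk_value(impact, feasibility)}  {name}")
```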
{"title":"An adaptable security-by-design approach for ensuring a secure Over the Air (OTA) update in modern vehicles","authors":"Victormills Iyieke , Hesamaldin Jadidbonab , Abdur Rakib , Jeremy Bryans , Don Dhaliwal , Odysseas Kosmas","doi":"10.1016/j.cose.2024.104268","DOIUrl":"10.1016/j.cose.2024.104268","url":null,"abstract":"<div><div>The rise in Connected and Automated Vehicles (CAVs) and Intelligent Transport Systems (ITSs) introduced by OEMs has increased the demand for modern vehicle sophistication. This sophistication involves a variety of software capabilities and functionalities embedded in over 100 ECUs in a vehicle. This has led to the need for over-the-air (OTA) updates. OTA updates can be delivered wirelessly, eliminating the need to bring vehicles to the garage for updates. This is more convenient for owners, reduces costs for OEMs, and reduces greenhouse gas emissions. There exist different OTA update considerations that are adopted by automotive OEMs, such as the Uptane framework, Open Mobile Alliance Device Management (OMA-DM) standard, and the general ISO 24089 standard, including subvariance of Uptane and OMA-DM. However, the systematic implementation of security-by-design applying ISO 21434 in OTA systems is less employed, and there remains a gap in this practice of security-by-design that the automotive industry can adapt to ensure a systematic approach to secure OTA update technology. OTA update security hinges on identifying vulnerability pathways for potential malicious attacks. Therefore, identifying and mitigating potential vulnerabilities throughout the OTA update process is critical for robust security. This paper proposes an adaptable security-by-design approach to OTA update, built and extended from our work Iyieke et al. (2023). The adaptable security-by-design approach is then applied to a developed prototype OTA update system based on the Uptane framework as implemented by Toradex. Security-by-design is a well-established concept in enterprise systems, but is still developing in the cyber–physical system of automotive cybersecurity. Our proposed approach covers the security engineering lifecycle, the logical security layered concept, and the security architecture. A threat analysis and risk assessment (TARA) is performed based on the international automotive cybersecurity standard ISO/SAE 21434. The highest threats identified from the TARA are formalized, and corresponding mitigation actions are defined according to UNECE WP29. Penetration testing is conducted to verify the approach’s capability to reinforce the security of the OTA update systems against some of the identified risks and threats. Our proposed approach provides a systematic and adaptable security-by-design approach to ensure secure OTA updates in modern vehicles; OEMs and other stakeholders can use it to develop secure OTA systems regardless of the OTA update technology used.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"150 ","pages":"Article 104268"},"PeriodicalIF":4.8,"publicationDate":"2024-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143142381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SLIFER: Investigating performance and robustness of malware detection pipelines
Pub Date: 2024-12-13 | DOI: 10.1016/j.cose.2024.104264 | Computers & Security, Vol. 150, Article 104264
Andrea Ponte, Dmitrijs Trizna, Luca Demetrio, Battista Biggio, Ivan Tesfai Ogbu, Fabio Roli
As a result of decades of research, Windows malware detection is approached through a plethora of techniques. However, there is an ongoing mismatch between academia, which pursues optimal performance in terms of detection rate and low false alarms, and the requirements of real-world scenarios. In particular, academia focuses on combining static and dynamic analysis within a single model or an ensemble of models, falling into several pitfalls: (i) firing dynamic analysis without considering the computational burden it requires; (ii) discarding impossible-to-analyze samples; and (iii) analyzing robustness against adversarial attacks without considering that malware detectors are complemented by further non-machine-learning components. In this paper we bridge these gaps by investigating the properties of malware detectors built from multiple, different types of analysis. To do so, we develop SLIFER, a Windows malware detection pipeline that sequentially leverages both static and dynamic analysis, interrupting computation as soon as one module triggers an alarm and requiring dynamic analysis only when needed. Contrary to the state of the art, we investigate how to deal with samples that impede analysis, showing how much they impact performance and concluding that it is better to flag them as legitimate so as not to drastically increase false alarms. Lastly, we perform a robustness evaluation of SLIFER. Counter-intuitively, injected content is either blocked more by signatures than by dynamic analysis, due to byte artifacts created by the attack, or it evades signature detection, since signatures rely on file-size constraints that the attacks disrupt. To the best of our knowledge, we are the first to investigate the properties of sequential malware detectors, shedding light on their behavior in real production environments.
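The pipeline's core control flow can be sketched directly from the description: static modules run first, evaluation stops at the first alarm, dynamic analysis runs only when nothing earlier fired, and samples that cannot be analyzed are treated as legitimate. The module implementations below are trivial placeholders standing in for SLIFER's real signature, static-ML, and sandbox components.

```python
from typing import Callable, Optional

# Each analysis module returns True (alarm), False (no alarm),
# or None when the sample cannot be analyzed by that module.
Module = Callable[[bytes], Optional[bool]]

def signature_module(sample: bytes) -> Optional[bool]:
    return b"EVIL_MARKER" in sample                      # placeholder for signature matching

def static_ml_module(sample: bytes) -> Optional[bool]:
    if len(sample) == 0:
        return None                                      # e.g. a corrupted, impossible-to-analyze file
    return sample.count(0xFF) / len(sample) > 0.9        # placeholder for a static ML model

def dynamic_module(sample: bytes) -> Optional[bool]:
    return b"inject" in sample                           # placeholder for costly sandbox analysis

# Static modules first; dynamic analysis only if nothing earlier fired.
PIPELINE: list[Module] = [signature_module, static_ml_module, dynamic_module]

def classify(sample: bytes) -> str:
    for module in PIPELINE:
        verdict = module(sample)
        if verdict is True:
            return f"malicious ({module.__name__})"      # early exit on the first alarm
        # A None verdict (analysis failed) is treated like "no alarm": flagging such
        # samples as legitimate avoids inflating false alarms, as the paper concludes.
    return "benign"

print(classify(b"payload with EVIL_MARKER"))   # stopped by the cheap static stage
print(classify(b"calls inject at runtime"))    # only the dynamic stage fires
print(classify(b""))                           # impossible to analyze, labeled benign
```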
{"title":"SLIFER: Investigating performance and robustness of malware detection pipelines","authors":"Andrea Ponte , Dmitrijs Trizna , Luca Demetrio , Battista Biggio , Ivan Tesfai Ogbu , Fabio Roli","doi":"10.1016/j.cose.2024.104264","DOIUrl":"10.1016/j.cose.2024.104264","url":null,"abstract":"<div><div>As a result of decades of research, Windows malware detection is approached through a plethora of techniques. However, there is an ongoing mismatch between academia – which pursues an optimal performances in terms of detection rate and low false alarms – and the requirements of real-world scenarios. In particular, academia focuses on combining static and dynamic analysis within a single or ensemble of models, falling into several pitfalls like (i) firing dynamic analysis without considering the computational burden it requires; (ii) discarding impossible-to-analyze samples; and (iii) analyzing robustness against adversarial attacks without considering that malware detectors are complemented with more non-machine-learning components. Thus, in this paper we bridge these gaps, by investigating the properties of malware detectors built with multiple and different types of analysis. To do so, we develop SLIFER, a Windows malware detection pipeline sequentially leveraging both static and dynamic analysis, interrupting computations as soon as one module triggers an alarm, requiring dynamic analysis only when needed. Contrary to the state of the art, we investigate how to deal with samples that impede analyzes, showing how much they impact performances, concluding that it is better to flag them as legitimate to not drastically increase false alarms. Lastly, we perform a robustness evaluation of SLIFER. Counter-intuitively, the injection of new content is either blocked more by signatures than dynamic analysis, due to byte artifacts created by the attack, or it is able to avoid detection from signatures, as they rely on constraints on file size disrupted by attacks. As far as we know, we are the first to investigate the properties of sequential malware detectors, shedding light on their behavior in real production environment.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"150 ","pages":"Article 104264"},"PeriodicalIF":4.8,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143142731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing user identification through batch averaging of independent window subsequences using smartphone and wearable data
Pub Date: 2024-12-12 | DOI: 10.1016/j.cose.2024.104265 | Computers & Security, Vol. 150, Article 104265
Rouhollah Ahmadian, Mehdi Ghatee, Johan Wahlström
Throughout daily life, individuals engage in various activities such as walking, sitting, and drinking, often in a random order. These physical activities generally exhibit similar patterns across different people, which makes identifying users from smartphone and wearable data challenging. To tackle this issue, we developed a new model called Batch Averaging Probabilities (BAP). Our approach segments input sequences into separate windows, classifies each segment independently, and then averages the probabilistic predictions to make the final decision. The BAP method introduces the concept of primary patterns, the smallest meaningful sequences, and effectively deals with the random order of primary patterns within mixed patterns. Our work includes theoretical evidence supporting the BAP method, showing its ability to reduce prediction variance and enhance model accuracy. The model's training algorithm also employs a distinctive approach: model selection and regularization are based on the averaged loss over segments, reducing overfitting and improving performance without the complexity of an ensemble of neural network models. We evaluated the proposed method using accelerometer and gyroscope data from diverse user-activity datasets, including UIFW, WISM, HOP, CLD, RSSI, DI, DB2, and HAR, demonstrating significant improvements over state-of-the-art models. Specifically, our approach improves accuracy over the state of the art by 1.08% on DB2, 7.67% on HAR, and 14.76% on DI.
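The batch-averaging decision rule itself is straightforward to sketch: split a recording into non-overlapping windows, obtain per-window class probabilities from any probabilistic classifier, and average them before taking the argmax. The window length, synthetic sensor data, and the logistic-regression classifier below are illustrative stand-ins for the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
WINDOW = 50   # samples per window (illustrative)

def windows(sequence: np.ndarray) -> np.ndarray:
    """Split a (time, channels) sequence into non-overlapping windows, flattened as feature vectors."""
    n = (len(sequence) // WINDOW) * WINDOW
    return sequence[:n].reshape(-1, WINDOW * sequence.shape[1])

def make_user(offset: float, n: int = 2000) -> np.ndarray:
    """Synthetic 3-axis accelerometer-like data; each user has slightly different statistics."""
    return rng.normal(loc=offset, scale=1.0, size=(n, 3))

train_X = np.vstack([windows(make_user(0.0)), windows(make_user(0.3))])
train_y = np.array([0] * (len(train_X) // 2) + [1] * (len(train_X) // 2))

clf = LogisticRegression(max_iter=1000).fit(train_X, train_y)

def batch_average_predict(sequence: np.ndarray) -> int:
    """Classify each window independently, then average the probabilities (the BAP step)."""
    probs = clf.predict_proba(windows(sequence))   # one probability vector per window
    return int(np.argmax(probs.mean(axis=0)))      # averaged decision for the whole recording

print(batch_average_predict(make_user(0.0, n=1000)))   # should recover user 0
print(batch_average_predict(make_user(0.3, n=1000)))   # should recover user 1
```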
{"title":"Enhancing user identification through batch averaging of independent window subsequences using smartphone and wearable data","authors":"Rouhollah Ahmadian , Mehdi Ghatee , Johan Wahlström","doi":"10.1016/j.cose.2024.104265","DOIUrl":"10.1016/j.cose.2024.104265","url":null,"abstract":"<div><div>Throughout daily life, individuals partake in various activities such as walking, sitting, and drinking, often in a random manner. These physical activities generally exhibit similar patterns across different people, posing a challenge for identifying users using smartphone and wearable data. To tackle this issue, we have developed a new model called Batch Averaging Probabilities (BAP). Our approach involves segmenting input sequences into separate windows, independently classifying each segment, and then averaging the probabilistic predictions to make the final decision. The BAP method introduces the concept of primary patterns, which are the smallest meaningful sequences. It effectively deals with the random order of primary patterns within mixed patterns. Our work includes theoretical evidence supporting the BAP method, showcasing its ability to minimize prediction variance and enhance model accuracy. Additionally, the model’s training algorithm employs a unique approach. Model selection and regularization are based on the averaged loss of segments, reducing overfitting and improving performance without the complexity associated with using an ensemble of neural network models. We evaluated the effectiveness of our proposed method using accelerometer and gyroscope data from diverse user activity datasets including UIFW, WISM, HOP, CLD, RSSI, DI, DB2 and HAR, demonstrating significant performance improvements over state-of-the-art models. Specifically, our approach outperforms DB2 by 1.08%, HAR by 7.67%, and DI by 14.76% in terms of accuracy.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"150 ","pages":"Article 104265"},"PeriodicalIF":4.8,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143142432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MeNU: Memorizing normality for UAV anomaly detection with a few sensor values
Pub Date: 2024-12-11 | DOI: 10.1016/j.cose.2024.104248 | Computers & Security, Vol. 150, Article 104248
Jeong Do Yoo, Gang Min Kim, Min Geun Song, Huy Kang Kim
With advancements in unmanned aerial vehicle (UAV) technology, UAVs have become widely used across fields including surveillance, agriculture, and architecture. Ensuring the safety and reliability of UAVs is crucial to prevent potential damage caused by malfunctions or cyberattacks, so the need for anomaly detection in UAVs is rising as a preemptive measure against undesirable incidents. However, UAV anomaly detection faces challenges such as a lack of labeled data and a high system workload. In this paper, we propose MeNU, a lightweight anomaly detection system for UAVs that uses various sensor data to detect abnormal events. We generate a concise feature set through preprocessing steps including timestamp pooling, missing-value imputation, and feature selection. We then employ MemAE, a variant of the autoencoder with a memory module that stores prototypical benign patterns, which is particularly effective for anomaly detection. Experimental results on the ALFA and UA datasets demonstrate MeNU's superior performance, achieving AUC scores of 0.9856 and 0.9988, respectively, and outperforming previous approaches. MeNU can be easily integrated into UAV systems, enabling efficient real-time anomaly detection.
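The memory-addressing step that makes MemAE suited to anomaly detection can be sketched in a few lines: a latent code queries a memory of prototypical benign patterns, a hard-shrunk attention weighting rebuilds the code from those prototypes, and the reconstruction error serves as the anomaly score. The memory contents and dimensions below are random placeholders, the encoder/decoder and training are omitted, and a plain dot product stands in for the cosine similarity used in MemAE.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

# A (trained) memory of prototypical benign latent patterns: slots x latent dimensions.
memory = rng.normal(size=(10, 4))

def memory_reconstruct(z: np.ndarray, shrink: float = 0.05) -> np.ndarray:
    """MemAE-style addressing: attend over memory slots, apply hard shrinkage,
    and rebuild the latent code as a sparse combination of benign prototypes."""
    w = softmax(memory @ z)                # similarity-based addressing weights
    w = np.where(w > shrink, w, 0.0)       # hard shrinkage keeps only strong matches
    w = w / (w.sum() + 1e-12)              # re-normalize the surviving weights
    return w @ memory                      # reconstructed latent code

def anomaly_score(z: np.ndarray) -> float:
    """Reconstruction error: large when z cannot be composed from benign memory items."""
    return float(np.linalg.norm(z - memory_reconstruct(z)))

benign_like = memory[3] + 0.05 * rng.normal(size=4)   # close to a stored benign prototype
anomalous = rng.normal(loc=5.0, size=4)               # unlike every stored prototype
print(anomaly_score(benign_like), anomaly_score(anomalous))   # the first should be clearly lower
```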
{"title":"MeNU: Memorizing normality for UAV anomaly detection with a few sensor values","authors":"Jeong Do Yoo, Gang Min Kim, Min Geun Song, Huy Kang Kim","doi":"10.1016/j.cose.2024.104248","DOIUrl":"10.1016/j.cose.2024.104248","url":null,"abstract":"<div><div>With advancements in unmanned aerial vehicle (UAV) technology, UAVs have become widely used across various fields, including surveillance, agriculture, and architecture. Ensuring the safety and reliability of UAVs is crucial to prevent potential damage caused by malfunctions or cyberattacks. Consequently, the need for anomaly detection in UAVs is rising as a preemptive measure against undesirable incidents. Therefore, UAV anomaly detection faces challenges such as a lack of labeled data and high system workload. In this paper, we propose MeNU, a lightweight anomaly detection system for UAVs that utilizes various sensor data to detect abnormal events. We generated a concise feature set through preprocessing steps, including timestamp pooling, missing-value imputation, and feature selection. We then employed MemAE, a variant of the autoencoder with a memory module that stores prototypical benign patterns, which is particularly effective for anomaly detection. Experimental results on the ALFA and UA datasets demonstrated MeNU’s superior performance, achieving AUC scores of 0.9856 and 0.9988, respectively, outperforming previous approaches. MeNU can be easily integrated into UAV systems, enabling efficient real-time anomaly detection.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"150 ","pages":"Article 104248"},"PeriodicalIF":4.8,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143142730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing cyber safety in e-learning environment through cybersecurity awareness and information security compliance: PLS-SEM and FsQCA analysis
Pub Date: 2024-12-11 | DOI: 10.1016/j.cose.2024.104276 | Computers & Security, Vol. 150, Article 104276
Chrispus Zacharia Oroni, Fu Xianping, Daniela Daniel Ndunguru, Arsenyan Ani
E-learning has revolutionized education by increasing accessibility and flexibility, but it also presents unique cybersecurity challenges. This study explores how E-Learning Engagement, Cybersecurity Awareness, and Information Security Policy Compliance influence Cyber Safety Measures among virtual learning students. Data were collected from 398 virtual learning students and analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM) and Fuzzy-set Qualitative Comparative Analysis (fsQCA). The PLS-SEM results indicate that Cybersecurity Awareness and Information Security Policy Compliance significantly enhance Cyber Safety Measures. Additionally, E-Learning Engagement contributes to cyber safety indirectly through its positive influence on both cybersecurity awareness and policy compliance. The fsQCA results reveal that different pathways lead to improved cyber safety. For example, a high level of cybersecurity awareness combined with strong policy compliance consistently enhances cyber safety, even with moderate e-learning engagement. Alternatively, for students with lower cybersecurity awareness, active e-learning engagement paired with strict adherence to security policies also significantly improves cyber safety. These insights show that no single factor guarantees cyber safety; rather, multiple combinations of conditions can achieve positive outcomes. The study provides implications for educational institutions, highlighting the need for integrated strategies that combine enhancing student engagement with promoting cybersecurity awareness and enforcing information security policies to foster safer virtual learning environments.
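For readers unfamiliar with fsQCA, the sufficiency test behind statements like "high awareness combined with strong compliance consistently enhances cyber safety" can be sketched with the standard consistency and coverage formulas, using the fuzzy AND (element-wise minimum) to form the configuration. The membership scores below are synthetic, and calibration from the survey items is omitted; this is not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic fuzzy-set memberships in [0, 1] for 200 respondents
# (assumed already calibrated from survey scores; calibration is omitted).
awareness  = rng.beta(4, 2, 200)     # cybersecurity awareness
compliance = rng.beta(4, 2, 200)     # information security policy compliance
outcome    = np.clip(np.minimum(awareness, compliance) + rng.normal(0, 0.1, 200), 0, 1)

# Configuration "high awareness AND strong compliance" (fuzzy AND = element-wise minimum).
config = np.minimum(awareness, compliance)

# Standard fsQCA sufficiency consistency and coverage for the configuration.
consistency = np.sum(np.minimum(config, outcome)) / np.sum(config)
coverage    = np.sum(np.minimum(config, outcome)) / np.sum(outcome)

print(f"consistency = {consistency:.2f}, coverage = {coverage:.2f}")
# A consistency of roughly 0.8 or higher is conventionally read as the
# configuration being (approximately) sufficient for the outcome.
```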
{"title":"Enhancing cyber safety in e-learning environment through cybersecurity awareness and information security compliance: PLS-SEM and FsQCA analysis","authors":"Chrispus Zacharia Oroni , Fu Xianping , Daniela Daniel Ndunguru , Arsenyan Ani","doi":"10.1016/j.cose.2024.104276","DOIUrl":"10.1016/j.cose.2024.104276","url":null,"abstract":"<div><div>E-learning has revolutionized education by increasing accessibility and flexibility, but it also presents unique cybersecurity challenges. This study explores how E-Learning Engagement, Cybersecurity Awareness, and Information Security Policy Compliance Influence Cyber Safety Measures among virtual learning students. Data were collected from 398 virtual learning students and analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM) and Fuzzy-set Qualitative Comparative Analysis (fsQCA). The PLS-SEM results indicate that Cybersecurity Awareness and Information Security Policy Compliance significantly enhance Cyber Safety Measures. Additionally, E-Learning Engagement indirectly contributes to cyber safety through its positive influence on both cybersecurity awareness and policy compliance. The fsQCA results reveal that different pathways lead to improved cyber safety. For example, a high level of cybersecurity awareness combined with strong policy compliance consistently enhances cyber safety, even with moderate e-learning engagement. Alternatively, for students with lower cybersecurity awareness, active e-learning engagement paired with strict adherence to security policies also significantly improves cyber safety. These insights demonstrate that no single factor guarantees cyber safety; rather, multiple combinations of conditions can achieve positive outcomes. The study provides implications for educational institutions, highlighting the need for integrated strategies that combine enhancing student engagement with promoting cybersecurity awareness and enforcing information security policies to foster safer virtual learning environments.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"150 ","pages":"Article 104276"},"PeriodicalIF":4.8,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143142724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A classification-by-retrieval framework for few-shot anomaly detection to detect API injection
Pub Date: 2024-12-10 | DOI: 10.1016/j.cose.2024.104249 | Computers & Security, Vol. 150, Article 104249
Udi Aharon, Ran Dubin, Amit Dvir, Chen Hajaj
Application Programming Interface (API) injection attacks refer to the unauthorized or malicious use of APIs, which are often exploited to gain access to sensitive data or to manipulate online systems for illicit purposes. Identifying actors that deceitfully utilize an API is a demanding problem. Although there have been notable advancements in API security, a significant challenge remains when dealing with attackers who use novel approaches that do not match the well-known payloads commonly seen in attacks. Attackers may also exploit standard functionality in unconventional ways and with objectives that surpass its intended boundaries. API security therefore needs to be more sophisticated and dynamic than ever, relying on advanced computational intelligence methods, such as machine learning models, that can quickly identify and respond to abnormal behavior. In response to these challenges, we propose a novel unsupervised few-shot anomaly detection framework composed of two main parts. First, we train a dedicated generic language model for APIs based on FastText embeddings. Next, we use Approximate Nearest Neighbor search in a classification-by-retrieval approach. Our framework allows training a fast, lightweight classification model using only a few examples of normal API requests. We evaluated the performance of our framework using the CSIC 2010 and ATRDF 2023 datasets. The results demonstrate that our framework improves API attack detection accuracy compared with state-of-the-art (SOTA) unsupervised anomaly detection baselines.
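A minimal sketch of the described pipeline, under stated assumptions: API requests are tokenized, embedded by averaging FastText token vectors trained only on a handful of normal requests, and classified by retrieval, flagging any request whose distance to its nearest normal exemplar exceeds a threshold calibrated from the normal set. An exact NearestNeighbors index stands in for the approximate nearest-neighbor search, and the requests, tokenizer, and margin are illustrative, not the paper's configuration.

```python
import re
import numpy as np
from gensim.models import FastText
from sklearn.neighbors import NearestNeighbors

# A handful of normal API requests (illustrative).
normal_requests = [
    "GET /api/v1/users?id=42",
    "GET /api/v1/users?id=7&fields=name",
    "POST /api/v1/orders body={'item': 3, 'qty': 1}",
    "GET /api/v1/orders?page=2",
]

def tokenize(request: str) -> list[str]:
    return re.findall(r"[A-Za-z0-9_]+|[^\sA-Za-z0-9_]", request.lower())

# Train a small FastText model on the normal requests only (unsupervised, few-shot).
corpus = [tokenize(r) for r in normal_requests]
ft = FastText(sentences=corpus, vector_size=32, window=3, min_count=1, epochs=100)

def embed(request: str) -> np.ndarray:
    """Average the (subword-aware) FastText vectors of a request's tokens."""
    return np.mean([ft.wv[t] for t in tokenize(request)], axis=0)

normal_embeddings = np.vstack([embed(r) for r in normal_requests])

# Exact nearest-neighbor search stands in for the approximate index used in the paper.
index = NearestNeighbors(n_neighbors=1).fit(normal_embeddings)

# Calibrate the decision threshold from leave-one-out distances within the normal set.
loo_dist, _ = NearestNeighbors(n_neighbors=2).fit(normal_embeddings).kneighbors(normal_embeddings)
threshold = 1.5 * loo_dist[:, 1].max()

def is_anomalous(request: str) -> bool:
    """Classification by retrieval: too far from every normal exemplar means anomaly."""
    distance, _ = index.kneighbors(embed(request).reshape(1, -1))
    return float(distance[0, 0]) > threshold

print(is_anomalous("GET /api/v1/users?id=13"))                          # near the normal pattern
print(is_anomalous("GET /api/v1/users?id=1 OR 1=1; DROP TABLE users"))  # injection-like payload
```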
{"title":"A classification-by-retrieval framework for few-shot anomaly detection to detect API injection","authors":"Udi Aharon , Ran Dubin , Amit Dvir , Chen Hajaj","doi":"10.1016/j.cose.2024.104249","DOIUrl":"10.1016/j.cose.2024.104249","url":null,"abstract":"<div><div>Application Programming Interface (API) Injection attacks refer to the unauthorized or malicious use of APIs, which are often exploited to gain access to sensitive data or manipulate online systems for illicit purposes. Identifying actors that deceitfully utilize an API poses a demanding problem. Although there have been notable advancements and contributions in the field of API security, there remains a significant challenge when dealing with attackers who use novel approaches that do not match the well-known payloads commonly seen in attacks. Also, attackers may exploit standard functionalities unconventionally and with objectives surpassing their intended boundaries. Thus, API security needs to be more sophisticated and dynamic than ever, with advanced computational intelligence methods, such as machine learning models that can quickly identify and respond to abnormal behavior. In response to these challenges, we propose a novel unsupervised few-shot anomaly detection framework composed of two main parts: First, we train a dedicated generic language model for API based on FastText embedding. Next, we use Approximate Nearest Neighbor search in a classification-by-retrieval approach. Our framework allows for training a fast, lightweight classification model using only a few examples of normal API requests. We evaluated the performance of our framework using the CSIC 2010 and ATRDF 2023 datasets. The results demonstrate that our framework improves API attack detection accuracy compared to the state-of-the-art (SOTA) unsupervised anomaly detection baselines.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"150 ","pages":"Article 104249"},"PeriodicalIF":4.8,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143142732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}