Pub Date: 2026-01-05 | DOI: 10.1016/j.cose.2025.104820
Tadeusz Sawik
A novel mixed integer nonlinear programming model is developed for cybersecurity optimization in a supply chain exposed to combined direct and propagated cyberattacks. Given a limited budget for cybersecurity investments and a set of available security controls, the objective is to select for each node a subset of controls that minimizes the breach probability of the most vulnerable attack path to a target node. Using a network transformation, a Taylor series approximation of the natural logarithm, and duality theory, the nonlinear model is replaced by a mixed integer linear program. Results of computational experiments are provided, and the approximate and exact solutions are compared. This study’s contribution and novelty lie in the explicit equalization of cybersecurity vulnerabilities in supply chains under combined cyberattacks, using the developed linearization techniques. The findings indicate that, for the minimax objective function, the cybersecurity vulnerabilities of all nodes can be significantly reduced and equalized, and that the Taylor approximation of the nonlinear formula for the combined direct and propagated breach probability is very accurate. The proposed approach proves to be computationally efficient for cybersecurity optimization in large-scale multi-tier supply chain networks.
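The linearization idea described above can be illustrated numerically. The sketch below is a toy model, not the paper's MINLP: the independence assumptions, the expansion point `p0`, and the truncation order are ours. It shows the combined direct/propagated breach probability and how a truncated Taylor series of the natural logarithm approximates ln(p), which is what lets a product of per-node probabilities along a path be treated as a linear sum in a MILP.

```python
import math

def combined_breach(p_direct, p_propagated):
    """Probability a node is breached by a direct OR a propagated attack,
    assuming the two attack events are independent."""
    return 1.0 - (1.0 - p_direct) * (1.0 - p_propagated)

def path_breach(node_probs):
    """Breach probability of an attack path: every node on the path must be
    breached, assuming independence across nodes."""
    prod = 1.0
    for p in node_probs:
        prod *= p
    return prod

def log_taylor(p, p0, terms=3):
    """Taylor expansion of ln(p) around a reference point p0 -- the kind of
    local polynomial surrogate that turns a product of probabilities into a
    sum that a linear solver can handle."""
    x = (p - p0) / p0  # ln(p) = ln(p0) + ln(1 + x)
    return math.log(p0) + sum((-1) ** (k + 1) * x ** k / k
                              for k in range(1, terms + 1))
```

For example, `combined_breach(0.2, 0.1)` gives 0.28, and near the expansion point the truncated logarithm tracks `math.log` to several decimal places, consistent with the abstract's claim that the approximation is very accurate.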
"Cybersecurity optimization in supply chains under propagated cyberattacks" (Computers & Security, vol. 163, Article 104820)
Revocation of digital certificates has been the subject of a series of IETF efforts to standardize a complete and effective solution. This applies in the context of Internet web sites, in which web servers and browsers use digital certificates to establish Transport Layer Security (TLS). Despite the IETF’s efforts over the years to establish a reliable revocation mechanism, including the Certificate Revocation List (CRL), the Online Certificate Status Protocol (OCSP), and its variants, various technical issues hinder a complete resolution of the revocation problem. At the same time, all major browser vendors implement their own proprietary solutions to address it. As a result, revocation solutions are fragmented, incomplete, and ineffective, and real-world acceptance of standardized solutions is limited. To address this situation, in 2020 the IETF introduced the short-term certificate concept, called Support for Short-Term, Automatically Renewed (STAR) certificates, to avoid revocation altogether; it recommends a validity period of 4 days. To measure the level of adoption of this new approach on the Internet, we collected and analyzed web server certificates from 1 million websites; our extensive analysis indicates that this scheme has not gained traction in practice. In fact, we found no implementation of a 4-day validity period among the more than 1.5 million server certificates we collected. This indicates that the latest IETF effort to promote short-term certificates has not materialized, with no clear alternative solution in sight to resolve the revocation issue. We present our insights into the reasons for this absence of traction and our view of a possible way forward.
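The measurement described above reduces, per certificate, to checking the validity window against the 4-day STAR recommendation. A minimal sketch of that classification step follows; the field names mirror X.509 notBefore/notAfter, and actual certificate collection and parsing (e.g., with the `cryptography` package) is out of scope here.

```python
from datetime import datetime, timedelta

STAR_RECOMMENDED_VALIDITY = timedelta(days=4)  # per the STAR recommendation

def is_star_short_term(not_before: datetime, not_after: datetime) -> bool:
    """Heuristic used in this sketch: a certificate counts as 'STAR-style
    short-term' if its entire validity window is at most 4 days."""
    return not_after - not_before <= STAR_RECOMMENDED_VALIDITY
```

Running such a check over a large certificate corpus is how one would reproduce the finding that no 4-day certificates appear in the wild.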
"Certificate revocation – search for a way forward", by Takahito Yoshizawa, Himanshu Agarwal, Dave Singelée, Bart Preneel (Computers & Security, vol. 163, Article 104814; published 2026-01-03; DOI: 10.1016/j.cose.2025.104814)
Pub Date: 2026-01-03 | DOI: 10.1016/j.cose.2025.104821
Haotian Huang , Ruibin Yan , Jian Gao
Static taint analysis serves as a fundamental technique for detecting security vulnerabilities in JavaWeb applications. However, existing approaches suffer from two critical limitations. First, incomplete modeling of framework mechanisms results in unsound call graphs and value flows. Second, element-insensitive analysis of composite containers leads to imprecise data flows and over-tainting. To address these limitations, we propose SemTaint, a unified, scalable taint analysis approach based on pointer analysis systems. SemTaint enhances Andersen-style analysis through two key innovations. First, we design rule-based framework modeling that captures implicit data and control flows introduced by JavaWeb mechanisms, including dependency injection, dynamic proxies, and data persistence frameworks. Second, we develop on-demand, element-sensitive container modeling based on access patterns, which integrates a semantic model, access pattern abstraction, and a sparse tracking model. It efficiently maintains precision against dynamic state changes, thereby balancing scalability and accuracy. Our evaluation on 20 real-world JavaWeb applications demonstrates that SemTaint achieves higher coverage of intra-app reachable methods, while reducing analysis time by an average of 56.4% compared to the state-of-the-art approach. In precision testing on composite containers, SemTaint achieves 96.7% accuracy and 100% recall, substantially outperforming FlowDroid (67.6% accuracy, 82.8% recall) and Tai-e (65.7% accuracy, 79.3% recall). On security benchmarks, SemTaint attains perfect vulnerability detection recall while maintaining superior efficiency. Case studies on real-world vulnerabilities further confirm SemTaint’s effectiveness in detecting taint flows.
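The element-sensitivity problem can be illustrated with a toy model. The sketch below is our illustration, not SemTaint's implementation: it contrasts an element-insensitive container abstraction, where one taint bit covers the whole container and any retrieval after a tainted insertion is flagged, with a per-key (element-sensitive) abstraction that avoids that false positive.

```python
def propagate_insensitive(container_ops):
    """Element-INSENSITIVE model: a single abstract taint bit for the whole
    container, so any get() after a tainted put() is flagged."""
    tainted = False
    flagged = []
    for op, key, is_source in container_ops:
        if op == "put" and is_source:
            tainted = True
        elif op == "get" and tainted:
            flagged.append(key)
    return flagged

def propagate_element_sensitive(container_ops):
    """Element-SENSITIVE model: taint is tracked per key (the access
    pattern), so retrieving an untainted key is not flagged."""
    tainted_keys = set()
    flagged = []
    for op, key, is_source in container_ops:
        if op == "put" and is_source:
            tainted_keys.add(key)
        elif op == "get" and key in tainted_keys:
            flagged.append(key)
    return flagged
```

On a trace that stores a tainted value under `"user_input"` and a clean value under `"config"`, the insensitive model flags both retrievals while the element-sensitive model flags only `"user_input"` — exactly the over-taint the abstract describes.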
"SemTaint: A scalable taint analysis approach for JavaWeb frameworks and composite containers" (Computers & Security, vol. 163, Article 104821)
Pub Date: 2026-01-02 | DOI: 10.1016/j.cose.2026.104825
Christopher A Ramezan, Mohammad J. Ahmad, Ludwig Christian Schaupp, Frank W. Hatten, Michael A. Starling
The cybersecurity analyst is one of the most common positions within the cybersecurity domain and forms the backbone of many organizations’ cybersecurity operations. Despite its importance, the position remains broad in scope and inconsistently defined across industry, with variability in titles, qualifications, and responsibilities. To better understand the role, this study provides a global, position-level examination of the cybersecurity analyst through an empirical analysis of 725 job postings from 47 nations. Using a mixed-method approach, including manual coding, descriptive statistics, term frequency-inverse document frequency (TF-IDF) analysis, named entity recognition (NER), and latent Dirichlet allocation (LDA), we explore the required qualifications, technical competencies, and operational responsibilities associated with the role. Results show that over 83% of positions required prior professional experience, while a higher-education degree and an industry certification were also highly desired, appearing in 71% and 61% of postings, respectively. Surprisingly, soft communication skills and knowledge of industry standards and frameworks were required more frequently than programming skills and knowledge of networking protocols, indicating a balanced demand for both technical proficiency and non-technical skills. Over 350 individual software tools and 123 different standards/frameworks were mentioned by employers, highlighting the diverse range of security tools and platforms used within industry. Job duties crossed several NICE Cybersecurity Workforce Framework categories, such as protection and defense, governance, incident response, and vulnerability management, highlighting the heterogeneous nature of the position.
We also found several positions with unrealistic or mismatched requirements, including entry-level job postings requiring senior-level certifications, which can impede successful recruitment. Synthesizing these results, we identify five recurring cybersecurity analyst job profiles that represent empirically derived types of analyst roles, offering a structured and actionable representation of how analyst responsibilities are configured in practice. Recommendations include aligning academic programs with industry certifications, combining technical and soft-skill development, and increasing experiential learning opportunities to help graduates meet position experience requirements. Employers are encouraged to ensure that position responsibilities are not overly broad, to align position descriptions with operational requirements, and to balance requirements with position expectations. Given the current wide diversity of the role, academia, industry, and professional organizations should focus on greater standardization of the role, which could streamline hiring, reduce barriers to entry, narrow the cyber skills gap, and better align education with industry needs.
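Of the methods listed in the study, TF-IDF is the most mechanical; a minimal pure-Python version (a toy stand-in for whatever tooling the authors actually used) shows how terms that are distinctive to a posting are weighted up while terms common to all postings are weighted down:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Minimal TF-IDF over tokenized documents: tf is the within-document
    term frequency, idf is log(N / document-frequency)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))           # count each term once per document
    scores = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        scores.append({t: (c / total) * math.log(n / df[t])
                       for t, c in tf.items()})
    return scores
```

A term appearing in every posting (e.g., "security") gets an IDF of log(N/N) = 0 and contributes nothing, which is why rarer requirements such as specific tools surface in such an analysis.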
"The modern cybersecurity analyst: An international position analysis" (Computers & Security, vol. 163, Article 104825)
Pub Date: 2026-01-02 | DOI: 10.1016/j.cose.2025.104815
Mikel Egaña Aranguren , Jesualdo Tomás Fernández-Breis , Bidane Leon Balentzia , Markus Rompe , Alexander García Castro
Cybersecurity has emerged as a critical concern for modern enterprises due to the increasing complexity and diversity of threats. These risks exploit multiple attack vectors, such as phishing, unpatched vulnerabilities, and malware distribution, necessitating a comprehensive and unified approach to threat modeling. However, cybersecurity data is often siloed across disparate sources, ranging from JSON vulnerability reports (e.g., Amazon Inspector, CycloneDX) and dependency files (e.g., NPM) to relational databases and manual assessments, making integration a significant challenge. Knowledge Graphs offer a technological framework for successfully integrating such disparate data. This work presents a KG-based solution for software vulnerability data integration at Siemens Energy, leveraging Enterprise Knowledge Graphs (EKGs) to unify heterogeneous datasets under a shared semantic model. Our approach consists of: (1) a Cybersecurity Ontology Network defining core entities and relationships; (2) an automated pipeline converting diverse data sources into (3) a scalable EKG that enables advanced threat analysis; and (4) competency questions and data quality rules validating the system’s effectiveness. By adopting a Data-Centric Architecture, EKGs provide a flexible, future-proof framework for cybersecurity intelligence, overcoming the limitations of traditional Application-Centric systems and ultimately providing FAIR (Findable, Accessible, Interoperable, Reusable) data. This work offers actionable insights for organizations seeking to enhance cyber threat visibility while managing complex, evolving data landscapes.
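The pipeline step that lifts a JSON vulnerability report into graph triples can be sketched as below. The record layout and the `sec:`/`rdf:` predicate names are illustrative assumptions on our part, not the paper's ontology network:

```python
def vulnerability_to_triples(record):
    """Convert one JSON vulnerability record into subject-predicate-object
    triples suitable for loading into a knowledge graph."""
    vuln = record["id"]
    triples = [(vuln, "rdf:type", "sec:Vulnerability"),
               (vuln, "sec:severity", record["severity"])]
    for pkg in record.get("affects", []):
        # one triple per affected package keeps the graph queryable per-edge
        triples.append((vuln, "sec:affectsPackage", pkg))
    return triples
```

Once many such records share the same predicates, competency questions like "which packages are affected by HIGH-severity vulnerabilities?" become simple graph queries, which is the integration payoff the abstract describes.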
"A comprehensive view of software vulnerability risks through enterprise knowledge graphs" (Computers & Security, vol. 163, Article 104815)
Pub Date: 2026-01-02 | DOI: 10.1016/j.cose.2025.104817
Wei Wang , Weike Wang , Jiameng Liu , Lin Li , Bingzheng Li , Zirui Liu , Xiang Wang
With the extensive application of embedded devices in daily life, security issues have gained escalating significance. Numerous studies and countermeasures address the security problems of mainstream processor architectures. As an emerging Instruction Set Architecture (ISA), RISC-V has drawn widespread attention owing to its openness, flexibility, and extensibility. With its popularization in diverse fields, ensuring its security becomes crucially important. Focusing on the runtime security of RISC-V IoT devices, this paper reviews the published papers on RISC-V security and investigates three mainstream attack approaches and their corresponding defense solutions. We analyze five common side-channel attacks with distinct attack focuses, categorize defense schemes into three types based on the level and strategy of the defense technology, and summarize several existing defense schemes on RISC-V platforms. Then, in the context of program vulnerability exploitation attacks, we present the attack process and offer a comprehensive overview and comparison of hardware-assisted defense mechanisms implemented on RISC-V platforms in recent years. This analysis is carried out from four key strategies: Code Integrity, Control Flow Integrity, Data Flow Integrity, and Information Confidentiality. For higher-level network attacks that are less correlated with the underlying ISA, we provide a brief overview and introduce two mainstream mechanisms, Intrusion Detection Systems and Data Encryption. In addition, this paper offers critical perspectives and future development directions for the defense strategies corresponding to each type of attack. We are convinced that this review will serve as a valuable resource for fellow researchers in RISC-V security.
"Attacks, defenses and perspectives for the runtime security of RISC-V IoT devices: A review" (Computers & Security, vol. 163, Article 104817)
Pub Date: 2025-12-31 | DOI: 10.1016/j.cose.2025.104822
Sumin Yang, Hongjoo Jin, Wonsuk Choi, Dong Hoon Lee
Memory corruption vulnerabilities, such as out-of-bounds memory access, are widely exploited by attackers to compromise system security. Numerous software-based techniques have been developed to prevent such vulnerabilities, but they often require a trade-off between security and performance. In response, the Memory Tagging Extension (MTE) was introduced as a hardware-based technology to improve memory safety efficiently on the ARM architecture. However, ARM MTE suffers from low tag entropy and side-channel attacks. Consequently, additional techniques are urgently needed to enhance protection against pointer misuse arising from memory corruption.
In this paper, we present Folded-Tag, a technique designed to efficiently safeguard pointers against unauthorized out-of-bounds access. Our method addresses the low entropy of the 4-bit tag in ARM MTE, which leaves the system vulnerable, by introducing folding and unfolding mechanisms for pointers. These mechanisms mitigate both speculative execution attacks and pointer guessing attacks. We implemented Folded-Tag in the LLVM compiler framework without requiring kernel modifications, making it suitable for deployment on systems supporting ARM MTE and Pointer Authentication (PA). To assess its effectiveness, we evaluated Folded-Tag on the SPEC CPU2017 and NBench-byte benchmarks on an ARM-based Apple Silicon platform. We also applied Folded-Tag to real-world applications, including the Nginx web server and the ProFTPD FTP server, to demonstrate its compatibility and efficiency. Our experimental results show that Folded-Tag effectively mitigates attacks against existing hardware-assisted security features with a geometric mean performance overhead of less than 1%.
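For readers unfamiliar with MTE, the low-entropy problem that Folded-Tag targets can be sketched in a few lines. The model below is a simplified software simulation of MTE-style tagging only (the tag placement in the top byte follows ARM's scheme); it does not model Folded-Tag's folding mechanism itself.

```python
TAG_SHIFT = 56              # MTE keeps the tag in the pointer's top byte
TAG_MASK = 0xF << TAG_SHIFT  # only 4 tag bits -> 16 possible tag values

def tag_pointer(addr, tag):
    """Embed a 4-bit memory tag in the unused top bits of a 64-bit address."""
    assert 0 <= tag < 16, "MTE tags are 4 bits wide"
    return (addr & ~TAG_MASK) | (tag << TAG_SHIFT)

def check_access(ptr, memory_tag):
    """A load/store succeeds only if the pointer's tag matches the tag
    assigned to the memory granule being accessed."""
    return (ptr & TAG_MASK) >> TAG_SHIFT == memory_tag
```

With only 16 tag values, an attacker guessing a tag at random succeeds with probability 1/16 per attempt, which is the low-entropy weakness the abstract refers to.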
{"title":"Folded-tag: Enhancing memory safety with efficient hardware-supported memory tagging","authors":"Sumin Yang, Hongjoo Jin, Wonsuk Choi, Dong Hoon Lee","doi":"10.1016/j.cose.2025.104822","DOIUrl":"10.1016/j.cose.2025.104822","url":null,"abstract":"<div><div>Memory corruption vulnerabilities, such as out-of-bound memory access, are widely exploited by attackers to compromise system security. Numerous software-based techniques have been developed to prevent such vulnerabilities, but they often require a trade-off between security and performance. In response, Memory Tagging Extension (MTE) is one hardware-based technology that has been introduced to improve memory safety on the ARM architecture efficiently. However, ARM MTE suffers from low entropy and side-channel attacks. Consequently, additional techniques are urgent to enhance protection against pointer misuse arising from memory corruption.</div><div>In this paper, we present Folded-Tag, a technique designed to efficiently safeguard pointers against unauthorized out-of-bounds access. Our method addresses the issue of low entropy 4-bit tag in ARM MTE, which makes the system vulnerable, by introducing <span>folding</span> and <span>unfolding</span> mechanisms for pointers. These mechanisms mitigate both speculative execution attacks and pointer guessing attacks. We implemented Folded-Tag in the LLVM compiler framework without requiring kernel modifications, making it suitable for deployment in systems supporting ARM MTE and Pointer Authentication (PA). To assess its effectiveness, we evaluated Folded-Tag on SPEC CPU2017 and NBench-byte benchmarks on an ARM-based Apple Silicon platform. We also applied Folded-Tag to real-world applications, including the NginX web server and ProFTPD FTP server, to demonstrate its compatibility and efficiency. 
Our experimental results show that Folded-Tag effectively mitigates attacks against existing hardware-assisted security features with a geometric mean performance overhead of less than 1%.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"163 ","pages":"Article 104822"},"PeriodicalIF":5.4,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145928817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
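The low-entropy problem the abstract refers to can be made concrete: ARM MTE stores a 4-bit allocation tag in the pointer's top byte (bits 59:56), so an attacker who must guess the tag succeeds with probability 1/16 per attempt. The following Python sketch is purely illustrative (the constants follow the MTE specification; the function names and the simulated attack are hypothetical, not the paper's Folded-Tag mechanism):

```python
import random

TAG_BITS = 4                  # ARM MTE allocation tag width
TAG_SHIFT = 56                # tags live in the pointer's top byte (bits 59:56)
NUM_TAGS = 1 << TAG_BITS      # only 16 distinct tag values

def tag_pointer(addr: int, tag: int) -> int:
    """Place a 4-bit tag into bits 59:56 of a 64-bit address."""
    assert 0 <= tag < NUM_TAGS
    return (addr & ~(0xF << TAG_SHIFT)) | (tag << TAG_SHIFT)

def guess_attack(trials: int = 100_000) -> float:
    """Estimate an attacker's odds of blindly guessing a random tag."""
    hits = sum(random.randrange(NUM_TAGS) == random.randrange(NUM_TAGS)
               for _ in range(trials))
    return hits / trials

addr = 0x0000_7FFF_DEAD_BEE0
p = tag_pointer(addr, 0xA)
assert (p >> TAG_SHIFT) & 0xF == 0xA
print(f"blind-guess success rate ~= {guess_attack():.3f}, i.e. about 1/{NUM_TAGS}")
```

A 1-in-16 guess rate is trivially brute-forced by a crash-tolerant attacker, which is why the paper argues that MTE alone needs hardening.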
Pub Date : 2025-12-31DOI: 10.1016/j.cose.2025.104813
Tengyao Li , Kaiyue Liu , Shaoyong Du
Network flow watermarking is an active tracing approach that invisibly embeds source-node identity information into network flows. The embedded watermarks coexist with the original network traffic and are designed to be robust against network noise and active interference. In recent years, network flow watermarking has developed rapidly in response to challenges from practical applications (e.g., deanonymization, data leakage tracing, and malicious behavior monitoring) on the Internet. However, to the best of our knowledge, very few surveys cover network flow watermarking methods proposed after 2018, and no systematic survey spans the field's entire development. Moreover, existing surveys classify and analyze network flow watermarking by embedding pattern, overlooking how different methods relate to the critical problems of watermarking. To this end, this paper reviews and analyzes network flow watermarking papers from 2001 to 2025 from a problem-oriented perspective. A threat model and a theoretical framework are established to model the watermark embedding and detection procedures, offering a consistent model for watermarking design. Around three core problems, namely robustness, invisibility, and adaptation, network flow watermarking methods are classified by the solutions they provide, depicting an explicit development roadmap for network flow watermarking. To facilitate practical applications, five open problems are discussed as critical challenges, providing references for future work on network flow watermarking.
{"title":"A survey on network flow watermarking: A problem-oriented perspective","authors":"Tengyao Li , Kaiyue Liu , Shaoyong Du","doi":"10.1016/j.cose.2025.104813","DOIUrl":"10.1016/j.cose.2025.104813","url":null,"abstract":"<div><div>Network flow watermarking is an active tracing approach that invisibly embeds source-node identity information into network flows. The embedded watermarks coexist with the original network traffic and are designed to be robust against network noise and active interference. In recent years, network flow watermarking has developed rapidly in response to challenges from practical applications (e.g., deanonymization, data leakage tracing, and malicious behavior monitoring) on the Internet. However, to the best of our knowledge, very few surveys cover network flow watermarking methods proposed after 2018, and no systematic survey spans the field's entire development. Moreover, existing surveys classify and analyze network flow watermarking by embedding pattern, overlooking how different methods relate to the critical problems of watermarking. To this end, this paper reviews and analyzes network flow watermarking papers from 2001 to 2025 from a problem-oriented perspective. A threat model and a theoretical framework are established to model the watermark embedding and detection procedures, offering a consistent model for watermarking design. Around three core problems, namely robustness, invisibility, and adaptation, network flow watermarking methods are classified by the solutions they provide, depicting an explicit development roadmap for network flow watermarking. To facilitate practical applications, five open problems are discussed as critical challenges, providing references for future work on network flow watermarking.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"163 ","pages":"Article 104813"},"PeriodicalIF":5.4,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145897840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
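To make the embedding/detection model concrete, interval-based timing manipulation is one classic family of flow watermarks: packet timestamps are partitioned into fixed intervals, and each interval's packets are delayed into the early or late half of the interval to encode one bit. The sketch below is a deliberately simplified, hypothetical Python illustration (the interval length, function names, and half-interval encoding are illustrative choices, not any specific surveyed scheme):

```python
INTERVAL = 0.5  # seconds per embedding interval (illustrative constant)

def embed(timestamps, bits):
    """Delay each packet into the early (bit 0) or late (bit 1) half of
    its interval; intervals cycle through the watermark bits."""
    out = []
    for t in timestamps:
        idx = int(t // INTERVAL)
        bit = bits[idx % len(bits)]
        base = idx * INTERVAL
        offset = t - base                          # position inside the interval
        half = offset / 2 + (INTERVAL / 2 if bit else 0.0)
        out.append(base + half)                    # stays inside the same interval
    return out

def detect(timestamps, n_bits):
    """Recover each bit by majority vote on which half of its intervals
    the packets fall into."""
    votes = [[0, 0] for _ in range(n_bits)]
    for t in timestamps:
        idx = int(t // INTERVAL)
        late = 1 if (t - idx * INTERVAL) >= INTERVAL / 2 else 0
        votes[idx % n_bits][late] += 1
    return [1 if late > early else 0 for early, late in votes]

bits = [1, 0, 1, 1]
ts = [i / 50 for i in range(400)]                  # synthetic 8-second flow
assert detect(embed(ts, bits), len(bits)) == bits  # watermark survives
```

Real schemes must additionally survive jitter, packet loss, and repacketization, which is exactly the robustness/invisibility tension the survey organizes its taxonomy around.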
Pub Date : 2025-12-30DOI: 10.1016/j.cose.2025.104823
Nathan Clarke , Steven Furnell
With technology increasingly embedded in everyday life, the demand for secure and usable authentication methods has never been greater. Traditional password-based systems continue to dominate, despite well-known usability and security challenges. This paper explores the evolution of user authentication technologies, from secret knowledge and tokens to biometrics and emerging approaches such as Passkeys. It critically evaluates the extent to which usability has been achieved, identifying both successes, such as biometrics integrated into smartphones, and persistent issues, including inconsistent guidance, ecosystem dependence, and accessibility barriers. Drawing on academic and commercial developments, the discussion highlights the growing burden on users who must authenticate across multiple devices and services daily. Future directions, including transparent, continuous, and user-choice-driven authentication, are discussed as potential solutions to mitigate this burden. Ultimately, it argues that while progress has been made, current solutions remain fragmented and often exclude key user groups. A more inclusive, consistent, and user-centred approach is essential to ensure authentication systems are both secure and truly usable in practice.
{"title":"Usable authentication: Are we there yet?","authors":"Nathan Clarke , Steven Furnell","doi":"10.1016/j.cose.2025.104823","DOIUrl":"10.1016/j.cose.2025.104823","url":null,"abstract":"<div><div>With technology increasingly embedded in everyday life, the demand for secure and usable authentication methods has never been greater. Traditional password-based systems continue to dominate, despite well-known usability and security challenges. This paper explores the evolution of user authentication technologies, from secret knowledge and tokens to biometrics and emerging approaches such as Passkeys. It critically evaluates the extent to which usability has been achieved, identifying both successes—such as biometrics integrated into smartphones—and persistent issues, including inconsistent guidance, ecosystem dependence, and accessibility barriers. Drawing on academic and commercial developments, the discussion highlights the growing burden on users who must authenticate across multiple devices and services daily. Future directions including transparent, continuous, and user-choice-driven authentication are discussed as potential solutions to mitigate this burden. Ultimately, it argues that while progress has been made, current solutions remain fragmented and often exclude key user groups. 
A more inclusive, consistent, and user-centred approach is essential to ensure authentication systems are both secure and truly usable in practice.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"162 ","pages":"Article 104823"},"PeriodicalIF":5.4,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145884441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}