Usable authentication: Are we there yet?
Pub Date: 2025-12-30 | DOI: 10.1016/j.cose.2025.104823 | Computers & Security, Vol. 162
Nathan Clarke, Steven Furnell
With technology increasingly embedded in everyday life, the demand for secure and usable authentication methods has never been greater. Traditional password-based systems continue to dominate, despite well-known usability and security challenges. This paper explores the evolution of user authentication technologies, from secret knowledge and tokens to biometrics and emerging approaches such as Passkeys. It critically evaluates the extent to which usability has been achieved, identifying both successes—such as biometrics integrated into smartphones—and persistent issues, including inconsistent guidance, ecosystem dependence, and accessibility barriers. Drawing on academic and commercial developments, the discussion highlights the growing burden on users who must authenticate across multiple devices and services daily. Future directions, including transparent, continuous, and user-choice-driven authentication, are discussed as potential solutions to mitigate this burden. Ultimately, the paper argues that while progress has been made, current solutions remain fragmented and often exclude key user groups. A more inclusive, consistent, and user-centred approach is essential to ensure authentication systems are both secure and truly usable in practice.
{"title":"Usable authentication: Are we there yet?","authors":"Nathan Clarke , Steven Furnell","doi":"10.1016/j.cose.2025.104823","DOIUrl":"10.1016/j.cose.2025.104823","url":null,"abstract":"<div><div>With technology increasingly embedded in everyday life, the demand for secure and usable authentication methods has never been greater. Traditional password-based systems continue to dominate, despite well-known usability and security challenges. This paper explores the evolution of user authentication technologies, from secret knowledge and tokens to biometrics and emerging approaches such as Passkeys. It critically evaluates the extent to which usability has been achieved, identifying both successes—such as biometrics integrated into smartphones—and persistent issues, including inconsistent guidance, ecosystem dependence, and accessibility barriers. Drawing on academic and commercial developments, the discussion highlights the growing burden on users who must authenticate across multiple devices and services daily. Future directions including transparent, continuous, and user-choice-driven authentication are discussed as potential solutions to mitigate this burden. Ultimately, it argues that while progress has been made, current solutions remain fragmented and often exclude key user groups. A more inclusive, consistent, and user-centred approach is essential to ensure authentication systems are both secure and truly usable in practice.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"162 ","pages":"Article 104823"},"PeriodicalIF":5.4,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145884441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Memory under siege: A comprehensive survey of side-channel attacks on memory
Pub Date: 2025-12-30 | DOI: 10.1016/j.cose.2025.104810 | Computers & Security, Vol. 163
Mahady Hassan, Shanto Roy, Reza Rahaeimehr
Side-channel attacks on memory (SCAM) exploit unintended data leaks from memory subsystems to infer sensitive information, posing significant threats to system security. These attacks leverage vulnerabilities in memory access patterns, cache behaviors, and other microarchitectural features to bypass traditional security measures. This research examines SCAM, classifies the various attack techniques, and evaluates existing defense mechanisms, guiding researchers and industry professionals in improving memory security and mitigating emerging threats. We begin by identifying the major memory-system vulnerabilities frequently exploited in SCAM, such as cache timing, speculative execution, Rowhammer, and other sophisticated approaches. Next, we outline a comprehensive taxonomy that systematically classifies these attacks by type, target system, attack vector, and the adversarial capabilities required to execute them. In addition, we review the current landscape of mitigation strategies, emphasizing their strengths and limitations. This work provides a comprehensive overview of memory-based side-channel attacks, offering significant insights for researchers and practitioners to better understand, detect, and mitigate SCAM risks.
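To make the taxonomy's four classification axes concrete, the sketch below represents a single survey entry as a small data structure. The axes (attack type, target system, attack vector, adversarial capability) come from the abstract; the enum values, field names, and the Flush+Reload characterization are illustrative assumptions, not the paper's actual category labels.

```python
# Sketch of the survey's taxonomy axes as a data structure; the concrete
# category values below are illustrative assumptions, not the paper's labels.
from dataclasses import dataclass
from enum import Enum

class AttackType(Enum):
    CACHE_TIMING = "cache timing"
    SPECULATIVE_EXECUTION = "speculative execution"
    ROWHAMMER = "Rowhammer"

class Capability(Enum):
    UNPRIVILEGED_LOCAL = "unprivileged local code execution"
    CO_RESIDENT_VM = "co-resident virtual machine"
    PHYSICAL = "physical access"

@dataclass
class ScamEntry:
    name: str
    attack_type: AttackType
    target: str        # e.g. "last-level cache", "DRAM", "branch predictor"
    vector: str        # how the leak is observed
    capability: Capability

# Classify one well-known attack under these assumed axes.
flush_reload = ScamEntry(
    name="Flush+Reload",
    attack_type=AttackType.CACHE_TIMING,
    target="last-level cache",
    vector="shared-memory access latency after clflush",
    capability=Capability.UNPRIVILEGED_LOCAL,
)

print(f"{flush_reload.name}: {flush_reload.attack_type.value} via {flush_reload.vector}")
```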
{"title":"Memory under siege: A comprehensive survey of side-Channel attacks on memory","authors":"Mahady Hassan , Shanto Roy , Reza Rahaeimehr","doi":"10.1016/j.cose.2025.104810","DOIUrl":"10.1016/j.cose.2025.104810","url":null,"abstract":"<div><div>Side-channel attacks on memory (SCAM) exploit unintended data leaks from memory subsystems to infer sensitive information, posing significant threats to system security. These attacks exploit vulnerabilities in memory access patterns, cache behaviors, and other microarchitectural features to bypass traditional security measures. The purpose of this research is to examine SCAM, classify various attack techniques, and evaluate existing defense mechanisms. It guides researchers and industry professionals in improving memory security and mitigating emerging threats. We begin by identifying the major vulnerabilities in the memory system that are frequently exploited in SCAM, such as cache timing, speculative execution, <em>Rowhammer</em>, and other sophisticated approaches. Next, we outline a comprehensive taxonomy that systematically classifies these attacks based on their types, target systems, attack vectors, and adversarial capabilities required to execute them. In addition, we review the current landscape of mitigation strategies, emphasizing their strengths and limitations. This work aims to provide a comprehensive overview of memory-based side-channel attacks with the goal of providing significant insights for researchers and practitioners to better understand, detect, and mitigate SCAM risks.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"163 ","pages":"Article 104810"},"PeriodicalIF":5.4,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145928905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GLRA: Graph-based leakage risk assessment via minimal transmission cost path analysis
Pub Date: 2025-12-29 | DOI: 10.1016/j.cose.2025.104816 | Computers & Security, Vol. 163
Xing Hu, Yang Zhang, Sheng Liu, Xiaowen Chen, Yaohua Wang, Shaoqing Li, Zhenyu Zhao, Keqin Li
As integrated circuits are increasingly deployed in security-critical applications, assessing the risk of information leakage introduced during the design phase has become a key challenge. Logic-level structures may inadvertently enable sensitive data to propagate to externally observable points, posing serious security risks. Although anomaly-based techniques such as taint tracking and machine learning have been developed to detect or mitigate leakage threats, the absence of a unified and quantitative metric for evaluating leakage risk remains a major limitation. Without such a metric, existing methods can neither effectively identify real threats nor compare the effectiveness of protection strategies in a principled manner, leading to limited reliability and comparability in hardware security analysis.
To overcome these challenges, we propose GLRA, a graph-based methodology for leakage risk assessment via minimal transmission cost path analysis. Departing from the traditional “path existence” criterion used in anomaly label-based taint tracking, GLRA quantifies leakage risk by evaluating the difficulty of information propagation. A central premise of GLRA is that the transmission cost, defined as the effort required to propagate signals from sensitive sources to observable outputs, is inversely correlated with leakage likelihood: lower costs imply higher risks. Accordingly, we define controllability-based transmission cost metrics for basic logic units such as AND, OR, NOT, and DFF, which quantify the propagation effort imposed by each unit. By modeling the circuit as an edge-weighted graph whose edges are annotated with these transmission costs, GLRA identifies the minimal-cost path from sensitive sources to potential leakage points. In addition, to accurately quantify leakage risk, GLRA establishes a formulaic correlation between the transmission cost and the design’s overall risk of information leakage. Experiments on cryptographic cores, debug infrastructure, and non-cryptographic logic demonstrate that GLRA accurately identifies maximum-risk leakage paths, achieving an 18.75% improvement in detection precision over traditional anomaly-based approaches. GLRA correctly determines the presence or absence of leakage risks across all 16 evaluated benchmarks. Furthermore, it supports comparative analysis of leakage mitigation strategies across diverse hardware designs, providing quantitative insights into the effectiveness of protection mechanisms.
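A minimal sketch of the core path-analysis step, under stated assumptions: the netlist is modeled as an edge-weighted digraph, the per-gate transmission costs are placeholder values (the paper derives controllability-based metrics), and the cost-to-risk mapping is an assumed inverse relation standing in for the paper's formulaic correlation. Dijkstra's algorithm then finds the minimal-cost path from a sensitive source to an observable point.

```python
# Sketch of minimal-transmission-cost path analysis over a toy netlist.
import heapq

# Assumed per-gate transmission costs (not the paper's calibrated metrics).
GATE_COST = {"AND": 2.0, "OR": 2.0, "NOT": 1.0, "DFF": 3.0}

def min_transmission_cost(edges, source, sink):
    """Dijkstra over (u, v, gate) edges; returns the minimal path cost."""
    graph = {}
    for u, v, gate in edges:
        graph.setdefault(u, []).append((v, GATE_COST[gate]))
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == sink:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return float("inf")

# Toy netlist: a key bit reaches a debug port through two gate paths.
edges = [("key0", "n1", "AND"), ("n1", "debug_out", "NOT"),
         ("key0", "n2", "DFF"), ("n2", "debug_out", "AND")]
cost = min_transmission_cost(edges, "key0", "debug_out")
risk = 1.0 / (1.0 + cost)  # assumed inverse cost-to-risk mapping
print(f"min cost = {cost}, risk score = {risk:.2f}")
```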
{"title":"GLRA: Graph-based leakage risk assessment via minimal transmission cost path analysis","authors":"Xing Hu , Yang Zhang , Sheng Liu , Xiaowen Chen , Yaohua Wang , Shaoqing Li , Zhenyu Zhao , Keqin Li","doi":"10.1016/j.cose.2025.104816","DOIUrl":"10.1016/j.cose.2025.104816","url":null,"abstract":"<div><div>As integrated circuits are increasingly deployed in security-critical applications, assessing the risk of information leakage introduced during the design phase has become a key challenge. Logic-level structures may inadvertently enable sensitive data to propagate to externally observable points, posing serious security risks. Although anomaly-based techniques such as taint tracking and machine learning have been developed to detect or mitigate leakage threats, the absence of a unified and quantitative metric for evaluating leakage risk remains a major limitation. Without such a metric, existing methods can neither effectively identify real threats nor compare the effectiveness of protection strategies in a principled manner, leading to limited reliability and comparability in hardware security analysis.</div><div>To overcome these challenges, we propose GLRA, a graph-based methodology for leakage risk assessment via minimal transmission cost path analysis. Departing from the traditional “path existence” criterion used in anomaly label-based taint tracking, GLRA quantifies leakage risk by evaluating the difficulty of information propagation. A central premise of GLRA is that the transmission cost-defined as the effort required to propagate signals from sensitive sources to observable outputs-is inversely correlated with leakage likelihood: lower costs imply higher risks. Accordingly, we define controllability-based transmission cost metrics for basic logical units such as AND, OR, NOT, and DFF, which quantify the propagation effort imposed by each logic unit. By modeling the circuit as an edge-weighted graph where edges are annotated with the aforementioned transmission cost values, GLRA identifies the minimal path from sensitive sources to potential leakage points. In addition, to accurately quantify the risk of leakage, GLRA establishes a formulaic correlation between the transmission cost and the design’s overall risk of information leakage. Experiments on cryptographic cores, debug infrastructure, and non-cryptographic logic demonstrate that GLRA accurately quantifies maximum-risk leakage paths, achieving a 18.75% improvement in detection precision over traditional anomaly-based approaches. GLRA correctly determines the presence or absence of leakage risks across all 16 evaluated benchmarks. Furthermore, it supports comparative analysis of leakage mitigation strategies across diverse hardware designs, providing quantitative insights into the effectiveness of protection mechanisms.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"163 ","pages":"Article 104816"},"PeriodicalIF":5.4,"publicationDate":"2025-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145928815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cybersecurity and cyber insurance for Small to Medium-sized Enterprises (SMEs): Perceptions, challenges and decision-making dynamics
Pub Date: 2025-12-29 | DOI: 10.1016/j.cose.2025.104818 | Computers & Security, Vol. 163
Rodney Adriko, Jason R.C. Nurse
Cyber insurance is increasingly positioned as a complementary tool for managing cyber risk, yet Small to Medium-Sized Enterprises (SMEs) remain underrepresented in its adoption. This study investigates the perceptions, decision-making dynamics, and support needs of SMEs regarding cyber insurance, drawing on 38 semi-structured interviews with SMEs, insurers, brokers, and other relevant stakeholders. The findings reveal that many SMEs deprioritise cyber insurance, not because they dismiss its importance outright, but due to a combination of limited awareness, concerns over cost, and a perception that its value is minimal unless required by clients or regulators. This hesitation is further shaped by several key barriers: complex policy language, a lack of trust in insurers, and unclear internal ownership of cybersecurity responsibilities. Despite these challenges, the study identifies promising strategies to boost adoption, including simplifying policy structures, fostering trust through collaborative awareness efforts, introducing financial incentives tailored to SME budgets, and offering accessible, user-friendly tools that help businesses assess their cyber risks and insurance needs. By identifying actionable strategies and addressing both cultural and structural barriers, this study contributes to efforts to enhance cybersecurity resilience in the SME sector.
{"title":"Cybersecurity and Cyber insurance for Small to Medium-sized Enterprises (SMEs): Perceptions, challenges and decision-making dynamics","authors":"Rodney Adriko, Jason R.C. Nurse","doi":"10.1016/j.cose.2025.104818","DOIUrl":"10.1016/j.cose.2025.104818","url":null,"abstract":"<div><div>Cyber insurance is increasingly positioned as a complementary tool for managing cyber risk, yet Small to Medium-Sized Enterprises (SMEs) remain underrepresented in its adoption. This study investigates the perceptions, decision-making dynamics, and support needs of SMEs regarding cyber insurance, drawing on 38 semi-structured interviews with SMEs, insurers, brokers, and other relevant stakeholders. The findings reveal that many SMEs deprioritise cyber insurance; not because they dismiss its importance outright, but due to a combination of limited awareness, concerns over cost, and a perception that its value is minimal unless required by clients or regulators. This hesitation is further shaped by several key barriers: complex policy language, a lack of trust in insurers, and unclear internal ownership of cybersecurity responsibilities. Despite these challenges, the study identifies promising strategies to boost adoption. These include simplifying policy structures, fostering trust through collaborative awareness efforts, introducing financial incentives tailored to SME budgets, and offering accessible, user-friendly tools that help businesses assess their cyber risks and insurance needs. By identifying actionable strategies and addressing both cultural and structural barriers, this study contributes to efforts to enhance cybersecurity resilience in the SME sector.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"163 ","pages":"Article 104818"},"PeriodicalIF":5.4,"publicationDate":"2025-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145928818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Uncovering hidden threats: A format-driven approach to DSP instruction set vulnerability discovery
Pub Date: 2025-12-24 | DOI: 10.1016/j.cose.2025.104811 | Computers & Security, Vol. 162
Yongzhen Luo, Zhongkai Huang, Wenhui Duan, Liwei Wang, Bo Hou, Chenbing Qu, Chen Sun, Ziyang Wang
With the widespread application of Digital Signal Processors (DSPs) in critical areas, hidden instructions have become a significant threat to system security. Maliciously exploiting these instructions may lead to information leaks, data tampering, or system crashes. This paper proposes an efficient search method based on the instruction format to address the security issue of DSP hidden instructions. By establishing an instruction database, analyzing the instruction format, designing an efficient instruction generation strategy, and applying precise disassembly techniques, the method significantly reduces the instruction search space and effectively identifies hidden instructions. Experiments conducted on TI's DSP processors TMS320C6678 and TMS320F28335 show that the method successfully identifies hidden instructions, demonstrating its effectiveness and practicality. The test results indicate that hidden instructions can lead to unexpected modifications of microprocessor registers or memory data, system resets, or even system crashes, exposing potential security risks in the DSP instruction set. The findings offer an efficient search approach for hidden instructions and demonstrate the critical need for comprehensive security evaluation of DSP instruction sets in safety-critical applications.
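The following sketch illustrates the format-driven enumeration idea, not the paper's tooling: fix a format's opcode field, sweep the remaining operand bits, and flag encodings that a reference disassembler rejects but the device still executes. `disassemble` and `executes_on_device` are hypothetical stand-ins for the paper's precise disassembly step and on-target test harness.

```python
# Sketch of format-driven candidate generation for hidden-instruction search.
def candidates(opcode_bits, opcode_value, width=32):
    """Yield encodings with the opcode field fixed and operand bits swept."""
    free_bits = width - opcode_bits
    base = opcode_value << free_bits
    # Bounded sweep for the sketch; the paper's generation strategy is
    # format-aware and prunes the space far more aggressively.
    for operand in range(1 << min(free_bits, 16)):
        yield base | operand

def disassemble(word):
    """Hypothetical: returns None when the encoding is undocumented."""
    return None

def executes_on_device(word):
    """Hypothetical: True if the DSP executes the word without faulting."""
    return word & 0x1 == 0

hidden = [w for w in candidates(opcode_bits=8, opcode_value=0xA5)
          if disassemble(w) is None and executes_on_device(w)]
print(f"{len(hidden)} candidate hidden instructions found")
```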
{"title":"Uncovering hidden threats: A format-driven approach to dsp instruction set vulnerability discovery","authors":"Yongzhen Luo, Zhongkai Huang, Wenhui Duan, Liwei Wang, Bo Hou, Chenbing Qu, Chen Sun, Ziyang Wang","doi":"10.1016/j.cose.2025.104811","DOIUrl":"10.1016/j.cose.2025.104811","url":null,"abstract":"<div><div>With the widespread application of Digital Signal Processors (DSPs) in critical areas, hidden instructions have become a significant threat to system security. Maliciously exploiting these instructions may lead to information leaks, data tampering, or system crashes. This paper proposed an efficient search method based on the instruction format to address the security issue of DSP hidden instructions. By establishing an instruction database, this method analyzes the instruction format, designs an efficient instruction generation strategy, and applies precise disassembly techniques, significantly reducing the instruction search space and effectively identifying hidden instructions. Experiments conducted on TI's DSP processors TMS320C6678 and TMS320F28335 have shown that this method successfully identified hidden instructions, demonstrating its effectiveness and practicality. The test results indicate that hidden instructions could lead to unexpected modifications of microprocessor registers or memory data, system resets, or even system crashes, exposing potential security risks in the DSP instruction set. The findings of this study offer an efficient search approach for hidden instructions and demonstrate the critical need for comprehensive security evaluation of DSP instruction sets in safety-critical applications.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"162 ","pages":"Article 104811"},"PeriodicalIF":5.4,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145884442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A bag of words model for efficient discovery of roles in access control systems
Pub Date: 2025-12-23 | DOI: 10.1016/j.cose.2025.104808 | Computers & Security, Vol. 162
Carlo Blundo, Stelvio Cimato
The popularity of the Role-based Access Control (RBAC) model stems from its flexibility and adaptability to different contexts, which ease the enforcement and management of security policies. In some cases, different kinds of (cardinality) constraints are introduced to adjust roles and their assignments so as to best represent the organization’s security policy.
However, role mining, whether based on an organizational scenario or on existing permission assignments, is a hard task: the underlying problem is NP-hard, and when policies are updated frequently, dynamically adapting the roles can be challenging. In practice, the only way to produce an RBAC model compliant with the security policy is to resort to heuristics, which may return an approximation of the optimal solution.
In this paper, we propose an innovative approach to exploring the solution space based on the bag-of-words value, a representation commonly deployed in document representation and knowledge extraction. We propose different heuristics and validate our approach by reporting the results of its application to standard datasets, providing an evaluation under different metrics and indicators. We show that our technique returns improved results and provides an alternative way to produce valid solutions for constrained RBAC.
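A minimal sketch of the bag-of-words intuition, assuming a user-permission assignment as input: each user's permission set is treated as a "document" over the permission vocabulary, and users with identical bags become candidates for sharing a role. The grouping rule is deliberately naive; the paper's heuristics and constraint handling refine this considerably.

```python
# Sketch: group users by their permission "bag" to seed candidate roles.
from collections import defaultdict

# Invented user-permission assignment for illustration.
user_perms = {
    "alice": {"read_hr", "write_hr"},
    "bob":   {"read_hr", "write_hr"},
    "carol": {"read_fin", "approve_fin"},
    "dave":  {"read_fin", "approve_fin"},
    "erin":  {"read_hr", "read_fin"},
}

# The frozenset of permissions acts as the bag-of-words key.
roles = defaultdict(list)
for user, perms in user_perms.items():
    roles[frozenset(perms)].append(user)

for i, (perms, users) in enumerate(roles.items()):
    print(f"candidate role {i}: perms={sorted(perms)} users={users}")
```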
{"title":"A bag of words model for efficient discovery of roles in access control systems","authors":"Carlo Blundo , Stelvio Cimato","doi":"10.1016/j.cose.2025.104808","DOIUrl":"10.1016/j.cose.2025.104808","url":null,"abstract":"<div><div>The popularity of the Role-based Access Control (RBAC) model is determined by its flexibility and its adaptability in different contexts, easing the enforcement and the management of security policy. In some cases, different kinds of (cardinality) constraints are considered to adjust and adapt roles and their assignment to best represent the organization’s security policy.</div><div>However, the process of role mining, whether based on an organizational scenario or on existing permission assignments, is a hard task, since the problem shows NP-hard computational complexity and in case of frequent policy updates, the dynamic adaptation of the roles can be challenging. Then, the only possibility of producing an RBAC model compliant with the security policy is to resort to heuristics, which may return an approximation of the optimal solution.</div><div>In this paper, we propose an innovative approach to explore the space of the solution based on the bag of word value, which is commonly deployed in the field of document representation and knowledge extraction. We propose different heuristics and validate our approach reporting the results of the application to standard datasets, and providing an evaluation under different metrics and indicators. We show that our technique returns improved results and provides an alternative way to produce valid solutions for constrained RBAC.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"162 ","pages":"Article 104808"},"PeriodicalIF":5.4,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145884443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BGF-DR: Bidirectional greybox fuzzing for DNS resolver vulnerability discovery
Pub Date: 2025-12-23 | DOI: 10.1016/j.cose.2025.104809 | Computers & Security, Vol. 162
Jie Ying, Jun Li, Ruoxi Chen, Hongxin Su, Tiantian Zhu
The Domain Name System (DNS) is a vital infrastructure component of the Internet, and DNS resolvers constitute its core element: they mediate between DNS clients and DNS nameservers, acting as caches. However, existing tools face significant limitations in effectively identifying resolver vulnerabilities, presenting three primary challenges. First, DNS resolver implementations are complex and stateful, resulting in a huge input space. Second, DNS resolver vulnerabilities typically manifest as semantic bugs leading to erroneous responses, making them difficult to detect through conventional oracle-based validation. Finally, most DNS resolver vulnerabilities only become apparent under bidirectional information sequences. This paper presents BGF-DR, a bidirectional greybox fuzzing system that addresses these challenges to achieve efficient vulnerability discovery for DNS resolvers. First, BGF-DR leverages both branch coverage and state coverage information to explore the DNS resolver input space more rapidly and comprehensively. Second, BGF-DR employs differential testing and heuristic rules to identify test cases that trigger vulnerabilities. Finally, BGF-DR performs mutation-based case generation on both client queries and nameserver responses to enhance the efficiency of vulnerability discovery. We evaluated BGF-DR on four DNS resolvers and identified six vulnerabilities that could lead to cache poisoning, resource consumption, and crash attacks.
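A minimal sketch of the differential-testing oracle idea (not BGF-DR's implementation): issue the same query to two resolver implementations and flag divergent answers as candidate semantic bugs. It uses the dnspython library; the resolver addresses are placeholders for locally instrumented test instances.

```python
# Sketch of a differential-testing oracle across two DNS resolvers.
import dns.exception
import dns.resolver

def answers(ns_ip, name, rdtype="A"):
    """Query one resolver and return its sorted answer set (or the error)."""
    res = dns.resolver.Resolver(configure=False)
    res.nameservers = [ns_ip]
    try:
        return sorted(r.to_text() for r in res.resolve(name, rdtype))
    except dns.exception.DNSException as exc:
        return [f"<error: {type(exc).__name__}>"]

def differential_check(name, resolver_a="127.0.0.1", resolver_b="127.0.0.2"):
    """Divergent answers between implementations flag a candidate bug."""
    a, b = answers(resolver_a, name), answers(resolver_b, name)
    if a != b:
        print(f"divergence on {name}: {a} vs {b} -> candidate semantic bug")
    return a == b

differential_check("example.com")
```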
{"title":"BGF-DR: bidirectional greybox fuzzing for DNS resolver vulnerability discovery","authors":"Jie Ying , Jun Li , Ruoxi Chen , Hongxin Su , Tiantian Zhu","doi":"10.1016/j.cose.2025.104809","DOIUrl":"10.1016/j.cose.2025.104809","url":null,"abstract":"<div><div>The Domain Name System (DNS) represents a vital infrastructure component of the Internet, within which DNS resolvers constitute the core element of this system. Specifically, DNS resolvers mediate between DNS clients and DNS nameservers as the cache. However, existing tools face significant limitations in effectively identifying resolver vulnerabilities, presenting three primary challenges. First, DNS resolver implementations are complex and stateful, resulting in huge input space. Second, DNS resolver vulnerabilities typically manifest as semantic bugs leading to erroneous responses, making them difficult to detect through conventional oracle-based validation. Finally, most DNS resolver vulnerabilities only become apparent under bidirectional information sequences. This paper presents <span>BGF-DR</span>, a bidirectional greybox fuzzing system that addresses the aforementioned challenges to achieve efficient vulnerability discovery for DNS resolvers. First, <span>BGF-DR</span> leverages both branch coverage and state coverage information to explore the DNS resolver input space more rapidly and comprehensively. Second, <span>BGF-DR</span> employs differential testing and heuristic rules to identify test cases that trigger vulnerabilities. Finally, <span>BGF-DR</span> performs mutation-based case generation on both client-query and nameserver-response to enhance the efficiency of vulnerability discovery. We evaluated <span>BGF-DR</span> on 4 DNS resolvers and identified 6 vulnerabilities that could lead to cache poisoning, resource consumption, and crash attacks.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"162 ","pages":"Article 104809"},"PeriodicalIF":5.4,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145884444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A large-scale measurement study of region-based web access restrictions: The case of China
Pub Date: 2025-12-17 | DOI: 10.1016/j.cose.2025.104807 | Computers & Security, Vol. 162
Yuying Du, Jiahao Cao, Junrui Xu, YangYang Wang, Renjie Xie, Jiang Li, Changliyun Liu, Mingwei Xu
The rise of the Splinternet is reshaping the global digital landscape by fragmenting the Internet along political, commercial, and technological lines. Geoblocking, a practice in which access to content is restricted based on geographic location, exemplifies this trend. Despite existing studies on geoblocking in specific contexts, such as Russia and Cuba, geoblocking policies targeting users in China have not been systematically studied. To bridge this gap, we present GeoWatch and conduct the first large-scale measurement study of geoblocking practices towards China. GeoWatch employs advanced domain mining techniques and globally distributed vantage points to identify websites that enforce geoblocking. We test 97.78 million domains, the largest domain set ever used in geoblocking research. Our comprehensive analysis reveals widespread geoblocking towards China, identifying 4.54 million geoblocking domains across 196 countries and regions. These findings highlight the real-world factors influencing geoblocking practices and offer valuable insights into its scope and impact, with a particular focus on China as a case study.
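A minimal sketch of a single geoblocking probe under a simplifying assumption: the blocking signal is an HTTP status code (e.g., 403 or 451) that differs between vantage points. GeoWatch's actual pipeline relies on many distributed vantage points and richer block-page fingerprinting.

```python
# Sketch: probe one domain from the local vantage point for a status-code
# blocking signal; a real study compares results across regions.
import urllib.error
import urllib.request

BLOCK_STATUSES = {403, 451}  # 451: "Unavailable For Legal Reasons"

def probe(domain):
    try:
        with urllib.request.urlopen(f"https://{domain}/", timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code
    except (urllib.error.URLError, OSError):
        return None  # unreachable: cannot distinguish blocking from outage

status = probe("example.com")
if status in BLOCK_STATUSES:
    print(f"possible geoblocking signal: HTTP {status}")
else:
    print(f"status from this vantage point: {status}")
```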
{"title":"A large-scale measurement study of region-based web access restrictions: The case of China","authors":"Yuying Du , Jiahao Cao , Junrui Xu , YangYang Wang , Renjie Xie , Jiang Li , Changliyun Liu , Mingwei Xu","doi":"10.1016/j.cose.2025.104807","DOIUrl":"10.1016/j.cose.2025.104807","url":null,"abstract":"<div><div>The rise of the Splinternet is reshaping the global digital landscape by fragmenting the Internet along political, commercial, and technological lines. Geoblocking, a practice where access to content is restricted based on geographic location, exemplifies this trend. Despite existing studies on geoblocking in specific contexts, such as Russia and Cuba, a systematic understanding of geoblocking policies targeting users in China has not been sufficiently explored. To bridge this gap, we present GeoWatch to conduct the first large-scale measurement study of geoblocking practices towards China. It employs advanced domain mining techniques and globally distributed vantage points to identify geoblocking websites. We test 97.78 million domains, which represents the largest domain set ever used in geoblocking research. Our comprehensive analysis reveals widespread geoblocking towards China, identifying 4.54 million geoblocking domains across 196 countries and regions. These findings highlight the real-world factors influencing geoblocking practices and offer valuable insights into its scope and impact, with a particular focus on China as a case study.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"162 ","pages":"Article 104807"},"PeriodicalIF":5.4,"publicationDate":"2025-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145840459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Characterizing tactics, techniques, and procedures in the macOS threat landscape
Pub Date: 2025-12-14 | DOI: 10.1016/j.cose.2025.104806 | Computers & Security, Vol. 162
Daniel Lastanao Miró, Javier Carrillo-Mondéjar, Ricardo J. Rodríguez
As macOS systems increasingly become malware targets, understanding the tactics, techniques, and procedures (TTPs) used by adversaries is essential to improving defense strategies. This paper provides a systematic and detailed analysis of macOS malware using the MITRE ATT&CK framework, focusing on TTPs at key stages of the malware attack cycle. Leveraging a comprehensive dataset of 57,636 macOS malware samples collected between November 2006 and October 2024, we employ both static and dynamic analysis techniques to uncover patterns in adversary behavior. Our analysis, primarily static, offers a broad representation of macOS malware and highlights common characteristics across samples. While we explore dynamic behaviors only partially, we identify recurring patterns that align with specific TTPs in the MITRE ATT&CK framework, such as persistence and defense evasion. This mapping contributes to a more structured understanding of macOS threats and can help inform future detection and mitigation efforts.
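To illustrate the tallying step behind such a characterization, the sketch below counts ATT&CK technique prevalence across a corpus, assuming each sample has already been mapped to technique IDs (that mapping is the paper's analytical contribution); the per-sample data here is invented.

```python
# Sketch: technique-prevalence tally over per-sample ATT&CK mappings.
from collections import Counter

# Invented sample-to-technique mappings using real ATT&CK IDs.
sample_ttps = {
    "sample_a": ["T1543.001", "T1562.001"],  # Launch Agent; Disable or Modify Tools
    "sample_b": ["T1543.001", "T1059.002"],  # Launch Agent; AppleScript
    "sample_c": ["T1543.004", "T1562.001"],  # Launch Daemon; Disable or Modify Tools
}

counts = Counter(t for ttps in sample_ttps.values() for t in ttps)
for technique, n in counts.most_common():
    print(f"{technique}: seen in {n}/{len(sample_ttps)} samples")
```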
{"title":"Characterizing tactics, techniques, and procedures in the macOS threat landscape","authors":"Daniel Lastanao Miró , Javier Carrillo-Mondéjar , Ricarddo J. Rodríguez","doi":"10.1016/j.cose.2025.104806","DOIUrl":"10.1016/j.cose.2025.104806","url":null,"abstract":"<div><div>As macOS systems increasingly become malware targets, understanding the tactics, techniques, and procedures (TTPs) used by adversaries is essential to improving defense strategies. This paper provides a systematic and detailed analysis of macOS malware using the MITRE ATT&CK framework, focusing on TTPs at key stages of the malware attack cycle. Leveraging a comprehensive dataset of 57,636 macOS malware samples collected between November 2006 and October 2024, we employ both static and dynamic analysis techniques to uncover patterns in adversary behavior. Our analysis, primarily based on static analysis techniques, offers a broad representation of macOS malware and highlights common characteristics across samples. While we only partially explore dynamic behaviors, we identify recurring patterns that align with specific TTPs in the MITRE ATT&CK framework, such as persistence and defense evasion. This mapping contributes to a more structured understanding of macOS threats and can help inform future detection and mitigation efforts.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"162 ","pages":"Article 104806"},"PeriodicalIF":5.4,"publicationDate":"2025-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145840460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A scenario-driven dynamic assessment model for data credibility
Pub Date: 2025-12-14 | DOI: 10.1016/j.cose.2025.104805 | Computers & Security, Vol. 162
Zechen Li, Guozhen Shi, Kai Chen
With the rapid development of information technology, data has become the core element driving decision-making, and the explosive growth of massive data confronts data governance with new challenges. The diversity of data sources and the dynamic complexity of application scenarios lead to uneven data quality, creating an urgent practical need for accurate and efficient data credibility assessment methods. Existing research is mostly limited to a single domain, which fragments assessment standards and makes them difficult to adapt to multiple scenarios. To address these problems, this study proposes a dynamic data credibility assessment paradigm with universal applicability. First, we use UML modeling to construct a four-layer data credibility assessment index system based on national standards and domain guidelines, which realizes a quantifiable decomposition from the target layer to the index layer and ensures cross-scenario compatibility and extensibility of the assessment framework. Second, we propose a scenario-driven dynamic fuzzy assessment model consisting of a scene adaptation layer, an index optimization layer, a dynamic weight allocation layer, and a comprehensive assessment layer. Key assessment indexes are screened through scene feature analysis and an improved analytic hierarchy process; subjective and objective weights are combined with a modification model to achieve a dynamic balance of the weights; and a fuzzy comprehensive evaluation model is introduced to handle uncertainties in the assessment process, finally yielding a comprehensive data credibility grade. Finally, we apply the framework to a vehicle forensics scenario for case analysis and evaluate the method’s accuracy using both simulated and real-world data. The results demonstrate its effectiveness in complex scenarios.
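A minimal sketch of the fuzzy comprehensive evaluation step, with invented numbers: combined subjective/objective weights for three indexes are composed with a fuzzy membership matrix (rows: indexes; columns: credibility grades), and the grade with the largest aggregate membership is selected. The paper's index system, weighting scheme, and modification model are substantially richer.

```python
# Sketch: weighted fuzzy composition for a credibility grade.
import numpy as np

grades = ["high", "medium", "low"]
weights = np.array([0.5, 0.3, 0.2])  # assumed combined index weights

# membership[i][j]: degree to which index i supports credibility grade j.
membership = np.array([
    [0.7, 0.2, 0.1],   # e.g. source reliability (invented index)
    [0.4, 0.5, 0.1],   # e.g. timeliness (invented index)
    [0.6, 0.3, 0.1],   # e.g. completeness (invented index)
])

scores = weights @ membership  # weighted fuzzy composition
print(dict(zip(grades, scores.round(3))))
print("credibility grade:", grades[int(np.argmax(scores))])
```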
{"title":"A scenario-driven dynamic assessment model for data credibility","authors":"Zechen Li , Guozhen Shi , Kai Chen","doi":"10.1016/j.cose.2025.104805","DOIUrl":"10.1016/j.cose.2025.104805","url":null,"abstract":"<div><div>With the rapid development of information technology, data has become the core element driving decision-making, and the explosive growth of massive data makes data governance face new challenges. The diversity of data sources and the dynamic complexity of application scenarios lead to uneven data quality, so there is an urgent practical need to construct accurate and efficient data credibility assessment methods. Existing researches are mostly limited to a single domain, which leads to fragmentation of assessment standards and makes it difficult to adapt to the needs of multiple scenarios. To address the above problems, this study proposes a dynamic data credibility assessment paradigm with universal applicability. Specifically, firstly, we construct a four-layer data credibility assessment index system based on national standards and domain guidelines through UML modeling technology, which realizes quantifiable disassembly from the target layer to the index layer and ensures cross-scenario compatibility and scalability of the assessment framework. Second, a scenario-driven dynamic fuzzy assessment model is proposed, which consists of a scene adaptation layer, an index optimization layer, a weight dynamic allocation layer and a comprehensive assessment layer. The key assessment indexes are screened by the scene feature analysis and the improved analytical hierarchy process, and the combination of the subjective and objective weights and the modification model are combined to achieve a dynamic balance of the weights, and a fuzzy comprehensive evaluation model is introduced to deal with uncertainties in the assessment process, and finally get the comprehensive assessment grade of data credibility. Finally, this study applies the framework to a vehicle forensics scenario for case analysis and evaluates the method’s accuracy using both simulated and real-world data. The results demonstrate its effectiveness in complex scenarios.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"162 ","pages":"Article 104805"},"PeriodicalIF":5.4,"publicationDate":"2025-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145791416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}