Intrusion Detection Systems (IDSs) are essential for identifying and mitigating cyber threats in modern network infrastructures. Although prior work has extensively explored the optimal placement of IDS sensors across networks, optimizing the deployment of detection rules across multiple IDS instances remains a mostly underexplored area. This paper addresses rule deployment by formulating it as a set covering problem with capacity constraints. We seek to minimize the number of rule deployments required to detect potential exploits of all known vulnerabilities while ensuring that no IDS exceeds its inspection capacity. Our model considers the statistical properties of network traffic, enabling the system to account for load surges and reduce the number of packets not inspected by an IDS under high-traffic conditions, such as during Distributed Denial-of-Service attacks. To solve the optimization problem, we introduce a backtracking algorithm enhanced with a priority queue, which efficiently balances rule coverage and capacity constraints. We validate our approach using the CSE-CIC-IDS2017 dataset and a simulated multi-IDS environment. Experimental results demonstrate that our method significantly reduces the number of uninspected packets, while maximizing vulnerability coverage, and outperforms typical rule deployment strategies. This work highlights the critical role of intelligent rule placement in enhancing IDS performance and paves the way for future adaptive and scalable detection systems.
{"title":"Optimizing IDS rule placement via set covering with capacity constraints","authors":"Arka Ghosh , Domenico Ditale , Massimiliano Albanese , Preetam Mukherjee","doi":"10.1016/j.cose.2025.104748","DOIUrl":"10.1016/j.cose.2025.104748","url":null,"abstract":"<div><div>Intrusion Detection Systems (IDSs) are essential for identifying and mitigating cyber threats in modern network infrastructures. Although prior work has extensively explored the optimal placement of IDS sensors across networks, optimizing the deployment of detection rules across multiple IDS instances remains a mostly underexplored area. This paper addresses rule deployment by formulating it as a set covering problem with capacity constraints. We seek to minimize the number of rule deployments required to detect potential exploits of all known vulnerabilities while ensuring that no IDS exceeds its inspection capacity. Our model considers the statistical properties of network traffic, enabling the system to account for load surges and reduce the number of packets not inspected by an IDS under high-traffic conditions, such as during Distributed Denial-of-Service attacks. To solve the optimization problem, we introduce a backtracking algorithm enhanced with a priority queue, which efficiently balances rule coverage and capacity constraints. We validate our approach using the CSE-CIC-IDS2017 dataset and a simulated multi-IDS environment. Experimental results demonstrate that our method significantly reduces the number of uninspected packets, while maximizing vulnerability coverage, and outperforms typical rule deployment strategies. 
This work highlights the critical role of intelligent rule placement in enhancing IDS performance and paves the way for future adaptive and scalable detection systems.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"161 ","pages":"Article 104748"},"PeriodicalIF":5.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145579994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
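The capacitated set-covering formulation described in the abstract above can be sketched in a few lines. This greedy toy is not the authors' backtracking-with-priority-queue algorithm; the rule costs, coverage sets, and IDS capacities are hypothetical placeholders chosen only to illustrate the problem structure.

```python
# Greedy sketch of capacitated set covering: choose (rule, IDS) deployments
# so that every vulnerability is covered by some deployed rule and no IDS
# exceeds its inspection capacity. Illustrative only.

def deploy_rules(rules, capacities):
    """rules: {rule_id: (cost, {vulnerabilities covered})}.
    capacities: {ids_id: capacity}. Returns [(rule_id, ids_id), ...]."""
    uncovered = set().union(*(vulns for _, vulns in rules.values()))
    load = {ids: 0 for ids in capacities}
    plan = []
    while uncovered:
        # Pick the rule covering the most uncovered vulnerabilities per unit cost.
        best = max(rules, key=lambda r: len(rules[r][1] & uncovered) / rules[r][0])
        cost, vulns = rules[best]
        if not (vulns & uncovered):
            raise ValueError("remaining vulnerabilities cannot be covered")
        # Place it on the least-loaded IDS that still has spare capacity.
        feasible = [i for i in capacities if load[i] + cost <= capacities[i]]
        if not feasible:
            raise ValueError("no IDS has spare capacity for this rule")
        target = min(feasible, key=lambda i: load[i])
        load[target] += cost
        plan.append((best, target))
        uncovered -= vulns
    return plan
```

A greedy pass like this yields a feasible but not necessarily minimal deployment, which is one reason an exact search such as backtracking is attractive for the minimization objective.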
Pub Date: 2026-02-01. Epub Date: 2025-11-12. DOI: 10.1016/j.cose.2025.104754
Yuhao Zhou, Peng Jia, Jiayong Liu, Ximing Fan
The programming security of Compute Unified Device Architecture (CUDA), NVIDIA’s parallel computing platform and programming model for Graphics Processing Units (GPUs), has always been a significant concern. On the host side, fuzzing has been remarkably successful at uncovering software bugs and vulnerabilities, with hundreds of flaws discovered annually through different fuzzing tools. However, existing fuzzing tools typically target general-purpose CPU architectures and embedded systems. As an independent processing unit, the GPU does not support tools such as American Fuzzy Lop for collecting instrumentation and code-coverage information. Consequently, grey-box fuzzing for closed-source graphics and driver libraries has remained an unaddressed challenge. This research introduces Fuzz4Cuda, a CUDA-focused GPU fuzzing framework specifically designed for GPU libraries. Fuzz4Cuda collects device-side coverage through runtime analysis of CUDA Streaming Assembler code. Furthermore, the framework dynamically adjusts the number of breakpoints to optimize test-case execution speed, thereby reducing the overall time to discover crash-inducing inputs. Fuzz4Cuda advances GPU library fuzzing, aiming to improve the security of the GPU programming environment.
{"title":"Fuzz4Cuda: Fuzzing your NVIDIA GPU libraries through debug interface","authors":"Yuhao Zhou, Peng Jia, Jiayong Liu, Ximing Fan","doi":"10.1016/j.cose.2025.104754","DOIUrl":"10.1016/j.cose.2025.104754","url":null,"abstract":"<div><div>The programming security of Compute Unified Device Architecture (CUDA), NVIDIA’s parallel computing platform and programming model for Graphics Processing Unit, has always been a significant concern. On the host-side, fuzzing has been remarkably successful at uncovering various software bugs and vulnerabilities, with hundreds of flaws discovered annually through different fuzzing tools. However, existing fuzzing tools typically operate on general-purpose CPU architectures and embedded systems. As an independent processing unit, the GPU does not support tools like American Fuzzy Lop for collecting instrumentation and code coverage information. Consequently, grey-box fuzzing for closed-source graphics and driver libraries has remained an unaddressed challenge. This research introduces Fuzz4Cuda, CUDA-focused GPU fuzzing framework specifically designed for GPU libraries. To enhance device-side coverage collection, Fuzz4Cuda achieved this by runtime analysis of CUDA Streaming Assembler. Furthermore, the framework could dynamically adjust the number of breakpoints to optimize test case execution speed, thereby accelerating the overall time to discover program crash inputs. The development of Fuzz4Cuda has moved GPU library fuzzing ahead, aiming to improve the security of the GPU programming environment. 
Over a month-long real-world fuzzing campaign aimed at vulnerability discovery, our evaluation of the CUDA Toolkit uncovered five real-world bugs, four of which have been assigned Common Vulnerabilities and Exposures (CVE) IDs.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"161 ","pages":"Article 104754"},"PeriodicalIF":5.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145579995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
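The breakpoint-adjustment idea in the abstract above trades coverage signal against execution speed: more active breakpoints give richer feedback but slow each run. The toy controller below only illustrates that trade-off; the thresholds, budget bounds, and the very notion of a scalar "budget" are invented for illustration, and the real framework operates on CUDA SASS through the debugger interface, which is not modeled here.

```python
# Toy feedback controller for a breakpoint budget: shrink the budget when
# executions run too slow, grow it when there is slack, clamped to [lo, hi].
# All numbers are hypothetical tuning constants.

def adjust_budget(budget, exec_ms, target_ms, lo=8, hi=512):
    """Return the new breakpoint budget given the last execution time."""
    if exec_ms > 1.5 * target_ms:
        budget = max(lo, budget // 2)       # too slow: halve the budget
    elif exec_ms < 0.75 * target_ms:
        budget = min(hi, budget + budget // 4)  # slack: grow by 25%
    return budget
```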
Pub Date: 2026-02-01. Epub Date: 2025-11-05. DOI: 10.1016/j.cose.2025.104752
Faheem Ahmed Shaikh, Damien Joseph, Eugene Kang
Public announcements of data breaches often lead to short-lived negative stock price reactions, raising questions about firms’ incentives for sustained cybersecurity improvements. This study applies legitimacy theory to examine how investor perceptions of a firm’s security practices—termed information security legitimacy—shape firm-specific risk after such announcements. Analyzing media sentiment following 485 U.S. data breach announcements, we find that firms with stronger information security legitimacy experience significantly lower firm-specific risk over six months. Additionally, shorter delays in public breach announcements strengthen this risk reduction. By linking data breach announcements with post-breach management, this study offers a unified framework showing how proactive security actions and timely communication mitigate long-term financial risk. These findings provide actionable guidance for security managers to prioritize rapid disclosure and strategic legitimacy management, advancing theory on stakeholder perceptions in cybersecurity.
{"title":"Reassessing information security perceptions following a data breach announcement: The role of post-breach management in firm-specific risk","authors":"Faheem Ahmed Shaikh , Damien Joseph , Eugene Kang","doi":"10.1016/j.cose.2025.104752","DOIUrl":"10.1016/j.cose.2025.104752","url":null,"abstract":"<div><div>Public announcements of data breaches often lead to short-lived negative stock price reactions, raising questions about firms’ incentives for sustained cybersecurity improvements. This study applies legitimacy theory to examine how investor perceptions of a firm’s security practices—termed information security legitimacy—shape firm-specific risk after such announcements. Analyzing media sentiment following 485 U.S. data breach announcements, we find that firms with stronger information security legitimacy experience significantly lower firm-specific risk over six months. Additionally, shorter delays in public breach announcements strengthen this risk reduction. By linking data breach announcements with post-breach management, this study offers a unified framework showing how proactive security actions and timely communication mitigate long-term financial risk. These findings provide actionable guidance for security managers to prioritize rapid disclosure and strategic legitimacy management, advancing theory on stakeholder perceptions in cybersecurity.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"161 ","pages":"Article 104752"},"PeriodicalIF":5.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145500194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-02-01. Epub Date: 2025-11-04. DOI: 10.1016/j.cose.2025.104736
John C. John, Arobinda Gupta, Shamik Sural
As organizations’ requirements for cloud computing grow in diversity and complexity, there is an increasing need to integrate the services of multiple cloud providers. In such multi-cloud systems, data leakage, caused by the illegitimate actions of malicious users often acting in collusion, is considered a major security concern. The possibility of data leakage in such environments is characterized by the number of interoperations as well as the trustworthiness of users on the collaborating clouds. In this paper, we address the problem of secure multi-cloud collaboration from an Attribute-based Access Control (ABAC) policy management perspective. In particular, we define a problem that aims to formulate ABAC policy rules that establish a high degree of inter-cloud access while eliminating potential paths for data leakage. We propose a data-leakage-free ABAC policy generation algorithm that first determines the likelihood of data leakage and then attempts to maximize inter-cloud collaboration. We also pose several variants of the problem by imposing additional meaningful constraints on the nature of accesses.
{"title":"Secure multi-cloud collaboration using data leakage free attribute-based access control policies","authors":"John C. John , Arobinda Gupta , Shamik Sural","doi":"10.1016/j.cose.2025.104736","DOIUrl":"10.1016/j.cose.2025.104736","url":null,"abstract":"<div><div>With an increase in the diversity and complexity of requirements from organizations for cloud computing, there is a growing need for integrating the services of multiple cloud providers. In such multi-cloud systems, data leakage is considered to be a major security concern, which is caused by illegitimate actions of malicious users often acting in collusion. The possibility of data leakage in such environments is characterized by the number of interoperations as well as the trustworthiness of users on the collaborating clouds. In this paper, we address the problem of secure multi-cloud collaboration from an Attribute-based Access Control (ABAC) policy management perspective. In particular, we define a problem that aims to formulate ABAC policy rules for establishing a high degree of inter-cloud accesses while eliminating potential paths for data leakage. A data leakage free ABAC policy generation algorithm is proposed that first determines the likelihood of data leakage and then attempts to maximize inter-cloud collaborations. We also pose several variants of the problem by imposing additional meaningful constraints on the nature of accesses. 
Experimental results on several large data sets show the efficacy of the proposed approach.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"161 ","pages":"Article 104736"},"PeriodicalIF":5.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
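The leakage-path notion in the abstract above can be made concrete with a small reachability check: model granted inter-cloud accesses as directed edges and flag any chain from a low-trust user to a sensitive object. The edge set, trust labels, and node names below are illustrative placeholders, not the paper's actual policy model.

```python
from collections import deque

# BFS over an access graph: returns True if any low-trust user can
# transitively reach a sensitive object through granted accesses.

def has_leakage_path(edges, low_trust_users, sensitive_objects):
    """edges: iterable of (src, dst) access grants."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, set()).add(dst)
    seen, queue = set(low_trust_users), deque(low_trust_users)
    while queue:
        node = queue.popleft()
        if node in sensitive_objects:
            return True
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False
```

A policy-generation loop in this spirit would refuse to grant any access whose addition makes this predicate true, then try to admit as many remaining inter-cloud accesses as possible.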
Pub Date: 2026-02-01. Epub Date: 2025-11-11. DOI: 10.1016/j.cose.2025.104743
David Álvarez Muñiz, Luis Perez Miguel, Alberto Mateo Muñoz, Xavier Larriva-Novo, Manuel Alvarez-Campana, Diego Rivera
The increasing complexity of insider threats poses a critical challenge for modern cybersecurity. Existing datasets used for training detection systems often lack realism, suffer from severe class imbalance, or are outdated. This paper presents a novel methodology for the generation of insider threat datasets through the integration of three data sources: (1) real user behavior collected during a controlled cyber exercise, (2) simulated user activity modeled on realistic work roles, and (3) synthetic data derived from the CERT Insider Threat Test dataset. The result is the SPEDIA dataset, designed to support the development and evaluation of machine learning models for detecting insider threats. The dataset includes detailed event-level logs of user activity, such as file manipulation, command execution, service usage, and network behavior, with annotations mapped to MITRE ATT&CK tactics and techniques. Unlike previous datasets, SPEDIA achieves a more balanced distribution of malicious and non-malicious events, enhancing its suitability for supervised learning. This work also provides a replicable framework for generating similar datasets, contributing to the advancement of insider threat detection research and the development of robust, real-world mitigation strategies.
{"title":"Design and generation of a dataset for training insider threat prevention and detection models: The SPEDIA dataset","authors":"David Álvarez Muñiz, Luis Perez Miguel, Miguel, Alberto Mateo Muñoz, Xavier Larriva-Novo, Manuel Alvarez-Campana, Diego Rivera","doi":"10.1016/j.cose.2025.104743","DOIUrl":"10.1016/j.cose.2025.104743","url":null,"abstract":"<div><div>The increasing complexity of insider threats poses a critical challenge for modern cybersecurity. Existing datasets used for training detection systems often lack realism, suffer from severe class imbalance, or are outdated. This paper presents a novel methodology for the generation of insider threat datasets through the integration of three data sources: (1) real user behavior collected during a controlled cyber exercise, (2) simulated user activity modeled on realistic work roles, and (3) synthetic data derived from the CERT Insider Threat Test dataset. The result is the SPEDIA dataset, designed to support the development and evaluation of machine learning models for detecting insider threats. The dataset includes detailed event-level logs of user activity, such as file manipulation, command execution, service usage, and network behavior, with annotations mapped to MITRE ATT&CK tactics and techniques. Unlike previous datasets, SPEDIA achieves a more balanced distribution of malicious and non-malicious events, enhancing its suitability for supervised learning. 
This work also provides a replicable framework for generating similar datasets, contributing to the advancement of insider threat detection research and the development of robust, real-world mitigation strategies.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"161 ","pages":"Article 104743"},"PeriodicalIF":5.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145694094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-02-01. Epub Date: 2025-11-10. DOI: 10.1016/j.cose.2025.104745
Mónica P. Arenas, Gabriele Lenzini, Mohammadamin Rakeei, Peter Y.A. Ryan, Marjan Škrobot, Maria Zhekova
We study how to authenticate objects, a problem that is relevant to buyers who seek proof that a purchase is authentic. Typically, manufacturers watermark their goods or assign them IDs with a certificate of authenticity; then, buyers can check for the presence of the watermark or verify the authenticity of the certificate, matching it with the good’s ID. However, this solution falls short when manufacturers and buyers are geographically separated, such as in retail or online purchases. Since certificates can be forged and goods can be substituted with substandard clones, buyers should verify the authenticity of the goods directly. This suggests a process: honest manufacturers should provide goods with an ID and securely register it along with some unforgeable and unique data that can be (re)generated only from the original physical object. In turn, buyers can verify whether the data registered under that ID matches the data retrieved by the buyer for the good just acquired. Such enrollment and authentication processes are complex when realized as protocols because they must withstand attacks against both the physical object and the communication channel. We propose a cyber-physical solution that relies on two elements: (i) a material inseparably joined with an object from which cryptographically strong digital identities can be generated; (ii) two novel cryptographic protocols that ensure data integrity and secure authentication of agents and objects. We present a comprehensive threat model for the artifact authenticity service. We also implemented and optimized the image processing pipeline, which takes under two seconds per image set, representing a notable improvement over previous versions.
{"title":"Secure authentication and traceability of physical objects","authors":"Mónica P. Arenas, Gabriele Lenzini, Mohammadamin Rakeei, Peter Y.A. Ryan, Marjan Škrobot, Maria Zhekova","doi":"10.1016/j.cose.2025.104745","DOIUrl":"10.1016/j.cose.2025.104745","url":null,"abstract":"<div><div>We study how to authenticate objects, a problem that is relevant to buyers who seek proof that a purchase is authentic. Typically, manufacturers watermark their goods or assign them IDs with a certificate of authenticity; then, buyers can check for the presence of the watermark or verify the authenticity of the certificate, matching it with the good’s ID. However, this solution falls short when manufacturers and buyers are geographically separated, such as in retail or online purchases. Since certificates can be forged and goods can be substituted with substandard clones, buyers should verify the authenticity of the goods directly. This suggests a process: honest manufacturers should provide goods with an ID and securely register it along with some unforgeable and unique data that can be (re)generated only from the original physical object. In turn, buyers can verify whether the data registered under that ID matches the data retrieved by the buyer for the good just acquired. Such enrollment and authentication processes are complex when realized as protocols because they must withstand attacks against both the physical object and the communication channel. We propose a cyber-physical solution that relies on two elements: (i) a material inseparably joined with an object from which cryptographically strong digital identities can be generated; (ii) two novel cryptographic protocols that ensure data integrity and secure authentication of agents and objects. We present a comprehensive threat model for the artifact authenticity service. 
We also implemented and optimized the image processing pipeline, which takes under two seconds per image set, representing a notable improvement over previous versions.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"161 ","pages":"Article 104745"},"PeriodicalIF":5.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145579996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
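The enroll-then-verify flow described in the abstract above can be sketched with standard-library primitives: the manufacturer registers a keyed digest of object-specific data under the good's ID, and the buyer later checks that a freshly measured fingerprint matches. The registry dict, key handling, and fingerprint bytes are simplifications for illustration; the paper's protocols additionally authenticate the parties and secure the channel, which this sketch does not attempt.

```python
import hashlib
import hmac

# Toy enrollment registry: object ID -> HMAC-SHA256 tag over the
# object-derived fingerprint. Key management is out of scope here.
REGISTRY = {}

def enroll(object_id: str, fingerprint: bytes, key: bytes) -> None:
    REGISTRY[object_id] = hmac.new(key, fingerprint, hashlib.sha256).digest()

def verify(object_id: str, fingerprint: bytes, key: bytes) -> bool:
    tag = hmac.new(key, fingerprint, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(REGISTRY.get(object_id, b""), tag)
```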
Pub Date: 2026-02-01. Epub Date: 2025-11-22. DOI: 10.1016/j.cose.2025.104783
Jinxian Zhao, Haidong Hou, Liang Chang
Network intrusion detection plays a crucial role in ensuring cybersecurity by promptly mitigating network attacks. However, existing deep learning methods have limited capabilities in capturing network attack features and addressing class imbalance, resulting in low classification accuracy. This paper proposes a deep-learning intrusion detection model named FLSPPMRXt, which is built upon ResNeXt50. It enhances feature capture by improving the backbone convolution and introducing a multi-scale feature fusion module that includes a SoftPool layer. Meanwhile, focal loss is employed as the loss function to effectively mitigate the impact of class imbalance on classification accuracy. Furthermore, the method proposes a data visualization processing algorithm that produces an image representation more consistent with the feature nearest-neighbor distribution. Experimental results show that the FLSPPMRXt model achieves 93.3% overall classification accuracy and a 95.2% F1 score on the UNSW_NB15 dataset.
{"title":"Intrusion detection algorithm based on multi-scale feature fusion","authors":"Jinxian Zhao, Haidong Hou, Liang Chang","doi":"10.1016/j.cose.2025.104783","DOIUrl":"10.1016/j.cose.2025.104783","url":null,"abstract":"<div><div>Network intrusion detection plays a crucial role in ensuring cybersecurity by promptly mitigating network attacks. However, existing deep learning methods have limited capabilities in capture network attack features and address class imbalances, resulting in low classification accuracy. This paper proposes a deep-learning intrusion detection model named FLSPPMRXt, which is built upon ResNeXt50. It enhances feature capture by improving the backbone convolution and introducing a multi-scale feature fusion module, including the Soft Pool layer. Meanwhile, focal loss is employed as the loss function to effectively mitigate the impact of class imbalance on classification accuracy. Furthermore, this method proposes a data visualization processing algorithm to provide an image representation that is more consistent with the feature nearest neighbor distribution. Experimental results show that the FLSPPMRXt model achieves 93.3 % and 95.2 % in overall classification accuracy and F1 score on UNSW_NB15 dataset, respectively. 
Compared with existing algorithms, such as the 2DCNN and RNN models, the method demonstrates superior network intrusion detection performance.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"161 ","pages":"Article 104783"},"PeriodicalIF":5.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145624647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
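The focal loss the abstract above names (in Lin et al.'s standard formulation) is compact enough to state directly: a modulating factor (1 - p)^gamma down-weights easy, well-classified examples so rare attack classes contribute more to training. The gamma and alpha values below are the conventional defaults, not the paper's tuned settings.

```python
import math

# Focal loss for a positive example predicted with probability p:
#   FL(p) = -alpha * (1 - p)**gamma * log(p)
# Confident correct predictions (p near 1) are strongly down-weighted.

def focal_loss(p: float, gamma: float = 2.0, alpha: float = 0.25) -> float:
    return -alpha * (1.0 - p) ** gamma * math.log(p)
```

At gamma = 0 this reduces to alpha-weighted cross-entropy; raising gamma sharpens the down-weighting of easy examples, which is what counters class imbalance.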
Pub Date: 2026-02-01. Epub Date: 2025-11-14. DOI: 10.1016/j.cose.2025.104756
Yishun Zeng, Yue Wu, Xicheng Lu, Chao Zhang
The rendering engine is a cornerstone of modern web browsers, responsible for transforming heterogeneous inputs (HTML, CSS, and JavaScript) into visual page content. This complex process involves constructing and updating the render tree, which governs layout and painting, but it also introduces subtle defects that manifest as robustness and security challenges. Existing browser fuzzers largely fall short in thoroughly testing the rendering engine due to two fundamental challenges: (i) the vast, multidimensional input space makes efficient exploration difficult; and (ii) the periodic, incremental rendering model of modern engines merges multiple render-tree updates within each rendering cycle, reducing activation of deep pipeline logic such as layout and painting. In this paper, we aim to enhance the testing depth of the rendering pipeline, rather than simply increasing code coverage, by focusing on updating the render tree, the central data structure linking frontend inputs to backend layout and painting modules. Our approach incorporates (i) correlation-based pruning strategies for HTML elements and CSS properties to prioritize high-yield input combinations, and (ii) a time-sliced testing scheme that intentionally distributes mutations across multiple rendering cycles within a single test case, thereby increasing the trigger frequency of backend rendering modules. We implement a prototype, RTFuzz, and evaluate it extensively. Compared to the state-of-the-art fuzzers Domato, FreeDom, and Minerva, RTFuzz uncovers 43.1%, 28.7%, and 75.7% more unique crashes, 83.3% of which occur in the rendering pipeline, and it further identified 20 real-world defects during long-running experiments.
{"title":"RTFuzz: Fuzzing browsers via efficient render tree mutation","authors":"Yishun Zeng, Yue Wu, Xicheng Lu, Chao Zhang","doi":"10.1016/j.cose.2025.104756","DOIUrl":"10.1016/j.cose.2025.104756","url":null,"abstract":"<div><div>The rendering engine is a cornerstone of modern web browsers, responsible for transforming heterogeneous inputs-HTML, CSS, and JavaScript-into visual page content. This complex process involves constructing and updating the render tree, which governs layout and painting, but also introduces subtle defects that manifest as robustness and security challenges. Existing browser fuzzers largely fall short in thoroughly testing the rendering engine due to two fundamental challenges: (i) the vast, multidimensional input space makes efficient exploration difficult; (ii) the periodic, incremental rendering model of modern rendering engines merges multiple updates of the render tree within each rendering cycle, reducing activation of deep pipeline logic such as layout and painting. In this paper, we aim to enhance the testing depth of the rendering pipeline-rather than simply increasing code coverage-by focusing on updating the render tree, the central data structure linking frontend inputs to backend layout and painting modules. Our approach incorporates (i) correlation-based pruning strategies for HTML elements and CSS properties to prioritize high-yield input combinations, and (ii) a time-sliced testing scheme that intentionally distributes mutations across multiple rendering cycles within a single test case, thereby increasing the trigger frequency of backend rendering modules. We implement a prototype, RTFuzz, and evaluate it extensively. Compared to state-of-the-art fuzzers Domato, FreeDom, and Minerva, RTFuzz helps uncover 43.1 %, 28.7 %, and 75.7 % more unique crashes, 83.3 % of which occur in the rendering pipeline, and further identified 20 real-world defects during long-running experiments. 
Ablation studies confirm that correlation-based pruning increases unique crashes by 79.2 %, and the time-sliced scheme contributes a 16.2 % improvement.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"161 ","pages":"Article 104756"},"PeriodicalIF":5.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145624649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
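The time-slicing idea from the abstract above can be illustrated by how a generated test case is assembled: instead of applying all mutations up front (where the engine coalesces them into one render-tree update), chain them across requestAnimationFrame callbacks so each mutation lands in its own rendering cycle. The page skeleton, element names, and mutation strings below are made up for illustration and are not RTFuzz's actual generator.

```python
# Build an HTML test case whose JS mutations fire in successive
# requestAnimationFrame callbacks, one rendering cycle apart.

def time_sliced_testcase(mutations):
    """mutations: list of JS statement strings, applied in order."""
    body = "done();"
    # Nest from the last mutation outward so they execute in list order.
    for step in reversed(mutations):
        body = f"{step} requestAnimationFrame(() => {{ {body} }});"
    return ("<html><body><div id='t'>x</div><script>"
            "function done(){}\n" + body + "</script></body></html>")
```

Each callback runs just before the next frame's style/layout/paint pass, so every mutation forces its own render-tree update rather than being merged with its neighbors.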
In recent years, the rapid expansion of the Internet of Things (IoT) has introduced significant cybersecurity challenges, requiring manufacturers to comply with various regulatory frameworks and cybersecurity standards. Hence, to protect user data and privacy, all organizations providing IoT devices must adhere to complex guidelines such as the National Institute of Standards and Technology Inter-Agency Report (NISTIR) 8259, which defines essential cybersecurity guidelines for IoT manufacturers. However, interpreting and applying the rules in these guidelines remains a significant challenge for companies. Previously, our Automated Knowledge Framework for IoT Cybersecurity Compliance leveraged SWRL, SPARQL queries, the Web Ontology Language and its visualization (OWL Viz), Semantic Web technologies, Large Language Models (LLMs), and a Retrieval-Augmented Generation (RAG) pipeline to automate compliance assessment of multiple functional requirement documents (FRDs), while systematically cross-checking business requirement documents (BRDs) against them [Oranekwu et al., 2024]. However, those efforts primarily focused on mapping the NISTIR 8259 guidelines into a structured ontology, laying the foundation for us to build on, expand, and then integrate the IoT Cybersecurity Improvement Act of 2020 into the compliance framework. Furthermore, exploiting its big-data capability, the Knowledge Graph (KG) has been expanded and populated with more than 800 manufacturer privacy policy instances, allowing direct comparison between manufacturer-defined data properties, object properties, and regulatory compliance expectations. The primary objective is to evaluate the effectiveness of this enhanced version of the framework in identifying policy non-compliance by comparing triples extracted from privacy policies against the structured knowledge representation. Through this approach, our goal is to automate compliance verification by examining the relationships between manufacturers, security requirements, and regulatory obligations, offering a scalable solution for IoT security governance.
{"title":"Scalable automation for IoT cyberSecurity compliance: Ontology-driven reasoning for real-time assessment","authors":"Ikechukwu Oranekwu , Lavanya Elluri , Roberto Yus , Anantaa Kotal","doi":"10.1016/j.cose.2025.104711","DOIUrl":"10.1016/j.cose.2025.104711","url":null,"abstract":"<div><div>In recent years, the rapid expansion of the Internet of Things (IoT) has introduced significant cybersecurity challenges, requiring manufacturers to comply with various regulatory frameworks and cybersecurity standards. Hence, to protect user data and privacy, all organizations providing IoT devices must adhere to complex guidelines such as the National Institute of Standards and Technology Inter-Agency Report (NISTIR) 8259, which defines essential cybersecurity guidelines for IoT manufacturers. However, interpreting and applying the rules from these guidelines remains a significant challenge for companies. Previously, our Automated Knowledge Framework for IoT Cybersecurity Compliance leveraged SWRL, SPARQL queries, the Web Ontology Language and its visualization (OWL Viz), Semantic Web technologies, Large Language Models (LLMs), and a Retrieval-Augmented Generation (RAG) pipeline to automate compliance assessment of multiple Functional Requirement Documents (FRDs), while systematically cross-checking Business Requirement Documents (BRDs) against them [Oranekwu et al., 2024]. However, these efforts primarily focused on mapping NISTIR 8259 guidelines into a structured ontology, laying a foundation that we build on and expand here to integrate the IoT Cybersecurity Improvement Act of 2020 into the compliance framework. Furthermore, to exploit its big-data capabilities, the Knowledge Graph (KG) has been expanded and populated with more than 800 manufacturer privacy policy instances, allowing direct comparison between manufacturer-defined data properties, object properties, and regulatory compliance expectations.
The primary objective is to evaluate the effectiveness of this enhanced version of the framework in identifying policy non-compliance by comparing triples extracted from privacy policies against the structured knowledge representation. Through this approach, our goal is to automate compliance verification by examining the relationships between manufacturers, security requirements, and regulatory obligations, offering a scalable solution for the security governance of IoT.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"161 ","pages":"Article 104711"},"PeriodicalIF":5.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145579999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
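The triple-comparison step described in the abstract above can be sketched as a simple set-based check. The following is a minimal, hypothetical illustration (the manufacturer name, predicates, and requirement list are invented for illustration and are not drawn from the paper's actual ontology or the NISTIR 8259 text): given (subject, predicate, object) triples extracted from a privacy policy, flag any regulatory expectation the policy fails to state or satisfy.

```python
# Hypothetical sketch of KG-based compliance checking: compare triples
# extracted from a manufacturer's privacy policy against required
# (predicate -> expected value) pairs derived from a regulatory ontology.

# Knowledge-graph triples: (subject, predicate, object) — illustrative names
policy_triples = {
    ("AcmeCam", "collectsData", "location"),
    ("AcmeCam", "encryptsDataAtRest", "true"),
    ("AcmeCam", "providesUpdateMechanism", "true"),
}

# Requirements distilled from guidelines (illustrative, not the real rule set).
# A value of None means any stated value satisfies the requirement.
required_predicates = {
    "encryptsDataAtRest": "true",
    "providesUpdateMechanism": "true",
    "disclosesRetentionPeriod": None,
}

def check_compliance(manufacturer, triples, requirements):
    """Return the set of requirement predicates the policy fails to satisfy."""
    # Collect what this manufacturer's policy actually states
    stated = {p: o for (s, p, o) in triples if s == manufacturer}
    missing = set()
    for pred, expected in requirements.items():
        if pred not in stated:
            missing.add(pred)                      # requirement never stated
        elif expected is not None and stated[pred] != expected:
            missing.add(pred)                      # stated but non-conformant
    return missing

gaps = check_compliance("AcmeCam", policy_triples, required_predicates)
print(sorted(gaps))  # a non-empty result flags potential non-compliance
```

In the full framework this comparison would run over the populated KG (e.g., via SPARQL queries against OWL classes and properties) rather than in-memory sets, but the underlying logic — matching manufacturer-asserted triples against required properties — is the same.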
Pub Date: 2026-02-01; Epub Date: 2025-11-17; DOI: 10.1016/j.cose.2025.104776
Antony Mullins, Nik Thompson
Technostress is a growing concern for organisations, given the negative impacts of stress on employees' job satisfaction, productivity, and intention to comply with or violate policies. Security-related stress (SRS), a dimension of technostress, addresses how security-related activities, such as information technology compliance, can affect an individual's stress. Research on security-related stress is vital, given that it can help identify factors that both enhance employee well-being and strengthen an organisation's security posture. In this paper, we systematically review the literature from the past two decades addressing security-related stress and identify twenty-seven relevant studies for analysis. We make contributions in three areas. Firstly, we identify the predominant theoretical frameworks and models that address security-related stress, along with the key factors and constructs used to examine it. Secondly, we describe how security-related stress is measured and which interventions have proven effective in reducing it. Finally, based on our comprehensive analysis, we present a research agenda to inform future directions for security-related stress research.
{"title":"Technostress and information security – A review and research agenda of security-related stress","authors":"Antony Mullins, Nik Thompson","doi":"10.1016/j.cose.2025.104776","DOIUrl":"10.1016/j.cose.2025.104776","url":null,"abstract":"<div><div>Technostress is a growing concern for organisations, given the negative impacts of stress on employees' job satisfaction, productivity, and intention to comply with or violate policies. Security-related stress (SRS), a dimension of technostress, addresses how security-related activities, such as information technology compliance, can affect an individual's stress. Research on security-related stress is vital, given that it can help identify factors that both enhance employee well-being and strengthen an organisation's security posture. In this paper, we systematically review the literature from the past two decades addressing security-related stress and identify twenty-seven relevant studies for analysis. We make contributions in three areas. Firstly, we identify the predominant theoretical frameworks and models that address security-related stress, along with the key factors and constructs used to examine it. Secondly, we describe how security-related stress is measured and which interventions have proven effective in reducing it.
Finally, based on our comprehensive analysis, we present a research agenda to inform future research directions of security-related stress.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"161 ","pages":"Article 104776"},"PeriodicalIF":5.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145624650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}