
Computers & Security: Latest Publications

Optimizing IDS rule placement via set covering with capacity constraints
IF 5.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-02-01 Epub Date: 2025-11-17 DOI: 10.1016/j.cose.2025.104748
Arka Ghosh , Domenico Ditale , Massimiliano Albanese , Preetam Mukherjee
Intrusion Detection Systems (IDSs) are essential for identifying and mitigating cyber threats in modern network infrastructures. Although prior work has extensively explored the optimal placement of IDS sensors across networks, optimizing the deployment of detection rules across multiple IDS instances remains a mostly underexplored area. This paper addresses rule deployment by formulating it as a set covering problem with capacity constraints. We seek to minimize the number of rule deployments required to detect potential exploits of all known vulnerabilities while ensuring that no IDS exceeds its inspection capacity. Our model considers the statistical properties of network traffic, enabling the system to account for load surges and reduce the number of packets not inspected by an IDS under high-traffic conditions, such as during Distributed Denial-of-Service attacks. To solve the optimization problem, we introduce a backtracking algorithm enhanced with a priority queue, which efficiently balances rule coverage and capacity constraints. We validate our approach using the CSE-CIC-IDS2017 dataset and a simulated multi-IDS environment. Experimental results demonstrate that our method significantly reduces the number of uninspected packets, while maximizing vulnerability coverage, and outperforms typical rule deployment strategies. This work highlights the critical role of intelligent rule placement in enhancing IDS performance and paves the way for future adaptive and scalable detection systems.
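The capacitated set-covering formulation lends itself to a compact sketch. The greedy heuristic below is an illustrative simplification, not the paper's backtracking-with-priority-queue algorithm, and the rule costs, vulnerability sets, and IDS capacities are invented:

```python
# Toy capacitated set covering: assign detection rules to IDS instances so
# every vulnerability is covered while no IDS exceeds its inspection capacity.
# Greedy heuristic for illustration only (the paper uses backtracking with a
# priority queue); all rules, costs, and capacities here are hypothetical.

def greedy_rule_placement(rules, capacities):
    """rules: {rule: (cost, set_of_vulns)}; capacities: {ids: budget}."""
    uncovered = set().union(*(v for _, v in rules.values()))
    load = {ids: 0 for ids in capacities}
    placement = []
    while uncovered:
        # Pick the rule covering the most uncovered vulnerabilities per cost.
        best = max(rules, key=lambda r: len(rules[r][1] & uncovered) / rules[r][0])
        cost, vulns = rules[best]
        if not vulns & uncovered:
            break  # no remaining rule covers anything new
        # Place it on the least-loaded IDS that still has capacity.
        target = min((i for i in capacities if load[i] + cost <= capacities[i]),
                     key=lambda i: load[i], default=None)
        if target is None:
            break  # capacity exhausted before full coverage
        load[target] += cost
        placement.append((best, target))
        uncovered -= vulns
    return placement, uncovered

rules = {"r1": (2, {"CVE-A", "CVE-B"}), "r2": (1, {"CVE-B"}), "r3": (2, {"CVE-C"})}
placement, left = greedy_rule_placement(rules, {"ids1": 3, "ids2": 2})
```

With these toy inputs the heuristic covers all three vulnerabilities with two rule deployments; the paper's exact algorithm additionally backtracks when a greedy choice would violate a capacity constraint downstream.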
Computers & Security, Volume 161, Article 104748.
Citations: 0
Fuzz4Cuda: Fuzzing your NVIDIA GPU libraries through debug interface
IF 5.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-02-01 Epub Date: 2025-11-12 DOI: 10.1016/j.cose.2025.104754
Yuhao Zhou, Peng Jia, Jiayong Liu, Ximing Fan
The programming security of Compute Unified Device Architecture (CUDA), NVIDIA’s parallel computing platform and programming model for Graphics Processing Units, has long been a significant concern. On the host side, fuzzing has been remarkably successful at uncovering software bugs and vulnerabilities, with hundreds of flaws discovered annually through different fuzzing tools. However, existing fuzzing tools typically target general-purpose CPU architectures and embedded systems. As an independent processing unit, the GPU does not support tools like American Fuzzy Lop for collecting instrumentation and code coverage information. Consequently, grey-box fuzzing for closed-source graphics and driver libraries has remained an unaddressed challenge. This research introduces Fuzz4Cuda, a CUDA-focused GPU fuzzing framework specifically designed for GPU libraries. Fuzz4Cuda enhances device-side coverage collection through runtime analysis of CUDA Streaming Assembler code. Furthermore, the framework can dynamically adjust the number of breakpoints to optimize test case execution speed, thereby reducing the overall time needed to discover program-crashing inputs. The development of Fuzz4Cuda advances GPU library fuzzing, aiming to improve the security of the GPU programming environment. Over a month-long real-world fuzzing campaign aimed at vulnerability discovery, our evaluation of the CUDA Toolkit uncovered five real-world bugs, four of which have been assigned Common Vulnerabilities and Exposures (CVE) IDs.
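The dynamic breakpoint adjustment described above can be illustrated with a toy feedback loop; the thresholds, bounds, and function below are hypothetical and not drawn from Fuzz4Cuda:

```python
# Illustrative sketch (not Fuzz4Cuda's implementation): adapt the number of
# active breakpoints so per-test-case overhead stays near a target, trading
# coverage granularity for execution speed. All thresholds are hypothetical.

def adjust_breakpoints(current, exec_time, target_time, lo=16, hi=4096):
    """Halve the budget when a test case runs too slowly; grow it when fast."""
    if exec_time > 1.5 * target_time:
        current = max(lo, current // 2)     # too slow: probe fewer sites
    elif exec_time < 0.5 * target_time:
        current = min(hi, current * 2)      # fast: afford finer coverage
    return current

budget = 1024
for t in [0.9, 2.0, 0.2]:   # simulated per-case execution times, target 1.0 s
    budget = adjust_breakpoints(budget, t, 1.0)
```

The point of such a loop is that breakpoint-based coverage on a GPU is expensive, so the budget oscillates around whatever granularity the target execution time can afford.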
Computers & Security, Volume 161, Article 104754.
Citations: 0
Reassessing information security perceptions following a data breach announcement: The role of post-breach management in firm-specific risk
IF 5.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-02-01 Epub Date: 2025-11-05 DOI: 10.1016/j.cose.2025.104752
Faheem Ahmed Shaikh , Damien Joseph , Eugene Kang
Public announcements of data breaches often lead to short-lived negative stock price reactions, raising questions about firms’ incentives for sustained cybersecurity improvements. This study applies legitimacy theory to examine how investor perceptions of a firm’s security practices—termed information security legitimacy—shape firm-specific risk after such announcements. Analyzing media sentiment following 485 U.S. data breach announcements, we find that firms with stronger information security legitimacy experience significantly lower firm-specific risk over six months. Additionally, shorter delays in public breach announcements strengthen this risk reduction. By linking data breach announcements with post-breach management, this study offers a unified framework showing how proactive security actions and timely communication mitigate long-term financial risk. These findings provide actionable guidance for security managers to prioritize rapid disclosure and strategic legitimacy management, advancing theory on stakeholder perceptions in cybersecurity.
Computers & Security, Volume 161, Article 104752.
Citations: 0
Secure multi-cloud collaboration using data leakage free attribute-based access control policies
IF 5.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-02-01 Epub Date: 2025-11-04 DOI: 10.1016/j.cose.2025.104736
John C. John , Arobinda Gupta , Shamik Sural
With an increase in the diversity and complexity of requirements from organizations for cloud computing, there is a growing need for integrating the services of multiple cloud providers. In such multi-cloud systems, data leakage is considered to be a major security concern, which is caused by illegitimate actions of malicious users often acting in collusion. The possibility of data leakage in such environments is characterized by the number of interoperations as well as the trustworthiness of users on the collaborating clouds. In this paper, we address the problem of secure multi-cloud collaboration from an Attribute-based Access Control (ABAC) policy management perspective. In particular, we define a problem that aims to formulate ABAC policy rules for establishing a high degree of inter-cloud accesses while eliminating potential paths for data leakage. A data leakage free ABAC policy generation algorithm is proposed that first determines the likelihood of data leakage and then attempts to maximize inter-cloud collaborations. We also pose several variants of the problem by imposing additional meaningful constraints on the nature of accesses. Experimental results on several large data sets show the efficacy of the proposed approach.
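The notion of a leakage path can be sketched as a reachability check over granted inter-cloud accesses. This toy check, with invented tenants and trust labels, is a simplification rather than the paper's ABAC policy generation algorithm:

```python
# Toy check (not the paper's algorithm): model each granted inter-cloud access
# as a directed edge and flag a candidate rule if it creates a path from a
# sensitive source to an untrusted sink. All entity names are made up.
from collections import deque

def creates_leakage(edges, new_edge, sensitive, untrusted):
    graph = {}
    for a, b in edges + [new_edge]:
        graph.setdefault(a, []).append(b)
    # BFS from every sensitive node; leakage if any untrusted node is reached.
    for src in sensitive:
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node in untrusted:
                return True
            for nxt in graph.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

edges = [("cloudA:hr", "cloudB:analytics")]
# Granting analytics -> partner would open the path hr -> analytics -> partner.
risky = creates_leakage(edges, ("cloudB:analytics", "cloudC:partner"),
                        sensitive={"cloudA:hr"}, untrusted={"cloudC:partner"})
```

A policy generator in this spirit would admit a candidate access only when the check returns False, which matches the paper's goal of maximizing inter-cloud collaboration while eliminating leakage paths.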
Computers & Security, Volume 161, Article 104736.
Citations: 0
Design and generation of a dataset for training insider threat prevention and detection models: The SPEDIA dataset
IF 5.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-02-01 Epub Date: 2025-11-11 DOI: 10.1016/j.cose.2025.104743
David Álvarez Muñiz, Luis Perez Miguel, Alberto Mateo Muñoz, Xavier Larriva-Novo, Manuel Alvarez-Campana, Diego Rivera
The increasing complexity of insider threats poses a critical challenge for modern cybersecurity. Existing datasets used for training detection systems often lack realism, suffer from severe class imbalance, or are outdated. This paper presents a novel methodology for the generation of insider threat datasets through the integration of three data sources: (1) real user behavior collected during a controlled cyber exercise, (2) simulated user activity modeled on realistic work roles, and (3) synthetic data derived from the CERT Insider Threat Test dataset. The result is the SPEDIA dataset, designed to support the development and evaluation of machine learning models for detecting insider threats. The dataset includes detailed event-level logs of user activity, such as file manipulation, command execution, service usage, and network behavior, with annotations mapped to MITRE ATT&CK tactics and techniques. Unlike previous datasets, SPEDIA achieves a more balanced distribution of malicious and non-malicious events, enhancing its suitability for supervised learning. This work also provides a replicable framework for generating similar datasets, contributing to the advancement of insider threat detection research and the development of robust, real-world mitigation strategies.
Computers & Security, Volume 161, Article 104743.
Citations: 0
Secure authentication and traceability of physical objects
IF 5.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-02-01 Epub Date: 2025-11-10 DOI: 10.1016/j.cose.2025.104745
Mónica P. Arenas, Gabriele Lenzini, Mohammadamin Rakeei, Peter Y.A. Ryan, Marjan Škrobot, Maria Zhekova
We study how to authenticate objects, a problem that is relevant to buyers who seek proof that a purchase is authentic. Typically, manufacturers watermark their goods or assign them IDs with a certificate of authenticity; then, buyers can check for the presence of the watermark or verify the authenticity of the certificate, matching it with the good’s ID. However, this solution falls short when manufacturers and buyers are geographically separated, such as in retail or online purchases. Since certificates can be forged and goods can be substituted with substandard clones, buyers should verify the authenticity of the goods directly. This suggests a process: honest manufacturers should provide goods with an ID and securely register it along with some unforgeable and unique data that can be (re)generated only from the original physical object. In turn, buyers can verify whether the data registered under that ID matches the data retrieved by the buyer for the good just acquired. Such enrollment and authentication processes are complex when realized as protocols because they must withstand attacks against both the physical object and the communication channel. We propose a cyber-physical solution that relies on two elements: (i) a material inseparably joined with an object from which cryptographically strong digital identities can be generated; (ii) two novel cryptographic protocols that ensure data integrity and secure authentication of agents and objects. We present a comprehensive threat model for the artifact authenticity service. We also implemented and optimized the image processing pipeline, which takes under two seconds per image set, representing a notable improvement over previous versions.
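The enroll-then-verify flow can be sketched with a hash-based registry. This is a heavy simplification of the paper's cryptographic protocols, and the fingerprint byte strings below stand in for data that would be derived from the physical material:

```python
# Minimal sketch of the enroll/verify flow (not the paper's protocols): the
# manufacturer registers H(ID || fingerprint), and a buyer later checks a
# re-measured fingerprint against the registry. The fingerprint bytes are
# placeholders for unforgeable data generated from the physical object.
import hashlib
import hmac

registry = {}

def enroll(object_id: str, fingerprint: bytes) -> None:
    registry[object_id] = hashlib.sha256(object_id.encode() + fingerprint).digest()

def verify(object_id: str, fingerprint: bytes) -> bool:
    expected = registry.get(object_id)
    if expected is None:
        return False
    measured = hashlib.sha256(object_id.encode() + fingerprint).digest()
    return hmac.compare_digest(expected, measured)  # constant-time comparison

enroll("bag-001", b"optical-response-bytes")
ok = verify("bag-001", b"optical-response-bytes")
fake = verify("bag-001", b"clone-response")
```

Real physical fingerprints are noisy, so a deployed scheme would need fuzzy extraction or error correction before hashing; the paper's protocols additionally authenticate the agents and protect the channel, which this sketch omits.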
Computers & Security, Volume 161, Article 104745.
Citations: 0
Intrusion detection algorithm based on multi-scale feature fusion
IF 5.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-02-01 Epub Date: 2025-11-22 DOI: 10.1016/j.cose.2025.104783
Jinxian Zhao, Haidong Hou, Liang Chang
Network intrusion detection plays a crucial role in ensuring cybersecurity by promptly mitigating network attacks. However, existing deep learning methods have limited capability to capture network attack features and to address class imbalance, resulting in low classification accuracy. This paper proposes a deep-learning intrusion detection model named FLSPPMRXt, built upon ResNeXt50. It enhances feature capture by improving the backbone convolution and introducing a multi-scale feature fusion module that includes a Soft Pool layer. Meanwhile, focal loss is employed as the loss function to effectively mitigate the impact of class imbalance on classification accuracy. Furthermore, the method proposes a data visualization processing algorithm that produces an image representation more consistent with the feature nearest-neighbor distribution. Experimental results show that the FLSPPMRXt model achieves 93.3% overall classification accuracy and a 95.2% F1 score on the UNSW_NB15 dataset. Compared with existing algorithms, such as the 2DCNN and RNN models, the method demonstrates superior network intrusion detection performance.
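Focal loss down-weights well-classified examples by a modulating factor (1 - p_t)^gamma, so rare attack classes dominate the gradient. A minimal NumPy sketch of the standard binary formulation (the gamma value and sample probabilities are illustrative):

```python
# Binary focal loss: FL(p_t) = -(1 - p_t)^gamma * log(p_t).
# The modulating factor shrinks the loss of confidently-correct examples,
# mitigating class imbalance between benign traffic and rare attacks.
import numpy as np

def focal_loss(p, y, gamma=2.0, eps=1e-7):
    """p: predicted probability of class 1; y: 0/1 labels."""
    p = np.clip(p, eps, 1 - eps)
    p_t = np.where(y == 1, p, 1 - p)          # probability of the true class
    return -((1 - p_t) ** gamma) * np.log(p_t)

p = np.array([0.9, 0.6])
y = np.array([1, 1])
losses = focal_loss(p, y)
# The confidently-correct example (p=0.9) is penalized far less than p=0.6.
```

With gamma = 0 this reduces to plain cross-entropy; raising gamma increasingly focuses training on the hard, misclassified examples.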
Computers & Security, Volume 161, Article 104783.
Citations: 0
RTFuzz: Fuzzing browsers via efficient render tree mutation
IF 5.4 CAS Zone 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-02-01 Epub Date: 2025-11-14 DOI: 10.1016/j.cose.2025.104756
Yishun Zeng, Yue Wu, Xicheng Lu, Chao Zhang
The rendering engine is a cornerstone of modern web browsers, responsible for transforming heterogeneous inputs (HTML, CSS, and JavaScript) into visual page content. This complex process involves constructing and updating the render tree, which governs layout and painting, but it also introduces subtle defects that manifest as robustness and security challenges. Existing browser fuzzers largely fall short of thoroughly testing the rendering engine due to two fundamental challenges: (i) the vast, multidimensional input space makes efficient exploration difficult; (ii) the periodic, incremental rendering model of modern rendering engines merges multiple updates of the render tree within each rendering cycle, reducing activation of deep pipeline logic such as layout and painting. In this paper, we aim to enhance the testing depth of the rendering pipeline, rather than simply increasing code coverage, by focusing on updating the render tree, the central data structure linking frontend inputs to backend layout and painting modules. Our approach incorporates (i) correlation-based pruning strategies for HTML elements and CSS properties to prioritize high-yield input combinations, and (ii) a time-sliced testing scheme that intentionally distributes mutations across multiple rendering cycles within a single test case, thereby increasing the trigger frequency of backend rendering modules. We implement a prototype, RTFuzz, and evaluate it extensively. Compared to the state-of-the-art fuzzers Domato, FreeDom, and Minerva, RTFuzz uncovers 43.1%, 28.7%, and 75.7% more unique crashes, respectively, 83.3% of which occur in the rendering pipeline, and it further identified 20 real-world defects during long-running experiments. Ablation studies confirm that correlation-based pruning increases unique crashes by 79.2%, and the time-sliced scheme contributes a 16.2% improvement.
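The correlation-based pruning idea can be sketched as scoring (element, property) pairs by observed yield and keeping only the top combinations; the pairs and counts below are invented for illustration and are not RTFuzz's actual strategy tables:

```python
# Illustrative sketch of correlation-based pruning (not RTFuzz's internals):
# score (HTML element, CSS property) pairs by how often they produced new
# render-tree states, then keep only the top-k combinations for mutation.
from collections import Counter

def prune_pairs(observations, k=2):
    """observations: iterable of (element, css_property) pairs that triggered
    interesting render-tree behavior; keep the k highest-yield pairs."""
    counts = Counter(observations)
    return [pair for pair, _ in counts.most_common(k)]

obs = [("table", "position"), ("table", "position"),
       ("svg", "transform"), ("div", "float"),
       ("svg", "transform"), ("svg", "transform")]
top = prune_pairs(obs, k=2)   # highest-yield combinations, best first
```

Pruning the combinatorial (element x property) space this way is what lets a fuzzer spend its mutation budget on the input combinations most likely to exercise deep layout and painting logic.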
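As a rough illustration of the time-sliced idea (not RTFuzz's actual implementation; the function and mutation strings below are invented for this sketch), a test-case generator can wrap each slice of DOM mutations in a timer so the slices land in different rendering cycles instead of being merged into a single render-tree update:

```python
def emit_time_sliced_testcase(mutations, num_slices=4, frame_ms=16):
    """Spread DOM-mutation statements across rendering cycles.

    Emitting all mutations in one script block lets the browser merge
    them into a single render-tree update; deferring each slice with
    setTimeout forces separate updates, exercising layout/paint more
    often. Purely illustrative of the scheme described in the abstract.
    """
    # Round-robin the mutations into num_slices groups.
    slices = [mutations[i::num_slices] for i in range(num_slices)]
    lines = ["<script>"]
    for i, chunk in enumerate(slices):
        body = " ".join(chunk)
        # One timer per slice, spaced roughly one frame (~16 ms) apart.
        lines.append(f"setTimeout(function() {{ {body} }}, {i * frame_ms});")
    lines.append("</script>")
    return "\n".join(lines)

testcase = emit_time_sliced_testcase(
    ['document.body.style.display="flex";',
     'document.body.appendChild(document.createElement("div"));',
     'document.querySelector("div").style.position="absolute";'],
    num_slices=3)
```

Each generated `setTimeout` callback then mutates the DOM in its own rendering cycle, which is the effect the paper attributes to its time-sliced testing scheme.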
Computers & Security, Volume 161, Article 104756. Citations: 0
Scalable automation for IoT cyberSecurity compliance: Ontology-driven reasoning for real-time assessment
IF 5.4 2区 计算机科学 Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2026-02-01 Epub Date: 2025-10-22 DOI: 10.1016/j.cose.2025.104711
Ikechukwu Oranekwu , Lavanya Elluri , Roberto Yus , Anantaa Kotal
In recent years, the rapid expansion of the Internet of Things (IoT) has introduced significant cybersecurity challenges, requiring manufacturers to comply with various regulatory frameworks and cybersecurity standards. Hence, to protect user data and privacy, all organizations providing IoT devices must adhere to complex guidelines such as the National Institute of Standards and Technology Inter-Agency Report (NISTIR) 8259, which defines essential cybersecurity guidelines for IoT manufacturers. However, interpreting and applying the rules in these guidelines remains a significant challenge for companies. Previously, our Automated Knowledge Framework for IoT Cybersecurity Compliance leveraged SWRL, SPARQL queries, the Web Ontology Language and its visualization (OWL Viz), Semantic Web technologies, Large Language Models (LLMs), and a Retrieval-Augmented Generation (RAG) pipeline to automate compliance assessment of multiple Functional Requirement Documents (FRDs), while systematically cross-checking Business Requirement Documents (BRDs) against them [Oranekwu et al., 2024]. However, those efforts primarily focused on mapping the NISTIR 8259 guidelines into a structured ontology, laying the foundation for us to build on, expand, and then integrate the IoT Cybersecurity Improvement Act of 2020 into the compliance framework. Furthermore, exploiting its big-data capability, the Knowledge Graph (KG) has been expanded and populated with more than 800 manufacturer privacy-policy instances, allowing direct comparison between manufacturer-defined data properties, object properties, and regulatory compliance expectations. The primary objective is to evaluate the effectiveness of this enhanced version of the framework in identifying policy non-compliance by comparing triples extracted from privacy policies against the structured knowledge representation. Through this approach, our goal is to automate compliance verification by examining the relationships between manufacturers, security requirements, and regulatory obligations, offering a scalable solution for the security governance of IoT.
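The triple-versus-requirement comparison at the heart of this kind of compliance check can be sketched in a few lines. The following is a deliberately simplified, hypothetical stand-in (plain Python dictionaries instead of an OWL ontology and SPARQL, with invented capability names), not the authors' framework:

```python
# Required capabilities distilled in the style of NISTIR 8259 guidance.
# The capability names are illustrative, not the actual ontology terms.
REQUIRED = {
    "device_identification": True,
    "software_update": True,
    "data_protection": True,
}

def check_compliance(policy_triples):
    """Compare (manufacturer, property, value) triples extracted from a
    privacy policy against the required capabilities.

    Returns a mapping capability -> compliant?  A capability the policy
    never asserts counts as non-compliant, mirroring how a missing KG
    assertion would fail a compliance query.
    """
    asserted = {prop: value for (_, prop, value) in policy_triples}
    return {cap: asserted.get(cap) == want for cap, want in REQUIRED.items()}

# Triples as they might be extracted from one manufacturer's policy.
triples = {
    ("AcmeCam", "device_identification", True),
    ("AcmeCam", "software_update", False),   # explicitly unsupported
}
report = check_compliance(triples)
violations = [cap for cap, ok in report.items() if not ok]
```

In the real framework this comparison would be posed as SPARQL queries over the populated KG rather than dictionary lookups, but the shape of the check, asserted policy facts matched against ontology-encoded obligations, is the same.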
Computers & Security, Volume 161, Article 104711. Citations: 0
Technostress and information security – A review and research agenda of security-related stress
IF 5.4 2区 计算机科学 Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2026-02-01 Epub Date: 2025-11-17 DOI: 10.1016/j.cose.2025.104776
Antony Mullins, Nik Thompson
Technostress is a growing concern for organisations, given the negative impacts of stress on employees' job satisfaction, productivity, and intention to comply with or violate policies. Security-related stress (SRS), a dimension of technostress, addresses how security-related activities, such as information technology compliance, can affect an individual's stress. Research on security-related stress is vital, given that it can help identify factors that both enhance employee well-being and strengthen an organisation's security posture. In this paper, we systematically review the literature from the past two decades addressing security-related stress and identify twenty-seven relevant studies for analysis. We make contributions in three areas. Firstly, we identify the predominant theoretical frameworks and models that address security-related stress, along with the key factors and constructs used to examine it. Secondly, we describe how security-related stress is measured and which interventions have proven effective in reducing it. Finally, based on our comprehensive analysis, we present a research agenda to inform future directions for security-related stress research.
Computers & Security, Volume 161, Article 104776. Citations: 0