
Latest publications in Computers & Security

A novel biometric authentication scheme with privacy protection based on SVM and ZKP
IF 4.8, Region 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-07-14. DOI: 10.1016/j.cose.2024.103995

Biometric authentication is a convenient and user-friendly method. Its popularity demands strong privacy-preserving technology to prevent the disclosure of template information. Most existing privacy protection technologies rely on classic cryptographic techniques, such as homomorphic encryption, which incur substantial system overhead and are therefore hard to deploy widely. To address these issues, we propose a novel biometric authentication scheme with privacy protection based on support vector machines and zero-knowledge proofs (BioAu–SVM+ZKP). BioAu–SVM+ZKP allows users to authenticate themselves to different service providers without disclosing any biometric template information. The proof is generated through a zero-knowledge protocol built on polynomial commitments. Our approach for generating a unique and repeatable biometric identifier from the user’s fingerprint image leverages the multi-classification property of the SVM. Notably, our scheme not only reduces communication overhead but also provides privacy protection; moreover, the communication overhead of BioAu–SVM+ZKP is constant. We have simulated the authentication scheme on the common NIST dataset, analyzed its performance, and proved its security.
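The key idea of deriving a repeatable identifier via multi-classification can be sketched as follows. This is an illustrative stand-in only: a nearest-centroid classifier replaces the paper's SVM, and the data are synthetic, to show why classification absorbs sensor noise so that only a class index (not the raw template) needs to enter the zero-knowledge proof.

```python
import numpy as np

rng = np.random.default_rng(0)

# Enrolment (hypothetical): three users, each with a 4-dimensional
# "fingerprint feature" centroid standing in for a trained per-user class.
centroids = rng.normal(size=(3, 4))

def biometric_id(sample: np.ndarray) -> int:
    """Map a (noisy) feature vector to the enrolled class index."""
    dists = np.linalg.norm(centroids - sample, axis=1)
    return int(np.argmin(dists))

# Two noisy captures of user 1 yield the same repeatable identifier,
# even though the raw feature vectors differ between captures.
a = biometric_id(centroids[1] + rng.normal(scale=0.05, size=4))
b = biometric_id(centroids[1] + rng.normal(scale=0.05, size=4))
print(a == b)
```

The classifier quantizes noisy biometric readings into a stable label, which is what makes the identifier both unique per user and repeatable across captures.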

Citations: 0
Joint relational triple extraction with enhanced representation and binary tagging framework in cybersecurity
IF 4.8, Region 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-07-14. DOI: 10.1016/j.cose.2024.104001

The cyber threat intelligence (CTI) knowledge graph is a valuable tool for aiding security practitioners in the identification and analysis of cyberattacks. These graphs are constructed from CTI data organized into relational triples, where each triple comprises two entities linked by a particular relation. However, as the volume of CTI data expands at a faster rate than predicted, existing technologies cannot extract relational triples quickly and accurately. This work focuses on the extraction of relational triples from CTI data, achieved by an enhanced representation and binary tagging framework (ERBTF). Firstly, we introduce embedding representations for relations and concatenate them with word embeddings to obtain the initial hidden representation. Subsequently, we employ a novel dilated convolutional encoder, consisting of a dilated convolutional neural network, a gate mechanism, and a residual connection, to enhance the learned contextual representation. Afterwards, we adopt an attention module comprising multi-head self-attention and a position-wise feed-forward network to allocate greater attention to words that significantly influence the specific relation. Additionally, we utilize a straightforward yet efficient binary entity tagger to identify subject and object entities under different relations when constructing relational triples. We conduct extensive experiments on relational triple extraction from CTI data; the results show that ERBTF outperforms existing relation extraction models and achieves state-of-the-art performance.
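The encoder's three ingredients can be illustrated in a minimal NumPy sketch. All shapes, weights, and the kernel width are our assumptions, not the paper's exact ERBTF layer: one width-2 dilated 1-D convolution, a sigmoid gate over its output, and a residual connection back to the input.

```python
import numpy as np

def dilated_gated_residual(x, w_f, w_g, dilation=2):
    """x: (seq_len, dim); w_f, w_g: (2, dim, dim) kernels of width 2."""
    seq_len, dim = x.shape
    pad = np.zeros((dilation, dim))
    xp = np.vstack([pad, x])                        # left-pad so outputs align with inputs
    feat = np.zeros_like(x)
    gate = np.zeros_like(x)
    for t in range(seq_len):
        taps = np.stack([xp[t], xp[t + dilation]])  # two taps, `dilation` steps apart
        feat[t] = np.tanh(np.einsum('kd,kde->e', taps, w_f))
        gate[t] = 1 / (1 + np.exp(-np.einsum('kd,kde->e', taps, w_g)))
    return x + gate * feat                          # gated update plus residual connection

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 3))
y = dilated_gated_residual(x, rng.normal(size=(2, 3, 3)), rng.normal(size=(2, 3, 3)))
print(y.shape)  # (5, 3): output keeps the input shape, as a residual block requires
```

Dilation widens the receptive field without extra parameters, while the gate lets the block suppress uninformative positions before the residual addition.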

Citations: 0
Detection and mitigation of vampire attacks with secure routing in WSN using weighted RNN and optimal path selection
IF 4.8, Region 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-07-14. DOI: 10.1016/j.cose.2024.103991

In Wireless Sensor Networks (WSNs), one of the most significant threats is the vampire attack on sensor nodes. These attacks are marked by malicious behaviors within sensor nodes, often exploiting vulnerabilities inherent in routing protocols; they can disrupt network connectivity and significantly drain energy resources. Moreover, intermediate forwarding nodes can themselves introduce security vulnerabilities, making network security in WSNs a challenging task. To address this issue, a novel deep learning-based vampire attack detection model is proposed, which proceeds through data collection, attack detection, mitigation, and optimal path selection. Initially, the data attributes of all sensor nodes in the WSN are collected. Vampire attack detection is then carried out by a Weighted Recurrent Neural Network (WRNN), whose weight values are optimized using the Enhanced Golf Optimization Algorithm (EGOA). Detected vampire nodes are effectively separated based on node characteristics such as broadcast count, node energy, and Packet Received Ratio (PRR). Attack mitigation is achieved by removing the vampire nodes from the network; the remaining nodes are considered for routing, with optimal paths chosen by the proposed EGOA. Finally, the result of the proposed vampire attack detection model is compared with conventional techniques across various evaluation indices.
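The node-separation step can be sketched with a simple rule over the three per-node statistics the abstract lists. The field names and thresholds below are illustrative assumptions, not values from the paper (which uses a learned WRNN rather than fixed thresholds).

```python
# Hypothetical per-node statistics gathered in the data-collection step.
nodes = [
    {"id": 0, "broadcasts": 12,  "energy": 0.9, "prr": 0.97},
    {"id": 1, "broadcasts": 310, "energy": 0.2, "prr": 0.41},  # vampire-like profile
    {"id": 2, "broadcasts": 18,  "energy": 0.8, "prr": 0.95},
]

def is_vampire(n, max_broadcasts=100, min_energy=0.3, min_prr=0.6):
    """Flag nodes that flood the network, drain energy, or drop packets."""
    return (n["broadcasts"] > max_broadcasts
            or n["energy"] < min_energy
            or n["prr"] < min_prr)

# Mitigation: exclude flagged nodes, then route only over the rest.
routable = [n["id"] for n in nodes if not is_vampire(n)]
print(routable)  # [0, 2]
```

A vampire node typically scores badly on all three statistics at once; a learned detector replaces the hand-set thresholds with boundaries fitted to labeled traffic.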

Citations: 0
A framework for mapping organisational workforce knowledge profile in cyber security
IF 4.8, Region 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-07-14. DOI: 10.1016/j.cose.2024.103925

A cyber security organisation needs to ensure that its workforce possesses the necessary knowledge to fulfil its cyber security business functions. Similarly, where an organisation chooses to delegate its cyber security tasks to a third-party provider, it must ensure that the chosen entity possesses robust knowledge capabilities to effectively carry out the assigned tasks. Building a comprehensive cyber security knowledge profile is a distinct challenge; the field is ever evolving, with a range of professional certifications, academic qualifications and on-the-job training. So far, there has been no well-defined methodology for systematically evaluating an organisation’s cyber security knowledge, specifically as derived from its workforce, against a standardised reference point. Prior research on knowledge profiling across various disciplines has predominantly utilised established frameworks such as SWEBOK. However, within the domain of cyber security, the absence of a standardised reference point is notable. In this paper, we advance a framework leveraging the Cyber Security Body of Knowledge (CyBOK) to construct an organisation’s knowledge profile. The framework enables a user to identify areas of coverage and where gaps may lie, so that an organisation can consider targeted recruitment or training, or, where such expertise may be outsourced, draw in knowledge capability from third parties. In the latter case, the framework can also be used as a basis for assessing the knowledge capability of such a third party. We present the knowledge profiling framework, discussing three case studies in organisational teams underpinning its initial development, followed by its refinement through workshops with cyber security practitioners.
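The coverage-and-gaps analysis the framework enables amounts to a set comparison between a reference body of knowledge and the areas the workforce covers. A toy sketch, where the area names and the credential-to-area mapping are illustrative assumptions rather than CyBOK's actual taxonomy:

```python
# A small illustrative subset of reference knowledge areas.
reference_areas = {
    "Network Security",
    "Malware & Attack Technologies",
    "Security Operations & Incident Management",
    "Cryptography",
}

# Hypothetical mapping from staff members to the areas their
# certifications, qualifications, and training cover.
workforce = {
    "alice": {"Network Security", "Cryptography"},
    "bob":   {"Network Security", "Security Operations & Incident Management"},
}

covered = set().union(*workforce.values())
gaps = sorted(reference_areas - covered)
print(gaps)  # areas to target via recruitment, training, or outsourcing
```

The same comparison, run over a third-party provider's declared capabilities instead of the internal workforce, gives the assessment of outsourced knowledge capability the paper describes.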

Citations: 0
LLM-TIKG: Threat intelligence knowledge graph construction utilizing large language model
IF 4.8, Region 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-07-14. DOI: 10.1016/j.cose.2024.103999

Open-source threat intelligence is often unstructured and cannot be directly applied to downstream detection and defense. By constructing a knowledge graph from open-source threat intelligence, we can better apply this information to intrusion detection. However, current methods for constructing knowledge graphs face limitations due to the domain-specific attributes of entities and the need to analyze lengthy texts, and they require large amounts of labeled data. Furthermore, there is a lack of authoritative open-source annotated threat intelligence datasets, whose creation requires significant manual effort. Moreover, current research often neglects the textual descriptions of attack behaviors, losing information that is vital to understanding intricate cyber threats. To address these issues, we propose LLM-TIKG, which applies a large language model to construct a knowledge graph from unstructured open-source threat intelligence. The few-shot learning capability of GPT is leveraged for data annotation and augmentation, thereby creating the datasets for fine-tuning a smaller language model (7B). Using the fine-tuned model, we perform topic classification on the collected reports, extract entities and relationships, and extract TTPs from attack descriptions. This process yields a threat intelligence knowledge graph, enabling automated and universal analysis of textualized threat intelligence. The experimental results demonstrate improved performance in both named entity recognition and TTP classification, achieving precisions of 87.88% and 96.53%, respectively.
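The final assembly step can be sketched simply: once the fine-tuned model has emitted (head, relation, tail) triples from a report, they are merged into a graph structure. The triples below are invented examples in the style of threat intelligence, not output from the paper's model.

```python
from collections import defaultdict

# Hypothetical triples as an extraction model might emit them.
triples = [
    ("APT-X", "uses", "Cobalt Strike"),
    ("APT-X", "targets", "energy sector"),
    ("Cobalt Strike", "technique", "T1059"),  # a MITRE ATT&CK-style TTP id
]

graph = defaultdict(list)  # adjacency list keyed by head entity
for head, rel, tail in triples:
    graph[head].append((rel, tail))

print(sorted(graph))        # entities that appear as heads
print(graph["APT-X"])       # everything known about one threat actor
```

Keying the graph by entity lets an analyst pivot from any indicator (an actor, a tool, a TTP id) to all statements extracted about it across reports.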

Citations: 0
Extending limited datasets with GAN-like self-supervision for SMS spam detection
IF 4.8, Region 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-07-14. DOI: 10.1016/j.cose.2024.103998

Short Message Service (SMS) spamming is a harmful phishing attack on mobile phones: fraudsters try to obtain personal user information using deceptive text messages, sometimes including a fake URL that asks for credentials such as passwords and usernames. In the world of machine learning, several approaches have tried to address this problem, but the lack of available data resources has commonly been the main obstacle to a good enough solution. Therefore, in this paper, we suggest a dataset extension technique for small datasets based on an Out-Of-Distribution (OOD) metric. Approaches such as Generative Adversarial Networks (GANs) have been suggested before, yet GANs are hard to train when datasets have limited sample sizes. We present a GAN-like method that imitates the generator concept of GANs for the purpose of extending limited datasets, using the OOD concept. Using a sophisticated text generation method, we show how to apply it to datasets from the domain of fraud and spam detection in SMS messages, achieving over 25% relative improvement compared to two other solutions. In addition, due to the class imbalance in typical spam datasets, our approach is also examined on another dataset to verify that the false alarm rate is sufficiently low.
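The filtering role an OOD metric plays in dataset extension can be sketched as follows. The scoring rule here (fraction of tokens unseen in the seed corpus) is our illustrative assumption, not the paper's metric; the point is that generated candidates are kept only when they score as in-distribution.

```python
# Tiny hypothetical seed corpus of spam messages.
seed_corpus = ["win a free prize now", "call now to claim your free prize"]
vocab = {w for msg in seed_corpus for w in msg.split()}

def ood_score(msg: str) -> float:
    """Fraction of tokens unseen in the seed corpus (higher = more OOD)."""
    toks = msg.split()
    return sum(w not in vocab for w in toks) / len(toks)

# Candidate messages from a generator; only in-distribution ones
# are admitted into the extended training set.
generated = ["claim your free prize now", "quarterly earnings report attached"]
kept = [m for m in generated if ood_score(m) < 0.5]
print(kept)
```

This is the self-supervision loop in miniature: the seed data define the distribution, and the metric lets generation expand the dataset without drifting away from it.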

Citations: 0
Incorrect compliance and correct noncompliance with information security policies: A framework of rule-related information security behaviour
IF 4.8, Region 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-07-11. DOI: 10.1016/j.cose.2024.103986

Information security policy (ISP) compliance is recognized as a key measure for dealing with human errors when protecting information. A considerable and growing body of literature has studied the persuasive, deterrent, and coercive antecedents of compliant and noncompliant behaviour. Simultaneously, research indicates that real-life situations are too complex and varied to prescribe in terms of a priori rules of acceptable behaviour, and that such rules create situations where compliance is in fact harmful to achieving organisational security and business goals. Thus, regarding ISP compliance as inherently “correct” and noncompliance as inherently “incorrect” may contribute to creating the very problems that compliance research seeks to alleviate. In this research perspective, we argue that ISP compliance and noncompliance cannot be universally and invariably judged “correct” or “incorrect”; they become meaningful only when evaluated against organisational outcomes. We draw on organisational accident theorists to develop our arguments and propose a framework of rule-related information security behaviour (RISB) to conceptualize different types of ISP compliant and noncompliant behaviour and their organisational outcomes. Our research argues that compliance and noncompliance are not inherently correct or incorrect, and that judging the correctness of these actions requires considering the rule, the action, and the outcome.

Citations: 0
Benchmarking of synthetic network data: Reviewing challenges and approaches
IF 4.8 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-07-09 DOI: 10.1016/j.cose.2024.103993

The development of Network Intrusion Detection Systems (NIDS) requires labeled network traffic, especially to train and evaluate machine learning approaches. Besides the recording of traffic, the generation of traffic via generative models is a promising approach to obtain vast amounts of labeled data. There exist various machine learning approaches for data generation, but the assessment of the data quality is complex and not standardized. The lack of common quality criteria complicates the comparison of synthetic data generation approaches and synthetic data.

Our work addresses this gap in multiple steps. Firstly, we review and categorize existing approaches for evaluating synthetic data in the network traffic domain as well as in other data domains. Secondly, based on this review, we compile a set of metrics suitable for the NetFlow domain, which we aggregate into two scores: the Data Dissimilarity Score and the Domain Dissimilarity Score. Thirdly, we evaluate the proposed metrics on real-world data sets to demonstrate their ability to distinguish between samples from different data sets. As a final step, we conduct a case study demonstrating the application of the metrics to the evaluation of synthetic data. We calculate the metrics on samples from real NetFlow data sets to define upper and lower bounds for inter- and intra-data-set similarity scores. Afterward, we generate synthetic data via a Generative Adversarial Network (GAN) and Generative Pre-trained Transformer 2 (GPT-2), apply the metrics to these synthetic data, and incorporate the lower-bound baseline results to obtain an objective benchmark. The application of the benchmarking process is demonstrated on three NetFlow benchmark data sets: NF-CSE-CIC-IDS2018, NF-ToN-IoT, and NF-UNSW-NB15. Our demonstration indicates that this benchmark framework captures well the differences in similarity between real-world data and synthetic data of varying quality, and can therefore be used to assess the quality of generated synthetic data.
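To make the aggregation concrete, here is a minimal sketch of how a dissimilarity score of this kind can be computed between a real and a synthetic NetFlow sample. The equal-width binning, the Jensen–Shannon distance, and the plain averaging over features are illustrative assumptions for the sketch, not the authors' exact formulation of the Data Dissimilarity Score:

```python
import math


def _histogram(values, bins, lo, hi):
    """Normalized histogram of `values` over [lo, hi) with equal-width bins."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        idx = min(int((v - lo) / width), bins - 1)
        counts[idx] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]


def _js_distance(p, q, eps=1e-12):
    """Jensen-Shannon distance (square root of the JS divergence, log base 2),
    which lies in [0, 1]."""
    def kl(a, b):
        return sum(x * math.log2((x + eps) / (y + eps)) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return math.sqrt(max(0.0, 0.5 * kl(p, m) + 0.5 * kl(q, m)))


def data_dissimilarity_score(real, synthetic, bins=20):
    """Average per-feature JS distance between two samples of numeric
    feature vectors: 0 means indistinguishable marginal distributions,
    values near 1 mean maximally different ones."""
    n_features = len(real[0])
    scores = []
    for f in range(n_features):
        rv = [row[f] for row in real]
        sv = [row[f] for row in synthetic]
        lo, hi = min(rv + sv), max(rv + sv) + 1e-9
        scores.append(_js_distance(_histogram(rv, bins, lo, hi),
                                   _histogram(sv, bins, lo, hi)))
    return sum(scores) / n_features
```

Following the abstract's benchmarking idea, the score between two disjoint samples of the same real data set would give the intra-data-set baseline against which GAN- or GPT-2-generated flows are judged.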

{"title":"Benchmarking of synthetic network data: Reviewing challenges and approaches","authors":"","doi":"10.1016/j.cose.2024.103993","DOIUrl":"10.1016/j.cose.2024.103993","url":null,"abstract":"<div><p>The development of Network Intrusion Detection Systems (NIDS) requires labeled network traffic, especially to train and evaluate machine learning approaches. Besides the recording of traffic, the generation of traffic via generative models is a promising approach to obtain vast amounts of labeled data. There exist various machine learning approaches for data generation, but the assessment of the data quality is complex and not standardized. The lack of common quality criteria complicates the comparison of synthetic data generation approaches and synthetic data.</p><p>Our work addresses this gap in multiple steps. Firstly, we review and categorize existing approaches for evaluating synthetic data in the network traffic domain and other data domains as well. Secondly, based on our review, we compile a setup of metrics that are suitable for the NetFlow domain, which we aggregate into two metrics Data Dissimilarity Score and Domain Dissimilarity Score. Thirdly, we evaluate the proposed metrics on real world data sets, to demonstrate their ability to distinguish between samples from different data sets. As a final step, we conduct a case study to demonstrate the application of the metrics for the evaluation of synthetic data. We calculate the metrics on samples from real NetFlow data sets to define an upper and lower bound for inter- and intra-data set similarity scores. Afterward, we generate synthetic data via Generative Adversarial Network (GAN) and Generative Pre-trained Transformer 2 (GPT-2) and apply the metrics to these synthetic data and incorporate these lower bound baseline results to obtain an objective benchmark. The application of the benchmarking process is demonstrated on three NetFlow benchmark data sets, NF-CSE-CIC-IDS2018, NF-ToN-IoT and NF-UNSW-NB15. 
Our demonstration indicates that this benchmark framework captures the differences in similarity between real world data and synthetic data of varying quality well, and can therefore be used to assess the quality of generated synthetic data.</p></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":null,"pages":null},"PeriodicalIF":4.8,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141694715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
OSSIntegrity: Collaborative open-source code integrity verification
IF 4.8 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-07-08 DOI: 10.1016/j.cose.2024.103977

Open-source software (OSS) libraries have become popular among developers due to their ability to reduce development time and costs. However, OSS can also be exploited and used as a means of conducting OSS supply chain attacks. In OSS attacks, malicious code is injected into libraries used by the target. Previous studies have proposed various methods for preventing and detecting such attacks; however, most of them have focused on untargeted attacks. In contrast, this paper focuses on targeted OSS supply chain attacks, which are performed by skilled and persistent attackers with strong technical aptitude. Targeted OSS attacks are crafted for a specific target (i.e., a particular developer). Since these attacks do not target general OSS repositories, they tend to stay under the radar for a long period of time, allowing an attacker to gain access to sensitive data or systems. In this paper, we propose (SC)2V — secure crowdsource-based code verification, a novel distributed and scalable framework for verifying OSS libraries. (SC)2V is aimed at preventing targeted supply chain attacks and is integrated into the build phase of software production, serving as an additional code verification step before packaging the application and deploying it. (SC)2V involves both users (developers seeking to verify an OSS library) and verifiers that contribute to the collaborative verification effort. (SC)2V considers a library as verified and safe when a consensus is reached among the verifiers. We evaluated the proposed method using eight different attack scenarios (including cold start and edge cases), on around 900 popular OSS libraries and their dependencies, each of which included an average of 10 files and was verified by at least five participants; a total of 127,000 files were evaluated, and the results indicate that it took our framework an average of just 26 s to issue an alert against the attacks.
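The consensus step can be sketched as follows. The SHA-256 file digests, the quorum of five verifiers (mirroring the "at least five participants" in the evaluation), and the 80 % agreement threshold are illustrative assumptions for this sketch, not the paper's exact protocol:

```python
import hashlib
from collections import Counter


def file_digest(content: bytes) -> str:
    """Content hash a verifier reports after inspecting a library file."""
    return hashlib.sha256(content).hexdigest()


def library_verified(reports, quorum=5, threshold=0.8):
    """Declare a library verified when, for every file, at least `quorum`
    verifiers reported a digest and a fraction >= `threshold` of them
    agree on the same digest.

    `reports` maps file name -> list of digests, one per verifier.
    """
    for digests in reports.values():
        if len(digests) < quorum:
            return False  # not enough independent verifiers yet
        _, votes = Counter(digests).most_common(1)[0]
        if votes / len(digests) < threshold:
            return False  # no consensus: the file may have been tampered with
    return True
```

In the build-phase setting the abstract describes, a pipeline would run such a check just before packaging and abort the build with an alert on a `False` result.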

{"title":"OSSIntegrity: Collaborative open-source code integrity verification","authors":"","doi":"10.1016/j.cose.2024.103977","DOIUrl":"10.1016/j.cose.2024.103977","url":null,"abstract":"<div><p>Open-source software (OSS) libraries have become popular among developers due to their ability to reduce development time and costs. However, OSS can also be exploited and used as a means of conducting OSS supply chain attacks. In OSS attacks, malicious code is injected into libraries used by the target. Previous studies have proposed various methods for preventing and detecting such attacks, however most of them focused on untargeted attacks. In contrast, this paper focuses on targeted OSS supply chain attacks which are performed by skilled and persistent attackers with strong technical aptitude. Targeted OSS attacks are crafted towards a specific target (i.e., developer). Since these attacks do not target general OSS repositories, they tend to go under the radar for a long period of time, allowing an attacker to gain access to sensitive data or systems. In this paper, we propose <span><math><mrow><msup><mrow><mrow><mo>(</mo><mi>S</mi><mi>C</mi><mo>)</mo></mrow></mrow><mrow><mn>2</mn></mrow></msup><mi>V</mi></mrow></math></span> — secure crowdsource-based code verification, a novel distributed and scalable framework for verifying OSS libraries. <span><math><mrow><msup><mrow><mrow><mo>(</mo><mi>S</mi><mi>C</mi><mo>)</mo></mrow></mrow><mrow><mn>2</mn></mrow></msup><mi>V</mi></mrow></math></span> is aimed at preventing targeted supply chain attacks and is integrated in the build phase of software production, serving as an additional code verification step before packaging the application and deploying it. 
<span><math><mrow><msup><mrow><mrow><mo>(</mo><mi>S</mi><mi>C</mi><mo>)</mo></mrow></mrow><mrow><mn>2</mn></mrow></msup><mi>V</mi></mrow></math></span> involves both users (developers seeking to verify an OSS library) and verifiers that contribute to the collaborative verification effort. <span><math><mrow><msup><mrow><mrow><mo>(</mo><mi>S</mi><mi>C</mi><mo>)</mo></mrow></mrow><mrow><mn>2</mn></mrow></msup><mi>V</mi></mrow></math></span> considers a library as verified and safe when a consensus is reached among the verifiers. We evaluated the proposed method using eight different attack scenarios (including cold start and edge cases), on around 900 popular OSS libraries and their dependencies, each of which included an average of 10 files and was verified by at least five participants; a total of 127,000 files were evaluated, and the results indicate that it took our framework an average of just 26 s to issue an alert against the attacks.</p></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":null,"pages":null},"PeriodicalIF":4.8,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141637250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A novel passive-active detection system for false data injection attacks in industrial control systems
IF 4.8 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-07-07 DOI: 10.1016/j.cose.2024.103996

With the increasing occurrence of incidents causing significant damage due to attacks on Industrial Control Systems (ICSs), the cyber security of ICSs has drawn growing attention. This study improves existing active detection mechanisms and proposes an integrated passive-active detection system to detect False Data Injection Attacks (FDIA) in ICSs. Since FDIA are challenging to detect in current operational practice, the method presented in this research not only compares passively received system data against predefined rules to detect attacks but also launches active detection by controlling actuators to expose attackers, achieving comprehensive detection of FDIA targeting ICSs. This work dynamically adjusts the frequency of launching active detection through risk assessment, aiming to minimize the impact on operational efficiency during low-risk periods and to reduce the time required to detect attacks during high-risk periods. The experimental results show that, using the proposed system, when false data differs by 10 % from accurate data the detection rate reaches 99.9 %, which is 22.5 % higher than active detection by the random launch method; when false data differs by 5 % the detection rate reaches 95.4 %, 18.2 % higher; and even when false data differs by only 3 % the detection rate still reaches 92.9 %, 16.5 % higher.
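The passive-active split described above can be sketched in a few lines. The plausibility-bound rule, the linear risk-to-interval mapping, and the actuator probe are illustrative assumptions, not the paper's concrete detection rules:

```python
def passive_check(reading: float, lo: float, hi: float) -> bool:
    """Passive step: compare a received sensor value against a predefined
    rule (here, simple plausibility bounds)."""
    return lo <= reading <= hi


def probe_interval(risk: float, base: float = 600.0, minimum: float = 30.0) -> float:
    """Map a risk score in [0, 1] to the number of seconds between active
    probes: low risk -> infrequent probing (small impact on operations),
    high risk -> frequent probing (shorter time to detection)."""
    return max(minimum, base * (1.0 - risk))


def active_probe(command_actuator, read_sensor, expected: float, tolerance: float) -> bool:
    """Active step: issue a known actuator command and check whether the
    reported sensor value tracks the expected physical response; a reply
    outside the tolerance suggests the reported data are forged."""
    command_actuator()
    return abs(read_sensor() - expected) <= tolerance
```

An attacker replaying plausible sensor values passes the passive bound check, but fails the active probe because the spoofed stream does not react to the commanded actuation.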

{"title":"A novel passive-active detection system for false data injection attacks in industrial control systems","authors":"","doi":"10.1016/j.cose.2024.103996","DOIUrl":"10.1016/j.cose.2024.103996","url":null,"abstract":"<div><p>With the increasing occurrence of incidents causing significant damage due to attacks on Industrial Control Systems (ICSs), people pay attention to the cyber security of ICSs. This study improves existing active detection mechanisms and proposes an integrated passive-active detection system to detect False Data Injection Attacks (FDIA) for ICS. Since it is challenging to detect FDIA in current operational practices, the method presented in this research not only compares passive received system data with predefined rules to detect attacks but also launches active detection by controlling actuators to find attackers and achieve comprehensive detection of FDIA targeting ICS. This work dynamically adjusts the frequency of launching active detection through risk assessment, aiming to minimize the impact on operational efficiency during low-risk periods and reduce the time required for detecting attacks during high-risk periods. 
The experimental results show that, using the proposed system, when false data differs by 10 % from accurate data the detection rate reaches 99.9 %, which is 22.5 % higher than active detection by the random launch method; when false data differs by 5 % the detection rate reaches 95.4 %, 18.2 % higher; and even when false data differs by only 3 % the detection rate still reaches 92.9 %, 16.5 % higher.</p></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":null,"pages":null},"PeriodicalIF":4.8,"publicationDate":"2024-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141709971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0