
ACM Transactions on Information and System Security: Latest Publications

Examining a Large Keystroke Biometrics Dataset for Statistical-Attack Openings
Pub Date: 2013-09-01 | DOI: 10.1145/2516960
Abdul Serwadda, V. Phoha
Research on keystroke-based authentication has traditionally assumed human impostors who generate forgeries by physically typing on the keyboard. With bots now well understood to have the capacity to originate precisely timed keystroke sequences, this model of attack is likely to underestimate the threat facing a keystroke-based system in practice. In this work, we investigate how a keystroke-based authentication system would perform if it were subjected to synthetic attacks designed to mimic the typical user. To implement the attacks, we perform a rigorous statistical analysis on keystroke biometrics data collected over a 2-year period from more than 3000 users, and then use the observed statistical traits to design and launch algorithmic attacks against three state-of-the-art password-based keystroke verification systems. Relative to the zero-effort attacks typically used to test the performance of keystroke biometric systems, we show that our algorithmic attack increases the mean Equal Error Rates (EERs) of three high-performance keystroke verifiers by between 28.6% and 84.4%. We also find that the impact of the attack is more pronounced when the keystroke profiles subjected to the attack are based on shorter strings, and that some users see considerably greater performance degradation under the attack than others. This article calls for a shift from the traditional zero-effort approach of testing the performance of password-based keystroke verifiers, to a more rigorous algorithmic approach that captures the threat posed by today’s bots.
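The EER reported above is the operating point at which the false accept rate (impostors accepted) equals the false reject rate (genuine users rejected). As a minimal illustration, not the authors' evaluation code, the following sketch estimates the EER from two score samples (lower score = better match) by sweeping candidate thresholds:

```python
def equal_error_rate(genuine, impostor):
    # Sweep every observed score as a candidate threshold and return the
    # EER at the point where false-accept and false-reject rates are closest.
    best_gap, best_eer = None, None
    for t in sorted(genuine + impostor):
        frr = sum(s > t for s in genuine) / len(genuine)    # genuine rejected
        far = sum(s <= t for s in impostor) / len(impostor)  # impostors accepted
        gap = abs(far - frr)
        if best_gap is None or gap < best_gap:
            best_gap, best_eer = gap, (far + frr) / 2
    return best_eer

# Perfectly separable scores yield an EER of 0.
print(equal_error_rate([0.1, 0.2, 0.3], [0.4, 0.5, 0.6]))  # 0.0
```

A stronger attack shifts the impostor score distribution toward the genuine one, which raises the overlap and therefore the EER.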
Citations: 60
CPM: Masking Code Pointers to Prevent Code Injection Attacks
Pub Date: 2013-06-01 | DOI: 10.1145/2487222.2487223
Pieter Philippaerts, Yves Younan, Stijn Muylle, F. Piessens, Sven Lachmund, T. Walter
Code Pointer Masking (CPM) is a novel countermeasure against code injection attacks on native code. By enforcing the correct semantics of code pointers, CPM thwarts attacks that modify code pointers to divert the application’s control flow. It does not rely on secret values such as stack canaries and protects against attacks that are not addressed by state-of-the-art countermeasures of similar performance. This article reports on two prototype implementations on very distinct processor architectures, showing that the idea behind CPM is portable. The evaluation also shows that the overhead of using our countermeasure is very small and the security benefits are substantial.
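The masking idea can be sketched over toy integer addresses (the real countermeasure instruments native code; the 16-bit address space, region layout, and mask constant below are all hypothetical):

```python
# Hypothetical 16-bit address space in which legitimate code is assumed
# to live below 0x8000. The mask clears the top bit, so any masked
# pointer is confined to that region.
CODE_REGION_MASK = 0x7FFF

def mask_code_pointer(ptr):
    # Applied before every indirect jump/call/return: a pointer that an
    # attacker overwrote may still be *wrong*, but after masking it can
    # no longer divert control flow outside the permitted code region.
    return ptr & CODE_REGION_MASK

legit = 0x4A10      # in-region pointer: masking leaves it unchanged
hijacked = 0xC000   # attacker-injected address in a writable data page
safe = mask_code_pointer(hijacked)  # forced back into the code region
```

Note how the defense needs no secret value: unlike a stack canary, the mask may be public, since it enforces a semantic property of code pointers rather than hiding a token.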
Citations: 23
Leakage Mapping: A Systematic Methodology for Assessing the Side-Channel Information Leakage of Cryptographic Implementations
Pub Date: 2013-06-01 | DOI: 10.1145/2487222.2487224
William E. Cobb, R. Baldwin, Eric D. Laspe
We propose a generalized framework to evaluate the side-channel information leakage of symmetric block ciphers. The leakage mapping methodology enables the systematic and efficient identification and mitigation of problematic information leakages by exhaustively considering relevant leakage models. The evaluation procedure bounds the anticipated resistance of an implementation to the general class of univariate differential side-channel analysis techniques. Typical applications are demonstrated using the well-known Hamming weight and Hamming distance leakage models, with recommendations for the incorporation of more accurate models. The evaluation results are empirically validated against correlation-based differential side-channel analysis attacks on two typical unprotected implementations of the Advanced Encryption Standard.
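The Hamming weight model and correlation-based differential analysis mentioned above can be sketched as follows. This is a toy simulation with synthetic, noise-free "traces", not the article's evaluation framework:

```python
def hamming_weight(x):
    # Number of set bits: the classic model of the power consumed when a
    # device manipulates the intermediate value x.
    return bin(x).count("1")

def pearson(xs, ys):
    # Plain Pearson correlation coefficient between two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic measurement: leakage is an affine function of the Hamming
# weight of the handled intermediate value (scale/offset are arbitrary).
intermediates = [0x00, 0x01, 0x03, 0x07, 0x0F, 0x1F]
traces = [2.0 * hamming_weight(v) + 0.5 for v in intermediates]
score = pearson([hamming_weight(v) for v in intermediates], traces)
# score is ~1.0: under the correct leakage model, predicted and measured
# leakage correlate strongly, which is exactly the signal a correlation-
# based differential side-channel attack searches for.
```

In a real attack the correlation is computed per key hypothesis; the hypothesis whose predicted intermediates correlate best with the measured traces reveals the key.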
Citations: 8
Automated Anomaly Detector Adaptation using Adaptive Threshold Tuning
Pub Date: 2013-04-01 | DOI: 10.1145/2445566.2445569
M. Ali, E. Al-Shaer, Hassan Khan, S. A. Khayam
Real-time network- and host-based Anomaly Detection Systems (ADSs) transform a continuous stream of input data into meaningful and quantifiable anomaly scores. These scores are subsequently compared to a fixed detection threshold and classified as either benign or malicious. We argue that a real-time ADS’s input changes considerably over time and a fixed threshold value cannot guarantee good anomaly detection accuracy for such a time-varying input. In this article, we propose a simple and generic technique to adaptively tune the detection threshold of any threshold-based ADS. To this end, we first perform statistical and information-theoretic analysis of network- and host-based ADSs’ anomaly scores to reveal a consistent time correlation structure during benign activity periods. We model the observed correlation structure using Markov chains, which are in turn used in a stochastic target tracking framework to adapt an ADS’s detection threshold in accordance with real-time measurements. We also use statistical techniques to make the proposed algorithm resilient to sporadic changes and evasion attacks. In order to evaluate the proposed approach, we incorporate the proposed adaptive thresholding module into multiple ADSs and evaluate those ADSs over comprehensive and independently collected network and host attack datasets. We show that, while reducing the need for manual threshold configuration, the proposed technique provides considerable and consistent accuracy improvements for all evaluated ADSs.
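As a much-simplified stand-in for the article's Markov-chain and stochastic-tracking formulation, the sketch below adapts a detection threshold using exponentially weighted estimates of the benign score distribution; all constants and the class itself are illustrative:

```python
class AdaptiveThreshold:
    """Toy adaptive threshold: flag a score as anomalous if it exceeds
    the running benign mean by k standard deviations, and adapt the
    running statistics only on benign-looking scores."""

    def __init__(self, alpha=0.1, k=3.0):
        self.alpha = alpha  # adaptation rate (EWMA weight)
        self.k = k          # threshold width in standard deviations
        self.mean = 0.0     # running estimate of the benign score mean
        self.var = 1.0      # running estimate of the benign score variance

    def update(self, score):
        is_anomaly = score > self.mean + self.k * self.var ** 0.5
        if not is_anomaly:
            # Adapting only on benign-looking input keeps attack traffic
            # from dragging the threshold upward (a crude evasion defense).
            d = score - self.mean
            self.mean += self.alpha * d
            self.var = (1 - self.alpha) * (self.var + self.alpha * d * d)
        return is_anomaly

det = AdaptiveThreshold()
benign_flags = [det.update(0.5) for _ in range(50)]  # all False
spike_flag = det.update(100.0)                       # True
```

The threshold tightens as the benign score estimate stabilizes, which is the behavior a fixed threshold cannot provide for time-varying input.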
Citations: 26
Fragmentation Considered Vulnerable
Pub Date: 2013-04-01 | DOI: 10.1145/2445566.2445568
Y. Gilad, A. Herzberg
We show that fragmented IPv4 and IPv6 traffic is vulnerable to effective interception and denial-of-service (DoS) attacks by an off-path attacker. Specifically, we demonstrate a weak attacker intercepting more than 80% of the data between peers and causing over 94% loss rate. We show that our attacks are practical through experimental validation on popular industrial and open-source products, with realistic network setups that involve NAT or tunneling and include concurrent legitimate traffic as well as packet losses. The interception attack requires a zombie agent behind the same NAT or tunnel-gateway as the victim destination; the DoS attack only requires a puppet agent, that is, a sandboxed applet or script running in web-browser context. The complexity of our attacks depends on the predictability of the IP Identification (ID) field, which is typically implemented as one or multiple counters, as allowed and recommended by the IP specifications. The attacks are much simpler and more efficient for implementations, such as Windows, which use one ID counter for all destinations. Therefore, much of our focus is on presenting effective attacks for implementations, such as Linux, which use per-destination ID counters. We present practical defenses for the attacks presented in this article; the defenses can be deployed on network firewalls without changes to hosts or the operating system kernel.
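The predictability of a single global IP ID counter, which the attacks exploit, can be illustrated with a toy model (the class and the numbers below are hypothetical, and wraparound across the probes is ignored for simplicity):

```python
class GlobalIdCounter:
    """Models a stack that assigns IP ID values from one global counter,
    i.e. sequentially across *all* destinations (the easy case the
    article describes for Windows-style implementations)."""

    def __init__(self, start=0):
        self.next_id = start

    def send(self):
        i = self.next_id
        self.next_id = (self.next_id + 1) % 65536  # 16-bit ID field
        return i

# Off-path attacker: probe the host, let it exchange n packets with the
# victim, probe again, and infer every ID used in between.
host = GlobalIdCounter(start=7000)
probe1 = host.send()
victim_packets = [host.send() for _ in range(3)]  # IDs the attacker never sees
probe2 = host.send()
predicted = [(probe1 + 1 + k) % 65536 for k in range(probe2 - probe1 - 1)]
```

With the IDs known, the attacker can forge fragments that reassemble with the victim's packets; per-destination counters (the Linux case) only force the more elaborate attacks the article focuses on.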
Citations: 29
Mohawk: Abstraction-Refinement and Bound-Estimation for Verifying Access Control Policies
Pub Date: 2013-04-01 | DOI: 10.1145/2445566.2445570
K. Jayaraman, Mahesh V. Tripunitara, Vijay Ganesh, M. Rinard, S. Chapin
Verifying that access-control systems maintain desired security properties is recognized as an important problem in security. Enterprise access-control systems have grown to protect tens of thousands of resources, and there is a need for verification to scale commensurately. We present techniques for abstraction-refinement and bound-estimation for bounded model checkers to automatically find errors in Administrative Role-Based Access Control (ARBAC) security policies. ARBAC is the first and most comprehensive administrative scheme for Role-Based Access Control (RBAC) systems. In the abstraction-refinement portion of our approach, we identify and discard roles that are unlikely to be relevant to the verification question (the abstraction step). We then restore such abstracted roles incrementally (the refinement steps). In the bound-estimation portion of our approach, we lower the estimate of the diameter of the reachability graph from the worst-case by recognizing relationships between roles and state-change rules. Our techniques complement one another, and are used with conventional bounded model checking. Our approach is sound and complete: an error is found if and only if it exists. We have implemented our technique in an access-control policy analysis tool called Mohawk. We show empirically that Mohawk scales well to realistic policies, and provide a comparison with prior tools.
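The underlying reachability question (can a user starting from some roles ever acquire a target role under the policy's can-assign rules?) can be sketched as a plain breadth-first search over role sets. This toy version omits Mohawk's abstraction-refinement and bound estimation, as well as ARBAC features such as negative preconditions and can-revoke rules:

```python
from collections import deque

def role_reachable(start_roles, can_assign, target):
    """can_assign: list of (precondition_roles, new_role) rules.
    Returns True iff some sequence of rule applications gives the
    user the target role."""
    seen = {frozenset(start_roles)}
    queue = deque(seen)
    while queue:
        roles = queue.popleft()
        if target in roles:
            return True
        for pre, new in can_assign:
            # A rule fires when all its precondition roles are held.
            if set(pre) <= roles and new not in roles:
                nxt = frozenset(roles | {new})
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

rules = [({"employee"}, "manager"), ({"manager"}, "auditor")]
```

The state space is exponential in the number of roles, which is why Mohawk's abstraction (discarding likely-irrelevant roles) and diameter estimation matter for realistic policies.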
Citations: 32
Comparing Vulnerability Severity and Exploits Using Case-Control Studies
Pub Date: 2013-01-07 | DOI: 10.1145/2630069
Luca Allodi, F. Massacci
(U.S.) Rule-based policies for mitigating software risk suggest using the CVSS score to measure the risk of an individual vulnerability and act accordingly. A key issue is whether the ‘danger’ score does actually match the risk of exploitation in the wild, and if and how such a score could be improved. To address this question, we propose using a case-control study methodology similar to the procedure used to link lung cancer and smoking in the 1950s. A case-control study allows the researcher to draw conclusions on the relation between some risk factor (e.g., smoking) and an effect (e.g., cancer) by looking backward at the cases (e.g., patients) and comparing them with controls (e.g., randomly selected patients with similar characteristics). The methodology allows us to quantify the risk reduction achievable by acting on the risk factor. We illustrate the methodology by using publicly available data on vulnerabilities, exploits, and exploits in the wild to (1) evaluate the performance of the current risk factor in the industry, the CVSS base score; (2) determine whether it can be improved by considering additional factors such as the existence of a proof-of-concept exploit, or of an exploit in the black markets. Our analysis reveals that (a) fixing a vulnerability just because it was assigned a high CVSS score is equivalent to randomly picking vulnerabilities to fix; (b) the existence of proof-of-concept exploits is a significantly better risk factor; (c) fixing in response to exploit presence in black markets yields the largest risk reduction.
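A case-control comparison of this kind is commonly summarized by the odds ratio over a 2x2 table: how much more common the risk factor (e.g., a proof-of-concept exploit) is among cases (exploited vulnerabilities) than among controls. A minimal sketch with made-up counts:

```python
def odds_ratio(exposed_cases, exposed_controls,
               unexposed_cases, unexposed_controls):
    # Cross-product ratio of the 2x2 case-control table:
    #               cases   controls
    #   exposed       a        b
    #   unexposed     c        d
    # OR = (a * d) / (b * c); OR > 1 means the exposure is associated
    # with being a case.
    return (exposed_cases * unexposed_controls) / \
           (exposed_controls * unexposed_cases)

# Hypothetical numbers: among 50 exploited vulnerabilities (cases),
# 30 had a proof-of-concept exploit; among 50 controls, only 10 did.
print(odds_ratio(30, 10, 20, 40))  # 6.0
```

An odds ratio near 1.0 for high CVSS scores alone, versus a large one for proof-of-concept availability, would match the article's findings (a) and (b).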
Citations: 147
Role Mining with Probabilistic Models
Pub Date: 2012-12-19 | DOI: 10.1145/2445566.2445567
Mario Frank, J. Buhmann, D. Basin
Role mining tackles the problem of finding a role-based access control (RBAC) configuration, given an access-control matrix assigning users to access permissions as input. Most role-mining approaches work by constructing a large set of candidate roles and use a greedy selection strategy to iteratively pick a small subset such that the differences between the resulting RBAC configuration and the access control matrix are minimized. In this article, we advocate an alternative approach that recasts role mining as an inference problem rather than a lossy compression problem. Instead of using combinatorial algorithms to minimize the number of roles needed to represent the access-control matrix, we derive probabilistic models to learn the RBAC configuration that most likely underlies the given matrix. Our models are generative in that they reflect the way that permissions are assigned to users in a given RBAC configuration. We additionally model how user-permission assignments that conflict with an RBAC configuration emerge and we investigate the influence of constraints on role hierarchies and on the number of assignments. In experiments with access-control matrices from real-world enterprises, we compare our proposed models with other role-mining methods. Our results show that our probabilistic models infer roles that generalize well to new system users for a wide variety of data, while other models’ generalization abilities depend on the dataset given.
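The greedy baseline that the article contrasts with its probabilistic models can be sketched as iterative cover selection over a user-permission assignment; the candidate roles and data below are illustrative, not from the article:

```python
def greedy_role_mining(upa, candidates):
    """Greedy baseline: repeatedly pick the candidate role covering the
    most still-uncovered (user, permission) pairs.
    upa: set of (user, permission) pairs (the access-control matrix);
    candidates: list of (user_set, permission_set) candidate roles."""
    uncovered = set(upa)
    chosen = []
    while uncovered:
        best = max(candidates,
                   key=lambda r: len({(u, p) for u in r[0] for p in r[1]}
                                     & uncovered))
        cover = {(u, p) for u in best[0] for p in best[1]} & uncovered
        if not cover:
            break  # no candidate covers anything further
        chosen.append(best)
        uncovered -= cover
    return chosen, uncovered

upa = {("alice", "read"), ("alice", "write"), ("bob", "read")}
cands = [({"alice", "bob"}, {"read"}), ({"alice"}, {"write"})]
chosen, left = greedy_role_mining(upa, cands)
```

This minimizes reconstruction error on the given matrix; the article's point is that a generative probabilistic model instead infers roles likely to generalize to users not in the training matrix.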
Citations: 44
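The lossy-compression formulation that the article argues against can be made concrete with a toy example: combinatorial role mining searches for a user-role matrix UA and a role-permission matrix PA whose Boolean product reproduces the access-control matrix, minimizing the number of disagreeing cells (the article's probabilistic models instead infer the configuration most likely to underlie the matrix). A minimal sketch, with made-up matrices:

```python
# Toy access-control matrix: UPA[u][p] = 1 iff user u holds permission p.
UPA = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 1, 1, 1],
]

# Candidate RBAC configuration with two roles (hypothetical values).
UA = [[1, 0], [1, 0], [0, 1], [1, 1]]   # user-role assignments
PA = [[1, 1, 0, 0], [0, 0, 1, 1]]       # role-permission assignments

def boolean_compose(ua, pa):
    """Boolean matrix product: user u gets permission p iff some role r
    has ua[u][r] and pa[r][p]."""
    n_users, n_roles, n_perms = len(ua), len(pa), len(pa[0])
    return [[int(any(ua[u][r] and pa[r][p] for r in range(n_roles)))
             for p in range(n_perms)] for u in range(n_users)]

def reconstruction_error(upa, ua, pa):
    """Count of cells where the RBAC configuration disagrees with the
    matrix -- the objective that combinatorial role miners minimize."""
    rebuilt = boolean_compose(ua, pa)
    return sum(rebuilt[u][p] != upa[u][p]
               for u in range(len(upa)) for p in range(len(upa[0])))

print(reconstruction_error(UPA, UA, PA))  # -> 0: the two roles exactly explain UPA
```

With real enterprise data the error is rarely zero, which is exactly where the article's generative models differ: they treat the leftover disagreements as noise to be modeled rather than compressed away.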
Adversarial stylometry: Circumventing authorship recognition to preserve privacy and anonymity 对抗性文体学:规避作者身份识别以保护隐私和匿名性
Q Engineering Pub Date : 2012-11-01 DOI: 10.1145/2382448.2382450
Michael Brennan, Sadia Afroz, R. Greenstadt
The use of stylometry, authorship recognition through purely linguistic means, has contributed to literary, historical, and criminal investigation breakthroughs. Existing stylometry research assumes that authors have not attempted to disguise their linguistic writing style. We challenge this basic assumption of existing stylometry methodologies and present a new area of research: adversarial stylometry. Adversaries have a devastating effect on the robustness of existing classification methods. Our work presents a framework for creating adversarial passages, including obfuscation, where a subject attempts to hide her identity; imitation, where a subject attempts to frame another subject by imitating his writing style; and translation, where original passages are obfuscated with machine translation services. This research demonstrates that manual circumvention methods work very well, while automated translation methods are not effective. The obfuscation method reduces the techniques' effectiveness to the level of random guessing, and the imitation attempts succeed up to 67% of the time, depending on the stylometry technique used. These results are all the more significant given that the experimental subjects were unfamiliar with stylometry, were not professional writers, and spent little time on the attacks. This article also contributes to the field by using human subjects to empirically validate the claim of high accuracy for four current techniques (without adversaries). We have also compiled and released two corpora of adversarial stylometry texts, with a total of 57 unique authors, to promote research in this field. We argue that this field is important to a multidisciplinary approach to privacy, security, and anonymity.
{"title":"Adversarial stylometry: Circumventing authorship recognition to preserve privacy and anonymity","authors":"Michael Brennan, Sadia Afroz, R. Greenstadt","doi":"10.1145/2382448.2382450","DOIUrl":"https://doi.org/10.1145/2382448.2382450","url":null,"abstract":"The use of stylometry, authorship recognition through purely linguistic means, has contributed to literary, historical, and criminal investigation breakthroughs. Existing stylometry research assumes that authors have not attempted to disguise their linguistic writing style. We challenge this basic assumption of existing stylometry methodologies and present a new area of research: adversarial stylometry. Adversaries have a devastating effect on the robustness of existing classification methods. Our work presents a framework for creating adversarial passages including obfuscation, where a subject attempts to hide her identity, and imitation, where a subject attempts to frame another subject by imitating his writing style, and translation where original passages are obfuscated with machine translation services. This research demonstrates that manual circumvention methods work very well while automated translation methods are not effective. The obfuscation method reduces the techniques' effectiveness to the level of random guessing and the imitation attempts succeed up to 67% of the time depending on the stylometry technique used. These results are more significant given the fact that experimental subjects were unfamiliar with stylometry, were not professional writers, and spent little time on the attacks. This article also contributes to the field by using human subjects to empirically validate the claim of high accuracy for four current techniques (without adversaries). We have also compiled and released two corpora of adversarial stylometry texts to promote research in this field with a total of 57 unique authors. 
We argue that this field is important to a multidisciplinary approach to privacy, security, and anonymity.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75012298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 199
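As an illustration of why imitation attacks can succeed, consider a minimal nearest-profile attributor over word frequencies — a crude stand-in for the function-word and character-level features real stylometry systems use; the corpus, author names, and disputed passages below are made up. A passage that deliberately copies another author's high-frequency words flips the attribution:

```python
from collections import Counter
import math

# Tiny made-up corpus; real stylometry uses long writing samples per author.
TRAIN = {
    "alice": "the cat sat upon the mat whilst the dog slept upon the rug",
    "bob":   "i think that maybe the code is ok but maybe we should test it more",
}

def profile(text):
    """Normalized word-frequency vector for a text sample."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def cosine(p, q):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(v * q.get(w, 0.0) for w, v in p.items())
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm

def attribute(text, profiles):
    """Nearest-profile attribution: pick the training author whose
    frequency profile is most similar to the disputed text."""
    doc = profile(text)
    return max(profiles, key=lambda author: cosine(doc, profiles[author]))

profiles = {author: profile(t) for author, t in TRAIN.items()}

honest = "the cat sat upon the chair whilst the bird slept"
imitation = "i think maybe the cat is ok but maybe we should test the dog more"

print(attribute(honest, profiles))     # alice: honest writing is attributed correctly
print(attribute(imitation, profiles))  # bob: imitating bob's fillers flips attribution
```

The article's human-subject experiments show the same effect against four far stronger classifiers, which is what makes the adversarial setting a distinct research problem.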
BAF and FI-BAF: Efficient and Publicly Verifiable Cryptographic Schemes for Secure Logging in Resource-Constrained Systems BAF和FI-BAF:资源受限系统中安全日志记录的有效和可公开验证的加密方案
Q Engineering Pub Date : 2012-07-01 DOI: 10.1145/2240276.2240280
A. Yavuz, P. Ning, M. Reiter
Audit logs are an integral part of modern computer systems due to their forensic value. Protecting audit logs on a physically unprotected machine in a hostile environment is a challenging task, especially in the presence of active adversaries. It is critical for such a system to have forward security and append-only properties, so that when an adversary compromises a logging machine, she cannot forge or selectively delete the log entries accumulated before the compromise. Existing public-key-based secure logging schemes are computationally costly, and existing symmetric secure logging schemes are not publicly verifiable and are open to certain attacks. In this article, we develop a new forward-secure and aggregate signature scheme called Blind-Aggregate-Forward (BAF), which is suitable for secure logging in resource-constrained systems. BAF is the only cryptographic secure logging scheme that can produce publicly verifiable, forward-secure, and aggregate signatures with low computation, key/signature storage, and signature communication overheads for the loggers, without requiring any online trusted third-party support. A simple variant of BAF also allows fine-grained verification of log entries without compromising the security or computational efficiency of BAF. We prove that our schemes are secure in the Random Oracle Model (ROM). We also show that they are significantly more efficient than all previous publicly verifiable cryptographic secure logging schemes.
{"title":"BAF and FI-BAF: Efficient and Publicly Verifiable Cryptographic Schemes for Secure Logging in Resource-Constrained Systems","authors":"A. Yavuz, P. Ning, M. Reiter","doi":"10.1145/2240276.2240280","DOIUrl":"https://doi.org/10.1145/2240276.2240280","url":null,"abstract":"Audit logs are an integral part of modern computer systems due to their forensic value. Protecting audit logs on a physically unprotected machine in hostile environments is a challenging task, especially in the presence of active adversaries. It is critical for such a system to have forward security and append-only properties such that when an adversary compromises a logging machine, she cannot forge or selectively delete the log entries accumulated before the compromise. Existing public-key-based secure logging schemes are computationally costly. Existing symmetric secure logging schemes are not publicly verifiable and open to certain attacks.\u0000 In this article, we develop a new forward-secure and aggregate signature scheme called Blind-Aggregate-Forward (BAF), which is suitable for secure logging in resource-constrained systems. BAF is the only cryptographic secure logging scheme that can produce publicly verifiable, forward-secure and aggregate signatures with low computation, key/signature storage, and signature communication overheads for the loggers, without requiring any online trusted third party support. A simple variant of BAF also allows a fine-grained verification of log entries without compromising the security or computational efficiency of BAF. We prove that our schemes are secure in Random Oracle Model (ROM). 
We also show that they are significantly more efficient than all the previous publicly verifiable cryptographic secure logging schemes.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2012-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84669651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 38
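BAF itself is a public-key aggregate scheme, but the forward-security property it provides can be illustrated with the simpler symmetric design it improves on: MAC each log entry under an evolving key, then destroy the old key, so that compromising the machine later does not let the attacker forge or rewrite earlier entries. The sketch below (hypothetical API, Python stdlib only) shows the key ratchet; its limitation — verification requires the initial secret, so it is not publicly verifiable — is exactly what BAF removes:

```python
import hashlib
import hmac

def evolve(key: bytes) -> bytes:
    """One-way key update: after each entry the old key is discarded, so
    a later compromise cannot recover the keys for earlier entries."""
    return hashlib.sha256(b"evolve" + key).digest()

class ForwardSecureLog:
    def __init__(self, initial_key: bytes):
        self._key = initial_key
        self.entries = []  # list of (message, tag) pairs

    def append(self, message: bytes):
        tag = hmac.new(self._key, message, hashlib.sha256).digest()
        self.entries.append((message, tag))
        self._key = evolve(self._key)  # forward security: ratchet and forget

    @staticmethod
    def verify(initial_key: bytes, entries) -> bool:
        """A verifier holding the initial key replays the key schedule and
        checks every tag; any forged or altered entry fails."""
        key = initial_key
        for message, tag in entries:
            expected = hmac.new(key, message, hashlib.sha256).digest()
            if not hmac.compare_digest(expected, tag):
                return False
            key = evolve(key)
        return True

k0 = b"\x00" * 32  # demo key; a real deployment derives this securely
log = ForwardSecureLog(k0)
for msg in [b"login alice", b"sudo bob", b"logout alice"]:
    log.append(msg)

print(ForwardSecureLog.verify(k0, log.entries))  # True: untampered log verifies
tampered = [(b"login eve", t) if m == b"login alice" else (m, t)
            for m, t in log.entries]
print(ForwardSecureLog.verify(k0, tampered))     # False: altered entry is detected
```

BAF additionally compresses all tags into a single constant-size aggregate signature and lets anyone verify it with a public key, which is what makes it suitable for resource-constrained loggers.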