
Latest publications: Proceedings 2019 Network and Distributed System Security Symposium

Cleaning Up the Internet of Evil Things: Real-World Evidence on ISP and Consumer Efforts to Remove Mirai
Pub Date: 2019-01-01 | DOI: 10.14722/NDSS.2019.23438
Orçun Çetin, C. Gañán, L. Altena, Takahiro Kasama, D. Inoue, Kazuki Tamiya, Ying Tie, K. Yoshioka, M. V. Eeten
With the rise of IoT botnets, the remediation of infected devices has become a critical task. As over 87% of these devices reside in broadband networks, this task will fall primarily to consumers and Internet Service Providers (ISPs). We present the first empirical study of IoT malware cleanup in the wild -- more specifically, of removing Mirai infections in the network of a medium-sized ISP. To measure remediation rates, we combine data from an observational study and a randomized controlled trial involving 220 consumers who suffered a Mirai infection, together with data from honeypots and darknets. We find that quarantining and notifying infected customers via a walled garden, a best practice from ISP botnet mitigation for conventional malware, remediates 92% of the infections within 14 days. Email-only notifications have no observable impact compared to a control group where no notifications were sent. We also measure surprisingly high natural remediation rates of 58-74% for this control group and for two reference networks where users were also not notified. Even more surprising, reinfection rates are low: only 5% of the customers who remediated suffered another infection in the five months after our first study. This stands in contrast to our lab tests, which observed reinfection of real IoT devices within minutes -- a discrepancy for which we explore various possible explanations, but find no satisfactory answer. We gather data on customer experiences and actions via 76 phone interviews and the communication logs of the ISP. Remediation succeeds even though many users operate from the wrong mental model -- e.g., they run anti-virus software on their PC to solve the infection of an IoT device. While quarantining infected devices is clearly highly effective, future work will have to resolve several remaining mysteries. Furthermore, the walled-garden solution will be hard to scale up because of the weak incentives of ISPs.
Citations: 47
How Bad Can It Git? Characterizing Secret Leakage in Public GitHub Repositories
Pub Date: 2019-01-01 | DOI: 10.14722/ndss.2019.23418
Michael Meli, Matthew R. McNiece, Bradley Reaves
GitHub and similar platforms have made public collaborative development of software commonplace. However, a problem arises when this public code must manage authentication secrets, such as API keys or cryptographic secrets. These secrets must be kept private for security, yet common development practices like adding these secrets to code make accidental leakage frequent. In this paper, we present the first large-scale and longitudinal analysis of secret leakage on GitHub. We examine billions of files collected using two complementary approaches: a nearly six-month scan of real-time public GitHub commits and a public snapshot covering 13% of open-source repositories. We focus on private key files and 11 high-impact platforms with distinctive API key formats. This focus allows us to develop conservative detection techniques that we manually and automatically evaluate to ensure accurate results. We find that not only is secret leakage pervasive — affecting over 100,000 repositories — but that thousands of new, unique secrets are leaked every day. We also use our data to explore possible root causes of leakage and to evaluate potential mitigation strategies. This work shows that secret leakage on public repository platforms is rampant and far from a solved problem, placing developers and services at persistent risk of compromise and abuse.
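The detection idea described above, matching "distinctive" secret formats with conservative regular expressions, can be sketched in a few lines. The patterns below are well-known public key formats (AWS access key IDs, Google API keys, PEM private-key headers), used here for illustration; they are not the authors' actual detection rules.

```python
import re

# Illustrative "distinctive" secret formats. These are widely documented
# public patterns, not the paper's exact rules.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "google_api_key": re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return (pattern_name, matched_string) pairs found in file content."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

print(scan_text("aws_key = 'AKIAABCDEFGHIJKLMNOP'"))
```

Restricting detection to such high-precision formats is what makes the approach "conservative": generic high-entropy strings are ignored, trading recall for very few false positives.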
Citations: 56
Understanding Open Ports in Android Applications: Discovery, Diagnosis, and Security Assessment
Pub Date: 2019-01-01 | DOI: 10.14722/NDSS.2019.23171
Daoyuan Wu, Debin Gao, R. Chang, En He, E. Cheng, R. Deng
Open TCP/UDP ports are traditionally used by servers to provide application services, but they are also found in many Android apps. In this paper, we present the first open-port analysis pipeline, covering discovery, diagnosis, and security assessment, to systematically understand open ports in Android apps and their threats. We design and deploy a novel on-device crowdsourcing app and its server-side analytic engine to continuously monitor open ports in the wild. Over a period of ten months, we have collected over 40 million port monitoring records from 3,293 users in 136 countries worldwide, which allow us to observe the actual execution of open ports in 925 popular apps and 725 built-in system apps. The crowdsourcing also provides us a more accurate view of the pervasiveness of open ports in Android apps at 15.3%, much higher than the previous estimation of 6.8%. We also develop a new static diagnostic tool to reveal that 61.8% of the open-port apps are solely due to embedded SDKs, and 20.7% suffer from insecure API usages. Finally, we perform three security assessments of open ports: (i) vulnerability analysis revealing five vulnerability patterns in open ports of popular apps, e.g., Instagram, Samsung Gear, Skype, and the widely-embedded Facebook SDK, (ii) inter-device connectivity measurement in 224 cellular networks and 2,181 WiFi networks through crowdsourced network scans, and (iii) experimental demonstration of effective denial-of-service attacks against mobile open ports.
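At its core, monitoring open ports reduces to probing sockets. A minimal sketch of such a TCP open-port check (the function name and the local-listener demo are illustrative, not part of the paper's tooling):

```python
import socket

def probe_tcp_port(host, port, timeout=1.0):
    """Return True if a TCP connect() to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway local listener so the sketch is self-contained.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
listening_port = server.getsockname()[1]
print(probe_tcp_port("127.0.0.1", listening_port))  # True: port is open
server.close()
```

The paper's inter-device connectivity measurement is essentially this check performed across cellular and WiFi networks at crowdsourced scale.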
Citations: 19
Profit: Detecting and Quantifying Side Channels in Networked Applications
Pub Date: 2019-01-01 | DOI: 10.14722/ndss.2019.23536
Nicolás Rosner, Ismet Burak Kadron, Lucas Bang, T. Bultan
We present a black-box, dynamic technique to detect and quantify side-channel information leaks in networked applications that communicate through a TLS-encrypted stream. Given a user-supplied profiling-input suite in which some aspect of the inputs is marked as secret, we run the application over the inputs and capture a collection of variable-length network packet traces. The captured traces give rise to a vast side-channel feature space, including the size and timestamp of each individual packet as well as their aggregations (such as total time, median size, etc.) over every possible subset of packets. Finding the features that leak the most information is a difficult problem. Our approach addresses this problem in three steps: 1) Global analysis of traces for their alignment and identification of phases across traces; 2) Feature extraction using the identified phases; 3) Information leakage quantification and ranking of features via estimation of probability distribution. We embody this approach in a tool called Profit and experimentally evaluate it on a benchmark of applications from the DARPA STAC program, which were developed to assess the effectiveness of side-channel analysis techniques. Our experimental results demonstrate that, given suitable profiling-input suites, Profit is successful in automatically detecting information-leaking features in applications, and correctly ordering the strength of the leakage for differently-leaking variants of the same application.
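The feature-extraction and leakage-quantification steps can be caricatured on synthetic data. Everything below (the traces, the two aggregate features, the discrete mutual-information estimate) is an illustrative assumption, not Profit's implementation:

```python
import math
from collections import Counter

# Toy packet traces: (secret_label, [(timestamp, size), ...]).
# Values are synthetic; Profit derives features from real TLS traffic.
traces = [
    ("A", [(0.00, 120), (0.01, 300), (0.05, 310)]),
    ("A", [(0.00, 118), (0.02, 305), (0.06, 300)]),
    ("B", [(0.00, 121), (0.01, 900), (0.03, 880)]),
    ("B", [(0.00, 119), (0.02, 910), (0.04, 905)]),
]

def features(trace):
    """Aggregate side-channel features of one trace."""
    times = [t for t, _ in trace]
    sizes = sorted(s for _, s in trace)
    return {"total_time": times[-1] - times[0],
            "median_size": sizes[len(sizes) // 2]}

def entropy(values):
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def leakage(feature_name, bucket):
    """Mutual information I(secret; feature) over bucketed feature values."""
    secrets = [s for s, _ in traces]
    feats = [bucket(features(t)[feature_name]) for _, t in traces]
    joint = list(zip(secrets, feats))
    return entropy(secrets) + entropy(feats) - entropy(joint)

# Bucket sizes coarsely so the estimate is stable on few samples.
print(leakage("median_size", lambda v: v // 500))  # 1.0 bit: feature fully reveals the secret
```

Ranking features by such an estimate is the intuition behind step 3: a feature whose distribution differs sharply across secret values carries more leaked information.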
Citations: 19
Component-Based Formal Analysis of 5G-AKA: Channel Assumptions and Session Confusion
Pub Date: 2019-01-01 | DOI: 10.14722/ndss.2019.23394
C. Cremers, Martin Dehnel-Wild
The 5G mobile telephony standards are nearing completion; upon adoption these will be used by billions across the globe. Ensuring the security of 5G communication is of the utmost importance, building trust in a critical component of everyday life and national infrastructure. We perform fine-grained formal analysis of 5G’s main authentication and key agreement protocol (AKA), and provide the first models to explicitly consider all parties defined by the protocol specification. Our analysis reveals that the security of 5G-AKA critically relies on unstated assumptions on the inner workings of the underlying channels. In practice this means that following the 5G-AKA specification, a provider can easily and ‘correctly’ implement the standard insecurely, leaving the protocol vulnerable to a security-critical race condition. We provide the first models and analysis considering component and channel compromise in 5G, whose results further demonstrate the fragility and subtle trust assumptions of the 5G-AKA protocol. We propose formally verified fixes to the encountered issues, and have worked with 3GPP to ensure these fixes are adopted.
Citations: 86
DNS Cache-Based User Tracking
Pub Date: 2019-01-01 | DOI: 10.14722/ndss.2019.23186
Amit Klein, Benny Pinkas
Citations: 28
NIC: Detecting Adversarial Samples with Neural Network Invariant Checking
Pub Date: 2019-01-01 | DOI: 10.14722/ndss.2019.23415
Shiqing Ma, Yingqi Liu, Guanhong Tao, Wen-Chuan Lee, X. Zhang
Deep Neural Networks (DNN) are vulnerable to adversarial samples that are generated by perturbing correctly classified inputs to cause DNN models to misbehave (e.g., misclassification). This can potentially lead to disastrous consequences especially in security-sensitive applications. Existing defense and detection techniques work well for specific attacks under various assumptions (e.g., the set of possible attacks are known beforehand). However, they are not sufficiently general to protect against a broader range of attacks. In this paper, we analyze the internals of DNN models under various attacks and identify two common exploitation channels: the provenance channel and the activation value distribution channel. We then propose a novel technique to extract DNN invariants and use them to perform runtime adversarial sample detection. Our experimental results on 11 different kinds of attacks, popular datasets including ImageNet, and 13 models show that our technique can effectively detect all these attacks (over 90% accuracy) with limited false positives. We also compare it with three state-of-the-art techniques including the Local Intrinsic Dimensionality (LID) based method, denoiser based methods (i.e., MagNet and HGD), and the prediction inconsistency based approach (i.e., feature squeezing). Our experiments show promising results.
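The activation value distribution channel lends itself to a one-feature caricature: model the distribution of activation values seen on benign inputs, then flag inputs whose activations fall far outside it. This is a deliberately tiny sketch under that assumption (NIC itself learns per-layer invariants over a full DNN):

```python
import statistics

# Toy "layer activations" observed on benign inputs; synthetic values.
benign_activations = [0.9, 1.1, 1.0, 0.95, 1.05, 1.02, 0.98]

mean = statistics.mean(benign_activations)
std = statistics.stdev(benign_activations)

def violates_invariant(activation, k=3.0):
    """Flag an activation more than k standard deviations from the benign mean."""
    return abs(activation - mean) > k * std

print(violates_invariant(1.01))  # in-distribution: not flagged
print(violates_invariant(4.2))   # far outside the benign distribution: flagged
```

A real invariant checker would fit one such model per layer (and per provenance pattern) and combine the violations into a single detection decision.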
Citations: 217
Distinguishing Attacks from Legitimate Authentication Traffic at Scale
Pub Date: 2019-01-01 | DOI: 10.14722/ndss.2019.23124
Cormac Herley, Stuart E. Schechter
Online guessing attacks against password servers can be hard to address. Approaches that throttle or block repeated guesses on an account (e.g., three-strikes-type lockout rules) can be effective against depth-first attacks, but are of little help against breadth-first attacks that spread guesses very widely. At large providers with tens, or hundreds, of millions of accounts, breadth-first attacks offer a way to send millions or even billions of guesses without ever triggering the depth-first defenses. The absence of labels and non-stationarity of attack traffic make it challenging to apply machine learning techniques. We show how to accurately estimate the odds that an observation x indicates that a request is malicious. Our main assumptions are that successful malicious logins are a small fraction of the total, and that the distribution of x in the legitimate traffic is stationary, or very slowly varying. From these we show how we can estimate the ratio of bad-to-good traffic among any set of requests; how we can then identify subsets of the request data that contain least (or even no) attack traffic; how these least-attacked subsets allow us to estimate the distribution of values of x over the legitimate data, and hence calculate the odds ratio. A sensitivity analysis shows that even when we fail to identify a subset with little attack traffic our odds ratio estimates are very robust.
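A drastically simplified sketch of the core estimation idea: use a subset believed to contain little attack traffic to estimate the legitimate distribution of x, then compare it against the observed mixed stream. All data and names below are synthetic assumptions, and the ratio computed is a caricature of the paper's odds-ratio estimator:

```python
from collections import Counter

# Synthetic login events, each reduced to one feature x (say, a coarse
# client fingerprint). Labels exist only to build the simulation; the
# estimator itself never sees them.
legit = ["chrome"] * 80 + ["firefox"] * 15 + ["curl"] * 5
attack = ["curl"] * 30 + ["chrome"] * 10

mixed = legit + attack  # what the server actually observes
clean_subset = ["chrome"] * 40 + ["firefox"] * 8 + ["curl"] * 2  # a "least-attacked" slice

def distribution(events):
    n = len(events)
    return {k: v / n for k, v in Counter(events).items()}

def odds_ratio(x):
    """Estimate P(x | observed) / P(x | legit); values well above 1 suggest attack."""
    p_observed = distribution(mixed).get(x, 0.0)
    p_legit = distribution(clean_subset).get(x, 1e-9)  # avoid division by zero
    return p_observed / p_legit

print(round(odds_ratio("curl"), 2))     # inflated in the mixed stream
print(round(odds_ratio("firefox"), 2))  # roughly matches the legit distribution
```

The stationarity assumption is what makes the clean subset usable as a proxy for legitimate traffic even as attack volume varies over time.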
Citations: 10
CRCount: Pointer Invalidation with Reference Counting to Mitigate Use-after-free in Legacy C/C++
Pub Date : 2019-01-01 DOI: 10.14722/ndss.2019.23541
Jangseop Shin, Donghyun Kwon, Jiwon Seo, Yeongpil Cho, Y. Paek
Pointer invalidation has been a popular approach adopted in many recent studies to mitigate use-after-free errors. The approach can be divided largely into two different schemes: explicit invalidation and implicit invalidation. The former aims to eradicate the root cause of use-after-free errors by explicitly invalidating every dangling pointer. In contrast, the latter aims to prevent dangling pointers by freeing an object only if there is no pointer referring to it. A downside of the explicit scheme is that it is expensive, as it demands high-cost algorithms or a large amount of space to maintain up-to-date lists of pointer locations linking to each object. Implicit invalidation is more efficient in that even without any explicit effort, it can eliminate dangling pointers by leaving objects undeleted until all the links between the objects and their referring pointers vanish by themselves during program execution. However, such an argument only holds if the scheme knows exactly when each link is created and deleted. Reference counting is a traditional method to determine the existence of reference links between objects and pointers. Unfortunately, impeccable reference counting for legacy C/C++ code is very difficult and expensive to achieve in practice, mainly because of the type unsafe operations in the code. In this paper, we present a solution, called CRCount, to the use-after-free problem in legacy C/C++. For effective and efficient problem solving, CRCount is armed with the pointer footprinting technique that enables us to compute, with high accuracy, the reference count of every object referred to by the pointers in the legacy code. Our experiments demonstrate that CRCount mitigates the use-after-free errors with a lower performance-wise and space-wise overhead than the existing pointer invalidation solutions.
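The implicit-invalidation idea — defer physical release until the reference count hits zero, so a dangling pointer can never exist — can be sketched in a few lines. This is a toy Python analogy of the scheme the abstract describes, not CRCount's compiler-instrumented C/C++ mechanism; `Obj`, `Ptr`, `free`, and `_maybe_release` are invented names.

```python
class Obj:
    """A heap object whose physical release is deferred until no pointer
    refers to it, even after the program has called free() on it."""
    def __init__(self, data):
        self.data = data
        self.refs = 0          # maintained by instrumented pointer stores
        self.freed = False     # program-level free() happened
        self.released = False  # stand-in for memory returned to the allocator

def free(obj):
    """Explicit free: mark the object dead, but release it only once its
    reference count drops to zero (implicit invalidation)."""
    obj.freed = True
    _maybe_release(obj)

def _maybe_release(obj):
    if obj.freed and obj.refs == 0 and not obj.released:
        obj.released = True

class Ptr:
    """Instrumented pointer: every store adjusts the pointee's refcount,
    a toy analogue of tracking pointer footprints at pointer writes."""
    def __init__(self, obj=None):
        self.obj = None
        self.assign(obj)

    def assign(self, obj):
        old, self.obj = self.obj, obj
        if obj is not None:
            obj.refs += 1
        if old is not None:
            old.refs -= 1
            _maybe_release(old)

    def deref(self):
        # With implicit invalidation this check never fires for a freed
        # object that is still referenced: the object has not been released.
        if self.obj is None or self.obj.released:
            raise RuntimeError("dangling access")
        return self.obj.data
```

A freed object stays dereferenceable while any pointer still refers to it, and is released the moment the last such pointer is overwritten.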
Citations: 22
Ginseng: Keeping Secrets in Registers When You Distrust the Operating System
Pub Date : 2019-01-01 DOI: 10.14722/ndss.2019.23327
Minhong Yun, Lin Zhong
Many mobile and embedded apps possess sensitive data, or secrets. Trusting the operating system (OS), they often keep their secrets in the memory. Recent incidents have shown that the memory is not necessarily secure because the OS can be compromised due to inevitable vulnerabilities resulting from its sheer size and complexity. Existing solutions protect sensitive data against an untrusted OS by running app logic in the Secure world, a Trusted Execution Environment (TEE) supported by the ARM TrustZone technology. Because app logic increases the attack surface of their TEE, these solutions do not work for third-party apps. This work aims to support third-party apps without growing the attack surface, significant development effort, or performance overhead. Our solution, called Ginseng, protects sensitive data by allocating them to registers at compile time and encrypting them at runtime before they enter the memory, due to function calls, exceptions or lack of physical registers. Ginseng does not run any app logic in the TEE and only requires minor markups to support existing apps. We report a prototype implementation based on LLVM, ARM Trusted Firmware (ATF), and the HiKey board. We evaluate it with both microbenchmarks and real-world secret-holding apps. Our evaluation shows Ginseng efficiently protects sensitive data with low engineering effort. For example, a Ginseng-enabled web server, Nginx, protects the TLS master key with no measurable overhead. We find Ginseng's overhead is proportional to how often sensitive data in registers have to be encrypted and decrypted, i.e., spilling and restoring sensitive data on a function call or under high register pressure. As a result, Ginseng is most suited to protecting small sensitive data, like a password or social security number.
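Ginseng's actual mechanism operates at the compiler and TrustZone-firmware level; the register spill/restore cycle it secures can nonetheless be mimicked at a high level. The sketch below is only a conceptual analogy in Python: `Frame`, `SPILL_KEY`, and the XOR cipher are invented stand-ins (the real key lives in the Secure world and real spills use proper encryption), meant to show that a secret is encrypted the moment it leaves the "register" and decrypted only when restored.

```python
# Hardcoded toy key: a stand-in for a key the Secure world would hold.
SPILL_KEY = bytes(range(1, 17))

def _xor(data, key):
    """Toy stream cipher; a real design would use authenticated encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class Frame:
    """Toy model of the spill path: a sensitive value lives in a 'register'
    slot and is encrypted whenever it must be written to attacker-observable
    memory, e.g. across a function call or under register pressure."""
    def __init__(self):
        self.register = None   # sensitive value; never stored in the clear
        self.memory = {}       # observable spill area (the 'stack')

    def spill(self, slot):
        self.memory[slot] = _xor(self.register, SPILL_KEY)
        self.register = None   # register is reused; plaintext is gone

    def restore(self, slot):
        self.register = _xor(self.memory.pop(slot), SPILL_KEY)
```

Because every spill costs an encryption and every restore a decryption, this toy also reflects the abstract's observation that overhead scales with how often secrets cross the register/memory boundary.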
Citations: 33
Journal
Proceedings 2019 Network and Distributed System Security Symposium