Title: Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning
Authors: Milad Nasr, R. Shokri, A. Houmansadr
Venue: 2019 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/SP.2019.00065

Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through the parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We evaluate our novel white-box membership inference attacks against deep learning algorithms to trace their training data records. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is used to train deep neural networks. We investigate the reasons why deep learning models may leak information about their training data. We then show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants in the federated learning setting can successfully run active membership inference attacks against other participants, even when the global model achieves high prediction accuracy.
Title: SoK: Shining Light on Shadow Stacks
Authors: N. Burow, Xinping Zhang, Mathias Payer
Venue: 2019 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/SP.2019.00076

Control-Flow Hijacking attacks are the dominant attack vector against C/C++ programs. Control-Flow Integrity (CFI) solutions mitigate these attacks on the forward edge, i.e., indirect calls through function pointers and virtual calls. Protecting the backward edge is left to stack canaries, which are easily bypassed through information leaks. Shadow Stacks are a fully precise mechanism for protecting backward edges, and should be deployed with CFI mitigations. We present a comprehensive analysis of all possible shadow stack mechanisms along three axes: performance, compatibility, and security. For performance comparisons we use SPEC CPU2006, while security and compatibility are qualitatively analyzed. Based on our study, we renew calls for a shadow stack design that leverages a dedicated register, resulting in low performance overhead and minimal memory overhead, but sacrificing compatibility. We present case studies of our implementation of such a design, Shadesmar, on Phoronix and Apache to demonstrate the feasibility of dedicating a general-purpose register to a security monitor on modern architectures, and Shadesmar's deployability. Our comprehensive analysis, including detailed case studies for our novel design, allows compiler designers and practitioners to select the correct shadow stack design for different usage scenarios. Shadow stacks belong to the class of defense mechanisms that require metadata about the program's state to enforce their defense policies. Protecting this metadata for deployed mitigations requires in-process isolation of a segment of the virtual address space. Prior work on defenses in this class has relied on information hiding to protect metadata. We show that stronger guarantees are possible by repurposing two new Intel x86 extensions for memory protection (MPX) and page table control (MPK). Building on our isolation efforts with MPX and MPK, we present the design requirements for a dedicated hardware mechanism to support intra-process memory isolation, and discuss how such a mechanism can empower the next wave of highly precise software security mitigations that rely on partially isolated information in a process.
Title: True2F: Backdoor-Resistant Authentication Tokens
Authors: Emma Dauterman, Henry Corrigan-Gibbs, David Mazières, D. Boneh, Dominic Rizzo
Venue: 2019 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/SP.2019.00048

We present True2F, a system for second-factor authentication that provides the benefits of conventional authentication tokens in the face of phishing and software compromise, while also providing strong protection against token faults and backdoors. To do so, we develop new lightweight two-party protocols for generating cryptographic keys and ECDSA signatures, and we implement new privacy defenses to prevent cross-origin token-fingerprinting attacks. To facilitate real-world deployment, our system is backwards-compatible with today's U2F-enabled web services and runs on commodity hardware tokens after a firmware modification. A True2F-protected authentication takes just 57ms to complete on the token, compared with 23ms for unprotected U2F.
Title: HOLMES: Real-Time APT Detection through Correlation of Suspicious Information Flows
Authors: Sadegh M. Milajerdi, Rigel Gjomemo, Birhanu Eshete, R. Sekar, V. Venkatakrishnan
Venue: 2019 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/SP.2019.00026

In this paper, we present HOLMES, a system that implements a new approach to the detection of Advanced Persistent Threats (APTs). HOLMES is inspired by several case studies of real-world APTs that highlight some common goals of APT actors. In a nutshell, HOLMES aims to produce a detection signal that indicates the presence of a coordinated set of activities that are part of an APT campaign. One of the main challenges addressed by our approach involves developing a suite of techniques that make the detection signal robust and reliable. At a high level, the techniques we develop leverage the correlation between suspicious information flows that arise during an attacker campaign. In addition to its detection capability, HOLMES is also able to generate a high-level graph that summarizes the attacker's actions in real time. This graph can be used by an analyst for an effective cyber response. An evaluation of our approach against some real-world APTs indicates that HOLMES can detect APT campaigns with high precision and a low false-alarm rate. The compact high-level graphs produced by HOLMES effectively summarize an ongoing attack campaign and can assist real-time cyber-response operations.
Title: Theory and Practice of Finding Eviction Sets
Authors: Pepe Vila, Boris Köpf, J. Morales
Venue: 2019 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/SP.2019.00042

Many micro-architectural attacks rely on the capability of an attacker to efficiently find small eviction sets: groups of virtual addresses that map to the same cache set. This capability has become a decisive primitive for cache side-channel, rowhammer, and speculative execution attacks. Despite their importance, algorithms for finding small eviction sets have not been systematically studied in the literature. In this paper, we perform such a systematic study. We begin by formalizing the problem and analyzing the probability that a set of random virtual addresses is an eviction set. We then present novel algorithms, based on ideas from threshold group testing, that reduce random eviction sets to their minimal core in linear time, improving over the quadratic state-of-the-art. We complement the theoretical analysis of our algorithms with a rigorous empirical evaluation in which we identify and isolate factors that affect their reliability in practice, such as adaptive cache replacement strategies and TLB thrashing. Our results indicate that our algorithms enable finding small eviction sets much faster than before, and under conditions where this was previously deemed impractical.
Title: Fidelius: Protecting User Secrets from Compromised Browsers
Authors: Saba Eskandarian, Jonathan Cogan, Sawyer Birnbaum, Peh Chang Wei Brandon, Dillon Franke, Forest Fraser, Gaspar Garcia, Eric Gong, Hung T. Nguyen, Taresh K. Sethi, Vishal Subbiah, M. Backes, Giancarlo Pellegrino, D. Boneh
Venue: 2019 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/SP.2019.00036

Users regularly enter sensitive data, such as passwords, credit card numbers, or tax information, into the browser window. While modern browsers provide powerful client-side privacy measures to protect this data, none of these defenses prevent a browser compromised by malware from stealing it. In this work, we present Fidelius, a new architecture that uses trusted hardware enclaves integrated into the browser to enable protection of user secrets during web browsing sessions, even if the entire underlying browser and OS are fully controlled by a malicious attacker. Fidelius solves many challenges involved in providing protection for browsers in a fully malicious environment, offering support for integrity and privacy for form data, JavaScript execution, XMLHttpRequests, and protected web storage, while minimizing the TCB. Moreover, interactions between the enclave and the browser, the keyboard, and the display all require new protocols, each with its own security considerations. Finally, Fidelius takes into account UI considerations to ensure a consistent and simple interface for both developers and users. As part of this project, we develop the first open source system that provides a trusted path from input and output peripherals to a hardware enclave with no reliance on additional hypervisor security assumptions. These components may be of independent interest and useful to future projects. We implement and evaluate Fidelius to measure its performance overhead, finding that Fidelius imposes acceptable overhead on page load and user interaction for secured pages and has no impact on pages and page components that do not use its enhanced security features.
Title: Synesthesia: Detecting Screen Content via Remote Acoustic Side Channels
Authors: Daniel Genkin, Mihir Pattani, R. Schuster, Eran Tromer
Venue: 2019 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/SP.2019.00074

We show that subtle acoustic noises emanating from within computer screens can be used to detect the content displayed on the screens. This sound can be picked up by ordinary microphones built into webcams or screens, and is inadvertently transmitted to other parties, e.g., during a videoconference call or in archived recordings. It can also be recorded by a smartphone or "smart speaker" placed on a desk next to the screen, or from as far as 10 meters away using a parabolic microphone. Empirically demonstrating various attack scenarios, we show how this channel can be used for real-time detection of on-screen text, or of users' input into on-screen virtual keyboards. We also demonstrate how an attacker can analyze the audio received during a video call (e.g., on Google Hangouts) to infer whether the other side is browsing the web instead of watching the video call, and which web site is displayed on their screen.
Title: "Should I Worry?" A Cross-Cultural Examination of Account Security Incident Response
Authors: Elissa M. Redmiles
Venue: 2019 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/SP.2019.00059

Digital security technology is able to identify and prevent many threats to users' accounts. However, some threats remain that, to provide reliable security, require human intervention: e.g., users paying attention to warning messages or completing secondary authentication procedures. While prior work has broadly explored people's mental models of digital security threats, we know little about users' precise, in-the-moment response process to in-the-wild threats. In this work, we conduct a series of qualitative interviews (n=67) with users who had recently experienced suspicious login incidents on their real Facebook accounts in order to explore this process of account security incident response. We find a common process across participants from five countries with differing online and offline cultures, allowing us to identify areas for future technical development to best support user security. We provide additional insights on the unique nature of incident-response information seeking, known attacker threat models, and lessons learned from a large, cross-cultural qualitative study of digital security.
Title: Security of GPS/INS Based On-road Location Tracking Systems
Authors: Sashank Narain, Aanjhan Ranganathan, G. Noubir
Venue: 2019 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/SP.2019.00068

Location information is critical to a wide variety of navigation and tracking applications. GPS, today's de facto outdoor localization system, has been shown to be vulnerable to signal spoofing attacks. Inertial Navigation Systems (INS) are emerging as a popular complementary system, especially in road transportation, as they enable improved navigation and tracking and offer resilience to wireless signal spoofing and jamming attacks. In this paper, we evaluate the security guarantees of INS-aided GPS tracking and navigation for road transportation systems. We consider an adversary who is required to travel from a source location to a destination and is monitored by an INS-aided GPS system. The goal of the adversary is to travel to alternate locations without being detected. We develop and evaluate algorithms that achieve this goal, providing the adversary significant latitude. Our algorithms build a graph model for a given road network and enable us to derive potential destinations an attacker can reach without raising alarms, even with the INS-aided GPS tracking and navigation system in place. The algorithms render the gyroscope and accelerometer sensors useless, as they generate road trajectories indistinguishable from plausible paths (both in terms of turn angles and road curvature). We also design, build, and demonstrate that the magnetometer can be actively spoofed using a combination of carefully controlled coils. To experimentally demonstrate and evaluate the feasibility of the attack in the real world, we implement the first real-time integrated GPS/INS spoofer that accounts for traffic fluidity, congestion, and lights, and dynamically generates the corresponding spoofing signals. Furthermore, we evaluate our attack on ten different cities using driving traces and publicly available city plans. Our evaluations show that it is possible for an attacker to reach destinations that are as far as 30 km away from the actual destination without being detected. We also show that it is possible for the adversary to reach almost 60-80% of possible points within the target region in some cities. Such results are only a lower bound, as an adversary can adjust our parameters to spend more resources (e.g., time) on the target source/destination than we did for our performance evaluations of thousands of paths. We propose countermeasures that limit an attacker's ability without the need for any hardware modifications. Our system can be used as the foundation for countering such attacks, both by detecting spoofing and by recommending paths that are difficult to spoof.
Title: NEUZZ: Efficient Fuzzing with Neural Program Smoothing
Authors: Dongdong She, Kexin Pei, Dave Epstein, Junfeng Yang, Baishakhi Ray, S. Jana
Venue: 2019 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/SP.2019.00052

Fuzzing has become the de facto standard technique for finding software vulnerabilities. However, even state-of-the-art fuzzers are not very efficient at finding hard-to-trigger software bugs. Most popular fuzzers use evolutionary guidance to generate inputs that can trigger different bugs. Such evolutionary algorithms, while fast and simple to implement, often get stuck in fruitless sequences of random mutations. Gradient-guided optimization presents a promising alternative to evolutionary guidance. Gradient-guided techniques have been shown to significantly outperform evolutionary algorithms at solving high-dimensional structured optimization problems in domains like machine learning by efficiently utilizing gradients or higher-order derivatives of the underlying function. However, gradient-guided approaches are not directly applicable to fuzzing, as real-world program behaviors contain many discontinuities, plateaus, and ridges where gradient-based methods often get stuck. We observe that this problem can be addressed by creating a smooth surrogate function approximating the target program's discrete branching behavior. In this paper, we propose a novel program smoothing technique using surrogate neural network models that can incrementally learn smooth approximations of a complex, real-world program's branching behaviors. We further demonstrate that such neural network models can be used together with gradient-guided input generation schemes to significantly increase the efficiency of the fuzzing process. Our extensive evaluations demonstrate that NEUZZ significantly outperforms 10 state-of-the-art graybox fuzzers on 10 popular real-world programs, both at finding new bugs and at achieving higher edge coverage. NEUZZ found 31 previously unknown bugs (including two CVEs) that other fuzzers failed to find in 10 real-world programs, and achieved 3X more edge coverage than all of the tested graybox fuzzers over 24-hour runs. Furthermore, NEUZZ also outperformed existing fuzzers on both the LAVA-M and DARPA CGC bug datasets.