Automatic Uncovering of Hidden Behaviors From Input Validation in Mobile Apps
Pub Date: 2020-05-01 | DOI: 10.1109/SP40000.2020.00072 | 2020 IEEE Symposium on Security and Privacy (SP), pp. 1106-1120
Qingchuan Zhao, Chaoshun Zuo, Brendan Dolan-Gavitt, Giancarlo Pellegrino, Zhiqiang Lin
Mobile applications (apps) have exploded in popularity, with billions of smartphone users using millions of apps available through markets such as the Google Play Store or the Apple App Store. While these apps have rich and useful functionality that is publicly exposed to end users, they also contain hidden behaviors that are not disclosed, such as backdoors and blacklists designed to block unwanted content. In this paper, we show that input validation behavior—the way mobile apps process and respond to data entered by users—can serve as a powerful tool for uncovering such hidden functionality. We have therefore developed a tool, InputScope, that automatically detects both the execution context of user input validation and the content involved in the validation, to expose the secrets of interest. We have tested InputScope with over 150,000 mobile apps, including popular apps from major app stores and preinstalled apps shipped with the phone, and found 12,706 mobile apps with backdoor secrets and 4,028 mobile apps containing blacklist secrets.
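The core signal can be pictured with a toy: a branch that compares user-supplied input against a constant hardcoded in the app. The Python sketch below flags that pattern with a regex over decompiled Java-like source; the variable-name heuristic and the snippet are invented for illustration and are a drastic simplification of the static analysis InputScope actually performs.

```python
import re

# Toy illustration (not InputScope itself): flag string comparisons between
# a user-input-looking variable and a hardcoded constant in decompiled code.
# The variable-name heuristic below is an assumption made for this sketch.
SUSPICIOUS = re.compile(
    r'(\w*(?:input|password|code)\w*)\s*\.equals\(\s*"([^"]+)"\s*\)',
    re.IGNORECASE,
)

def find_hardcoded_checks(source: str):
    """Return (variable, constant) pairs where input is validated against a
    hardcoded value, the kind of check that can hide a backdoor secret."""
    return SUSPICIOUS.findall(source)

decompiled = '''
    if (userInput.equals("SuperUser2020")) { enableAdminMode(); }
    if (password.equals(storedHash)) { login(); }
'''
print(find_hardcoded_checks(decompiled))  # [('userInput', 'SuperUser2020')]
```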
{"title":"Automatic Uncovering of Hidden Behaviors From Input Validation in Mobile Apps","authors":"Qingchuan Zhao, Chaoshun Zuo, Brendan Dolan-Gavitt, Giancarlo Pellegrino, Zhiqiang Lin","doi":"10.1109/SP40000.2020.00072","DOIUrl":"https://doi.org/10.1109/SP40000.2020.00072","url":null,"abstract":"Mobile applications (apps) have exploded in popularity, with billions of smartphone users using millions of apps available through markets such as the Google Play Store or the Apple App Store. While these apps have rich and useful functionality that is publicly exposed to end users, they also contain hidden behaviors that are not disclosed, such as backdoors and blacklists designed to block unwanted content. In this paper, we show that the input validation behavior—the way the mobile apps process and respond to data entered by users—can serve as a powerful tool for uncovering such hidden functionality. We therefore have developed a tool, InputScope, that automatically detects both the execution context of user input validation and also the content involved in the validation, to automatically expose the secrets of interest. We have tested InputScope with over 150,000 mobile apps, including popular apps from major app stores and preinstalled apps shipped with the phone, and found 12,706 mobile apps with backdoor secrets and 4,028 mobile apps containing blacklist secrets.","PeriodicalId":6849,"journal":{"name":"2020 IEEE Symposium on Security and Privacy (SP)","volume":"3 1","pages":"1106-1120"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75678743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fuzzing JavaScript Engines with Aspect-preserving Mutation
Pub Date: 2020-05-01 | DOI: 10.1109/SP40000.2020.00067 | 2020 IEEE Symposium on Security and Privacy (SP), pp. 1629-1642
Soyeon Park, Wen Xu, Insu Yun, Daehee Jang, Taesoo Kim
Fuzzing is a practical, widely-deployed technique to find bugs in complex, real-world programs like JavaScript engines. We observed, however, that existing fuzzing approaches, either generative or mutational, fall short in fully harvesting high-quality input corpora such as known proof-of-concept (PoC) exploits or unit tests. Existing fuzzers tend to destroy subtle semantics or conditions encoded in the input corpus in order to generate new test cases, because this approach helps in discovering new code paths of the program. Nevertheless, for complex programs like JavaScript engines, such a conventional design leads to test cases that tackle only shallow parts of the codebase and fails to reach deep bugs effectively due to the huge input space. In this paper, we advocate a new technique, called aspect-preserving mutation, that stochastically preserves desirable properties, called aspects, that we prefer to be maintained across mutation. We demonstrate aspect preservation with two mutation strategies, namely structure and type preservation, in our fully-fledged JavaScript fuzzer, called Die. Our evaluation shows that Die’s aspect-preserving mutation is more effective in discovering new bugs (5.7× more unique crashes) and producing valid test cases (2.4× fewer runtime errors) than state-of-the-art JavaScript fuzzers. Die discovered 48 new high-impact bugs in ChakraCore, JavaScriptCore, and V8 (38 fixed, with 12 CVEs assigned as of this writing). The source code of Die is publicly available as an open-source project.
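To make "aspect preservation" concrete, here is a minimal sketch of a type-preserving mutation. Die mutates JavaScript, so this Python version only transplants the idea: literals are replaced exclusively by literals of the same type, leaving the seed's structure and type aspects intact. The literal pools are arbitrary choices for this sketch, not Die's.

```python
import ast
import random

# A toy type-preserving mutator (illustrating the idea, not Die's actual
# JavaScript implementation): only swap a literal for another of the same
# type, so the seed's structure and type aspects survive the mutation.
INT_POOL = [0, 1, -1, 2**31 - 1, 2**31]
STR_POOL = ["", "a", "length", "\u0000"]

class AspectPreservingMutator(ast.NodeTransformer):
    def visit_Constant(self, node: ast.Constant) -> ast.Constant:
        if isinstance(node.value, bool):   # bool subclasses int: leave it alone
            return node
        if isinstance(node.value, int):
            return ast.copy_location(ast.Constant(random.choice(INT_POOL)), node)
        if isinstance(node.value, str):
            return ast.copy_location(ast.Constant(random.choice(STR_POOL)), node)
        return node                        # preserve every other aspect as-is

seed = "arr = [1, 2, 3]\nkey = 'length'"
tree = AspectPreservingMutator().visit(ast.parse(seed))
print(ast.unparse(ast.fix_missing_locations(tree)))
```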
{"title":"Fuzzing JavaScript Engines with Aspect-preserving Mutation","authors":"Soyeon Park, Wen Xu, Insu Yun, Daehee Jang, Taesoo Kim","doi":"10.1109/SP40000.2020.00067","DOIUrl":"https://doi.org/10.1109/SP40000.2020.00067","url":null,"abstract":"Fuzzing is a practical, widely-deployed technique to find bugs in complex, real-world programs like JavaScript engines. We observed, however, that existing fuzzing approaches, either generative or mutational, fall short in fully harvesting high-quality input corpora such as known proof of concept (PoC) exploits or unit tests. Existing fuzzers tend to destruct subtle semantics or conditions encoded in the input corpus in order to generate new test cases because this approach helps in discovering new code paths of the program. Nevertheless, for JavaScript-like complex programs, such a conventional design leads to test cases that tackle only shallow parts of the complex codebase and fails to reach deep bugs effectively due to the huge input space.In this paper, we advocate a new technique, called an aspect-preserving mutation, that stochastically preserves the desirable properties, called aspects, that we prefer to be maintained across mutation. We demonstrate the aspect preservation with two mutation strategies, namely, structure and type preservation, in our fully-fledged JavaScript fuzzer, called Die. Our evaluation shows that Die’s aspect-preserving mutation is more effective in discovering new bugs (5.7× more unique crashes) and producing valid test cases (2.4× fewer runtime errors) than the state-of-the-art JavaScript fuzzers. Die newly discovered 48 high-impact bugs in ChakraCore, JavaScriptCore, and V8 (38 fixed with 12 CVEs assigned as of today). The source code of Die is publicly available as an open-source project.1","PeriodicalId":6849,"journal":{"name":"2020 IEEE Symposium on Security and Privacy (SP)","volume":"5 1","pages":"1629-1642"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74267226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Security Analysis of the Facebook Ad Library
Pub Date: 2020-05-01 | DOI: 10.1109/SP40000.2020.00084 | 2020 IEEE Symposium on Security and Privacy (SP), pp. 661-678
Laura Edelson, Tobias Lauinger, Damon McCoy
Actors engaged in election disinformation are using online advertising platforms to spread political messages. In response to this threat, online advertising networks have started making political advertising on their platforms more transparent in order to enable third parties to detect malicious advertisers. We present a set of methodologies and perform a security analysis of Facebook’s U.S. Ad Library, its political advertising transparency product. Unfortunately, we find several weaknesses that enable a malicious advertiser to avoid accurate disclosure of their political ads. We also propose a clustering-based method to detect advertisers engaged in undeclared coordinated activity. Our clustering method identified 16 clusters of likely inauthentic communities that spent a total of over four million dollars on political advertising. This supports the idea that transparency could be a promising tool for combating disinformation. Finally, based on our findings, we make recommendations for improving the security of advertising transparency on Facebook and other platforms.
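The abstract does not spell out the clustering features, so the following is only a plausible toy reduction of coordination detection, not the paper's method: link advertiser pages that ran byte-identical ad creatives, then report connected components as candidate coordinated communities.

```python
from collections import defaultdict

# Hedged sketch of coordination detection via shared creatives (the paper's
# actual features and algorithm may differ): pages sharing an identical ad
# text are unioned, and multi-page components become candidate communities.
def coordination_clusters(ads):
    """ads: iterable of (page_id, creative_text) pairs."""
    by_creative = defaultdict(set)
    for page, text in ads:
        by_creative[text].add(page)

    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for pages in by_creative.values():      # union pages sharing a creative
        pages = list(pages)
        for other in pages[1:]:
            parent[find(other)] = find(pages[0])

    clusters = defaultdict(set)
    for page in parent:
        clusters[find(page)].add(page)
    return [c for c in clusters.values() if len(c) > 1]

ads = [("pageA", "Vote YES on 12"), ("pageB", "Vote YES on 12"),
       ("pageC", "Lower taxes now"), ("pageD", "Lower taxes now"),
       ("pageB", "Lower taxes now")]
print(coordination_clusters(ads))  # one component: pageA..pageD, linked via pageB
```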
{"title":"A Security Analysis of the Facebook Ad Library","authors":"Laura Edelson, Tobias Lauinger, Damon McCoy","doi":"10.1109/SP40000.2020.00084","DOIUrl":"https://doi.org/10.1109/SP40000.2020.00084","url":null,"abstract":"Actors engaged in election disinformation are using online advertising platforms to spread political messages. In response to this threat, online advertising networks have started making political advertising on their platforms more transparent in order to enable third parties to detect malicious advertisers. We present a set of methodologies and perform a security analysis of Facebook’s U.S. Ad Library, which is their political advertising transparency product. Unfortunately, we find that there are several weaknesses that enable a malicious advertiser to avoid accurate disclosure of their political ads. We also propose a clustering-based method to detect advertisers engaged in undeclared coordinated activity. Our clustering method identified 16 clusters of likely inauthentic communities that spent a total of over four million dollars on political advertising. This supports the idea that transparency could be a promising tool for combating disinformation. Finally, based on our findings, we make recommendations for improving the security of advertising transparency on Facebook and other platforms.","PeriodicalId":6849,"journal":{"name":"2020 IEEE Symposium on Security and Privacy (SP)","volume":"151 1","pages":"661-678"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74333955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cornucopia: Temporal Safety for CHERI Heaps
Pub Date: 2020-05-01 | DOI: 10.1109/SP40000.2020.00098 | 2020 IEEE Symposium on Security and Privacy (SP), pp. 608-625
N. Filardo, B. F. Gutstein, Jonathan Woodruff, S. Ainsworth, Lucian Paul-Trifu, Brooks Davis, Hongyan Xia, E. Napierala, Alexander Richardson, John Baldwin, D. Chisnall, Jessica Clarke, Khilan Gudka, Alexandre Joannou, A. T. Markettos, Alfredo Mazzinghi, Robert M. Norton, M. Roe, Peter Sewell, Stacey D. Son, Timothy M. Jones, S. Moore, P. Neumann, R. Watson
Use-after-free violations of temporal memory safety continue to plague software systems, underpinning many high-impact exploits. The CHERI capability system shows great promise in achieving C and C++ language spatial memory safety, preventing out-of-bounds accesses. Enforcing language-level temporal safety on CHERI requires capability revocation, traditionally achieved either via table lookups (avoided for performance in the CHERI design) or by identifying capabilities in memory to revoke them (similar to a garbage-collector sweep). CHERIvoke, a prior feasibility study, suggested that CHERI’s tagged capabilities could make this latter strategy viable, but modeled only architectural limits and did not consider the full implementation or evaluation of the approach. Cornucopia is a lightweight capability revocation system for CHERI that implements non-probabilistic C/C++ temporal memory safety for standard heap allocations. It extends the CheriBSD virtual-memory subsystem to track capability flow through memory and provides a concurrent kernel-resident revocation service that is amenable to multi-processor and hardware acceleration. We demonstrate an average overhead of less than 2% and a worst-case of 8.9% for concurrent revocation on compatible SPEC CPU2006 benchmarks on a multi-core CHERI CPU on FPGA, and we validate Cornucopia against the Juliet test suite’s corpus of temporally unsafe programs. We test its compatibility with a large corpus of C programs by using a revoking allocator as the system allocator while booting multi-user CheriBSD. Cornucopia is a viable strategy for always-on temporal heap memory safety, suitable for production environments.
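A sweeping revocation pass is easy to model in miniature. The sketch below is a toy software analogue (real CHERI capabilities carry hardware validity tags and the sweep runs inside the CheriBSD kernel; the names here are assumptions): freed allocations sit in quarantine until a sweep clears the tag of every capability whose base falls inside a quarantined range.

```python
from dataclasses import dataclass

# Toy model of the revocation sweep idea: after free(), a region is
# quarantined rather than reused, and a sweep over memory invalidates any
# still-tagged capability that points into a quarantined range.
@dataclass
class Capability:
    base: int
    length: int
    tag: bool = True        # stands in for the hardware validity tag

def sweep(memory, quarantine):
    """memory: list of words, some of which are Capability objects.
    quarantine: list of (start, end) address ranges awaiting revocation."""
    for word in memory:
        if isinstance(word, Capability) and word.tag:
            if any(start <= word.base < end for start, end in quarantine):
                word.tag = False    # revoked: a later dereference would trap

heap = [Capability(0x1000, 64), 42, Capability(0x2000, 16)]
sweep(heap, quarantine=[(0x1000, 0x1040)])
print([w.tag if isinstance(w, Capability) else w for w in heap])
# [False, 42, True] -- only the dangling capability has been revoked
```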
{"title":"Cornucopia: Temporal Safety for CHERI Heaps","authors":"N. Filardo, B. F. Gutstein, Jonathan Woodruff, S. Ainsworth, Lucian Paul-Trifu, Brooks Davis, Hongyan Xia, E. Napierala, Alexander Richardson, John Baldwin, D. Chisnall, Jessica Clarke, Khilan Gudka, Alexandre Joannou, A. T. Markettos, Alfredo Mazzinghi, Robert M. Norton, M. Roe, Peter Sewell, Stacey D. Son, Timothy M. Jones, S. Moore, P. Neumann, R. Watson","doi":"10.1109/SP40000.2020.00098","DOIUrl":"https://doi.org/10.1109/SP40000.2020.00098","url":null,"abstract":"Use-after-free violations of temporal memory safety continue to plague software systems, underpinning many high-impact exploits. The CHERI capability system shows great promise in achieving C and C++ language spatial memory safety, preventing out-of-bounds accesses. Enforcing language-level temporal safety on CHERI requires capability revocation, traditionally achieved either via table lookups (avoided for performance in the CHERI design) or by identifying capabilities in memory to revoke them (similar to a garbage-collector sweep). CHERIvoke, a prior feasibility study, suggested that CHERI’s tagged capabilities could make this latter strategy viable, but modeled only architectural limits and did not consider the full implementation or evaluation of the approach.Cornucopia is a lightweight capability revocation system for CHERI that implements non-probabilistic C/C++ temporal memory safety for standard heap allocations. It extends the CheriBSD virtual-memory subsystem to track capability flow through memory and provides a concurrent kernel-resident revocation service that is amenable to multi-processor and hardware acceleration. We demonstrate an average overhead of less than 2% and a worst-case of 8.9% for concurrent revocation on compatible SPEC CPU2006 benchmarks on a multi-core CHERI CPU on FPGA, and we validate Cornucopia against the Juliet test suite’s corpus of temporally unsafe programs. We test its compatibility with a large corpus of C programs by using a revoking allocator as the system allocator while booting multi-user CheriBSD. Cornucopia is a viable strategy for always-on temporal heap memory safety, suitable for production environments.","PeriodicalId":6849,"journal":{"name":"2020 IEEE Symposium on Security and Privacy (SP)","volume":"30 1","pages":"608-625"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82712803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detection of Electromagnetic Interference Attacks on Sensor Systems
Pub Date: 2020-05-01 | DOI: 10.1109/SP40000.2020.00001 | 2020 IEEE Symposium on Security and Privacy (SP), pp. 203-216
Kasper Bonne Rasmussen, Youqian Zhang
Sensor systems are used every time a microcontroller needs to interact with the physical world. They are abundant in home automation, factory control systems, critical infrastructure, transport systems and many, many other things. In a sensor system, a sensor transforms a physical quantity into an analog signal which is sent to an ADC and a microcontroller for digitization and further processing. Once the measurement is in digital form, the microcontroller can execute tasks according to the measurement. Electromagnetic interference (EMI) can affect a measurement as it is transferred to the microcontroller. An attacker can manipulate the sensor output by intentionally inducing EMI in the wire between the sensor and the microcontroller. The nature of the analog channel between the sensor and the microcontroller means that the microcontroller cannot authenticate whether the measurement is from the sensor or the attacker. If the microcontroller includes incorrect measurements in its control decisions, it could have disastrous consequences. We present a novel detection system for these low-level electromagnetic interference attacks. Our system is based on the idea that if the sensor is turned off, the signal read by the microcontroller should be 0V (or some other known value). We use this idea to modulate the sensor output in a way that is unpredictable to the adversary. If the microcontroller detects fluctuations in the sensor output, the attacking signal can be detected. Our proposal works with a minimal amount of extra components and is thus cheap and easy to implement. We present the working mechanism of our detection method and prove the detection guarantee in the context of a strong attacker model. We implement our approach in order to detect adversarial EMI signals, both in a microphone system and a temperature sensor system, and we show that our detection mechanism is both effective and robust.
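The detection mechanism can be sketched directly from the description above; the threshold, duty cycle, and simulated voltages are invented for illustration. The microcontroller powers the sensor down during secret, randomly chosen slots, so any energy it reads during an off slot must have been injected.

```python
import random

# Minimal sketch of the off-slot detection idea (values are assumptions):
# the sensor is switched off at unpredictable times; a reading that deviates
# from the known off-level during an off slot reveals an injected signal.
OFF_LEVEL = 0.0     # expected reading with the sensor powered off (volts)
THRESHOLD = 0.1     # tolerance for electrical noise (volts)

def detect_emi(read_adc, set_sensor_power, n_slots=100):
    for _ in range(n_slots):
        off = random.random() < 0.5          # unpredictable to the attacker
        set_sensor_power(not off)
        if off and abs(read_adc() - OFF_LEVEL) > THRESHOLD:
            return True                      # energy in an off slot: attack
    return False

# Simulated hardware: an attacker injecting a constant 0.8 V via EMI.
powered = True
def set_sensor_power(on):                    # toggles the simulated supply
    global powered
    powered = on
def read_adc():
    legit = 0.5 if powered else 0.0
    return legit + 0.8                       # injected EMI rides on the wire
print(detect_emi(read_adc, set_sensor_power))  # True
```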
{"title":"Detection of Electromagnetic Interference Attacks on Sensor Systems","authors":"Kasper Bonne Rasmussen, Youqian Zhang","doi":"10.1109/SP40000.2020.00001","DOIUrl":"https://doi.org/10.1109/SP40000.2020.00001","url":null,"abstract":"Sensor systems are used every time a microcontroller needs to interact with the physical world. They are abundant in home automation, factory control systems, critical infrastructure, transport systems and many, many other things.In a sensor system, a sensor transforms a physical quantity into an analog signal which is sent to an ADC and a microcontroller for digitization and further processing. Once the measurement is in digital form, the microcontroller can execute tasks according to the measurement. Electromagnetic interference (EMI) can affect a measurement as it is transferred to the microcontroller. An attacker can manipulate the sensor output by intentionally inducing EMI in the wire between the sensor and the microcontroller. The nature of the analog channel between the sensor and the microcontroller means that the microcontroller cannot authenticate whether the measurement is from the sensor or the attacker. If the microcontroller includes incorrect measurements in its control decisions, it could have disastrous consequences.We present a novel detection system for these low-level electromagnetic interference attacks. Our system is based on the idea that if the sensor is turned off, the signal read by the microcontroller should be 0V (or some other known value). We use this idea to modulate the sensor output in a way that is unpredictable to the adversary. If the microcontroller detects fluctuations in the sensor output, the attacking signal can be detected. Our proposal works with a minimal amount of extra components and is thus cheap and easy to implement.We present the working mechanism of our detection method and prove the detection guarantee in the context of a strong attacker model. We implement our approach in order to detect adversarial EMI signals, both in a microphone system and a temperature sensor system, and we show that our detection mechanism is both effective and robust.","PeriodicalId":6849,"journal":{"name":"2020 IEEE Symposium on Security and Privacy (SP)","volume":"35 1","pages":"203-216"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89789919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Flash Boys 2.0: Frontrunning in Decentralized Exchanges, Miner Extractable Value, and Consensus Instability
Pub Date: 2020-05-01 | DOI: 10.1109/SP40000.2020.00040 | 2020 IEEE Symposium on Security and Privacy (SP), pp. 910-927
Philip Daian, Steven Goldfeder, T. Kell, Yunqi Li, Xueyuan Zhao, Iddo Bentov, Lorenz Breidenbach, A. Juels
Blockchains, and specifically smart contracts, have promised to create fair and transparent trading ecosystems. Unfortunately, we show that this promise has not been met. We document and quantify the widespread and rising deployment of arbitrage bots in blockchain systems, specifically in decentralized exchanges (or "DEXes"). Like high-frequency traders on Wall Street, these bots exploit inefficiencies in DEXes, paying high transaction fees and optimizing network latency to frontrun, i.e., anticipate and exploit, ordinary users’ DEX trades. We study the breadth of DEX arbitrage bots in a subset of transactions that yield quantifiable revenue to these bots. We also study bots’ profit-making strategies, with a focus on blockchain-specific elements. We observe bots engaging in what we call priority gas auctions (PGAs), competitively bidding up transaction fees in order to obtain priority ordering, i.e., early block position and execution, for their transactions. PGAs present an interesting and complex new continuous-time, partial-information, game-theoretic model that we formalize and study. We release an interactive web portal, frontrun.me, to provide the community with real-time data on PGAs. We additionally show that the high fees paid for priority transaction ordering pose a systemic risk to consensus-layer security. We explain that such fees are just one form of a general phenomenon in DEXes and beyond—what we call miner extractable value (MEV)—that poses concrete, measurable, consensus-layer security risks. We show empirically that MEV poses a realistic threat to Ethereum today. Our work highlights the large, complex risks created by transaction-ordering dependencies in smart contracts and the ways in which traditional forms of financial-market exploitation are adapting to and penetrating blockchain economies.
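A priority gas auction is simple to simulate. The parameters below (arbitrage revenue, gas usage, minimum raise factor) are assumptions for this toy, not measurements from the paper: two bots alternately outbid each other until the next counterbid would consume the entire arbitrage profit.

```python
# Toy PGA simulation: bots counterbid by raising the top gas price by a
# minimum factor (value assumed here) until bidding is no longer profitable.
def priority_gas_auction(revenue_eth, gas_used=100_000, start_gwei=1.0,
                         min_raise=1.125):
    bid_gwei, leader = start_gwei, "bot_A"
    while True:
        challenger = "bot_B" if leader == "bot_A" else "bot_A"
        next_bid = bid_gwei * min_raise           # minimum replacement bump
        fee_eth = next_bid * 1e-9 * gas_used      # gwei -> ETH, times gas
        if fee_eth >= revenue_eth:                # counterbid would eat the prize
            return leader, bid_gwei
        bid_gwei, leader = next_bid, challenger

winner, final_bid = priority_gas_auction(revenue_eth=0.05)
print(winner, f"{final_bid:.0f} gwei")  # the fee is bid up toward the prize
```

Note how most of the arbitrage value ends up as fees: this is the mechanism by which MEV flows toward whoever orders transactions, which is what creates the consensus-layer incentive problems the paper analyzes.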
{"title":"Flash Boys 2.0: Frontrunning in Decentralized Exchanges, Miner Extractable Value, and Consensus Instability","authors":"Philip Daian, Steven Goldfeder, T. Kell, Yunqi Li, Xueyuan Zhao, Iddo Bentov, Lorenz Breidenbach, A. Juels","doi":"10.1109/SP40000.2020.00040","DOIUrl":"https://doi.org/10.1109/SP40000.2020.00040","url":null,"abstract":"Blockchains, and specifically smart contracts, have promised to create fair and transparent trading ecosystems.Unfortunately, we show that this promise has not been met. We document and quantify the widespread and rising deployment of arbitrage bots in blockchain systems, specifically in decentralized exchanges (or \"DEXes\"). Like high-frequency traders on Wall Street, these bots exploit inefficiencies in DEXes, paying high transaction fees and optimizing network latency to frontrun, i.e., anticipate and exploit, ordinary users’ DEX trades.We study the breadth of DEX arbitrage bots in a subset of transactions that yield quantifiable revenue to these bots. We also study bots’ profit-making strategies, with a focus on blockchain-specific elements. We observe bots engage in what we call priority gas auctions (PGAs), competitively bidding up transaction fees in order to obtain priority ordering, i.e., early block position and execution, for their transactions. PGAs present an interesting and complex new continuous-time, partial-information, game-theoretic model that we formalize and study. We release an interactive web portal, frontrun.me, to provide the community with real-time data on PGAs.We additionally show that high fees paid for priority transaction ordering poses a systemic risk to consensus-layer security. We explain that such fees are just one form of a general phenomenon in DEXes and beyond—what we call miner extractable value (MEV)—that poses concrete, measurable, consensus-layer security risks. We show empirically that MEV poses a realistic threat to Ethereum today.Our work highlights the large, complex risks created by transaction-ordering dependencies in smart contracts and the ways in which traditional forms of financial-market exploitation are adapting to and penetrating blockchain economies.","PeriodicalId":6849,"journal":{"name":"2020 IEEE Symposium on Security and Privacy (SP)","volume":"31 1","pages":"910-927"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76639559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Influencing Photo Sharing Decisions on Social Media: A Case of Paradoxical Findings
Pub Date: 2020-05-01 | DOI: 10.1109/SP40000.2020.00006 | 2020 IEEE Symposium on Security and Privacy (SP), pp. 1350-1366
M. J. Amon, Rakibul Hasan, K. Hugenberg, B. Bertenthal, Apu Kapadia
We investigate the effects of perspective taking, privacy cues, and portrayal of photo subjects (i.e., photo valence) on decisions to share photos of people via social media. In an online experiment we queried 379 participants about 98 photos (that were previously rated for photo valence) in three conditions: (1) Baseline: participants judged their likelihood of sharing each photo; (2) Perspective-taking: participants judged their likelihood of sharing each photo when cued to imagine they are the person in the photo; and (3) Privacy: participants judged their likelihood to share after being cued to consider the privacy of the person in the photo. While participants across conditions indicated a lower likelihood of sharing photos that portrayed people negatively, they – surprisingly – reported a higher likelihood of sharing photos when primed to consider the privacy of the person in the photo. Frequent photo sharers on real-world social media platforms and people without strong personal privacy preferences were especially likely to want to share photos in the experiment, regardless of how the photo portrayed the subject. A follow-up study in which 100 participants explained their responses revealed that the Privacy condition led to a lack of concern with others’ privacy. These findings suggest that developing interventions for reducing photo sharing and protecting the privacy of others is a multivariate problem in which seemingly obvious solutions can sometimes go awry.
{"title":"Influencing Photo Sharing Decisions on Social Media: A Case of Paradoxical Findings","authors":"M. J. Amon, Rakibul Hasan, K. Hugenberg, B. Bertenthal, Apu Kapadia","doi":"10.1109/SP40000.2020.00006","DOIUrl":"https://doi.org/10.1109/SP40000.2020.00006","url":null,"abstract":"We investigate the effects of perspective taking, privacy cues, and portrayal of photo subjects (i.e., photo valence) on decisions to share photos of people via social media. In an online experiment we queried 379 participants about 98 photos (that were previously rated for photo valence) in three conditions: (1) Baseline: participants judged their likelihood of sharing each photo; (2) Perspective-taking: participants judged their likelihood of sharing each photo when cued to imagine they are the person in the photo; and (3) Privacy: participants judged their likelihood to share after being cued to consider the privacy of the person in the photo. While participants across conditions indicated a lower likelihood of sharing photos that portrayed people negatively, they – surprisingly – reported a higher likelihood of sharing photos when primed to consider the privacy of the person in the photo. Frequent photo sharers on real-world social media platforms and people without strong personal privacy preferences were especially likely to want to share photos in the experiment, regardless of how the photo portrayed the subject. A follow-up study with 100 participants explaining their responses revealed that the Privacy condition led to a lack of concern with others’ privacy. These findings suggest that developing interventions for reducing photo sharing and protecting the privacy of others is a multivariate problem in which seemingly obvious solutions can sometimes go awry.","PeriodicalId":6849,"journal":{"name":"2020 IEEE Symposium on Security and Privacy (SP)","volume":"17 1","pages":"1350-1366"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75732676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
C3APSULe: Cross-FPGA Covert-Channel Attacks through Power Supply Unit Leakage
Pub Date: 2020-05-01 | DOI: 10.1109/SP40000.2020.00070 | 2020 IEEE Symposium on Security and Privacy (SP), pp. 1728-1741
Ilias Giechaskiel, Kasper Bonne Rasmussen, Jakub Szefer
Field-Programmable Gate Arrays (FPGAs) are versatile, reconfigurable integrated circuits that can be used as hardware accelerators to process highly-sensitive data. Leaking this data and associated cryptographic keys, however, can undermine a system’s security. To prevent potentially unintentional interactions that could break separation of privilege between different data center tenants, FPGAs in cloud environments are currently dedicated on a per-user basis. Nevertheless, while the FPGAs themselves are not shared among different users, other parts of the data center infrastructure are. This paper specifically shows for the first time that powering FPGAs, CPUs, and GPUs through the same power supply unit (PSU) can be exploited in FPGA-to-FPGA, CPU-to-FPGA, and GPU-to-FPGA covert channels between independent boards. These covert channels can operate remotely, without the need for physical access to, or modifications of, the boards. To demonstrate the attacks, this paper uses a novel combination of "sensing" and "stressing" ring oscillators as receivers on the sink FPGA. Further, ring oscillators are used as transmitters on the source FPGA. The transmitting and receiving circuits are used to determine the presence of the leakage on off-the-shelf Xilinx boards containing Artix 7 and Kintex 7 FPGA chips. Experiments are conducted with PSUs by two vendors, as well as CPUs and GPUs of different generations. Moreover, different sizes and types of ring oscillators are also tested. In addition, this work discusses potential countermeasures to mitigate the impact of the cross-board leakage. The results of this paper highlight the dangers of shared power supply units in local and cloud FPGAs, and therefore a fundamental need to re-think FPGA security for shared infrastructures.
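The receiving side of such a channel can be sketched as a threshold decoder; the cycle counts below are synthetic and the bit convention is an assumption. When the source board switches its stressor ring oscillators on, the shared PSU voltage dips and the sink FPGA's sensing oscillators run measurably slower, so on-off keyed bits fall out of comparing each symbol period's oscillation count against a threshold.

```python
from statistics import median

# Hedged sketch of on-off-keying decoding (real receivers are ring-oscillator
# counters on the sink FPGA; these counts are synthetic): a lower count means
# a supply dip, i.e., the transmitter is stressing, which we read as bit 1.
def decode_ook(counts):
    """counts: one sensing-RO cycle count per symbol period."""
    threshold = median(counts)
    return [1 if c < threshold else 0 for c in counts]

# Synthetic counts: ~10000 cycles when idle, ~9900 while the source stresses.
samples = [10012, 9893, 9901, 10007, 9888, 10015]
print(decode_ook(samples))  # [0, 1, 1, 0, 1, 0]
```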
{"title":"C3APSULe: Cross-FPGA Covert-Channel Attacks through Power Supply Unit Leakage","authors":"Ilias Giechaskiel, Kasper Bonne Rasmussen, Jakub Szefer","doi":"10.1109/SP40000.2020.00070","DOIUrl":"https://doi.org/10.1109/SP40000.2020.00070","url":null,"abstract":"Field-Programmable Gate Arrays (FPGAs) are versatile, reconfigurable integrated circuits that can be used as hardware accelerators to process highly-sensitive data. Leaking this data and associated cryptographic keys, however, can undermine a system’s security. To prevent potentially unintentional interactions that could break separation of privilege between different data center tenants, FPGAs in cloud environments are currently dedicated on a per-user basis. Nevertheless, while the FPGAs themselves are not shared among different users, other parts of the data center infrastructure are. This paper specifically shows for the first time that powering FPGAs, CPUs, and GPUs through the same power supply unit (PSU) can be exploited in FPGA-to-FPGA, CPU-to-FPGA, and GPU-to-FPGA covert channels between independent boards. These covert channels can operate remotely, without the need for physical access to, or modifications of, the boards. To demonstrate the attacks, this paper uses a novel combination of \"sensing\" and \"stressing\" ring oscillators as receivers on the sink FPGA. Further, ring oscillators are used as transmitters on the source FPGA. The transmitting and receiving circuits are used to determine the presence of the leakage on off-the-shelf Xilinx boards containing Artix 7 and Kintex 7 FPGA chips. Experiments are conducted with PSUs by two vendors, as well as CPUs and GPUs of different generations. Moreover, different sizes and types of ring oscillators are also tested. In addition, this work discusses potential countermeasures to mitigate the impact of the cross-board leakage. The results of this paper highlight the dangers of shared power supply units in local and cloud FPGAs, and therefore a fundamental need to re-think FPGA security for shared infrastructures.","PeriodicalId":6849,"journal":{"name":"2020 IEEE Symposium on Security and Privacy (SP)","volume":"1 1","pages":"1728-1741"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74594949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TARDIS: Rolling Back The Clock On CMS-Targeting Cyber Attacks
Pub Date: 2020-05-01 | DOI: 10.1109/SP40000.2020.00116 | 2020 IEEE Symposium on Security and Privacy (SP), pp. 1156-1171
R. Kasturi, Yiting Sun, Ruian Duan, Omar Alrawi, Ehsan Asdar, Victor Zhu, Yonghwi Kwon, Brendan Saltaformaggio
Over 55% of the world’s websites run on Content Management Systems (CMS). Unfortunately, this huge user population has made CMS-based websites a high-profile target for hackers. Worse still, the vast majority of the website hosting industry has shifted to a "backup and restore" model of security, which relies on error-prone AV scanners to prompt users to roll back to a pre-infection nightly snapshot. This research had the opportunity to study these nightly backups for over 300,000 unique production websites. In doing so, we measured the attack landscape of CMS-based websites and assessed the effectiveness of the backup and restore protection scheme. To our surprise, we found that the evolution of tens of thousands of attacks exhibited clear long-lived multi-stage attack patterns. We now propose TARDIS, an automated provenance inference technique, which enables the investigation and remediation of CMS-targeting attacks based on only the nightly backups already being collected by website hosting companies. With the help of our industry collaborator, we applied TARDIS to the nightly backups of those 300K websites and found 20,591 attacks which lasted from 6 to 1,694 days, some of which had yet to be detected.
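Nightly backups give such an analysis its raw material: the differences between consecutive snapshots. The sketch below shows only that first differencing step (TARDIS's provenance inference is far richer); the paths and layout are hypothetical.

```python
import hashlib
from pathlib import Path

# Simplified snapshot differencing, the starting point for backup-based
# provenance: hash every file in two consecutive nightly snapshots and
# report files that appeared or changed, e.g., a freshly dropped webshell.
def snapshot_digest(root: Path):
    return {p.relative_to(root): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in root.rglob("*") if p.is_file()}

def diff_snapshots(day_n: Path, day_n_plus_1: Path):
    before, after = snapshot_digest(day_n), snapshot_digest(day_n_plus_1)
    added    = sorted(set(after) - set(before))
    modified = sorted(p for p in after.keys() & before.keys()
                      if after[p] != before[p])
    return added, modified

# Usage (hypothetical backup layout):
# added, modified = diff_snapshots(Path("backups/2020-05-01"),
#                                  Path("backups/2020-05-02"))
```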
{"title":"TARDIS: Rolling Back The Clock On CMS-Targeting Cyber Attacks","authors":"R. Kasturi, Yiting Sun, Ruian Duan, Omar Alrawi, Ehsan Asdar, Victor Zhu, Yonghwi Kwon, Brendan Saltaformaggio","doi":"10.1109/SP40000.2020.00116","DOIUrl":"https://doi.org/10.1109/SP40000.2020.00116","url":null,"abstract":"Over 55% of the world’s websites run on Content Management Systems (CMS). Unfortunately, this huge user population has made CMS-based websites a high-profile target for hackers. Worse still, the vast majority of the website hosting industry has shifted to a \"backup and restore\" model of security, which relies on error-prone AV scanners to prompt users to roll back to a pre-infection nightly snapshot. This research had the opportunity to study these nightly backups for over 300,000 unique production websites. In doing so, we measured the attack landscape of CMS-based websites and assessed the effectiveness of the backup and restore protection scheme. To our surprise, we found that the evolution of tens of thousands of attacks exhibited clear long-lived multi-stage attack patterns. We now propose TARDIS, an automated provenance inference technique, which enables the investigation and remediation of CMS-targeting attacks based on only the nightly backups already being collected by website hosting companies. With the help of our industry collaborator, we applied TARDIS to the nightly backups of those 300K websites and found 20,591 attacks which lasted from 6 to 1,694 days, some of which were still yet to be detected.","PeriodicalId":6849,"journal":{"name":"2020 IEEE Symposium on Security and Privacy (SP)","volume":"71 1","pages":"1156-1171"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84281829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SoK: A Minimalist Approach to Formalizing Analog Sensor Security
Pub Date: 2020-05-01 | DOI: 10.1109/SP40000.2020.00026 | 2020 IEEE Symposium on Security and Privacy (SP), pp. 233-248
Chen Yan, Hocheol Shin, Connor Bolton, Wenyuan Xu, Yongdae Kim, Kevin Fu
Over the last six years, several papers have demonstrated how intentional analog interference based on acoustics, RF, lasers, and other physical modalities could induce faults, influence, or even control the output of sensors. Damage to the availability and integrity of sensor output carries significant risks to safety-critical systems that make automated decisions based on trusted sensor measurement. Established signal processing models use transfer functions to express reliability and dependability characteristics of sensors, but existing models do not provide a deliberate way to express and capture security properties meaningfully. Our work begins to fill this gap by systematizing knowledge of analog attacks against sensor circuitry and defenses. Our primary contribution is a simple sensor security model such that sensor engineers can better express analog security properties of sensor circuitry without needing to learn significantly new notation. Our model introduces transfer functions and a vector of adversarial noise to represent adversarial capabilities at each stage of a sensor’s signal conditioning chain. The primary goals of the systematization are (1) to enable more meaningful quantification of risk for the design and evaluation of past and future sensors, (2) to better predict new attack vectors, and (3) to establish defensive design patterns that make sensors more resistant to analog attacks.
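The abstract suggests the general shape such a model might take; the notation below is an assumed reading for illustration, not necessarily the paper's own formalism.

```latex
% Assumed notation: stage i of the signal conditioning chain has transfer
% function H_i, and the adversary injects noise a_i at the input of stage i,
% yielding the digitized measurement
\[
  \hat{y} \;=\; H_n\bigl(\cdots H_2\bigl(H_1(x + a_1) + a_2\bigr)\cdots + a_n\bigr),
\]
% which reduces to the benign chain $y = (H_n \circ \cdots \circ H_1)(x)$
% when the adversarial noise vector $a = (a_1, \ldots, a_n)$ is zero.
```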
{"title":"SoK: A Minimalist Approach to Formalizing Analog Sensor Security","authors":"Chen Yan, Hocheol Shin, Connor Bolton, Wenyuan Xu, Yongdae Kim, Kevin Fu","doi":"10.1109/SP40000.2020.00026","DOIUrl":"https://doi.org/10.1109/SP40000.2020.00026","url":null,"abstract":"Over the last six years, several papers demonstrated how intentional analog interference based on acoustics, RF, lasers, and other physical modalities could induce faults, influence, or even control the output of sensors. Damage to the availability and integrity of sensor output carries significant risks to safety-critical systems that make automated decisions based on trusted sensor measurement. Established signal processing models use transfer functions to express reliability and dependability characteristics of sensors, but existing models do not provide a deliberate way to express and capture security properties meaningfully.Our work begins to fill this gap by systematizing knowledge of analog attacks against sensor circuitry and defenses. Our primary contribution is a simple sensor security model such that sensor engineers can better express analog security properties of sensor circuitry without needing to learn significantly new notation. Our model introduces transfer functions and a vector of adversarial noise to represent adversarial capabilities at each stage of a sensor’s signal conditioning chain. The primary goals of the systematization are (1) to enable more meaningful quantification of risk for the design and evaluation of past and future sensors, (2) to better predict new attack vectors, and (3) to establish defensive design patterns that make sensors more resistant to analog attacks.","PeriodicalId":6849,"journal":{"name":"2020 IEEE Symposium on Security and Privacy (SP)","volume":"23 1","pages":"233-248"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81905486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}