Stefan Dziembowski, Lisa Eckey, Sebastian Faust, Daniel Malinowski
Payment channels emerged recently as an efficient method for performing cheap micropayments in cryptocurrencies. In contrast to traditional on-chain transactions, payment channels have the advantage that they allow for a nearly unlimited number of transactions between parties without involving the blockchain. In this work, we introduce Perun, an off-chain channel system that offers a new method for connecting channels that is more efficient than the existing technique of "routing transactions" over multiple channels. To this end, Perun introduces a technique called "virtual payment channels" that avoids involvement of the intermediary for each individual payment. In this paper, we formally model and prove the security of this technique in the case of one intermediary, who can be viewed as a "payment hub" that has direct channels with several parties. Our scheme works over any cryptocurrency that provides Turing-complete smart contracts. As a proof of concept, we implemented Perun's smart contracts in Ethereum.
{"title":"Perun: Virtual Payment Hubs over Cryptocurrencies","authors":"Stefan Dziembowski, Lisa Eckey, Sebastian Faust, Daniel Malinowski","doi":"10.1109/SP.2019.00020","DOIUrl":"https://doi.org/10.1109/SP.2019.00020","url":null,"abstract":"Payment channels emerged recently as an efficient method for performing cheap micropayments in cryptocurrencies. In contrast to traditional on-chain transactions, payment channels have the advantage that they allow for nearly unlimited number of transactions between parties without involving the blockchain. In this work, we introduce Perun, an off-chain channel system that offers a new method for connecting channels that is more efficient than the existing technique of ``routing transactions'' over multiple channels. To this end, Perun introduces a technique called ``virtual payment channels'' that avoids involvement of the intermediary for each individual payment. In this paper we formally model and prove security of this technique in the case of one intermediary, who can be viewed as a ``payment hub'' that has direct channels with several parties. Our scheme works over any cryptocurrency that provides Turing-complete smart contracts. As a proof of concept, we implemented Perun's smart contracts in Ethereum.","PeriodicalId":272713,"journal":{"name":"2019 IEEE Symposium on Security and Privacy (SP)","volume":"498 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129782405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Peter Gazi, A. Kiayias, Dionysis Zindros
Sidechains have long been heralded as the key enabler of blockchain scalability and interoperability. However, no modeling of the concept or a provably secure construction has so far been attempted. We provide the first formal definition of what a sidechain system is and how assets can be moved between sidechains securely. We put forth a security definition that augments the known transaction ledger properties of liveness and safety to hold across multiple ledgers and enhances them with a new “firewall” security property which safeguards each blockchain from its sidechains, limiting the impact of an otherwise catastrophic sidechain failure. We then provide a sidechain construction that is suitable for proof-of-stake (PoS) sidechain systems. As an exemplary concrete instantiation, we present our construction for an epoch-based PoS system consistent with Ouroboros (Crypto 2017), the PoS blockchain protocol used in Cardano, one of the largest pure PoS systems by market capitalisation, and we also comment on how the construction can be adapted for other protocols such as Ouroboros Praos (Eurocrypt 2018), Ouroboros Genesis (CCS 2018), Snow White, and Algorand. An important feature of our construction is merged staking, which prevents “goldfinger” attacks against a sidechain that carries only a small amount of stake. An important technique for pegging chains that we use in our construction is cross-chain certification, which is facilitated by a novel cryptographic primitive we introduce called ad-hoc threshold multisignatures (ATMS), which may be of independent interest. We show how ATMS can be securely instantiated by regular and aggregate digital signatures as well as succinct arguments of knowledge such as STARKs and bulletproofs, with varying degrees of storage efficiency.
"Proof-of-Stake Sidechains." 2019 IEEE Symposium on Security and Privacy (SP). DOI: 10.1109/SP.2019.00040
Mu Zhang, Chien-Ying Chen, Bin-Chou Kao, Yassine Qamsane, Yuru Shao, Yikai Lin, E. Shi, Sibin Mohan, K. Barton, J. Moyne, Z. Morley Mao
Safety violations in programmable logic controllers (PLCs), caused either by faults or attacks, have recently garnered significant attention. However, prior efforts at PLC code vetting suffer from many drawbacks. Static analyses and verification produce significant false positives and cannot reveal specific runtime contexts. Dynamic analyses and symbolic execution, on the other hand, fail due to their inability to handle real-world PLC programs that are event-driven and timing-sensitive. In this paper, we propose VetPLC, a temporal context-aware, program-analysis-based approach to produce timed event sequences that can be used for automatic safety vetting. To this end, we (a) perform static program analysis to create timed event causality graphs in order to understand causal relations among events in PLC code and (b) mine temporal invariants from data traces collected in Industrial Control System (ICS) testbeds to quantitatively gauge temporal dependencies that are constrained by machine operations. Our VetPLC prototype has been implemented in 15K lines of code. We evaluate it on 10 real-world scenarios from two different ICS settings. Our experiments show that VetPLC outperforms state-of-the-art techniques and can generate event sequences that can be used to automatically detect hidden safety violations.
{"title":"Towards Automated Safety Vetting of PLC Code in Real-World Plants","authors":"Mu Zhang, Chien-Ying Chen, Bin-Chou Kao, Yassine Qamsane, Yuru Shao, Yikai Lin, E. Shi, Sibin Mohan, K. Barton, J. Moyne, Z. Morley Mao","doi":"10.1109/SP.2019.00034","DOIUrl":"https://doi.org/10.1109/SP.2019.00034","url":null,"abstract":"Safety violations in programmable logic controllers (PLCs), caused either by faults or attacks, have recently garnered significant attention. However, prior efforts at PLC code vetting suffer from many drawbacks. Static analyses and verification cause significant false positives and cannot reveal specific runtime contexts. Dynamic analyses and symbolic execution, on the other hand, fail due to their inability to handle real-world PLC programs that are event-driven and timing sensitive. In this paper, we propose VetPLC, a temporal context-aware, program analysis-based approach to produce timed event sequences that can be used for automatic safety vetting. To this end, we (a) perform static program analysis to create timed event causality graphs in order to understand causal relations among events in PLC code and (b) mine temporal invariants from data traces collected in Industrial Control System (ICS) testbeds to quantitatively gauge temporal dependencies that are constrained by machine operations. Our VetPLC prototype has been implemented in 15K lines of code. We evaluate it on 10 real-world scenarios from two different ICS settings. Our experiments show that VetPLC outperforms state-of-the-art techniques and can generate event sequences that can be used to automatically detect hidden safety violations.","PeriodicalId":272713,"journal":{"name":"2019 IEEE Symposium on Security and Privacy (SP)","volume":"150 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116022239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Duc Cuong Nguyen, Erik Derr, M. Backes, Sven Bugiel
Application markets streamline the end-users' task of finding and installing applications. They also form an immediate communication channel between app developers and their end-users in the form of app reviews, which allow users to give developers feedback on their apps. However, it is unclear to what extent users employ this channel to point out their security and privacy concerns about apps, which aspects of apps users express concerns about, and how developers react to such security- and privacy-related reviews. In this paper, we present the first study of the relationship between end-user reviews and security- & privacy-related changes in apps. Using natural language processing on 4.5M user reviews for the top 2,583 apps in Google Play, we identified 5,527 security- and privacy-relevant reviews (SPR). For each app version mentioned in the SPR, we use static code analysis to extract permission-protected features mentioned in the reviews. We successfully mapped SPR to privacy-related changes in app updates in 60.77% of all cases. Using exploratory data analysis and regression analysis, we are able to show that preceding SPR are a significant factor for predicting privacy-related app updates, indicating that user reviews in fact lead to privacy improvements of apps. Our results further show that apps that adopt runtime permissions receive a significantly higher number of SPR, showing that runtime permissions put privacy-jeopardizing actions better into users' minds. Further, we can attribute about half of all privacy-relevant app changes exclusively to third-party library code. This hints at larger problems for app developers in adhering to users' privacy expectations and markets' privacy regulations. Our results are a call to action to make app behavior more transparent to users, in order to leverage their reviews to create incentives for developers to adhere to security and privacy best practices; at the same time, they call for better tools to support app developers in this endeavor.
{"title":"Short Text, Large Effect: Measuring the Impact of User Reviews on Android App Security & Privacy","authors":"Duc Cuong Nguyen, Erik Derr, M. Backes, Sven Bugiel","doi":"10.1109/SP.2019.00012","DOIUrl":"https://doi.org/10.1109/SP.2019.00012","url":null,"abstract":"Application markets streamline the end-users’ task of finding and installing applications. They also form an immediate communication channel between app developers and their end-users in form of app reviews, which allow users to provide developers feedback on their apps. However, it is unclear to which extent users employ this channel to point out their security and privacy concerns about apps, about which aspects of apps users express concerns, and how developers react to such security- and privacy-related reviews. In this paper, we present the first study of the relationship between end-user reviews and security- & privacy-related changes in apps. Using natural language processing on 4.5M user reviews for the top 2,583 apps in Google Play, we identified 5,527 security and privacy relevant reviews (SPR). For each app version mentioned in the SPR, we use static code analysis to extract permission-protected features mentioned in the reviews. We successfully mapped SPRs to privacy-related changes in app updates in 60.77% of all cases. Using exploratory data analysis and regression analysis we are able to show that preceding SPR are a significant factor for predicting privacy-related app updates, indicating that user reviews in fact lead to privacy improvements of apps. Our results further show that apps that adopt runtime permissions receive a significantly higher number of SPR, showing that runtime permissions put privacy-jeopardizing actions better into users’ minds. Further, we can attribute about half of all privacy-relevant app changes exclusively to third-party library code. This hints at larger problems for app developers to adhere to users’ privacy expectations and markets’ privacy regulations. Our results make a call for action to make app behavior more transparent to users in order to leverage their reviews in creating incentives for developers to adhere to security and privacy best practices, while our results call at the same time for better tools to support app developers in this endeavor.","PeriodicalId":272713,"journal":{"name":"2019 IEEE Symposium on Security and Privacy (SP)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126807120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dae R. Jeong, Kyungtae Kim, B. Shivakumar, Byoungyoung Lee, I. Shin
Data races in the kernel are an important class of bugs that critically impact the reliability and security of the associated system. As a result of a race, the kernel may become unresponsive. Even worse, an attacker may launch a privilege escalation attack to acquire root privileges. In this paper, we propose Razzer, a tool to find race bugs in kernels. The core of Razzer lies in guiding fuzz testing towards potential data race spots in the kernel. Razzer employs two techniques to find races efficiently: a static analysis and a deterministic thread interleaving technique. Using static analysis, Razzer identifies over-approximated potential data race spots, guiding the fuzzer to search for data races in the kernel more efficiently. Using the deterministic thread interleaving technique implemented at the hypervisor, Razzer tames the non-deterministic behavior of the kernel such that it can deterministically trigger a race. We implemented a prototype of Razzer and ran it on recent Linux kernels (v4.16-rc3 through v4.18-rc3). As a result, Razzer discovered 30 new races in the kernel, 16 of which were subsequently confirmed and patched by kernel developers after we reported them.
{"title":"Razzer: Finding Kernel Race Bugs through Fuzzing","authors":"Dae R. Jeong, Kyungtae Kim, B. Shivakumar, Byoungyoung Lee, I. Shin","doi":"10.1109/SP.2019.00017","DOIUrl":"https://doi.org/10.1109/SP.2019.00017","url":null,"abstract":"A data race in a kernel is an important class of bugs, critically impacting the reliability and security of the associated system. As a result of a race, the kernel may become unresponsive. Even worse, an attacker may launch a privilege escalation attack to acquire root privileges. In this paper, we propose Razzer, a tool to find race bugs in kernels. The core of Razzer is in guiding fuzz testing towards potential data race spots in the kernel. Razzer employs two techniques to find races efficiently: a static analysis and a deterministic thread interleaving technique. Using a static analysis, Razzer identifies over-approximated potential data race spots, guiding the fuzzer to search for data races in the kernel more efficiently. Using the deterministic thread interleaving technique implemented at the hypervisor, Razzer tames the non-deterministic behavior of the kernel such that it can deterministically trigger a race. We implemented a prototype of Razzer and ran the latest Linux kernel (from v4.16-rc3 to v4.18-rc3) using Razzer. As a result, Razzer discovered 30 new races in the kernel, with 16 subsequently confirmed and accordingly patched by kernel developers after they were reported.","PeriodicalId":272713,"journal":{"name":"2019 IEEE Symposium on Security and Privacy (SP)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129931521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Thomas Kerber, Markulf Kohlweiss, A. Kiayias, Vassilis Zikas
We present Ouroboros Crypsinous, the first formally analyzed privacy-preserving proof-of-stake blockchain protocol. To model its security, we give a thorough treatment of private ledgers in the (G)UC setting that might be of independent interest. To prove our protocol secure against adaptive attacks, we introduce a new coin evolution technique relying on SNARKs and key-private forward-secure encryption. The latter primitive, and the associated construction, can be of independent interest. We stress that existing approaches to private blockchains, such as the proof-of-work-based Zerocash, are analyzed only against static corruptions.
{"title":"Ouroboros Crypsinous: Privacy-Preserving Proof-of-Stake","authors":"Thomas Kerber, Markulf Kohlweiss, A. Kiayias, Vassilis Zikas","doi":"10.1109/SP.2019.00063","DOIUrl":"https://doi.org/10.1109/SP.2019.00063","url":null,"abstract":"We present Ouroboros Crypsinous, the first formally analyzed privacy-preserving proof-of-stake blockchain protocol. To model its security we give a thorough treatment of private ledgers in the (G)UC setting that might be of independent interest. To prove our protocol secure against adaptive attacks, we introduce a new coin evolution technique relying on SNARKs and key-private forward secure encryption. The latter primitive—and the associated construction—can be of independent interest. We stress that existing approaches to private blockchain, such as the proof-of-work-based Zerocash are analyzed only against static corruptions.","PeriodicalId":272713,"journal":{"name":"2019 IEEE Symposium on Security and Privacy (SP)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134332897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xiang Ling, S. Ji, Jiaxu Zou, Jiannan Wang, Chunming Wu, Bo Li, Ting Wang
Deep learning (DL) models are inherently vulnerable to adversarial examples, maliciously crafted inputs that trigger target DL models to misbehave, which significantly hinders the application of DL in security-sensitive domains. Intensive research on adversarial learning has led to an arms race between adversaries and defenders. This plethora of emerging attacks and defenses raises many questions: Which attacks are more evasive, preprocessing-proof, or transferable? Which defenses are more effective, utility-preserving, or general? Are ensembles of multiple defenses more robust than individual ones? Yet, due to the lack of platforms for comprehensive evaluation of adversarial attacks and defenses, these critical questions remain largely unanswered. In this paper, we present the design, implementation, and evaluation of DEEPSEC, a uniform platform that aims to bridge this gap. In its current implementation, DEEPSEC incorporates 16 state-of-the-art attacks with 10 attack utility metrics, and 13 state-of-the-art defenses with 5 defensive utility metrics. To the best of our knowledge, DEEPSEC is the first platform that enables researchers and practitioners to (i) measure the vulnerability of DL models, (ii) evaluate the effectiveness of various attacks/defenses, and (iii) conduct comparative studies on attacks/defenses in a comprehensive and informative manner. Leveraging DEEPSEC, we systematically evaluate existing adversarial attack and defense methods and draw a set of key findings, which demonstrate DEEPSEC's rich functionality: (1) the trade-off between misclassification and imperceptibility is empirically confirmed; (2) most defenses that claim to be universally applicable can only defend against limited types of attacks under restricted settings; (3) adversarial examples with higher perturbation magnitude are not necessarily easier to detect; (4) an ensemble of multiple defenses cannot improve the overall defense capability, but can improve the lower bound of the defense effectiveness of individual defenses. Extensive analysis of DEEPSEC demonstrates its capabilities and advantages as a benchmark platform that can benefit future adversarial learning research.
{"title":"DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model","authors":"Xiang Ling, S. Ji, Jiaxu Zou, Jiannan Wang, Chunming Wu, Bo Li, Ting Wang","doi":"10.1109/SP.2019.00023","DOIUrl":"https://doi.org/10.1109/SP.2019.00023","url":null,"abstract":"Deep learning (DL) models are inherently vulnerable to adversarial examples – maliciously crafted inputs to trigger target DL models to misbehave – which significantly hinders the application of DL in security-sensitive domains. Intensive research on adversarial learning has led to an arms race between adversaries and defenders. Such plethora of emerging attacks and defenses raise many questions: Which attacks are more evasive, preprocessing-proof, or transferable? Which defenses are more effective, utility-preserving, or general? Are ensembles of multiple defenses more robust than individuals? Yet, due to the lack of platforms for comprehensive evaluation on adversarial attacks and defenses, these critical questions remain largely unsolved. In this paper, we present the design, implementation, and evaluation of DEEPSEC, a uniform platform that aims to bridge this gap. In its current implementation, DEEPSEC incorporates 16 state-of-the-art attacks with 10 attack utility metrics, and 13 state-of-the-art defenses with 5 defensive utility metrics. To our best knowledge, DEEPSEC is the first platform that enables researchers and practitioners to (i) measure the vulnerability of DL models, (ii) evaluate the effectiveness of various attacks/defenses, and (iii) conduct comparative studies on attacks/defenses in a comprehensive and informative manner. Leveraging DEEPSEC, we systematically evaluate the existing adversarial attack and defense methods, and draw a set of key findings, which demonstrate DEEPSEC’s rich functionality, such as (1) the trade-off between misclassification and imperceptibility is empirically confirmed; (2) most defenses that claim to be universally applicable can only defend against limited types of attacks under restricted settings; (3) it is not necessary that adversarial examples with higher perturbation magnitude are easier to be detected; (4) the ensemble of multiple defenses cannot improve the overall defense capability, but can improve the lower bound of the defense effectiveness of individuals. Extensive analysis on DEEPSEC demonstrates its capabilities and advantages as a benchmark platform which can benefit future adversarial learning research.","PeriodicalId":272713,"journal":{"name":"2019 IEEE Symposium on Security and Privacy (SP)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126956032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kan Yuan, Di Tang, Xiaojing Liao, Xiaofeng Wang, Xuan Feng, Yi Chen, Menghan Sun, Haoran Lu, Kehuan Zhang
Recent years have witnessed rapid progress in deep learning (DL), which has also brought its potential weaknesses into the spotlight of security and machine learning studies. Despite important discoveries made by adversarial learning research, surprisingly little attention has been paid to the real-world adversarial techniques deployed by cybercriminals to evade image-based detection. Unlike adversarial examples that induce misclassification using nearly imperceptible perturbations, real-world adversarial images tend to be less optimal yet equally effective. As a first step towards understanding the threat, we report in this paper a study on adversarial promotional porn images (APPIs) that are extensively used in underground advertising. We show that today's adversaries strategically construct APPIs to evade explicit content detection while still preserving their sexual appeal, even though the distortions and noise introduced are clearly observable to humans. To understand such real-world adversarial images and the underground business behind them, we develop a novel DL-based methodology called Malèna, which focuses on the regions of an image where sexual content is least obfuscated and therefore visible to the target audience of a promotion. Using this technique, we have discovered over 4,000 APPIs from 4,042,690 images crawled from popular social media, and further brought to light the unique techniques they use to evade popular explicit content detectors (e.g., Google Cloud Vision API, Yahoo Open NSFW model), and the reasons these techniques work. Also studied is the ecosystem of such illicit promotions, including the obfuscated contacts advertised through those images, the compromised accounts used to disseminate them, and large APPI campaigns involving thousands of images. Another interesting finding is the apparent attempt made by cybercriminals to steal others' images for their advertising. The study highlights the importance of research on real-world adversarial learning and takes a first step towards mitigating the threats it poses.
{"title":"Stealthy Porn: Understanding Real-World Adversarial Images for Illicit Online Promotion","authors":"Kan Yuan, Di Tang, Xiaojing Liao, Xiaofeng Wang, Xuan Feng, Yi Chen, Menghan Sun, Haoran Lu, Kehuan Zhang","doi":"10.1109/SP.2019.00032","DOIUrl":"https://doi.org/10.1109/SP.2019.00032","url":null,"abstract":"Recent years have witnessed the rapid progress in deep learning (DP), which also brings their potential weaknesses to the spotlights of security and machine learning studies. With important discoveries made by adversarial learning research, surprisingly little attention, however, has been paid to the real-world adversarial techniques deployed by the cybercriminal to evade image-based detection. Unlike the adversarial examples that induce misclassification using nearly imperceivable perturbation, real-world adversarial images tend to be less optimal yet equally effective. As a first step to understand the threat, we report in the paper a study on adversarial promotional porn images (APPIs) that are extensively used in underground advertising. We show that the adversary today’s strategically constructs the APPIs to evade explicit content detection while still preserving their sexual appeal, even though the distortions and noise introduced are clearly observable to humans. To understand such real-world adversarial images and the underground business behind them, we develop a novel DP-based methodology called Male`na, which focuses on the regions of an image where sexual content is least obfuscated and therefore visible to the target audience of a promotion. Using this technique, we have discovered over 4,000 APPIs from 4,042,690 images crawled from popular social media, and further brought to light the unique techniques they use to evade popular explicit content detectors (e.g., Google Cloud Vision API, Yahoo Open NSFW model), and the reason that these techniques work. Also studied are the ecosystem of such illicit promotions, including the obfuscated contacts advertised through those images, compromised accounts used to disseminate them, and large APPI campaigns involving thousands of images. Another interesting finding is the apparent attempt made by cybercriminals to steal others’ images for their advertising. The study highlights the importance of the research on real-world adversarial learning and makes the first step towards mitigating the threats it poses.","PeriodicalId":272713,"journal":{"name":"2019 IEEE Symposium on Security and Privacy (SP)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126992039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jack Doerner, Yashvanth Kondi, Eysa Lee, Abhi Shelat
Cryptocurrency applications have spurred a resurgence of interest in the computation of ECDSA signatures using threshold protocols, that is, protocols in which the signing key is secret-shared among n parties, of which any subset of size t must interact in order to compute a signature. Among the resulting works to date, that of Doerner et al. requires the most natural assumptions while also achieving the best practical signing speed. It is, however, limited to the setting in which the threshold is two. We propose an extension of their scheme to arbitrary thresholds, and prove it secure against a malicious adversary corrupting up to one party less than the threshold under only the Computational Diffie-Hellman assumption in the Random Oracle model, an assumption strictly weaker than those under which ECDSA is proven. Whereas the best current schemes for threshold-two ECDSA signing use a Diffie-Hellman key exchange to calculate each signature's nonce, a direct adaptation of this technique to a larger threshold t would incur a round count linear in t; thus we abandon it in favor of a new mechanism that yields a protocol requiring log(t)+6 rounds in total. We design a new consistency check, similar in spirit to that of Doerner et al., but suitable for an arbitrary number of participants, and we optimize the underlying two-party multiplication protocol on which our scheme is based, reducing its concrete communication and computation costs. We implement our scheme and evaluate it among groups of up to 256 co-located parties, among 128 geographically distributed parties, and among small groups of embedded devices. We find that in the LAN setting, our scheme outperforms all prior works by orders of magnitude, and that it is efficient enough for use even on smartphones or hardware tokens. In the WAN setting we find that, despite its logarithmic round count, our protocol outperforms the best constant-round protocols in realistic scenarios.
{"title":"Threshold ECDSA from ECDSA Assumptions: The Multiparty Case","authors":"Jack Doerner, Yashvanth Kondi, Eysa Lee, Abhi Shelat","doi":"10.1109/SP.2019.00024","DOIUrl":"https://doi.org/10.1109/SP.2019.00024","url":null,"abstract":"Cryptocurrency applications have spurred a resurgence of interest in the computation of ECDSA signatures using threshold protocols---that is, protocols in which the signing key is secret-shared among n parties, of which any subset of size t must interact in order to compute a signature. Among the resulting works to date, that of Doerner et al. requires the most natural assumptions while also achieving the best practical signing speed. It is, however, limited to the setting in which the threshold is two. We propose an extension of their scheme to arbitrary thresholds, and prove it secure against a malicious adversary corrupting up to one party less than the threshold under only the Computational Diffie-Hellman assumption in the Random Oracle model, an assumption strictly weaker than those under which ECDSA is proven. Whereas the best current schemes for threshold-two ECDSA signing use a Diffie-Hellman Key Exchange to calculate each signature's nonce, a direct adaptation of this technique to a larger threshold t would incur a round count linear in t; thus we abandon it in favor of a new mechanism that yields a protocol requiring log(t)+6 rounds in total. We design a new consistency check, similar in spirit to that of Doerner et al., but suitable for an arbitrary number of participants, and we optimize the underlying two-party multiplication protocol on which our scheme is based, reducing its concrete communication and computation costs. We implement our scheme and evaluate it among groups of up to 256 of co-located and 128 geographically-distributed parties, and among small groups of embedded devices. We find that in the LAN setting, our scheme outperforms all prior works by orders of magnitude, and that it is efficient enough for use even on smartphones or hardware tokens. In the WAN setting we find that, despite its logarithmic round count, our protocol outperforms the best constant-round protocols in realistic scenarios.","PeriodicalId":272713,"journal":{"name":"2019 IEEE Symposium on Security and Privacy (SP)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125080110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
David Rupprecht, K. Kohls, Thorsten Holz, C. Pöpper
Long Term Evolution (LTE) is the latest mobile communication standard and has a pivotal role in our information society: LTE combines performance goals with modern security mechanisms and serves casual use cases as well as critical infrastructure and public safety communications. Both scenarios demand a resilient and secure specification and implementation of LTE, as outages and open attack vectors potentially lead to severe risks. Previous work on LTE protocol security identified crucial attack vectors for both the physical layer (layer one) and the network layer (layer three). Data link layer (layer two) protocols, however, remain a blind spot in existing LTE security research. In this paper, we present a comprehensive layer two security analysis and identify three attack vectors. These attacks impair the confidentiality and/or privacy of LTE communication. More specifically, we first present a passive identity mapping attack that matches volatile radio identities to longer-lasting network identities, enabling us to identify users within a cell and serving as a stepping stone for follow-up attacks. Second, we demonstrate how a passive attacker can abuse the resource allocation as a side channel to perform website fingerprinting, which lets the attacker learn the websites a user accessed. Finally, we present the aLTEr attack, which exploits the fact that LTE user data is encrypted in counter mode (AES-CTR) but not integrity protected, allowing us to modify the message payload. As a proof-of-concept demonstration, we show how an active attacker can redirect DNS requests and then perform a DNS spoofing attack. As a result, the user is redirected to a malicious website. Our experimental analysis demonstrates the real-world applicability of all three attacks and emphasizes the threat of open attack vectors in LTE layer two protocols.
{"title":"Breaking LTE on Layer Two","authors":"David Rupprecht, K. Kohls, Thorsten Holz, C. Pöpper","doi":"10.1109/SP.2019.00006","DOIUrl":"https://doi.org/10.1109/SP.2019.00006","url":null,"abstract":"Long Term Evolution (LTE) is the latest mobile communication standard and has a pivotal role in our information society: LTE combines performance goals with modern security mechanisms and serves casual use cases as well as critical infrastructure and public safety communications. Both scenarios are demanding towards a resilient and secure specification and implementation of LTE, as outages and open attack vectors potentially lead to severe risks. Previous work on LTE protocol security identified crucial attack vectors for both the physical (layer one) and network (layer three) layers. Data link layer (layer two) protocols, however, remain a blind spot in existing LTE security research. In this paper, we present a comprehensive layer two security analysis and identify three attack vectors. These attacks impair the confidentiality and/or privacy of LTE communication. More specifically, we first present a passive identity mapping attack that matches volatile radio identities to longer lasting network identities, enabling us to identify users within a cell and serving as a stepping stone for follow-up attacks. Second, we demonstrate how a passive attacker can abuse the resource allocation as a side channel to perform website fingerprinting that enables the attacker to learn the websites a user accessed. Finally, we present the A LTE R attack that exploits the fact that LTE user data is encrypted in counter mode (AES-CTR) but not integrity protected, which allows us to modify the message payload. As a proof-of-concept demonstration, we show how an active attacker can redirect DNS requests and then perform a DNS spoofing attack. As a result, the user is redirected to a malicious website. Our experimental analysis demonstrates the real-world applicability of all three attacks and emphasizes the threat of open attack vectors on LTE layer two protocols.","PeriodicalId":272713,"journal":{"name":"2019 IEEE Symposium on Security and Privacy (SP)","volume":"605 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116378521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}