
Proceedings of the 2020 ACM SIGSAC Conference on Cloud Computing Security Workshop: Latest Publications

Following the Pebble Trail: Extending Return-Oriented Programming to RISC-V
Bogdan Pavel Deac, Adrian Colesa
It is widely known that return-oriented programming (ROP) attacks can be mounted on the x86, ARM, and SPARC architectures. However, it remained an open question whether ROP was possible on RISC-V, a new and promising free and open instruction set architecture (ISA). In this paper we present a novel ROP technique specific to the RISC-V architecture. Our method relies on the processor's saved registers and its function calling convention. We use functional gadgets (which perform primitive operations) ending in a jump instruction to an address held in a saved register. The order in which gadgets are chained is determined by a novel gadget, which we call the charger gadget, that loads the saved registers with the gadgets' addresses from the stack. We constructed a library of gadgets extracted from the standard Linux libraries. Finally, we evaluated our method by exploiting a buffer-overflow-vulnerable application.
DOI: 10.1145/3411495.3421366. Published 2020-11-09.
Citations: 1
MARTINI: Memory Access Traces to Detect Attacks
Yujun Qin, Samuel Gonzalez, K. Angstadt, Xiaowei Wang, S. Forrest, R. Das, Kevin Leach, Westley Weimer
Hardware architectural vulnerabilities, such as Spectre and Meltdown, are difficult or inefficient to mitigate in software. Although revised hardware designs may address some architectural vulnerabilities going forward, most current remedies increase execution time significantly. Techniques are needed to rapidly and efficiently detect these and other emerging threats. We present an anomaly detector, MARTINI, that analyzes traces of memory accesses in real time to detect attacks. Our experimental evaluation shows that anomalies in these traces are strongly correlated with unauthorized program execution, including architectural side-channel attacks of multiple types. MARTINI consists of a finite automaton that models normal program behavior in terms of memory addresses that are read from, and written to, at runtime. The model uses a compact representation of n-grams, i.e., short sequences of memory accesses, which can be stored and processed efficiently. Once the system is trained on authorized behavior, it rapidly detects a variety of low-level anomalous behaviors and attacks not otherwise easily discernible at the software level. MARTINI's implementation leverages recent advances in in-cache and in-memory automata for computation, and we present a hardware unit that repurposes a small portion of a last-level cache slice to monitor memory addresses. Our detector directly inspects the addresses of memory accesses, using the pre-constructed automaton to identify anomalies with high accuracy, negligible runtime overhead, and trivial increase in CPU chip area. We present analyses of expected hardware properties based on indicative cache and memory hierarchy simulations and empirical evaluations.
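The n-gram modeling step can be sketched in a few lines. This is a simplified illustration under an assumed policy (any n-gram not seen in training counts as anomalous); the paper's automaton-based, in-cache realization is far more compact and efficient, and the addresses below are made up.

```python
# Minimal n-gram anomaly detector over memory-access traces: normal
# behavior is the set of n-grams observed during training; a test trace
# is scored by the fraction of its n-grams that were never seen.

def ngrams(trace, n):
    return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

class NgramDetector:
    def __init__(self, n=3):
        self.n = n
        self.known = set()

    def train(self, trace):
        self.known |= ngrams(trace, self.n)

    def anomaly_score(self, trace):
        grams = ngrams(trace, self.n)
        if not grams:
            return 0.0
        return len(grams - self.known) / len(grams)

det = NgramDetector(n=2)
det.train([0x10, 0x14, 0x18, 0x14, 0x18, 0x1c])     # authorized run
print(det.anomaly_score([0x10, 0x14, 0x18, 0x1c]))  # all bigrams known -> 0.0
print(det.anomaly_score([0x10, 0xff, 0x14]))        # unseen bigrams -> 1.0
```

Storing the n-gram set as an automaton, as the paper does, is what makes hardware matching against a live address stream feasible at line rate.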
DOI: 10.1145/3411495.3421353. Published 2020-11-09.
Citations: 8
Short-Lived Forward-Secure Delegation for TLS
Lukas Alber, Stefan More, Sebastian Ramacher
On today's Internet, combining the end-to-end security of TLS with Content Delivery Networks (CDNs) while ensuring the authenticity of connections results in a challenging delegation problem. When CDN servers provide content, they have to authenticate themselves as the origin server to establish a valid end-to-end TLS connection with the client. In standard TLS, the latter requires access to the secret key of the server. To curb this problem, multiple workarounds exist to realize a delegation of the authentication. In this paper, we present a solution that renders key sharing unnecessary and reduces the need for workarounds. By adapting identity-based signatures to this setting, our solution offers short-lived delegations. Additionally, by enabling forward-security, existing delegations remain valid even if the server's secret key leaks. We provide an implementation of the scheme and discuss integration into a TLS stack. In our evaluation, we show that an efficient implementation incurs less overhead than a typical network round trip. Thereby, we propose an alternative approach to current delegation practices on the web.
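The short-lived delegation idea can be sketched with a toy credential. This is only an illustration of the expiry mechanics: an HMAC under the origin's key stands in for the paper's identity-based signatures (unlike the real scheme, verification here requires the origin's secret), and all names and lifetimes are invented.

```python
# Toy short-lived delegation: the origin binds a CDN key to an expiry time
# and authenticates the binding; clients accept only while it is fresh.
# HMAC is a stand-in for the paper's identity-based signature scheme.
import hashlib
import hmac
import json
import time

ORIGIN_SECRET = b"origin-long-term-key"   # illustrative

def issue_delegation(cdn_pubkey, lifetime_s, now):
    cred = {"cdn_key": cdn_pubkey, "expires": now + lifetime_s}
    blob = json.dumps(cred, sort_keys=True).encode()
    tag = hmac.new(ORIGIN_SECRET, blob, hashlib.sha256).hexdigest()
    return cred, tag

def verify_delegation(cred, tag, now):
    blob = json.dumps(cred, sort_keys=True).encode()
    expected = hmac.new(ORIGIN_SECRET, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected) and now < cred["expires"]

now = time.time()
cred, tag = issue_delegation("cdn-key-1", lifetime_s=3600, now=now)
print(verify_delegation(cred, tag, now + 60))     # fresh -> True
print(verify_delegation(cred, tag, now + 7200))   # expired -> False
```

Because the credential expires on its own, no revocation machinery is needed when a CDN's delegated key leaks, which is the operational appeal of short lifetimes.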
DOI: 10.1145/3411495.3421362. Published 2020-09-04.
Citations: 1
The Sound of Silence: Mining Security Vulnerabilities from Secret Integration Channels in Open-Source Projects
Ralf Ramsauer, Lukas Bulwahn, D. Lohmann, W. Mauerer
Public development processes are a key characteristic of open source projects. However, fixes for vulnerabilities are usually discussed privately among a small group of trusted maintainers, and integrated without prior public involvement. This is supposed to prevent early disclosure, and cope with embargo and non-disclosure agreement (NDA) rules. While regular development activities leave publicly available traces, fixes for vulnerabilities that bypass the standard process do not. We present a data-mining based approach to detect code fragments that arise from such infringements of the standard process. By systematically mapping public development artefacts to source code repositories, we can exclude regular process activities, and infer irregularities that stem from non-public integration channels. For the Linux kernel, the most crucial component of many systems, we apply our method to a period of seven months before the release of Linux 5.4. We find 29 commits that address 12 vulnerabilities. For these vulnerabilities, our approach provides a temporal advantage of 2 to 179 days to design exploits before public disclosure takes place, and fixes are rolled out. Established responsible disclosure approaches in open development processes are supposed to limit premature visibility of security vulnerabilities. However, our approach shows that, instead, they open additional possibilities to uncover such changes that thwart the very premise. We conclude by discussing implications and partial countermeasures.
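The core mapping step reduces to a set difference. This is a heavily simplified sketch with invented data: the real approach must fingerprint patches robustly across rebases and whitespace changes, whereas here a hash of the normalized text stands in for that fingerprint.

```python
# Map public development artefacts (e.g., mailing-list patches) to
# repository commits; commits with no matching public artefact are
# candidates for having arrived via a non-public integration channel.
import hashlib

def patch_id(diff):
    # Stand-in for a robust patch fingerprint over the normalized diff.
    return hashlib.sha256(diff.strip().encode()).hexdigest()

public_patches = {patch_id(d) for d in [
    "fix: bounds check in parser",
    "feature: add config flag",
]}

commits = {
    "a1b2": "fix: bounds check in parser",
    "c3d4": "feature: add config flag",
    "e5f6": "fix: sanitize user input before copy",  # never posted publicly
}

irregular = [sha for sha, diff in commits.items()
             if patch_id(diff) not in public_patches]
print(irregular)   # ['e5f6']
```

Flagged commits are not proof of a security fix, only of a bypassed public process; the paper's contribution is showing that this irregularity correlates strongly with embargoed vulnerability fixes.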
DOI: 10.1145/3411495.3421360. Published 2020-09-03.
Citations: 14
Not one but many Tradeoffs: Privacy Vs. Utility in Differentially Private Machine Learning
Benjamin Zi Hao Zhao, M. Kâafar, N. Kourtellis
Data holders are increasingly seeking to protect their users' privacy, whilst still maximizing their ability to produce machine learning (ML) models with high-quality predictions. In this work, we empirically evaluate various implementations of differential privacy (DP), and measure their ability to fend off real-world privacy attacks, in addition to measuring their core goal of providing accurate classifications. We establish an evaluation framework to ensure each of these implementations is fairly evaluated. The DP implementations we select add noise at different positions within the framework: at the point of data collection/release, during updates while training the model, or after training by perturbing learned model parameters. We evaluate each implementation across a range of privacy budgets and datasets, with each implementation providing the same mathematical privacy guarantees. By measuring the models' resistance to real-world membership and attribute inference attacks, together with their classification accuracy, we determine which implementations provide the most desirable tradeoff between privacy and utility. We found that the number of classes in a given dataset is unlikely to influence where the privacy and utility tradeoff occurs, a counter-intuitive inference in contrast to the known relationship of increased privacy vulnerability in datasets with more classes.
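Two of the noise placements being compared can be sketched concretely. This is an illustrative toy, not any of the paper's actual implementations: the "model" is just the mean of the data, the scales are arbitrary, and both placements use the Laplace mechanism.

```python
# Input perturbation (noise on the data before training) vs. output
# perturbation (noise on the learned parameter after training), with a
# trivial "model" (the mean) and the Laplace mechanism.
import math
import random

def laplace(scale, rng):
    # Inverse-CDF sampling of Laplace(0, scale).
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def train_mean(xs):
    return sum(xs) / len(xs)

def input_perturbation(xs, scale, rng):
    # Noise added at data collection/release time, before any training.
    return train_mean([x + laplace(scale, rng) for x in xs])

def output_perturbation(xs, scale, rng):
    # Noise added once to the learned parameter after training.
    return train_mean(xs) + laplace(scale / len(xs), rng)

rng = random.Random(0)
data = [5.0] * 1000
print(round(input_perturbation(data, 0.1, rng), 2))   # close to 5.0
print(round(output_perturbation(data, 0.1, rng), 2))  # close to 5.0
```

Even this toy shows why placement matters: per-record input noise partially averages out in aggregation, while noise injected during training or onto parameters interacts with the learning process itself.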
Additionally, in scenarios that require strong privacy constraints, perturbing the input training data before applying ML modeling trades off less utility than noise added later in the ML process.

DOI: 10.1145/3411495.3421352. Published 2020-08-20.
Citations: 17
Privacy-preserving Voice Analysis via Disentangled Representations
Ranya Aloufi, H. Haddadi, David Boyle
Voice User Interfaces (VUIs) are increasingly popular and built into smartphones, home assistants, and Internet of Things (IoT) devices. Despite offering an always-on convenient user experience, VUIs raise new security and privacy concerns for their users. In this paper, we focus on attribute inference attacks in the speech domain, demonstrating the potential for an attacker to accurately infer a target user's sensitive and private attributes (e.g. their emotion, sex, or health status) from deep acoustic models. To defend against this class of attacks, we design, implement, and evaluate a user-configurable, privacy-aware framework for optimizing speech-related data sharing mechanisms. Our objective is to enable primary tasks such as speech recognition and user identification, while removing sensitive attributes in the raw speech data before sharing it with a cloud service provider. We leverage disentangled representation learning to explicitly learn independent factors in the raw data. Based on a user's preferences, a supervision signal informs the filtering out of invariant factors while retaining the factors reflected in the selected preference. Our experimental evaluation over five datasets shows that the proposed framework can effectively defend against attribute inference attacks by reducing their success rates to approximately that of guessing at random, while maintaining accuracy in excess of 99% for the tasks of interest. We conclude that negotiable privacy settings enabled by disentangled representations can bring new opportunities for privacy-preserving applications.
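The user-configurable filtering step can be illustrated at a high level. This sketch stubs out the learned encoder/decoder entirely and uses invented factor names and values; in the real system the disentangled factors are learned vectors, not scalars, and "neutralizing" a factor is a learned operation rather than zeroing.

```python
# Given a representation disentangled into named factors, keep only the
# factors the user's preference allows and neutralize the rest before
# sharing with a cloud provider. Factor names/values are illustrative.

def filter_representation(rep, user_prefs):
    # Keep factors in user_prefs; replace the rest with a neutral value.
    return {f: (v if f in user_prefs else 0.0) for f, v in rep.items()}

rep = {"content": 0.83, "speaker_id": 0.41, "emotion": 0.92, "health": 0.17}

# Preference: enable speech recognition and user identification, strip
# emotion and health attributes.
shared = filter_representation(rep, {"content", "speaker_id"})
print(shared)
```

The design point is that the privacy/utility choice becomes a per-user configuration over named factors, instead of a fixed property baked into the model.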
DOI: 10.1145/3411495.3421355. Published 2020-07-29.
Citations: 41
On the Detection of Disinformation Campaign Activity with Network Analysis
Luis Vargas, Patrick Emami, Patrick Traynor
Disinformation campaigns seek to influence and polarize political topics through massive coordinated efforts. In the process, these efforts leave behind artifacts, which researchers have leveraged to analyze the tactics employed by disinformation campaigns after they are taken down. Coordination network analysis has proven helpful for learning about how disinformation campaigns operate; however, the usefulness of these forensic tools as a detection mechanism is still an open question. In this paper, we explore the use of coordination network analysis to generate features for distinguishing the activity of a disinformation campaign from legitimate Twitter activity. Doing so would provide more evidence to human analysts as they consider takedowns. We create a time series of daily coordination networks for both Twitter disinformation campaigns and legitimate Twitter communities, and train a binary classifier based on statistical features extracted from these networks. Our results show that the classifier can predict future coordinated activity of known disinformation campaigns with high accuracy (F1 = 0.98). On the more challenging task of out-of-distribution activity classification, the performance drops yet is still promising (F1 = 0.71), mainly due to an increase in the false positive rate.
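One plausible way to build such daily coordination networks can be sketched as follows. This is an assumption-laden toy, not the paper's feature set: it links two accounts whenever they shared the same URL on the same day and extracts a couple of simple graph statistics as classifier features.

```python
# Build a daily coordination network (edge = two accounts posted the same
# URL that day) and extract simple statistics usable as classifier features.
from collections import defaultdict
from itertools import combinations

def daily_network(posts):
    # posts: list of (account, url) pairs for one day.
    by_url = defaultdict(set)
    for account, url in posts:
        by_url[url].add(account)
    edges = set()
    for accounts in by_url.values():
        for a, b in combinations(sorted(accounts), 2):
            edges.add((a, b))
    return edges

def network_features(edges):
    nodes = {n for e in edges for n in e}
    density = (2 * len(edges) / (len(nodes) * (len(nodes) - 1))
               if len(nodes) > 1 else 0.0)
    return {"nodes": len(nodes), "edges": len(edges), "density": density}

posts = [("a", "u1"), ("b", "u1"), ("c", "u1"), ("d", "u2"), ("a", "u2")]
feats = network_features(daily_network(posts))
print(feats)   # 4 nodes, 4 edges
```

A time series of such feature vectors, one per day, is the kind of input a binary classifier over campaign vs. organic activity could consume.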
By doing this analysis, we show that while coordination patterns could be useful for providing evidence of disinformation activity, further investigation is needed to improve upon this method before deployment at scale.

DOI: 10.1145/3411495.3421363. Published 2020-05-27.
Citations: 53