
2017 IEEE Symposium on Security and Privacy (SP): Latest Publications

To Catch a Ratter: Monitoring the Behavior of Amateur DarkComet RAT Operators in the Wild
Pub Date : 2017-06-23 DOI: 10.1109/SP.2017.48
Brown Farinholt, Mohammad Rezaeirad, P. Pearce, Hitesh Dharmdasani, Haikuo Yin, Stevens Le Blond, Damon McCoy, Kirill Levchenko
Remote Access Trojans (RATs) give remote attackers interactive control over a compromised machine. Unlike large-scale malware such as botnets, a RAT is controlled individually by a human operator interacting with the compromised machine remotely. The versatility of RATs makes them attractive to actors of all levels of sophistication: they've been used for espionage, information theft, voyeurism, and extortion. Despite their increasing use, there are still major gaps in our understanding of RATs and their operators, including motives, intentions, procedures, and weak points where defenses might be most effective. In this work we study the use of DarkComet, a popular commercial RAT. We collected 19,109 samples of DarkComet malware found in the wild and, over the course of two several-week-long experiments, ran as many samples as possible in our honeypot environment. By monitoring a sample's behavior in our system, we are able to reconstruct the sequence of operator actions, giving us a unique view into operator behavior. We report on the results of 2,747 interactive sessions captured in the course of the experiment. During these sessions operators frequently attempted to interact with victims via remote desktop, to capture video, audio, and keystrokes, and to exfiltrate files and credentials. To our knowledge, this is the first large-scale systematic study of RAT use.
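The study's pipeline hinges on turning raw honeypot monitoring logs into ordered, per-session sequences of operator actions. The minimal sketch below shows that aggregation step; the event records and action vocabulary are invented for illustration and are not the paper's actual instrumentation or data.

```python
from collections import Counter, defaultdict

# Hypothetical event records as a honeypot monitor might emit them:
# (session_id, unix_timestamp, action). The action names are illustrative.
events = [
    ("s1", 1000, "remote_desktop"), ("s1", 1010, "keylog_start"),
    ("s1", 1100, "file_exfiltration"), ("s2", 2000, "webcam_capture"),
    ("s2", 2050, "password_dump"), ("s2", 2100, "remote_desktop"),
]

def reconstruct_sessions(events):
    """Group events by session and order them in time, yielding the
    per-session sequence of operator actions."""
    sessions = defaultdict(list)
    for sid, ts, action in events:
        sessions[sid].append((ts, action))
    return {sid: [a for _, a in sorted(evs)] for sid, evs in sessions.items()}

sequences = reconstruct_sessions(events)
action_totals = Counter(a for seq in sequences.values() for a in seq)

for sid, seq in sequences.items():
    print(sid, "->", " -> ".join(seq))
print("most common actions:", action_totals.most_common(3))
```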
{"title":"To Catch a Ratter: Monitoring the Behavior of Amateur DarkComet RAT Operators in the Wild","authors":"Brown Farinholt, Mohammad Rezaeirad, P. Pearce, Hitesh Dharmdasani, Haikuo Yin, Stevens Le Blond, Damon McCoy, Kirill Levchenko","doi":"10.1109/SP.2017.48","DOIUrl":"https://doi.org/10.1109/SP.2017.48","url":null,"abstract":"Remote Access Trojans (RATs) give remote attackers interactive control over a compromised machine. Unlike large-scale malware such as botnets, a RAT is controlled individually by a human operator interacting with the compromised machine remotely. The versatility of RATs makes them attractive to actors of all levels of sophistication: they've been used for espionage, information theft, voyeurism and extortion. Despite their increasing use, there are still major gaps in our understanding of RATs and their operators, including motives, intentions, procedures, and weak points where defenses might be most effective. In this work we study the use of DarkComet, a popular commercial RAT. We collected 19,109 samples of DarkComet malware found in the wild, and in the course of two, several-week-long experiments, ran as many samples as possible in our honeypot environment. By monitoring a sample's behavior in our system, we are able to reconstruct the sequence of operator actions, giving us a unique view into operator behavior. We report on the results of 2,747 interactive sessions captured in the course of the experiment. During these sessions operators frequently attempted to interact with victims via remote desktop, to capture video, audio, and keystrokes, and to exfiltrate files and credentials. To our knowledge, we are the first large-scale systematic study of RAT use.","PeriodicalId":6502,"journal":{"name":"2017 IEEE Symposium on Security and Privacy (SP)","volume":"17 1","pages":"770-787"},"PeriodicalIF":0.0,"publicationDate":"2017-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74612835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 42
Under the Shadow of Sunshine: Understanding and Detecting Bulletproof Hosting on Legitimate Service Provider Networks
Pub Date : 2017-05-22 DOI: 10.1109/SP.2017.32
Sumayah A. Alrwais, Xiaojing Liao, Xianghang Mi, Peng Wang, Xiaofeng Wang, Feng Qian, R. Beyah, Damon McCoy
BulletProof Hosting (BPH) services provide criminal actors with technical infrastructure that is resilient to complaints of illicit activities, which serves as a basic building block for streamlining numerous types of attacks. Anecdotal reports have highlighted an emerging trend of these BPH services reselling infrastructure from lower-end service providers (hosting ISPs, cloud hosting, and CDNs) instead of from monolithic BPH providers. This has rendered many of the prior methods of detecting BPH less effective, since instead of the infrastructure being highly concentrated within a few malicious Autonomous Systems (ASes), it is now agile and dispersed across a larger set of providers that have a mixture of benign and malicious clients. In this paper, we present the first systematic study of this new trend of BPH services. By collecting and analyzing a large amount of data (25 snapshots of the entire Whois IPv4 address space, 1.5 TB of passive DNS data, and longitudinal data from several blacklist feeds), we are able to identify a set of new features that uniquely characterize BPH on sub-allocations and that are costly to evade. Based upon these features, we train a classifier for detecting malicious sub-allocated network blocks, achieving 98% recall and a 1.5% false discovery rate in our evaluation. Using a conservatively trained version of our classifier, we scan the whole IPv4 address space and detect 39K malicious network blocks. This allows us to perform a large-scale study of the BPH service ecosystem, which sheds light on this underground business strategy, including patterns of network blocks being recycled and malicious clients being migrated to different network blocks in an effort to evade IP address based blacklisting. Our study highlights the trend of agile BPH services and points to potential methods of detecting and mitigating this emerging threat.
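To make the detection step concrete, here is a minimal sketch of training a classifier over per-network-block features and measuring recall, in the spirit of the pipeline described above. The synthetic data, the three placeholder features, and the random-forest model are assumptions for illustration only; they are not the paper's feature set, classifier, or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)

# Synthetic feature vectors for sub-allocated network blocks. The three
# features (fraction of blacklisted IPs, sub-allocation churn rate, share of
# passive-DNS domains seen on blocklists) are placeholders, not the paper's.
n = 2000
X_benign = rng.normal(loc=[0.02, 0.10, 0.05], scale=0.05, size=(n, 3))
X_malicious = rng.normal(loc=[0.30, 0.60, 0.40], scale=0.15, size=(n // 10, 3))
X = np.clip(np.vstack([X_benign, X_malicious]), 0.0, 1.0)
y = np.concatenate([np.zeros(n), np.ones(n // 10)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("recall:", recall_score(y_te, pred), "precision:", precision_score(y_te, pred))
```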
{"title":"Under the Shadow of Sunshine: Understanding and Detecting Bulletproof Hosting on Legitimate Service Provider Networks","authors":"Sumayah A. Alrwais, Xiaojing Liao, Xianghang Mi, Peng Wang, Xiaofeng Wang, Feng Qian, R. Beyah, Damon McCoy","doi":"10.1109/SP.2017.32","DOIUrl":"https://doi.org/10.1109/SP.2017.32","url":null,"abstract":"BulletProof Hosting (BPH) services provide criminal actors with technical infrastructure that is resilient to complaints of illicit activities, which serves as a basic building block for streamlining numerous types of attacks. Anecdotal reports have highlighted an emerging trend of these BPH services reselling infrastructure from lower end service providers (hosting ISPs, cloud hosting, and CDNs) instead of from monolithic BPH providers. This has rendered many of the prior methods of detecting BPH less effective, since instead of the infrastructure being highly concentrated within a few malicious Autonomous Systems (ASes) it is now agile and dispersed across a larger set of providers that have a mixture of benign and malicious clients. In this paper, we present the first systematic study on this new trend of BPH services. By collecting and analyzing a large amount of data (25 snapshots of the entire Whois IPv4 address space, 1.5 TB of passive DNS data, and longitudinal data from several blacklist feeds), we are able to identify a set of new features that uniquely characterizes BPH on sub-allocations and that are costly to evade. Based upon these features, we train a classifier for detecting malicious sub-allocated network blocks, achieving a 98% recall and 1.5% false discovery rates according to our evaluation. Using a conservatively trained version of our classifier, we scan the whole IPv4 address space and detect 39K malicious network blocks. This allows us to perform a large-scale study of the BPH service ecosystem, which sheds light on this underground business strategy, including patterns of network blocks being recycled and malicious clients being migrated to different network blocks, in an effort to evade IP address based blacklisting. Our study highlights the trend of agile BPH services and points to potential methods of detecting and mitigating this emerging threat.","PeriodicalId":6502,"journal":{"name":"2017 IEEE Symposium on Security and Privacy (SP)","volume":"82 1","pages":"805-823"},"PeriodicalIF":0.0,"publicationDate":"2017-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72735627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 34
Optimized Honest-Majority MPC for Malicious Adversaries — Breaking the 1 Billion-Gate Per Second Barrier
Pub Date : 2017-05-22 DOI: 10.1109/SP.2017.15
Toshinori Araki, A. Barak, Jun Furukawa, Tamar Lichter, Yehuda Lindell, Ariel Nof, Kazuma Ohara, Adi Watzman, Or Weinstein
Secure multiparty computation enables a set of parties to securely carry out a joint computation of their private inputs without revealing anything but the output. In the past few years, the efficiency of secure computation protocols has increased by leaps and bounds. However, when considering the case of security in the presence of malicious adversaries (who may arbitrarily deviate from the protocol specification), we are still very far from achieving high efficiency. In this paper, we consider the specific case of three parties and an honest majority. We provide general techniques for improving the efficiency of cut-and-choose protocols on multiplication triples and utilize them to significantly improve the recently published protocol of Furukawa et al. (ePrint 2016/944). We reduce the bandwidth of their protocol from 10 bits per AND gate to 7 bits per AND gate, and show how to improve some computationally expensive parts of their protocol. Most notably, we design cache-efficient shuffling techniques for implementing cut-and-choose without randomly permuting large arrays (which is very slow due to continual cache misses). We provide a combinatorial analysis of our techniques, bounding the cheating probability of the adversary. Our implementation achieves a rate of approximately 1.15 billion AND gates per second on a cluster of three 20-core machines with a 10Gbps network. Thus, we can securely compute 212,000 AES encryptions per second (which is hundreds of times faster than previous work in this setting). Our results demonstrate that high-throughput secure computation for malicious adversaries is possible.
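Protocols in this family build on 2-out-of-3 replicated secret sharing of bits, where XOR gates are evaluated locally and only AND gates require communication. The sketch below shows sharing, reconstruction, and a local XOR gate as background; it deliberately omits the AND-gate sub-protocol, the correlated randomness, and the cut-and-choose verification of multiplication triples that the paper actually optimizes.

```python
import secrets

def share_bit(x):
    """2-out-of-3 replicated sharing of a bit: pick x1, x2, x3 with
    x1 ^ x2 ^ x3 = x; party i holds the pair (x_i, x_{i+1})."""
    x1, x2 = secrets.randbelow(2), secrets.randbelow(2)
    x3 = x ^ x1 ^ x2
    return [(x1, x2), (x2, x3), (x3, x1)]

def reconstruct(shares):
    (x1, x2), (_, x3), _ = shares
    return x1 ^ x2 ^ x3

def xor_gate(shares_a, shares_b):
    """XOR of two shared bits is purely local: each party XORs its own pairs."""
    return [(a0 ^ b0, a1 ^ b1) for (a0, a1), (b0, b1) in zip(shares_a, shares_b)]

a, b = 1, 0
sa, sb = share_bit(a), share_bit(b)
assert reconstruct(xor_gate(sa, sb)) == a ^ b
print("XOR of shared bits:", reconstruct(xor_gate(sa, sb)))
# AND gates, by contrast, require one round of communication plus correlated
# randomness, and (against malicious adversaries) verified multiplication
# triples, which is where the paper's cut-and-choose techniques come in.
```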
{"title":"Optimized Honest-Majority MPC for Malicious Adversaries — Breaking the 1 Billion-Gate Per Second Barrier","authors":"Toshinori Araki, A. Barak, Jun Furukawa, Tamar Lichter, Yehuda Lindell, Ariel Nof, Kazuma Ohara, Adi Watzman, Or Weinstein","doi":"10.1109/SP.2017.15","DOIUrl":"https://doi.org/10.1109/SP.2017.15","url":null,"abstract":"Secure multiparty computation enables a set of parties to securely carry out a joint computation of their private inputs without revealing anything but the output. In the past few years, the efficiency of secure computation protocols has increased in leaps and bounds. However, when considering the case of security in the presence of malicious adversaries (who may arbitrarily deviate from the protocol specification), we are still very far from achieving high efficiency. In this paper, we consider the specific case of three parties and an honest majority. We provide general techniques for improving efficiency of cut-and-choose protocols on multiplication triples and utilize them to significantly improve the recently published protocol of Furukawa et al. (ePrint 2016/944). We reduce the bandwidth of their protocol down from 10 bits per AND gate to 7 bits per AND gate, and show how to improve some computationally expensive parts of their protocol. Most notably, we design cache-efficient shuffling techniques for implementing cut-and-choose without randomly permuting large arrays (which is very slow due to continual cache misses). We provide a combinatorial analysis of our techniques, bounding the cheating probability of the adversary. Our implementation achieves a rate of approximately 1.15 billion AND gates per second on a cluster of three 20-core machines with a 10Gbps network. Thus, we can securely compute 212,000 AES encryptions per second (which is hundreds of times faster than previous work for this setting). Our results demonstrate that high-throughput secure computation for malicious adversaries is possible.","PeriodicalId":6502,"journal":{"name":"2017 IEEE Symposium on Security and Privacy (SP)","volume":"12 1","pages":"843-862"},"PeriodicalIF":0.0,"publicationDate":"2017-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78975858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 116
Is Interaction Necessary for Distributed Private Learning?
Pub Date : 2017-05-22 DOI: 10.1109/SP.2017.35
Adam D. Smith, Abhradeep Thakurta, Jalaj Upadhyay
Recent large-scale deployments of differentially private algorithms employ the local model for privacy (sometimes called PRAM or randomized response), where data are randomized on each individual's device before being sent to a server that computes approximate, aggregate statistics. The server need not be trusted for privacy, leaving data control in users' hands. For an important class of convex optimization problems (including logistic regression, support vector machines, and the Euclidean median), the best known locally differentially-private algorithms are highly interactive, requiring as many rounds of back and forth as there are users in the protocol. We ask: how much interaction is necessary to optimize convex functions in the local DP model? Existing lower bounds either do not apply to convex optimization, or say nothing about interaction. We provide new algorithms which are either noninteractive or use relatively few rounds of interaction. We also show lower bounds on the accuracy of an important class of noninteractive algorithms, suggesting a separation between what is possible with and without interaction.
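Randomized response, named above as the canonical local-model mechanism, is easy to make concrete: each user perturbs a single bit on their own device and the server debiases the aggregate. The sketch below is a standard one-bit randomized-response estimator and is fully non-interactive; it is not one of the paper's convex-optimization protocols.

```python
import math
import random

def randomized_response(bit, eps):
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it.
    This satisfies eps-local differential privacy for a single bit."""
    p_true = math.exp(eps) / (math.exp(eps) + 1)
    return bit if random.random() < p_true else 1 - bit

def estimate_mean(reports, eps):
    """Debias the noisy reports to obtain an unbiased estimate of the true mean."""
    p = math.exp(eps) / (math.exp(eps) + 1)
    noisy_mean = sum(reports) / len(reports)
    return (noisy_mean - (1 - p)) / (2 * p - 1)

random.seed(1)
true_bits = [1] * 3000 + [0] * 7000          # true mean is 0.3
eps = 1.0
reports = [randomized_response(b, eps) for b in true_bits]
print("estimated mean:", round(estimate_mean(reports, eps), 3))
```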
{"title":"Is Interaction Necessary for Distributed Private Learning?","authors":"Adam D. Smith, Abhradeep Thakurta, Jalaj Upadhyay","doi":"10.1109/SP.2017.35","DOIUrl":"https://doi.org/10.1109/SP.2017.35","url":null,"abstract":"Recent large-scale deployments of differentially private algorithms employ the local model for privacy (sometimes called PRAM or randomized response), where data are randomized on each individual's device before being sent to a server that computes approximate, aggregate statistics. The server need not be trusted for privacy, leaving data control in users' hands. For an important class of convex optimization problems (including logistic regression, support vector machines, and the Euclidean median), the best known locally differentially-private algorithms are highly interactive, requiring as many rounds of back and forth as there are users in the protocol. We ask: how much interaction is necessary to optimize convex functions in the local DP model? Existing lower bounds either do not apply to convex optimization, or say nothing about interaction. We provide new algorithms which are either noninteractive or use relatively few rounds of interaction. We also show lower bounds on the accuracy of an important class of noninteractive algorithms, suggesting a separation between what is possible with and without interaction.","PeriodicalId":6502,"journal":{"name":"2017 IEEE Symposium on Security and Privacy (SP)","volume":"22 1","pages":"58-77"},"PeriodicalIF":0.0,"publicationDate":"2017-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86594166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 138
SmarPer: Context-Aware and Automatic Runtime-Permissions for Mobile Devices
Pub Date : 2017-05-22 DOI: 10.1109/SP.2017.25
Katarzyna Olejnik, Italo Dacosta, Joana Soares Machado, Kévin Huguenin, M. E. Khan, J. Hubaux
Permission systems are the main defense that mobile platforms, such as Android and iOS, offer to users to protect their private data from prying apps. However, due to the tension between usability and control, such systems have several limitations that often force users to overshare sensitive data. We address some of these limitations with SmarPer, an advanced permission mechanism for Android. To address the rigidity of current permission systems and their poor matching of users' privacy preferences, SmarPer relies on contextual information and machine learning methods to predict permission decisions at runtime. Note that the goal of SmarPer is to mimic the users' decisions, not to make privacy-preserving decisions per se. Using our SmarPer implementation, we collected 8,521 runtime permission decisions from 41 participants in real conditions. With this unique data set, we show that using an efficient Bayesian linear regression model results in a mean correct classification rate of 80% (±3%). This represents a mean relative reduction of approximately 50% in the number of incorrect decisions when compared with a user-defined static permission policy, i.e., the model used in current permission systems. SmarPer also addresses the suboptimal trade-off between privacy and utility: instead of offering only "allow" or "deny" decisions, it provides an "obfuscate" option that lets users still obtain utility by revealing partial information to apps. We implemented obfuscation techniques in SmarPer for different data types and evaluated them during our data collection campaign. Our results show that 73% of the participants found obfuscation useful, and it accounted for almost a third of the total number of decisions. In short, we are the first to show, using a large dataset of real in situ permission decisions, that it is possible to learn users' unique decision patterns at runtime using contextual information while supporting data obfuscation; this is an important step towards automating the management of permissions in smartphones.
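As a rough illustration of predicting a runtime permission decision from context, the sketch below trains a simple classifier on hypothetical (app category, permission, foreground flag, hour) records and outputs an allow probability for a new request. The features, data, and logistic-regression model are stand-ins; SmarPer's Bayesian linear regression, its actual feature set, and its obfuscation mechanisms are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

# Hypothetical context records: (app category, permission, foreground?, hour)
# with the user's decision (1 = allow, 0 = deny). All values are illustrative.
contexts = [
    ("social", "location", 1, 9), ("social", "location", 0, 23),
    ("maps", "location", 1, 12), ("game", "contacts", 0, 20),
    ("maps", "location", 1, 18), ("game", "contacts", 1, 21),
    ("social", "contacts", 0, 2), ("maps", "location", 0, 3),
]
decisions = np.array([1, 0, 1, 0, 1, 0, 0, 0])

# One-hot encode the categorical context, append the numeric context.
cat = OneHotEncoder(handle_unknown="ignore").fit([c[:2] for c in contexts])
X = np.hstack([
    cat.transform([c[:2] for c in contexts]).toarray(),
    np.array([[c[2], c[3] / 24.0] for c in contexts]),
])
model = LogisticRegression().fit(X, decisions)

query = ("maps", "location", 1, 10)   # a new runtime request
x = np.hstack([cat.transform([query[:2]]).toarray(),
               np.array([[query[2], query[3] / 24.0]])])
print("predicted allow probability:", round(model.predict_proba(x)[0, 1], 2))
```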
{"title":"SmarPer: Context-Aware and Automatic Runtime-Permissions for Mobile Devices","authors":"Katarzyna Olejnik, Italo Dacosta, Joana Soares Machado, Kévin Huguenin, M. E. Khan, J. Hubaux","doi":"10.1109/SP.2017.25","DOIUrl":"https://doi.org/10.1109/SP.2017.25","url":null,"abstract":"Permission systems are the main defense that mobile platforms, such as Android and iOS, offer to users to protect their private data from prying apps. However, due to the tension between usability and control, such systems have several limitations that often force users to overshare sensitive data. We address some of these limitations with SmarPer, an advanced permission mechanism for Android. To address the rigidity of current permission systems and their poor matching of users' privacy preferences, SmarPer relies on contextual information and machine learning methods to predict permission decisions at runtime. Note that the goal of SmarPer is to mimic the users' decisions, not to make privacy-preserving decisions per se. Using our SmarPer implementation, we collected 8,521 runtime permission decisions from 41 participants in real conditions. With this unique data set, we show that using an efficient Bayesian linear regression model results in a mean correct classification rate of 80% (±3%). This represents a mean relative reduction of approximately 50% in the number of incorrect decisions when compared with a user-defined static permission policy, i.e., the model used in current permission systems. SmarPer also focuses on the suboptimal trade-off between privacy and utility, instead of only \"allow\" or \"deny\" type of decisions, SmarPer also offers an \"obfuscate\" option where users can still obtain utility by revealing partial information to apps. We implemented obfuscation techniques in SmarPer for different data types and evaluated them during our data collection campaign. Our results show that 73% of the participants found obfuscation useful and it accounted for almost a third of the total number of decisions. In short, we are the first to show, using a large dataset of real in situ permission decisions, that it is possible to learn users' unique decision patterns at runtime using contextual information while supporting data obfuscation, this is an important step towards automating the management of permissions in smartphones.","PeriodicalId":6502,"journal":{"name":"2017 IEEE Symposium on Security and Privacy (SP)","volume":"8 1","pages":"1058-1076"},"PeriodicalIF":0.0,"publicationDate":"2017-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90039703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 91
SecureML: A System for Scalable Privacy-Preserving Machine Learning
Pub Date : 2017-05-22 DOI: 10.1109/SP.2017.12
Payman Mohassel, Yupeng Zhang
Machine learning is widely used in practice to produce predictive models for applications such as image processing, speech and text recognition. These models are more accurate when trained on large amounts of data collected from different sources. However, the massive data collection raises privacy concerns. In this paper, we present new and efficient protocols for privacy-preserving machine learning for linear regression, logistic regression and neural network training using the stochastic gradient descent method. Our protocols fall in the two-server model where data owners distribute their private data among two non-colluding servers who train various models on the joint data using secure two-party computation (2PC). We develop new techniques to support secure arithmetic operations on shared decimal numbers, and propose MPC-friendly alternatives to non-linear functions such as sigmoid and softmax that are superior to prior work. We implement our system in C++. Our experiments validate that our protocols are several orders of magnitude faster than the state-of-the-art implementations for privacy-preserving linear and logistic regressions, and scale to millions of data samples with thousands of features. We also implement the first privacy-preserving system for training neural networks.
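The two-server setting rests on additive secret sharing of fixed-point values, where addition is local, and on replacing non-linear activations with MPC-friendly approximations. The sketch below shows fixed-point encoding on a 64-bit ring, two-party additive sharing with local addition, and a piecewise-linear sigmoid approximation; the number of fractional bits is an arbitrary illustrative choice, and the actual 2PC multiplication (e.g., via precomputed triples) and training protocols are omitted entirely.

```python
import secrets

RING = 1 << 64          # all arithmetic is mod 2^64
FRAC = 13               # fixed-point fractional bits (illustrative choice)

def encode(x):          # real number -> fixed-point ring element
    return int(round(x * (1 << FRAC))) % RING

def decode(v):          # ring element -> real number (interpreted as signed)
    if v >= RING // 2:
        v -= RING
    return v / (1 << FRAC)

def share(v):
    """Additively share a ring element between two non-colluding servers."""
    s0 = secrets.randbelow(RING)
    return s0, (v - s0) % RING

def reconstruct(s0, s1):
    return (s0 + s1) % RING

# Addition of shared values is local: each server adds its own shares.
a, b = encode(1.5), encode(-0.25)
a0, a1 = share(a)
b0, b1 = share(b)
c0, c1 = (a0 + b0) % RING, (a1 + b1) % RING
print("shared add:", decode(reconstruct(c0, c1)))      # approximately 1.25

# A piecewise-linear stand-in for the sigmoid, the kind of MPC-friendly
# alternative to exact non-linear functions discussed above.
def approx_sigmoid(x):
    if x < -0.5:
        return 0.0
    if x > 0.5:
        return 1.0
    return x + 0.5

print("approx sigmoid(0.2):", approx_sigmoid(0.2))
```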
{"title":"SecureML: A System for Scalable Privacy-Preserving Machine Learning","authors":"Payman Mohassel, Yupeng Zhang","doi":"10.1109/SP.2017.12","DOIUrl":"https://doi.org/10.1109/SP.2017.12","url":null,"abstract":"Machine learning is widely used in practice to produce predictive models for applications such as image processing, speech and text recognition. These models are more accurate when trained on large amount of data collected from different sources. However, the massive data collection raises privacy concerns. In this paper, we present new and efficient protocols for privacy preserving machine learning for linear regression, logistic regression and neural network training using the stochastic gradient descent method. Our protocols fall in the two-server model where data owners distribute their private data among two non-colluding servers who train various models on the joint data using secure two-party computation (2PC). We develop new techniques to support secure arithmetic operations on shared decimal numbers, and propose MPC-friendly alternatives to non-linear functions such as sigmoid and softmax that are superior to prior work. We implement our system in C++. Our experiments validate that our protocols are several orders of magnitude faster than the state of the art implementations for privacy preserving linear and logistic regressions, and scale to millions of data samples with thousands of features. We also implement the first privacy preserving system for training neural networks.","PeriodicalId":6502,"journal":{"name":"2017 IEEE Symposium on Security and Privacy (SP)","volume":"21 1","pages":"19-38"},"PeriodicalIF":0.0,"publicationDate":"2017-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81326101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1370
Machine-Checked Proofs of Privacy for Electronic Voting Protocols
Pub Date : 2017-05-22 DOI: 10.1109/SP.2017.28
V. Cortier, C. Drăgan, François Dupressoir, Benedikt Schmidt, Pierre-Yves Strub, B. Warinschi
We provide the first machine-checked proof of privacy-related properties (including ballot privacy) for an electronic voting protocol in the computational model. We target the popular Helios family of voting protocols, for which we identify appropriate levels of abstraction that allow the simplification and convenient reuse of proof steps across many variations of the voting scheme. The resulting framework enables machine-checked security proofs for several hundred variants of Helios and should serve as a stepping stone for the analysis of further variations of the scheme. In addition, we highlight some of the lessons learned regarding the gap between pen-and-paper and machine-checked proofs, and report on the experience with formalizing the security of protocols at this scale.
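For context on what ballot privacy protects, Helios-style schemes tally additively homomorphic encryptions of votes (exponential ElGamal) and decrypt only the aggregate. The sketch below shows that homomorphic tallying with toy, deliberately insecure parameters; it omits the zero-knowledge proofs of ballot well-formedness, the distributed decryption Helios uses, and, of course, the machine-checked proofs that are the subject of the paper.

```python
import secrets

# Toy exponential ElGamal over the order-q subgroup of Z_p* (p = 2q + 1).
# These parameters are far too small to be secure; they only illustrate
# how homomorphic tallying works in Helios-style schemes.
p, q, g = 2579, 1289, 4          # g generates the subgroup of order q

sk = secrets.randbelow(q - 1) + 1
pk = pow(g, sk, p)

def encrypt(vote):               # vote in {0, 1}, encoded in the exponent
    r = secrets.randbelow(q - 1) + 1
    return (pow(g, r, p), (pow(g, vote, p) * pow(pk, r, p)) % p)

def add(c1, c2):                 # homomorphic addition of encrypted votes
    return ((c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p)

def decrypt_tally(c, max_votes):
    gm = (c[1] * pow(c[0], q - sk, p)) % p      # recover g^m = b / a^sk
    for m in range(max_votes + 1):              # brute-force the small exponent
        if pow(g, m, p) == gm:
            return m
    raise ValueError("tally out of range")

votes = [1, 0, 1, 1, 0]
ballots = [encrypt(v) for v in votes]
tally = ballots[0]
for b in ballots[1:]:
    tally = add(tally, b)
print("decrypted tally:", decrypt_tally(tally, len(votes)))   # prints 3
```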
{"title":"Machine-Checked Proofs of Privacy for Electronic Voting Protocols","authors":"V. Cortier, C. Drăgan, François Dupressoir, Benedikt Schmidt, Pierre-Yves Strub, B. Warinschi","doi":"10.1109/SP.2017.28","DOIUrl":"https://doi.org/10.1109/SP.2017.28","url":null,"abstract":"We provide the first machine-checked proof of privacy-related properties (including ballot privacy) for an electronic voting protocol in the computational model. We target the popular Helios family of voting protocols, for which we identify appropriate levels of abstractions to allow the simplification and convenient reuse of proof steps across many variations of the voting scheme. The resulting framework enables machine-checked security proofs for several hundred variants of Helios and should serve as a stepping stone for the analysis of further variations of the scheme. In addition, we highlight some of the lessons learned regarding the gap between pen-and-paper and machine-checked proofs, and report on the experience with formalizing the security of protocols at this scale.","PeriodicalId":6502,"journal":{"name":"2017 IEEE Symposium on Security and Privacy (SP)","volume":"6 1","pages":"993-1008"},"PeriodicalIF":0.0,"publicationDate":"2017-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87379899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 36
Identifying Personal DNA Methylation Profiles by Genotype Inference
Pub Date : 2017-05-22 DOI: 10.1109/SP.2017.21
M. Backes, Pascal Berrang, M. Bieg, R. Eils, C. Herrmann, Mathias Humbert, I. Lehmann
Since the first whole-genome sequencing, the biomedical research community has made significant steps towards a more precise, predictive and personalized medicine. Genomic data is nowadays widely considered privacy-sensitive and consequently protected by strict regulations and released only after careful consideration. Various additional types of biomedical data, however, are not shielded by any dedicated legal means and consequently disseminated much less thoughtfully. This holds in particular for DNA methylation data, one of the most important and best-understood epigenetic elements influencing human health. In this paper, we show that, in contrast to the aforementioned belief, releasing one's DNA methylation data causes privacy issues akin to releasing one's actual genome. We show that a small subset of methylation regions influenced by genomic variants is already sufficient to infer parts of someone's genome, and to further map this DNA methylation profile to the corresponding genome. Notably, we show that such re-identification is possible with 97.5% accuracy, relying on a dataset of more than 2500 genomes, and that we can reject all wrongly matched genomes using an appropriate statistical test. We provide means for countering this threat by proposing a novel cryptographic scheme for privately classifying tumors that enables a privacy-respecting medical diagnosis in a common clinical setting. The scheme relies on a combination of random forests and homomorphic encryption, and it is proven secure in the honest-but-curious model. We evaluate this scheme on real DNA methylation data, and show that we can keep the computational overhead at acceptable levels for our application scenario.
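The linking attack boils down to inferring genotype distributions at variant-associated methylation regions and then scoring candidate genomes against them. The sketch below ranks candidates by log-likelihood over a handful of hypothetical SNPs; the probabilities and genotypes are invented, and the paper's actual inference model and the statistical test used to reject wrong matches are not reproduced here.

```python
import math

# Hypothetical setup: from methylation levels at a few regions we have
# (imperfectly) inferred genotype probabilities at the corresponding SNPs.
# Each entry maps a SNP id to P(genotype = 0/1/2 alternate alleles).
inferred = {
    "snp1": [0.05, 0.85, 0.10],
    "snp2": [0.70, 0.25, 0.05],
    "snp3": [0.10, 0.20, 0.70],
}

# Candidate genomes, reduced to their genotypes at the same SNPs.
candidates = {
    "genome_A": {"snp1": 1, "snp2": 0, "snp3": 2},
    "genome_B": {"snp1": 0, "snp2": 2, "snp3": 1},
    "genome_C": {"snp1": 1, "snp2": 1, "snp3": 0},
}

def match_score(probs, genome):
    """Log-likelihood of a candidate genome under the inferred genotype
    distributions; higher means a better match."""
    return sum(math.log(probs[snp][genome[snp]]) for snp in probs)

scores = {name: match_score(inferred, g) for name, g in candidates.items()}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
print("best match:", max(scores, key=scores.get))
```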
{"title":"Identifying Personal DNA Methylation Profiles by Genotype Inference","authors":"M. Backes, Pascal Berrang, M. Bieg, R. Eils, C. Herrmann, Mathias Humbert, I. Lehmann","doi":"10.1109/SP.2017.21","DOIUrl":"https://doi.org/10.1109/SP.2017.21","url":null,"abstract":"Since the first whole-genome sequencing, the biomedical research community has made significant steps towards a more precise, predictive and personalized medicine. Genomic data is nowadays widely considered privacy-sensitive and consequently protected by strict regulations and released only after careful consideration. Various additional types of biomedical data, however, are not shielded by any dedicated legal means and consequently disseminated much less thoughtfully. This in particular holds true for DNA methylation data as one of the most important and well-understood epigenetic element influencing human health. In this paper, we show that, in contrast to the aforementioned belief, releasing one's DNA methylation data causes privacy issues akin to releasing one's actual genome. We show that already a small subset of methylation regions influenced by genomic variants are sufficient to infer parts of someone's genome, and to further map this DNA methylation profile to the corresponding genome. Notably, we show that such re-identification is possible with 97.5% accuracy, relying on a dataset of more than 2500 genomes, and that we can reject all wrongly matched genomes using an appropriate statistical test. We provide means for countering this threat by proposing a novel cryptographic scheme for privately classifying tumors that enables a privacy-respecting medical diagnosis in a common clinical setting. The scheme relies on a combination of random forests and homomorphic encryption, and it is proven secure in the honest-but-curious model. We evaluate this scheme on real DNA methylation data, and show that we can keep the computational overhead to acceptable values for our application scenario.","PeriodicalId":6502,"journal":{"name":"2017 IEEE Symposium on Security and Privacy (SP)","volume":"137 1","pages":"957-976"},"PeriodicalIF":0.0,"publicationDate":"2017-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79703781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24
Skyfire: Data-Driven Seed Generation for Fuzzing
Pub Date : 2017-05-22 DOI: 10.1109/SP.2017.23
Junjie Wang, Bihuan Chen, Lei Wei, Yang Liu
Programs that take highly-structured files as inputs normally process inputs in stages: syntax parsing, semantic checking, and application execution. Deep bugs are often hidden in the application execution stage, and it is non-trivial to automatically generate test inputs to trigger them. Mutation-based fuzzing generates test inputs by modifying well-formed seed inputs randomly or heuristically; most such inputs are rejected at the early syntax parsing stage. In contrast, generation-based fuzzing generates inputs from a specification (e.g., a grammar) and can quickly carry the fuzzing beyond the syntax parsing stage. However, most of these inputs fail to pass the semantic checking (e.g., they violate semantic rules), which restricts their capability of discovering deep bugs. In this paper, we propose a novel data-driven seed generation approach, named Skyfire, which leverages the knowledge in a vast amount of existing samples to generate well-distributed seed inputs for fuzzing programs that process highly-structured inputs. Skyfire takes as inputs a corpus and a grammar, and consists of two steps. The first step of Skyfire learns a probabilistic context-sensitive grammar (PCSG) to specify both syntax features and semantic rules, and the second step leverages the learned PCSG to generate seed inputs. We fed the collected samples and the inputs generated by Skyfire as seeds to AFL to fuzz several open-source XSLT and XML engines (i.e., Sablotron, libxslt, and libxml2). The results demonstrate that Skyfire can generate well-distributed inputs and thus significantly improve the code coverage (i.e., 20% for line coverage and 15% for function coverage on average) and the bug-finding capability of fuzzers. We also used the inputs generated by Skyfire to fuzz the closed-source JavaScript and rendering engines of Internet Explorer 11. Altogether, we discovered 19 new memory corruption bugs (including 16 new vulnerabilities, for which we received 33.5k USD in bug bounty rewards) and 32 denial-of-service bugs.
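The generation step can be pictured as sampling from a learned probabilistic grammar. The sketch below samples strings from a tiny hard-coded probabilistic grammar for an XML-like toy language; unlike Skyfire's PCSG, the rule probabilities here are neither learned from a corpus nor conditioned on context, so this only illustrates the sampling mechanics.

```python
import random

# A tiny probabilistic grammar for a toy XML-like language. In Skyfire the
# production probabilities are learned and context-sensitive; here they are
# hard-coded and context-free for brevity.
GRAMMAR = {
    "doc":     [(1.0, ["element"])],
    "element": [(0.35, ["<a ", "attr", ">", "content", "</a>"]),
                (0.35, ["<item ", "attr", ">", "content", "</item>"]),
                (0.30, ["<node/>"])],
    "attr":    [(0.5, ['id="1"']), (0.5, ['class="x"'])],
    "content": [(0.5, ["hello"]), (0.3, ["element"]), (0.2, [""])],
}

def expand(symbol, depth=0, max_depth=6):
    """Sample one expansion of `symbol` according to the rule probabilities."""
    if symbol not in GRAMMAR:
        return symbol                      # terminal symbol
    if depth > max_depth:
        return ""                          # cut off runaway recursion
    rules = GRAMMAR[symbol]
    rhs = random.choices([r for _, r in rules], weights=[w for w, _ in rules])[0]
    return "".join(expand(s, depth + 1, max_depth) for s in rhs)

random.seed(7)
for _ in range(3):
    print(expand("doc"))                   # three sampled seed inputs
```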
{"title":"Skyfire: Data-Driven Seed Generation for Fuzzing","authors":"Junjie Wang, Bihuan Chen, Lei Wei, Yang Liu","doi":"10.1109/SP.2017.23","DOIUrl":"https://doi.org/10.1109/SP.2017.23","url":null,"abstract":"Programs that take highly-structured files as inputs normally process inputs in stages: syntax parsing, semantic checking, and application execution. Deep bugs are often hidden in the application execution stage, and it is non-trivial to automatically generate test inputs to trigger them. Mutation-based fuzzing generates test inputs by modifying well-formed seed inputs randomly or heuristically. Most inputs are rejected at the early syntax parsing stage. Differently, generation-based fuzzing generates inputs from a specification (e.g., grammar). They can quickly carry the fuzzing beyond the syntax parsing stage. However, most inputs fail to pass the semantic checking (e.g., violating semantic rules), which restricts their capability of discovering deep bugs. In this paper, we propose a novel data-driven seed generation approach, named Skyfire, which leverages the knowledge in the vast amount of existing samples to generate well-distributed seed inputs for fuzzing programs that process highly-structured inputs. Skyfire takes as inputs a corpus and a grammar, and consists of two steps. The first step of Skyfire learns a probabilistic context-sensitive grammar (PCSG) to specify both syntax features and semantic rules, and then the second step leverages the learned PCSG to generate seed inputs. We fed the collected samples and the inputs generated by Skyfire as seeds of AFL to fuzz several open-source XSLT and XML engines (i.e., Sablotron, libxslt, and libxml2). The results have demonstrated that Skyfire can generate well-distributed inputs and thus significantly improve the code coverage (i.e., 20% for line coverage and 15% for function coverage on average) and the bug-finding capability of fuzzers. We also used the inputs generated by Skyfire to fuzz the closed-source JavaScript and rendering engine of Internet Explorer 11. Altogether, we discovered 19 new memory corruption bugs (among which there are 16 new vulnerabilities and received 33.5k USD bug bounty rewards) and 32 denial-of-service bugs.","PeriodicalId":6502,"journal":{"name":"2017 IEEE Symposium on Security and Privacy (SP)","volume":"11 Spec No 1","pages":"579-594"},"PeriodicalIF":0.0,"publicationDate":"2017-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77767288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 268
One TPM to Bind Them All: Fixing TPM 2.0 for Provably Secure Anonymous Attestation
Pub Date : 2017-05-22 DOI: 10.1109/SP.2017.22
J. Camenisch, Liqun Chen, Manu Drijvers, Anja Lehmann, David Novick, Rainer Urian
The Trusted Platform Module (TPM) is an international standard for a security chip that can be used for the management of cryptographic keys and for remote attestation. The specification of the most recent TPM 2.0 interfaces for direct anonymous attestation unfortunately has a number of severe shortcomings. First of all, they do not allow for security proofs (indeed, the published proofs are incorrect). Second, they provide a Diffie-Hellman oracle w.r.t. the secret key of the TPM, weakening the security and preventing forward anonymity of attestations. Fixes to these problems have been proposed, but they create new issues: they enable a fraudulent TPM to encode information into an attestation signature, which could be used to break anonymity or to leak the secret key. Furthermore, all proposed ways to remove the Diffie-Hellman oracle either strongly limit the functionality of the TPM or would require significant changes to the TPM 2.0 interfaces. In this paper we provide a better specification of the TPM 2.0 interfaces that addresses these problems and requires only minimal changes to the current TPM 2.0 commands. We then show how to use the revised interfaces to build q-SDH-and LRSW-based anonymous attestation schemes, and prove their security. We finally discuss how to obtain other schemes addressing different use cases such as key-binding for U-Prove and e-cash.
{"title":"One TPM to Bind Them All: Fixing TPM 2.0 for Provably Secure Anonymous Attestation","authors":"J. Camenisch, Liqun Chen, Manu Drijvers, Anja Lehmann, David Novick, Rainer Urian","doi":"10.1109/SP.2017.22","DOIUrl":"https://doi.org/10.1109/SP.2017.22","url":null,"abstract":"The Trusted Platform Module (TPM) is an international standard for a security chip that can be used for the management of cryptographic keys and for remote attestation. The specification of the most recent TPM 2.0 interfaces for direct anonymous attestation unfortunately has a number of severe shortcomings. First of all, they do not allow for security proofs (indeed, the published proofs are incorrect). Second, they provide a Diffie-Hellman oracle w.r.t. the secret key of the TPM, weakening the security and preventing forward anonymity of attestations. Fixes to these problems have been proposed, but they create new issues: they enable a fraudulent TPM to encode information into an attestation signature, which could be used to break anonymity or to leak the secret key. Furthermore, all proposed ways to remove the Diffie-Hellman oracle either strongly limit the functionality of the TPM or would require significant changes to the TPM 2.0 interfaces. In this paper we provide a better specification of the TPM 2.0 interfaces that addresses these problems and requires only minimal changes to the current TPM 2.0 commands. We then show how to use the revised interfaces to build q-SDH-and LRSW-based anonymous attestation schemes, and prove their security. We finally discuss how to obtain other schemes addressing different use cases such as key-binding for U-Prove and e-cash.","PeriodicalId":6502,"journal":{"name":"2017 IEEE Symposium on Security and Privacy (SP)","volume":"24 1","pages":"901-920"},"PeriodicalIF":0.0,"publicationDate":"2017-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81593958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 46