
2019 IEEE Security and Privacy Workshops (SPW): Latest Publications

Title Page iii
Pub Date : 2019-05-01 DOI: 10.1109/spw.2019.00002
{"title":"Title Page iii","authors":"","doi":"10.1109/spw.2019.00002","DOIUrl":"https://doi.org/10.1109/spw.2019.00002","url":null,"abstract":"","PeriodicalId":125351,"journal":{"name":"2019 IEEE Security and Privacy Workshops (SPW)","volume":"802 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132012745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
[Copyright notice]
Pub Date : 2019-05-01 DOI: 10.1109/spw.2019.00003
{"title":"[Copyright notice]","authors":"","doi":"10.1109/spw.2019.00003","DOIUrl":"https://doi.org/10.1109/spw.2019.00003","url":null,"abstract":"","PeriodicalId":125351,"journal":{"name":"2019 IEEE Security and Privacy Workshops (SPW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133432346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DLS 2019 Organization
Pub Date : 2019-05-01 DOI: 10.1109/spw.2019.00007
{"title":"DLS 2019 Organization","authors":"","doi":"10.1109/spw.2019.00007","DOIUrl":"https://doi.org/10.1109/spw.2019.00007","url":null,"abstract":"","PeriodicalId":125351,"journal":{"name":"2019 IEEE Security and Privacy Workshops (SPW)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116646769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Message from the Workshop General Chair
Pub Date : 2019-05-01 DOI: 10.1109/spw.2019.00005
Welcome to the 1996 IEEE Second International Workshop on Systems Management. Since the first Systems Management Workshop in 1993, the world of systems and application management has grown and moved simultaneously in different directions: the growing acceptance of object-oriented technology to model and implement management activities and computing environments; more concern about end-to-end quality of service, especially with the emergence of ATM technology; the further blurring of the boundaries between telecommunications and computer communications; the growing reality of distributed computing environments with personal computers, workstations, servers and mainframes; the growth of the Internet as the means for distributed operations and services.
{"title":"Message from the Workshop General Chair","authors":"","doi":"10.1109/spw.2019.00005","DOIUrl":"https://doi.org/10.1109/spw.2019.00005","url":null,"abstract":"Welcome to the 1996 IEEE Second International Workshop on Systems Management. Since the first Systems Management Workshop in 1993, the world of systems and application management has grown and moved simultaneously in different directions: the growing acceptance of objectoriented technology to model and implement management activities and computing environments; more concern about end-to-end quality of service, especially with the emergence of ATM technology; the further blurring of the boundaries between telecommunications and computer communications; the growing reality of distributed computing environments with personal computers, workstations, servers and mainframes; the growth of the Internet as the means for distributed operations and services.","PeriodicalId":125351,"journal":{"name":"2019 IEEE Security and Privacy Workshops (SPW)","volume":"1994 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125549787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Victim Routine Influences the Number of DDoS Attacks: Evidence from Dutch Educational Network
Pub Date : 2019-05-01 DOI: 10.1109/SPW.2019.00052
Abhishta Abhishta, M. Junger, R. Joosten, L. Nieuwenhuis
We study the influence of daily routines of Dutch academic institutions on the number of DDoS attacks targeting their infrastructures. We hypothesise that the attacks are motivated, and we harness the postulates of Routine Activity Theory (RAT) from criminology to analyse the data. We define routine periods in order to group days with similar activities and use 2.5 years of NetFlow alert data measured by SURFnet to compare the number of alerts generated during each of these periods. Our analysis shows a clear correlation between academic schedules and attack patterns on academic institutions. This leads us to believe that most of these attacks are not random and are initiated by someone who might benefit from disrupting scheduled educational activities.
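The grouping-and-comparison step described above can be illustrated with a minimal sketch. The routine periods, dates, and alert counts below are hypothetical stand-ins for the institutional schedules and SURFnet NetFlow alerts used in the paper.

```python
from collections import defaultdict
from datetime import date

# Hypothetical daily DDoS-alert counts (date -> number of alerts); the real
# study uses 2.5 years of SURFnet NetFlow alert data.
daily_alerts = {
    date(2018, 3, 5): 14,   # lecture period
    date(2018, 3, 6): 11,
    date(2018, 6, 18): 22,  # exam period
    date(2018, 6, 19): 19,
    date(2018, 7, 30): 3,   # summer holiday
    date(2018, 7, 31): 2,
}

def routine_period(d: date) -> str:
    """Toy mapping from a calendar date to an academic routine period.
    The real study derives these periods from institutional schedules."""
    if d.month in (7, 8):
        return "holiday"
    if d.month == 6:
        return "exams"
    return "lectures"

# Group days with similar routines and compare the mean number of alerts.
totals, days = defaultdict(int), defaultdict(int)
for d, n in daily_alerts.items():
    p = routine_period(d)
    totals[p] += n
    days[p] += 1

for p in totals:
    print(f"{p}: {totals[p] / days[p]:.1f} alerts/day on average")
```

With real alert data, the same grouping shows whether exam weeks or holidays see systematically fewer (or more) attacks than regular lecture weeks.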
Citations: 2
Are Self-Driving Cars Secure? Evasion Attacks Against Deep Neural Networks for Steering Angle Prediction
Pub Date : 2019-04-15 DOI: 10.1109/SPW.2019.00033
Alesia Chernikova, Alina Oprea, C. Nita-Rotaru, Baekgyu Kim
Deep Neural Networks (DNNs) have tremendous potential in advancing the vision for self-driving cars. However, the security of DNN models in this context has major safety implications and needs to be better understood. We consider the case study of steering angle prediction from camera images, using the dataset from the 2014 Udacity challenge. We demonstrate for the first time adversarial testing-time attacks for this application in both classification and regression settings. We show that minor modifications to the camera image (an L_2 distance of 0.82 for one of the considered models) result in misclassification of an image into any class of the attacker's choice. Furthermore, our regression attack results in a significant increase in Mean Square Error (MSE) – by a factor of 69 in the worst case.
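A gradient-based test-time attack on a steering-angle regressor can be sketched roughly as below. The tiny network, the random input frame, the target offset, and the L2 budget are all assumptions for illustration; the paper evaluates trained production-style CNNs on the Udacity data.

```python
import torch
import torch.nn as nn

# Toy stand-in for a steering-angle regressor; only an illustrative sketch.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)
model.eval()

image = torch.rand(1, 3, 64, 64)            # hypothetical camera frame
clean_angle = model(image).detach()          # prediction on the unmodified image
target_angle = clean_angle + 0.5             # attacker-chosen steering offset (assumed)

# Iterative L2-bounded evasion: nudge the pixels so the predicted angle moves
# toward the attacker's target while the perturbation stays small.
delta = torch.zeros_like(image, requires_grad=True)
epsilon, step, iters = 1.0, 0.1, 25          # L2 budget and step size (assumed values)
for _ in range(iters):
    pred = model(image + delta)
    loss = ((pred - target_angle) ** 2).mean()
    loss.backward()
    with torch.no_grad():
        delta -= step * delta.grad / (delta.grad.norm() + 1e-12)
        if delta.norm() > epsilon:           # project back onto the L2 ball
            delta *= epsilon / delta.norm()
    delta.grad.zero_()

print("clean angle:", clean_angle.item())
print("adversarial angle:", model(image + delta).item())
print("perturbation L2 norm:", delta.norm().item())
```

The same loop with a cross-entropy loss over steering-angle bins gives the classification variant of the attack.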
Citations: 60
On the Robustness of Deep K-Nearest Neighbors
Pub Date : 2019-03-20 DOI: 10.1109/SPW.2019.00014
Chawin Sitawarin, David A. Wagner
Despite a large amount of attention on adversarial examples, very few works have demonstrated an effective defense against this threat. We examine Deep k-Nearest Neighbor (DkNN), a proposed defense that combines k-Nearest Neighbor (kNN) and deep learning to improve the model's robustness to adversarial examples. It is challenging to evaluate the robustness of this scheme due to the lack of an efficient algorithm for attacking kNN classifiers with large k and high-dimensional data. We propose a heuristic attack that allows us to use gradient descent to find adversarial examples for kNN classifiers, and then apply it to attack the DkNN defense as well. Results suggest that our attack is moderately stronger than any naive attack on kNN and significantly outperforms other attacks on DkNN.
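The underlying idea of a gradient-guided attack on a kNN classifier — take small steps that move the input toward nearby training points of another class — can be sketched as below. The 2-D synthetic data and the mean-distance objective are simplifications for illustration, not the authors' exact heuristic, which operates on deep representations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D training set with two classes (the real setting is kNN /
# DkNN over high-dimensional deep representations).
X0 = rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(50, 2))   # class 0
X1 = rng.normal(loc=[+2.0, 0.0], scale=0.5, size=(50, 2))   # class 1
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

def knn_predict(x, k=5):
    idx = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    return np.bincount(y[idx]).argmax()

x = X0[0].copy()           # a clean class-0 input
target = 1                 # attacker's chosen class

# Heuristic: gradient descent on the mean squared distance to the k nearest
# target-class points; the gradient is 2 * (x - mean of those points).
for _ in range(100):
    tgt_pts = X[y == target]
    nearest = tgt_pts[np.argsort(np.linalg.norm(tgt_pts - x, axis=1))[:5]]
    grad = 2 * (x - nearest.mean(axis=0))
    x = x - 0.05 * grad
    if knn_predict(x) == target:
        break

print("perturbation L2 norm:", np.linalg.norm(x - X0[0]))
print("predicted class:", knn_predict(x))
```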
Citations: 53
Activation Analysis of a Byte-Based Deep Neural Network for Malware Classification
Pub Date : 2019-03-12 DOI: 10.1109/SPW.2019.00017
Scott E. Coull, Christopher Gardner
Feature engineering is one of the most costly aspects of developing effective machine learning models, and that cost is even greater in specialized problem domains, like malware classification, where expert skills are necessary to identify useful features. Recent work, however, has shown that deep learning models can be used to automatically learn feature representations directly from the raw, unstructured bytes of the binaries themselves. In this paper, we explore what these models are learning about malware. To do so, we examine the learned features at multiple levels of resolution, from individual byte embeddings to end-to-end analysis of the model. At each step, we connect these byte-oriented activations to their original semantics through parsing and disassembly of the binary to arrive at human-understandable features. Through our results, we identify several interesting features learned by the model and their connection to manually-derived features typically used by traditional machine learning models. Additionally, we explore the impact of training data volume and regularization on the quality of the learned features and the efficacy of the classifiers, revealing the somewhat paradoxical insight that better generalization does not necessarily result in better performance for byte-based malware classifiers.
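A minimal sketch of how per-byte activations can be extracted from an embed-convolve-pool model is given below. The MalConv-style architecture, the random input bytes, and the inspected layer are assumptions for illustration; relating the most-activating offsets back to headers, imports, or code sections is the manual interpretation step the paper describes.

```python
import torch
import torch.nn as nn

# Small MalConv-like stand-in: embed raw bytes, convolve, pool, classify.
class TinyByteNet(nn.Module):
    def __init__(self, embed_dim=8, n_filters=16, kernel=32):
        super().__init__()
        self.embed = nn.Embedding(257, embed_dim)        # 256 byte values + padding index
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size=kernel, stride=kernel)
        self.fc = nn.Linear(n_filters, 1)

    def forward(self, x):
        e = self.embed(x).transpose(1, 2)                # (batch, embed_dim, length)
        a = torch.relu(self.conv(e))                     # per-region activations
        pooled, _ = a.max(dim=2)                         # global max pool over positions
        return torch.sigmoid(self.fc(pooled)), a

model = TinyByteNet()
model.eval()

# Hypothetical "binary": 4 KiB of random byte values (a real analysis would
# parse and disassemble an actual executable to relate activations to semantics).
raw_bytes = torch.randint(0, 256, (1, 4096))
score, activations = model(raw_bytes)

# For each filter, find the byte offset whose region activates it most strongly.
top_offsets = activations.argmax(dim=2).squeeze(0) * model.conv.stride[0]
print("maliciousness score:", score.item())
print("most-activating byte offsets per filter:", top_offsets.tolist())
```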
Citations: 37
Exploring Adversarial Examples in Malware Detection
Pub Date : 2018-10-18 DOI: 10.1109/SPW.2019.00015
Octavian Suciu, Scott E. Coull, Jeffrey Johns
The convolutional neural network (CNN) architecture is increasingly being applied to new domains, such as malware detection, where it is able to learn malicious behavior from raw bytes extracted from executables. These architectures reach impressive performance with no feature engineering effort involved, but their robustness against active attackers is yet to be understood. Such malware detectors could face a new attack vector in the form of adversarial interference with the classification model. Existing evasion attacks, which are intended to cause misclassification on test-time instances and have been extensively studied for image classifiers, are not applicable because the input semantics prevent arbitrary changes to the binaries. This paper explores the area of adversarial examples for malware detection. By training an existing model on a production-scale dataset, we show that some previous attacks are less effective than initially reported, while simultaneously highlighting architectural weaknesses that facilitate new attack strategies for malware classification. Finally, we explore how generalizable different attack strategies are, the trade-offs when aiming to increase their effectiveness, and the transferability of single-step attacks.
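Because the input semantics forbid arbitrary changes to a working binary, a common family of evasion attacks only appends bytes and searches for a suffix that lowers the malicious score. The sketch below uses a hypothetical score() function as a stand-in for a byte-based classifier and simple hill-climbing rather than the gradient-guided strategies examined in the paper.

```python
import random

random.seed(0)

def score(binary: bytes) -> float:
    """Hypothetical stand-in for a byte-based CNN's maliciousness score in [0, 1].
    A real attack would query the trained malware classifier here."""
    return (sum(binary) % 1000) / 1000.0

malware = bytes(random.randrange(256) for _ in range(1024))   # toy "malware" bytes
budget = 128                                                  # appended bytes allowed

# Hill-climbing append attack: functionality is preserved because the original
# bytes are untouched; only the appended suffix is mutated.
suffix = bytearray(random.randrange(256) for _ in range(budget))
best = score(malware + bytes(suffix))
for _ in range(500):
    candidate = bytearray(suffix)
    candidate[random.randrange(budget)] = random.randrange(256)
    s = score(malware + bytes(candidate))
    if s < best:                     # keep mutations that look more benign
        best, suffix = s, candidate

print("original score:", score(malware), "evasive score:", best)
```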
Citations: 151
Targeted Adversarial Examples for Black Box Audio Systems
Pub Date : 2018-05-20 DOI: 10.1109/SPW.2019.00016
Rohan Taori, Amog Kamsetty, Brenton Chu, N. Vemuri
The application of deep recurrent networks to audio transcription has led to impressive gains in automatic speech recognition (ASR) systems. Many have demonstrated that small adversarial perturbations can fool deep neural networks into incorrectly predicting a specified target with high confidence. Current work on fooling ASR systems has focused on white-box attacks, in which the model architecture and parameters are known. In this paper, we adopt a black-box approach to adversarial generation, combining genetic algorithms and gradient estimation to solve the task. We achieve an 89.25% targeted attack similarity, with a 35% targeted attack success rate, after 3000 generations while maintaining 94.6% audio file similarity.
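The genetic-algorithm half of such a black-box attack can be sketched as follows. The fitness() function is a hypothetical stand-in for the similarity between the model's transcription and the attacker's target phrase, and the population size, mutation scale, and perturbation bound are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(audio: np.ndarray) -> float:
    """Hypothetical stand-in for target-transcription similarity.
    A real attack queries the black-box ASR model and scores how close its
    output is to the attacker's chosen phrase."""
    return -abs(float(audio.mean()) - 0.1)     # toy objective with a known optimum

clean = rng.uniform(-1, 1, size=16000)          # one second of fake 16 kHz audio
pop_size, elite, max_delta, generations = 20, 4, 0.05, 200

# Genetic algorithm over small additive perturbations of the clean waveform.
population = [np.clip(rng.normal(0, 0.01, clean.shape), -max_delta, max_delta)
              for _ in range(pop_size)]
for _ in range(generations):
    scores = np.array([fitness(clean + p) for p in population])
    parents = [population[i] for i in np.argsort(scores)[-elite:]]   # keep the fittest
    children = []
    while len(children) < pop_size:
        a, b = rng.choice(elite, size=2, replace=False)
        mask = rng.random(clean.shape) < 0.5                          # crossover
        child = np.where(mask, parents[a], parents[b])
        child = child + rng.normal(0, 0.001, clean.shape)             # mutation
        children.append(np.clip(child, -max_delta, max_delta))
    population = children

best = max(population, key=lambda p: fitness(clean + p))
print("best fitness:", fitness(clean + best))
```

In the full attack, generations that stall are refined with coordinate-wise gradient estimation obtained from additional black-box queries.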
Citations: 153