Time series adversarial attacks: an investigation of smooth perturbations and defense approaches

International Journal of Data Science and Analytics (IF 3.4, Q2 in Computer Science, Artificial Intelligence) · Pub Date: 2023-10-24 · DOI: 10.1007/s41060-023-00438-0
Gautier Pialla, Hassan Ismail Fawaz, Maxime Devanne, Jonathan Weber, Lhassane Idoumghar, Pierre-Alain Muller, Christoph Bergmeir, Daniel F. Schmidt, Geoffrey I. Webb, Germain Forestier
{"title":"Time series adversarial attacks: an investigation of smooth perturbations and defense approaches","authors":"Gautier Pialla, Hassan Ismail Fawaz, Maxime Devanne, Jonathan Weber, Lhassane Idoumghar, Pierre-Alain Muller, Christoph Bergmeir, Daniel F. Schmidt, Geoffrey I. Webb, Germain Forestier","doi":"10.1007/s41060-023-00438-0","DOIUrl":null,"url":null,"abstract":"Abstract Adversarial attacks represent a threat to every deep neural network. They are particularly effective if they can perturb a given model while remaining undetectable. They have been initially introduced for image classifiers, and are well studied for this task. For time series, few attacks have yet been proposed. Most that have are adaptations of attacks previously proposed for image classifiers. Although these attacks are effective, they generate perturbations containing clearly discernible patterns such as sawtooth and spikes. Adversarial patterns are not perceptible on images, but the attacks proposed to date are readily perceptible in the case of time series. In order to generate stealthier adversarial attacks for time series, we propose a new attack that produces smoother perturbations. We introduced a function to measure the smoothness for time series. Using it, we find that smooth perturbations are harder to detect both visually, by the naked eye and by deep learning models. We also show two ways of protection against adversarial attacks: the first one by detecting the attacks using a deep model; the second one by using adversarial training to improve the robustness of a model against a specific attack, thus making it less vulnerable.","PeriodicalId":45667,"journal":{"name":"International Journal of Data Science and Analytics","volume":"6 4","pages":"0"},"PeriodicalIF":3.4000,"publicationDate":"2023-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Data Science and Analytics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s41060-023-00438-0","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Adversarial attacks represent a threat to every deep neural network. They are particularly effective when they can perturb a given model while remaining undetectable. They were initially introduced for image classifiers and are well studied for that task. For time series, few attacks have been proposed so far, and most are adaptations of attacks originally designed for image classifiers. Although these attacks are effective, they generate perturbations containing clearly discernible patterns such as sawtooth shapes and spikes. Adversarial patterns are imperceptible on images, but the time series attacks proposed to date are readily perceptible. In order to generate stealthier adversarial attacks for time series, we propose a new attack that produces smoother perturbations. We introduce a function to measure the smoothness of time series and, using it, find that smooth perturbations are harder to detect, both visually by the naked eye and by deep learning models. We also show two ways of protecting against adversarial attacks: the first detects attacks using a deep model; the second uses adversarial training to improve a model's robustness against a specific attack, thus making it less vulnerable.
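To make the notion of a smoothness measure more concrete, the following is a minimal Python sketch under stated assumptions: it scores a series by the standard deviation of its first difference (lower means smoother) and compares a spiky perturbation with a low-pass-filtered version of the same noise. The smoothness function, the noise construction, and the filter width are illustrative choices, not the measure or the attack proposed in the paper.

    import numpy as np

    def smoothness(x: np.ndarray) -> float:
        """Toy smoothness score: standard deviation of the first difference
        of the series (lower means smoother). This is an illustrative
        stand-in, not the measure defined in the paper."""
        return float(np.std(np.diff(x)))

    # Build a clean series and two perturbations: one spiky (random sign
    # flips) and a low-pass-filtered version of the same noise.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 200)
    clean = np.sin(t)

    spiky = 0.1 * rng.choice([-1.0, 1.0], size=t.shape)         # sawtooth/spike-like noise
    smooth = np.convolve(spiky, np.ones(15) / 15, mode="same")  # same noise, smoothed

    print("clean series          :", smoothness(clean))
    print("spiky perturbation    :", smoothness(clean + spiky))
    print("smoothed perturbation :", smoothness(clean + smooth))

On this toy data the filtered perturbation should score close to the clean series while the spiky one scores roughly an order of magnitude higher, which is the kind of gap that makes a smoothness score useful both for designing stealthier perturbations and for flagging visibly noisy ones.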
Source journal: International Journal of Data Science and Analytics
CiteScore: 6.40
Self-citation rate: 8.30%
Articles published: 72
Journal overview

Data Science has been established as an important emergent scientific field and paradigm driving research evolution in such disciplines as statistics, computing science and intelligence science, and practical transformation in such domains as science, engineering, the public sector, business, social science, and lifestyle. The field encompasses the larger areas of artificial intelligence, data analytics, machine learning, pattern recognition, natural language understanding, and big data manipulation. It also tackles related new scientific challenges, ranging from data capture, creation, storage, retrieval, sharing, analysis, optimization, and visualization, to integrative analysis across heterogeneous and interdependent complex resources for better decision-making, collaboration, and, ultimately, value creation.

The International Journal of Data Science and Analytics (JDSA) brings together thought leaders, researchers, industry practitioners, and potential users of data science and analytics to develop the field, discuss new trends and opportunities, exchange ideas and practices, and promote transdisciplinary and cross-domain collaborations. The journal is composed of three streams: Regular, to communicate original and reproducible theoretical and experimental findings on data science and analytics; Applications, to report significant data science applications to real-life situations; and Trends, to report expert opinion and comprehensive surveys and reviews of relevant areas and topics in data science and analytics.

Topics of relevance include all aspects of the trends, scientific foundations, techniques, and applications of data science and analytics, with a primary focus on:
statistical and mathematical foundations for data science and analytics;
understanding and analytics of complex data, human, domain, network, organizational, social, behavior, and system characteristics, complexities and intelligences;
creation and extraction, processing, representation and modelling, learning and discovery, fusion and integration, presentation and visualization of complex data, behavior, knowledge and intelligence;
data analytics, pattern recognition, knowledge discovery, machine learning, deep analytics and deep learning, and intelligent processing of various data (including transaction, text, image, video, graph and network), behaviors and systems;
active, real-time, personalized, actionable and automated analytics, learning, computation, optimization, presentation and recommendation;
big data architecture, infrastructure, computing, matching, indexing, query processing, mapping, search, retrieval, interoperability, exchange, and recommendation;
in-memory, distributed, parallel, scalable and high-performance computing, analytics and optimization for big data;
reviews, surveys, trends, prospects and opportunities of data science research, innovation and applications;
data science applications, intelligent devices and services in scientific, business, governmental, cultural, behavioral, social and economic, health and medical, human, natural and artificial (including online/Web, cloud, IoT, mobile and social media) domains; and
ethics, quality, privacy, safety and security, trust, and risk of data science and analytics.
Latest articles in this journal

Power Analysis for Causal Discovery
Discrete double factors of a family of odd Weibull-G distributions: features and modeling
Artificial intelligence trend analysis in German business and politics: a web mining approach
Speech-based detection of multi-class Alzheimer's disease classification using machine learning
Implementation of air pollution traceability method based on IF-GNN-FC model with multiple-source data