
Latest Publications — 2022 International Conference on Machine Learning and Cybernetics (ICMLC)

Infrared Guided White Cane for Assisting the Visually Impaired to Walk Alone
Pub Date: 2022-09-09 DOI: 10.1109/ICMLC56445.2022.9941336
Taisei Hiramoto, Tomoyuki Araki, Takashi Suzuki
This study proposes an indoor navigation system that combines a white cane equipped with an infrared receiver, infrared beacons installed on the ceiling of a facility, and sound and speech output, as an option for assisting visually impaired persons in walking alone. The system requires neither extensive facility modification nor detailed environmental mapping, and it is compact and simple enough to be obtained and used as an everyday tool, like a white cane. The system was tested by a visually impaired person, and its potential as a stand-alone walking aid was demonstrated.
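The abstract gives no implementation details; as a rough illustration only, the guidance logic of such a system might map decoded beacon IDs to spoken cues, as in this sketch (the beacon IDs and guidance messages are invented):

```python
# Minimal sketch of beacon-driven guidance, assuming each ceiling beacon
# broadcasts a fixed ID that the cane's infrared receiver decodes.
# Beacon IDs and guidance messages below are hypothetical examples.

BEACON_GUIDANCE = {
    0x01: "Entrance. Corridor continues straight ahead.",
    0x02: "Junction. Turn right for the elevator.",
    0x03: "Stairs ahead. Handrail on the left.",
}

def guidance_for(beacon_id, last_id=None):
    """Return the cue to speak, suppressing repeats of the same beacon."""
    if beacon_id == last_id:
        return None  # already announced; stay silent
    return BEACON_GUIDANCE.get(beacon_id, "Unknown location.")
```

In a real device the returned string would be fed to a speech synthesizer; the repeat suppression stands in for the debouncing any beacon receiver needs.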
Citations: 1
Development of a New Graphic Description Language for Line Drawings -- Assuming the Use of the Visually Impaired
Pub Date: 2022-09-09 DOI: 10.1109/ICMLC56445.2022.9941294
Hiroto Nakanishi, Noboru Takagi, K. Sawai, H. Masuta, T. Motoyoshi
In recent years, advances in information technology such as electronic books and OCR applications have made it easier for visually impaired people to access textual information. Graphics, however, remain largely inaccessible, and it is very difficult for visually impaired people to create graphics without the help of sighted people. Conventional graphic description languages such as TikZ and SVG are difficult for visually impaired users to write in because they require precise numerical coordinates even for basic shapes, and calculating such coordinates is quite difficult for blind users. To solve this problem, we are developing a graphic description language and a drawing-assistance system that enable visually impaired people to create figures independently. The language is based on an object-oriented design in order to reduce these difficulties. In this paper, we describe the language and report the results of an experiment evaluating its effectiveness.
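The paper's language itself is not reproduced in the abstract. The following is a hypothetical sketch of the underlying idea only: an object-oriented design can express placement relative to other shapes, so the user never supplies numerical coordinates (class and method names are invented):

```python
# Hypothetical sketch of a coordinate-free, object-oriented drawing style:
# shapes are objects, and placement is expressed relative to other shapes,
# so the system, not the user, computes the numeric coordinates.

class Shape:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.x, self.y = 0.0, 0.0  # assigned by placement, not by the user

    def right_of(self, other, gap=10):
        self.x = other.x + other.width + gap
        self.y = other.y
        return self

    def below(self, other, gap=10):
        self.x = other.x
        self.y = other.y + other.height + gap
        return self

square = Shape(40, 40)
circle = Shape(30, 30).right_of(square)  # coordinates derived automatically
label = Shape(60, 15).below(square)
```

A backend would then render the derived coordinates to SVG or TikZ, keeping the coordinate arithmetic out of the user's hands.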
Citations: 0
A Method for Forecasting The Pork Price Based on Fluctuation Forecasting and Attention Mechanism
Pub Date: 2022-09-09 DOI: 10.1109/ICMLC56445.2022.9941318
S. Zhao, Xudong Lin, Xiaojian Weng
With the continuous development of the economy and the improvement of living standards, meat consumption keeps rising, and China has become the largest consumer and producer of pork. The price of pork affects not only residents' quality of life but also, to a certain extent, the development of the pig-farming industry. Effective pork price forecasting contributes to social stability, helping both to secure farmers' income and to balance supply and demand. This paper combines various indicators related to pork prices in the Chinese market and builds XGBoost, SVM and Random Forest models to make preliminary upward/downward (fluctuation) forecasts for the samples. The best fluctuation forecasts are added as features, and an LSTM model augmented with an attention mechanism then forecasts the specific prices. Using weekly price data from the National Bureau of Statistics covering January 2015 to June 2021, the experiment compares the forecasting performance of three fluctuation-forecasting models and eight numerical price-forecasting models. The results show that the proposed Attention-LSTM method, built on the up/down forecasts, outperforms the other methods in pork price forecasting accuracy, achieving the lowest error scores: RMSE = 1.57, MAE = 1.28 and MAPE = 2.83%.
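The abstract does not specify the attention formulation. As a hedged sketch, the attention step commonly placed on top of an LSTM scores each hidden state, softmaxes the scores, and forms a weighted context vector for the final forecast; the hidden states and scoring weights below are fabricated, not taken from the paper:

```python
import math

# Sketch of an attention layer over LSTM hidden states: score each state,
# softmax the scores, and return the attention-weighted context vector
# that a final layer would map to the price forecast.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_context(hidden_states, score_weights):
    # hidden_states: one vector per week; score_weights: scoring vector
    scores = [sum(w * h for w, h in zip(score_weights, hs))
              for hs in hidden_states]
    alphas = softmax(scores)
    dim = len(hidden_states[0])
    return [sum(a * hs[i] for a, hs in zip(alphas, hidden_states))
            for i in range(dim)]

H = [[0.1, 0.3], [0.4, 0.2], [0.9, 0.5]]  # fabricated LSTM outputs
context = attention_context(H, score_weights=[1.0, 0.0])
```

With these weights the attention favors states with a larger first component, so the context vector leans toward the most recent, highest-scoring state rather than a plain average.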
Citations: 0
An Access Control Method with Secret Key for Semantic Segmentation Models
Pub Date: 2022-08-28 DOI: 10.1109/ICMLC56445.2022.9941323
Teru Nagamori, Ryota Iijima, H. Kiya
This paper proposes a novel access-control method that uses a secret key to protect models from unauthorized use. We focus on semantic segmentation models built on the vision transformer (ViT), namely the segmentation transformer (SETR). Most existing access-control methods target image classification tasks or are limited to CNNs. By exploiting ViT's patch-embedding structure, trained models and test images can be efficiently encrypted with a secret key, and semantic segmentation is then carried out in the encrypted domain. Experiments confirm that the method gives authorized users with the correct key the same accuracy as plain, unencrypted images, while unauthorized users obtain severely degraded accuracy.
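The exact transform is not described in the abstract. One keyed operation consistent with exploiting a patch-embedding structure is to permute the sequence of image patches with a key-seeded permutation, which only the correct key undoes; this is a minimal sketch under that assumption, with the model-side counterpart (permuting embedding weights the same way) omitted:

```python
import random

# Sketch of key-based patch encryption: the list of image patches is
# permuted with a permutation derived deterministically from a secret key,
# so only the matching key restores the original patch order.

def key_permutation(n, key):
    order = list(range(n))
    random.Random(key).shuffle(order)  # seeded, hence reproducible
    return order

def encrypt_patches(patches, key):
    order = key_permutation(len(patches), key)
    return [patches[i] for i in order]

def decrypt_patches(encrypted, key):
    order = key_permutation(len(encrypted), key)
    restored = [None] * len(encrypted)
    for pos, i in enumerate(order):
        restored[i] = encrypted[pos]
    return restored

patches = ["p0", "p1", "p2", "p3", "p4", "p5"]
cipher = encrypt_patches(patches, key=1234)
```

Because the permutation acts on whole patches, a transformer whose patch-embedding layer is permuted consistently can, in principle, process the encrypted sequence directly.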
Citations: 0
An Encryption Method of Convmixer Models without Performance Degradation
Pub Date: 2022-07-25 DOI: 10.1109/ICMLC56445.2022.9941283
Ryota Iijima, H. Kiya
In this paper, we propose a secret-key encryption method for ConvMixer models. Encryption methods for DNN models have been studied to achieve adversarial defense, model protection and privacy-preserving image classification, but conventional encryption methods degrade model performance compared with plain models. Accordingly, we propose a novel method for encrypting ConvMixer models. The method builds on ConvMixer's embedding architecture, and models encrypted with it achieve the same performance as models trained on plain images, but only when the test images are encrypted with the correct secret key. In addition, the proposed method requires neither specially prepared training data nor network modification. In experiments, the effectiveness of the method is evaluated in terms of classification accuracy and model protection on an image classification task with the CIFAR-10 dataset.
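As a toy illustration of the claim that an encrypted model matches plain performance only on correctly keyed test images, one can pre-compose a model's input with the inverse of a key-derived permutation; the "model" below is a deliberately trivial stand-in, not the paper's ConvMixer:

```python
import random

# Toy sketch: the encrypted model internally undoes a key-derived pixel
# permutation, so inputs encrypted with the matching key are restored
# before the (stand-in) model sees them, while plain or wrongly keyed
# inputs reach the model scrambled.

def perm(n, key):
    p = list(range(n))
    random.Random(key).shuffle(p)
    return p

def apply_perm(x, p):
    return [x[i] for i in p]

def invert(p):
    q = [0] * len(p)
    for pos, i in enumerate(p):
        q[i] = pos
    return q

def plain_model(x):            # stand-in for a trained model
    return sum(v * (i + 1) for i, v in enumerate(x))

def encrypted_model(x, key):   # model "encrypted" with the secret key
    p = perm(len(x), key)
    return plain_model(apply_perm(x, invert(p)))

x = [3.0, 1.0, 4.0, 1.0, 5.0]
enc_x = apply_perm(x, perm(len(x), 42))  # test image encrypted with key 42
```

Since the inner inverse permutation exactly cancels the keyed input permutation, the encrypted pipeline reproduces the plain model's output with no degradation, which is the property the title refers to.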
Citations: 1
Security Evaluation of Compressible Image Encryption for Privacy-Preserving Image Classification Against Ciphertext-Only Attacks
Pub Date: 2022-07-17 DOI: 10.1109/ICMLC56445.2022.9941309
Tatsuya Chuman, H. Kiya
The security of learnable image encryption schemes for deep-neural-network image classification has been discussed against several attacks. Separately, block-scrambling image encryption for the vision transformer has been proposed; by dividing an image into permuted blocks, it remains compatible with compression methods such as the JPEG standard. Although the robustness of block-scrambling encryption against jigsaw-puzzle-solver attacks, which exploit correlations among blocks, has been evaluated for images with a large number of encrypted blocks, the security of images encrypted with a small number of blocks has never been evaluated. In this paper, the security of block-scrambling image encryption against ciphertext-only attacks is evaluated using jigsaw-puzzle-solver attacks.
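A jigsaw puzzle solver exploits the fact that neighboring blocks of a natural image correlate along their shared edge. This is a minimal sketch of that scoring cue only (not the paper's attack), with toy 1-D edges standing in for the pixel columns at a block boundary:

```python
# Sketch of the block-correlation cue a jigsaw puzzle solver exploits:
# adjacent blocks of a natural image tend to have similar pixel values
# along their shared edge, so a solver scores candidate pairings by edge
# dissimilarity and prefers the lowest-scoring arrangement.

def edge_dissimilarity(right_edge_of_a, left_edge_of_b):
    return sum(abs(a - b) for a, b in zip(right_edge_of_a, left_edge_of_b))

def best_right_neighbor(block_edge, candidates):
    scores = {name: edge_dissimilarity(block_edge, edge)
              for name, edge in candidates.items()}
    return min(scores, key=scores.get)

anchor = [100, 102, 101]                 # right edge of a placed block
candidates = {
    "true_neighbor": [101, 103, 100],    # smooth continuation
    "random_block": [10, 240, 35],       # unrelated content
}
```

With few large blocks each edge is long and this cue is strong, which is why security with a small number of blocks needs separate evaluation.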
Citations: 0
Robustness through Cognitive Dissociation Mitigation in Contrastive Adversarial Training
Pub Date: 2022-03-16 DOI: 10.1109/ICMLC56445.2022.9941337
Adir Rahamim, I. Naeh
In this paper, we introduce a novel neural-network training framework that increases a model's robustness to adversarial attacks while maintaining high clean accuracy by combining contrastive learning (CL) with adversarial training (AT). We propose to improve robustness by learning feature representations that are consistent under both data augmentations and adversarial perturbations. We leverage contrastive learning by treating an adversarial example as another positive example, aiming to maximize the similarity between random augmentations of data samples and their adversarial counterparts, while constantly updating the classification head in order to avoid a cognitive dissociation between the classification head and the embedding space. This dissociation arises because CL updates the network only up to the embedding space while freezing the classification head that is used to generate new positive adversarial examples. We validate our method, Contrastive Learning with Adversarial Features (CLAF), on the CIFAR-10 dataset, where it outperforms alternative supervised and self-supervised adversarial learning methods in both robust and clean accuracy.
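The abstract does not give the loss explicitly. A minimal sketch of an NT-Xent-style contrastive term that treats the adversarial view as the positive might look as follows; the embeddings are fabricated, and the real method additionally trains the encoder and keeps the classification head updated:

```python
import math

# Sketch of a contrastive objective with an adversarial positive: the loss
# pulls together an augmented view and the adversarial view of the same
# sample while pushing away embeddings of other samples.

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, tau=0.5):
    sims = [cos(anchor, positive)] + [cos(anchor, n) for n in negatives]
    exps = [math.exp(s / tau) for s in sims]
    return -math.log(exps[0] / sum(exps))  # low when positive is closest

aug_view = [1.0, 0.0]            # embedding of an augmented view
adv_view = [0.9, 0.1]            # adversarial example treated as positive
others = [[-0.8, 0.6], [0.0, -1.0]]
loss = contrastive_loss(aug_view, adv_view, others)
```

The loss is small here because the adversarial view already sits near its clean counterpart; training drives the encoder toward exactly that configuration.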
Citations: 1
Adversarial Robust Classification by Conditional Generative Model Inversion
Pub Date: 2022-01-12 DOI: 10.1109/ICMLC56445.2022.9941288
Mitra Alirezaei, T. Tasdizen
Most adversarial-attack defense methods rely on obfuscating gradients, and these are easily circumvented by attacks that either do not use the gradient or that approximate and use the corrected gradient. Defenses that do not obfuscate gradients, such as adversarial training, exist, but they generally make assumptions about the attack, such as its magnitude. We propose a classification model that does not obfuscate gradients and is robust by construction against black-box attacks, without assuming prior knowledge about the attack. Our method casts classification as an optimization problem in which we "invert" a conditional generator trained on unperturbed, natural images to find the class that generates the sample closest to the query image. We hypothesize that one source of brittleness against adversarial attacks is the high-to-low-dimensional nature of feed-forward classifiers, whereas a generative model is typically a low-to-high-dimensional mapping. Since the range of images the model can generate for a given class is limited to its learned manifold, the "inversion" process cannot produce images arbitrarily close to adversarial examples, yielding a model that is robust by construction. While the method is related to Defense-GAN, the use of a conditional generative model and inversion in place of a feed-forward classifier is a critical difference. Unlike Defense-GAN, we show that our method does not obfuscate gradients. We demonstrate that our model is extremely robust against black-box attacks and does not depend on prior knowledge of the attack strength.
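As a deliberately trivial illustration of classification by conditional-generator inversion, the toy below uses 1-D affine "generators" with a bounded latent, so inversion has a closed form and the limited generable range mimics a learned manifold; all numbers and class names are invented:

```python
# Toy sketch: each class conditions a generator, the latent code is chosen
# so the generated sample best reconstructs the query, and the predicted
# class is the one with the smallest reconstruction error. The 1-D affine
# generators stand in for a trained conditional GAN.

GENERATORS = {        # class -> (center, spread); g_c(z) = center + spread*z
    "cat": (0.0, 1.0),
    "dog": (10.0, 1.0),
}

def invert(query, center, spread, z_bound=1.0):
    # Closed-form "inversion" for the toy generator: the unconstrained
    # optimum is clipped to the allowed latent range, so outputs far from
    # the class manifold cannot be reconstructed exactly.
    z = max(-z_bound, min(z_bound, (query - center) / spread))
    return center + spread * z

def classify(query):
    errors = {c: abs(query - invert(query, ctr, spr))
              for c, (ctr, spr) in GENERATORS.items()}
    return min(errors, key=errors.get)
```

The bounded latent is the point of the construction: a perturbed query off every class manifold simply incurs a large reconstruction error instead of silently flipping the decision.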
Citations: 0
Understanding and Quantifying Adversarial Examples Existence in Linear Classification
Pub Date: 2019-10-27 DOI: 10.1109/ICMLC56445.2022.9941315
Xupeng Shi, A. Ding
State-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples: a carefully designed small perturbation to the input, imperceptible to humans, can mislead a DNN. To understand the root cause of adversarial examples, we quantify the probability that adversarial examples exist for linear classifiers. Previous mathematical definitions of adversarial examples constrain only the overall perturbation amount; we propose a more practically relevant definition of strong adversarial examples that additionally limits the perturbation along the signal direction. We show that linear classifiers can be made robust to strong adversarial-example attacks in cases where no robust linear classifier exists under the previous definition. The results suggest that designing generally strong-adversarial-robust learning systems is feasible, but only by incorporating human knowledge of the underlying classification problem.
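For a linear classifier f(x) = w·x + b, the smallest L2 perturbation that reaches the decision boundary is delta = -f(x)·w / ||w||^2, with norm |f(x)| / ||w||, so the margin alone determines whether a norm-bounded adversarial example exists. A quick numeric check of this standard fact:

```python
import math

# For a linear classifier f(x) = w.x + b, the closest point on the decision
# boundary lies along w, giving the minimal perturbation
# delta = -f(x) * w / ||w||^2 with L2 norm |f(x)| / ||w||.

def f(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def minimal_perturbation(w, b, x):
    fx = f(w, b, x)
    wn2 = sum(wi * wi for wi in w)
    return [-fx * wi / wn2 for wi in w]

w, b = [3.0, 4.0], -1.0
x = [2.0, 1.0]                       # f(x) = 6 + 4 - 1 = 9, ||w|| = 5
delta = minimal_perturbation(w, b, x)
x_adv = [xi + di for xi, di in zip(x, delta)]  # lands on the boundary
```

The paper's "strong" definition additionally caps the component of delta along the signal direction, which is exactly the direction this minimal delta points in for a linear model.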
Citations: 3