Infrared Guided White Cane for Assisting the Visually Impaired to Walk Alone
Pub Date: 2022-09-09 | DOI: 10.1109/ICMLC56445.2022.9941336
Taisei Hiramoto, Tomoyuki Araki, Takashi Suzuki
This study proposes an indoor navigation system for assisting visually impaired persons to walk alone, combining a white cane fitted with an infrared beacon, infrared receivers installed on the ceiling of a facility, and sound and speech guidance. The system does not require extensive facility modifications or detailed environmental mapping, and it is compact and simple enough to be obtained and used as an everyday tool, like the white cane itself. The system was tested by a visually impaired person, and its potential as a stand-alone walking aid was demonstrated.
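The abstract does not spell out the beacon protocol, but the announcement logic such a system needs is simple. Below is a minimal sketch, assuming each ceiling receiver knows its own location and triggers a spoken message when it detects the cane's beacon; all receiver IDs and messages are hypothetical.

```python
# A plausible minimal sketch of the guidance logic (the abstract does not
# specify the protocol; receiver IDs and messages here are hypothetical).

CEILING_RECEIVERS = {
    "recv-entrance": "You are at the entrance. Reception is ahead on the right.",
    "recv-junction": "Corridor junction. The elevator is to your left.",
    "recv-stairs": "Stairs ahead. The handrail is on the right.",
}

def speak(text: str) -> None:
    # A deployed system would use a TTS engine or recorded audio;
    # print() stands in for audio output in this sketch.
    print(f"[speech] {text}")

def on_cane_detected(receiver_id: str) -> None:
    """Called when a ceiling receiver picks up the cane's infrared beacon."""
    message = CEILING_RECEIVERS.get(receiver_id)
    if message:
        speak(message)  # guide the user with sound/speech

if __name__ == "__main__":
    on_cane_detected("recv-junction")
```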
{"title":"Infrared Guided White Cane for Assisting the Visually Impaired to Walk Alone","authors":"Taisei Hiramoto, Tomoyuki Araki, Takashi Suzuki","doi":"10.1109/ICMLC56445.2022.9941336","DOIUrl":"https://doi.org/10.1109/ICMLC56445.2022.9941336","url":null,"abstract":"This study proposes an indoor navigation system using a white cane equipped with an infrared beacon and receiver installed on the ceiling of a facility, and sound and speech as an option for assisting visually impaired persons to walk alone. This support does not require extensive facility modifications or detailed environmental mapping, and is compact and simple enough to be used and obtained as a tool, like a white cane. This support was verified by a visually impaired person, and its potential to be used as a stand-alone walking support was demonstrated.","PeriodicalId":117829,"journal":{"name":"2022 International Conference on Machine Learning and Cybernetics (ICMLC)","volume":"367 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126032660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of a New Graphic Description Language for Line Drawings -- Assuming the Use of the Visually Impaired
Pub Date: 2022-09-09 | DOI: 10.1109/ICMLC56445.2022.9941294
Hiroto Nakanishi, Noboru Takagi, K. Sawai, H. Masuta, T. Motoyoshi
In recent years, the development of information technology, such as electronic books and OCR applications, has made it easier for visually impaired people to access textual information. However, graphics remain largely inaccessible to them, so it is very difficult for visually impaired people to create graphics without the help of sighted people. Conventional graphic description languages such as TikZ and SVG are difficult for the visually impaired to write because drawing even basic shapes requires precise numerical coordinates, and calculating such coordinates is quite difficult for blind users. To solve this problem, we are developing a graphic description language and a drawing assistance system that enable visually impaired people to create figures independently. Our language is based on an object-oriented design in order to reduce these difficulties. In this paper, we describe the language and report the results of an experiment evaluating its effectiveness.
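The abstract does not show the language's syntax, so the following Python sketch only illustrates the underlying object-oriented, coordinate-free idea: the user states relations between shapes, and the system derives the numerical coordinates.

```python
# A hypothetical illustration of the coordinate-free, object-oriented idea
# (the paper's actual language syntax is not given in the abstract).

class Shape:
    def __init__(self, width: float, height: float):
        self.width, self.height = width, height
        self.x, self.y = 0.0, 0.0  # resolved by the system, never typed by the user

    def right_of(self, other: "Shape", gap: float = 10.0) -> "Shape":
        # Coordinates are derived from relations, so a blind user never
        # has to calculate numeric positions.
        self.x = other.x + other.width + gap
        self.y = other.y
        return self

class Rectangle(Shape):
    pass

class Circle(Shape):
    def __init__(self, radius: float):
        super().__init__(2 * radius, 2 * radius)

# "A circle to the right of a rectangle" -- no explicit coordinates written.
box = Rectangle(40, 20)
dot = Circle(10).right_of(box)
print(f"circle resolved to x={dot.x}, y={dot.y}")  # x=50.0, y=0.0
```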
{"title":"Development of a New Graphic Description Language for Line Drawings -- Assuming the Use of the Visually Impaired","authors":"Hiroto Nakanishi, Noboru Takagi, K. Sawai, H. Masuta, T. Motoyoshi","doi":"10.1109/ICMLC56445.2022.9941294","DOIUrl":"https://doi.org/10.1109/ICMLC56445.2022.9941294","url":null,"abstract":"In recent years, the development of information technology has made it easier for visually impaired people to access language information by developing electronic books and OCR applications. However, graphics are still inaccessible to the visually impaired. Therefore, it is very difficult for the visually impaired to create graphics without help of sighted people. Conventional graphic description languages such as TikZ and SVG and so on are difficult for the visually impaired to write codes because they require numerical coordinates precisely when drawing basic shapes; hence calculating such numerical coordinates is quite difficult for blind users. To solve this problem, we are developing a graphic description language and a drawing assistance system that enables visually impaired people to create figures independently. Our language is based on an object-oriented design in order to reduce the difficulties on the visually impaired. In this paper, we describe our language and show the result of an experiment for evaluating the effectiveness of our language.","PeriodicalId":117829,"journal":{"name":"2022 International Conference on Machine Learning and Cybernetics (ICMLC)","volume":"05 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129583621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Method for Forecasting The Pork Price Based on Fluctuation Forecasting and Attention Mechanism
Pub Date: 2022-09-09 | DOI: 10.1109/ICMLC56445.2022.9941318
S. Zhao, Xudong Lin, Xiaojian Weng
With the continuous development of the economy and the improvement of living standards, meat consumption has risen steadily, and China has become the world's largest pork consumer and producer. The price of pork affects not only residents' quality of life but also, to a certain extent, the development of the pig-farming industry. Effective pork price forecasting therefore contributes to social stability, helping to secure farmers' income and to balance supply and demand. This paper synthesizes various indicators related to pork prices in the Chinese pork market and builds XGBoost, SVM, and Random Forest models to make preliminary upward/downward (direction) forecasts for the samples. The best direction forecasts are added as a price-forecasting feature, and an LSTM model augmented with an attention mechanism then forecasts the specific prices. Using weekly price data from January 2015 to June 2021 from the National Bureau of Statistics, the experiment compares three direction-forecasting models and eight numerical price-forecasting models. The results show that the attention-LSTM method based on direction forecasts achieves the highest pork price forecasting accuracy, with the lowest RMSE (1.57), MAE (1.28), and MAPE (2.83%) among the compared methods.
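A compact sketch of the described two-stage pipeline on synthetic data is shown below; the window length, model sizes, and training settings are illustrative assumptions, not the paper's configuration.

```python
# Two-stage sketch: (1) a classifier predicts the up/down direction,
# (2) the predicted direction is appended as a feature for an attention-LSTM
# that regresses the actual price. Data here is synthetic.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 400)) + 50  # synthetic weekly prices
window = 8

# Stage 1: classify next week's direction from the recent price window.
X = np.stack([prices[i:i + window] for i in range(len(prices) - window - 1)])
y_dir = (prices[window + 1:] > prices[window:-1]).astype(int)
direction_clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y_dir)
dir_feature = direction_clf.predict(X)  # becomes an extra input feature

# Stage 2: attention-LSTM forecasts the specific price.
class AttentionLSTM(nn.Module):
    def __init__(self, in_dim=2, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)   # attention scores over timesteps
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):
        h, _ = self.lstm(x)                       # (B, T, H)
        w = torch.softmax(self.score(h), dim=1)   # (B, T, 1)
        context = (w * h).sum(dim=1)              # attention-weighted summary
        return self.out(context).squeeze(-1)

seq = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)   # (N, T, 1)
dirs = torch.tensor(dir_feature, dtype=torch.float32)
dirs = dirs.view(-1, 1, 1).expand(-1, window, 1)           # direction feature
inputs = torch.cat([seq, dirs], dim=-1)                    # (N, T, 2)
target = torch.tensor(prices[window + 1:], dtype=torch.float32)

model = AttentionLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), target)
    loss.backward()
    opt.step()
print(f"train RMSE: {loss.sqrt().item():.2f}")
```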
{"title":"A Method for Forecasting The Pork Price Based on Fluctuation Forecasting and Attention Mechanism","authors":"S. Zhao, Xudong Lin, Xiaojian Weng","doi":"10.1109/ICMLC56445.2022.9941318","DOIUrl":"https://doi.org/10.1109/ICMLC56445.2022.9941318","url":null,"abstract":"With the continuous development of the economy and improvement of people’s living standards, people’s consumption of meat is getting higher and higher, and China has become the largest pork consumer and producer. The price of pork affects not only the quality of life of the residents but also the development of the pig farming industry to a certain extent. Effective pork price forecasting contributes to social stability and unity, not only to ensure farmers’ income, but also to ensure the relation between supply and demand. This paper synthesizes various indicators related to pork prices in the Chinese pork market, and respectively establishes XGboost, SVM and Random Forest models to make preliminary upward and downward forecasts for the samples. The best forecasting results are used to add price forecasting features, and then the LSTM model optimized by the attention mechanism is used to forecast specific prices. The weekly price data of 201501-202106 from the National Bureau of Statistics used in the experiment compared the forecasting effects of three kinds of price increase and decrease forecasting models and eight kinds of numerical price forecasting models. The results show that the Attention-LSTM method of forecasting pork prices based on up and down forecasts is superior to other methods in pork price forecasting accuracy. RMSE = 1.57, MAE = 1.28, MAPE = 2.83%, all belong to a minimum.","PeriodicalId":117829,"journal":{"name":"2022 International Conference on Machine Learning and Cybernetics (ICMLC)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123939481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Access Control Method with Secret Key for Semantic Segmentation Models
Pub Date: 2022-08-28 | DOI: 10.1109/ICMLC56445.2022.9941323
Teru Nagamori, Ryota Iijima, H. Kiya
In this paper, we propose a novel access control method with a secret key to protect models from unauthorized use. We focus on semantic segmentation models built on the vision transformer (ViT), namely the segmentation transformer (SETR). Most existing access control methods target image classification tasks or are limited to CNNs. By exploiting ViT's patch embedding structure, trained models and test images can be efficiently encrypted with a secret key, and semantic segmentation is then carried out in the encrypted domain. Experiments confirm that the method provides authorized users holding the correct key with the same accuracy as on plain images without any encryption, while providing severely degraded accuracy to unauthorized users.
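The abstract leaves the transformation unspecified, but work in this line typically uses a keyed pixel permutation applied identically inside every patch, so that applying the same permutation to the patch-embedding weights cancels it out. A minimal sketch of the image-side encryption under that assumption:

```python
# Keyed, patch-aligned pixel shuffling: the same secret permutation is
# applied inside every patch-sized block (an assumption based on the general
# line of work; this abstract does not spell out the transformation).
import numpy as np

def encrypt_image(img: np.ndarray, key: int, patch: int = 16) -> np.ndarray:
    """Apply one secret pixel permutation inside every patch x patch block."""
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0
    perm = np.random.default_rng(key).permutation(patch * patch * c)
    out = img.copy()
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            block = out[i:i + patch, j:j + patch].reshape(-1)
            out[i:i + patch, j:j + patch] = block[perm].reshape(patch, patch, c)
    return out

# Because ViT's patch embedding is a linear map on flattened patches,
# permuting the embedding weights with the same key makes the encrypted
# model agree with the plain model on correctly encrypted inputs.
img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
enc_ok = encrypt_image(img, key=42)
enc_bad = encrypt_image(img, key=7)   # wrong key => mismatched permutation
print((enc_ok == enc_bad).mean())     # pixels agree only by chance
```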
{"title":"An Access Control Method with Secret Key for Semantic Segmentation Models","authors":"Teru Nagamori, Ryota Iijima, H. Kiya","doi":"10.1109/ICMLC56445.2022.9941323","DOIUrl":"https://doi.org/10.1109/ICMLC56445.2022.9941323","url":null,"abstract":"A novel method for access control with a secret key is proposed to protect models from unauthorized access in this paper. We focus on semantic segmentation models with the vision transformer (ViT), called segmentation transformer (SETR). Most existing access control methods focus on image classification tasks, or they are limited to CNNs. By using a patch embedding structure that ViT has, trained models and test images can be efficiently encrypted with a secret key, and then semantic segmentation tasks are carried out in the encrypted domain. In an experiment, the method is confirmed to provide the same accuracy as that of using plain images without any encryption to authorized users with a correct key and also to provide an extremely degraded accuracy to unauthorized users.","PeriodicalId":117829,"journal":{"name":"2022 International Conference on Machine Learning and Cybernetics (ICMLC)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121348253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Encryption Method of Convmixer Models without Performance Degradation
Pub Date: 2022-07-25 | DOI: 10.1109/ICMLC56445.2022.9941283
Ryota Iijima, H. Kiya
In this paper, we propose an encryption method for ConvMixer models with a secret key. Encryption methods for DNN models have been studied for adversarial defense, model protection, and privacy-preserving image classification. However, conventional encryption methods degrade model performance compared with plain models. Accordingly, we propose a novel method for encrypting ConvMixer models. The method builds on ConvMixer's patch-embedding architecture, and models encrypted with it achieve the same performance as models trained on plain images, but only when test images are encrypted with the correct secret key. In addition, the proposed method requires no specially prepared training data and no network modification. In an experiment, the effectiveness of the proposed method is evaluated in terms of classification accuracy and model protection on an image classification task with the CIFAR-10 dataset.
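Assuming, as in related key-based protection work, that the encryption is a secret pixel permutation within each patch, the sketch below shows why ConvMixer's patch embedding admits this without performance loss: the patch embedding is a stride-p convolution, so permuting its kernel weights with the same key leaves every patch dot product, and hence the model output, unchanged on correctly encrypted inputs.

```python
# Model/image encryption with a shared keyed permutation (an assumption in
# the spirit of this line of work, not the paper's exact construction).
import torch
import torch.nn as nn

p, dim, key = 7, 64, 42
perm = torch.randperm(3 * p * p, generator=torch.Generator().manual_seed(key))

patch_embed = nn.Conv2d(3, dim, kernel_size=p, stride=p)  # ConvMixer stem

def encrypt_images(x: torch.Tensor) -> torch.Tensor:
    """Permute pixels within each p x p patch (channel-major flattening)."""
    b, c, h, w = x.shape
    u = nn.functional.unfold(x, kernel_size=p, stride=p)   # (B, c*p*p, L)
    return nn.functional.fold(u[:, perm, :], (h, w), kernel_size=p, stride=p)

def encrypt_model(conv: nn.Conv2d) -> nn.Conv2d:
    """Apply the same secret permutation to the patch-embedding weights."""
    enc = nn.Conv2d(3, dim, kernel_size=p, stride=p)
    w = conv.weight.data.reshape(dim, -1)                  # (dim, 3*p*p)
    enc.weight.data = w[:, perm].reshape(dim, 3, p, p)
    enc.bias.data = conv.bias.data.clone()
    return enc

# Permuting both the patch pixels and the kernel weights with the same key
# preserves every dot product, so the outputs match exactly.
x = torch.randn(2, 3, 28, 28)
plain_out = patch_embed(x)
enc_out = encrypt_model(patch_embed)(encrypt_images(x))
print(torch.allclose(plain_out, enc_out, atol=1e-5))       # True
```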
{"title":"An Encryption Method of Convmixer Models without Performance Degradation","authors":"Ryota Iijima, H. Kiya","doi":"10.1109/ICMLC56445.2022.9941283","DOIUrl":"https://doi.org/10.1109/ICMLC56445.2022.9941283","url":null,"abstract":"In this paper, we propose an encryption method for ConvMixer models with a secret key. Encryption methods for DNN models have been studied to achieve adversarial defense, model protection and privacy-preserving image classification. However, the use of conventional encryption methods degrades the performance of models compared with that of plain models. Accordingly, we propose a novel method for encrypting ConvMixer models. The method is carried out on the basis of an embedding architecture that ConvMixer has, and models encrypted with the method can have the same performance as models trained with plain images only when using test images encrypted with a secret key. In addition, the proposed method does not require any specially prepared data for model training or network modification. In an experiment, the effectiveness of the proposed method is evaluated in terms of classification accuracy and model protection in an image classification task on the CIFAR10 dataset.","PeriodicalId":117829,"journal":{"name":"2022 International Conference on Machine Learning and Cybernetics (ICMLC)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126919794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Security Evaluation of Compressible Image Encryption for Privacy-Preserving Image Classification Against Ciphertext-Only Attacks
Pub Date: 2022-07-17 | DOI: 10.1109/ICMLC56445.2022.9941309
Tatsuya Chuman, H. Kiya
The security of learnable image encryption schemes for image classification using deep neural networks has been discussed against several attacks. Separately, block scrambling image encryption for use with the vision transformer has been proposed; it divides an image into permuted blocks and remains compatible with image compression methods such as the JPEG standard. Although the robustness of block scrambling image encryption against jigsaw puzzle solver attacks, which exploit the correlation among blocks, has been evaluated for large numbers of encrypted blocks, the security of encrypted images with a small number of blocks has not been evaluated. In this paper, the security of block scrambling image encryption against ciphertext-only attacks is evaluated using jigsaw puzzle solver attacks.
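The following toy sketch illustrates the inter-block correlation that jigsaw puzzle solvers exploit, and why fewer, larger blocks make recovery easier; it is a simplified 1-D greedy assembly, not the solver used in the paper.

```python
# Toy jigsaw attack: reorder permuted blocks by matching adjacent edges.
import numpy as np

rng = np.random.default_rng(0)
# A smooth synthetic "image": neighboring columns are strongly correlated,
# just as in natural images.
base = np.cumsum(rng.normal(0, 1, (64, 256)), axis=1)

n_blocks = 8
blocks = np.split(base, n_blocks, axis=1)
order = rng.permutation(n_blocks)          # the secret scrambling permutation
scrambled = [blocks[i] for i in order]

def edge_cost(a, b):
    """Dissimilarity between a's right edge and b's left edge."""
    return float(((a[:, -1] - b[:, 0]) ** 2).sum())

def greedy_chain(start):
    """Greedily append the unused block whose left edge best fits the chain."""
    used, chain, cost = {start}, [start], 0.0
    while len(chain) < n_blocks:
        nxt = min((k for k in range(n_blocks) if k not in used),
                  key=lambda k: edge_cost(scrambled[chain[-1]], scrambled[k]))
        cost += edge_cost(scrambled[chain[-1]], scrambled[nxt])
        chain.append(nxt)
        used.add(nxt)
    return cost, chain

# Try every starting block and keep the lowest-cost assembly.
_, best_chain = min(greedy_chain(s) for s in range(n_blocks))
print("original positions of assembled blocks:",
      [int(order[i]) for i in best_chain])  # likely 0, 1, ..., 7
```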
{"title":"Security Evaluation of Compressible Image Encryption for Privacy-Preserving Image Classification Against Ciphertext-Only Attacks","authors":"Tatsuya Chuman, H. Kiya","doi":"10.1109/ICMLC56445.2022.9941309","DOIUrl":"https://doi.org/10.1109/ICMLC56445.2022.9941309","url":null,"abstract":"The security of learnable image encryption schemes for image classification using deep neural networks against several attacks has been discussed. On the other hand, block scrambling image encryption using the vision transformer has been proposed, which applies to lossless compression methods such as JPEG standard by dividing an image into permuted blocks. Although robustness of the block scrambling image encryption against jigsaw puzzle solver attacks that utilize a correlation among the blocks has been evaluated under the condition of a large number of encrypted blocks, the security of encrypted images with a small number of blocks has never been evaluated. In this paper, the security of the block scrambling image encryption against ciphertext-only attacks is evaluated by using jigsaw puzzle solver attacks.","PeriodicalId":117829,"journal":{"name":"2022 International Conference on Machine Learning and Cybernetics (ICMLC)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116752423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robustness through Cognitive Dissociation Mitigation in Contrastive Adversarial Training
Pub Date: 2022-03-16 | DOI: 10.1109/ICMLC56445.2022.9941337
Adir Rahamim, I. Naeh
In this paper, we introduce a novel neural network training framework that increases a model's robustness to adversarial attacks while maintaining high clean accuracy by combining contrastive learning (CL) with adversarial training (AT). We propose to improve robustness by learning feature representations that are consistent under both data augmentations and adversarial perturbations. We leverage contrastive learning by treating an adversarial example as another positive example, aiming to maximize the similarity between random augmentations of data samples and their adversarial examples, while continually updating the classification head in order to avoid a cognitive dissociation between the classification head and the embedding space. This dissociation arises because CL updates the network only up to the embedding space while freezing the classification head, yet the head is used to generate new positive adversarial examples. We validate our method, Contrastive Learning with Adversarial Features (CLAF), on the CIFAR-10 dataset, on which it outperforms alternative supervised and self-supervised adversarial learning methods in both robust and clean accuracy.
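The following schematic sketch shows the core idea of treating a sample's adversarial example as an extra positive in a contrastive loss while the classification head keeps being updated; the encoder, attack, and hyperparameters are simplified stand-ins, not the paper's CLAF training loop.

```python
# Contrastive loss with adversarial positives (schematic; toy encoder and
# a single FGSM step stand in for the paper's augmentations and attack).
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 128))  # toy embed
head = nn.Linear(128, 10)                                           # classifier

def fgsm_positive(x, y, eps=8 / 255):
    """Generate an adversarial view using the *current* head, keeping the
    head coupled to the embedding space (mitigating the dissociation)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(head(encoder(x)), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

def contrastive_loss(z1, z2, tau=0.5):
    """NT-Xent-style loss where (z1[i], z2[i]) are the positive pairs."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau               # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)

x = torch.rand(16, 3, 32, 32)
y = torch.randint(0, 10, (16,))
x_adv = fgsm_positive(x, y)
loss = contrastive_loss(encoder(x), encoder(x_adv)) + \
       F.cross_entropy(head(encoder(x)), y)  # the head keeps training too
loss.backward()
print(f"combined loss: {loss.item():.3f}")
```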
{"title":"Robustness through Cognitive Dissociation Mitigation in Contrastive Adversarial Training","authors":"Adir Rahamim, I. Naeh","doi":"10.1109/ICMLC56445.2022.9941337","DOIUrl":"https://doi.org/10.1109/ICMLC56445.2022.9941337","url":null,"abstract":"In this paper, we introduce a novel neural network training framework that increases model’s adversarial robustness to adversarial attacks while maintaining high clean accuracy by combining contrastive learning (CL) with adversarial training (AT). We propose to improve model robustness to adversarial attacks by learning feature representations that are consistent under both data augmentations and adversarial perturbations. We leverage contrastive learning to improve adversarial robustness by considering an adversarial example els another positive example, and aim to maximize the similarity between random augmentations of data samples and their adversarial example, while constantly updating the classification head in order to avoid a cognitive dissociation between the classification head and the embedding space. This dissociation is caused by the fact that CL updates the network up to the embedding space, while freezing the classification head which is used to generate new positive adversarial examples. We validate our method, Contrastive Learning with Adversarial Features (CLAF), on the CIFAR-10 dataset on which it outperforms both robust accuracy and clean accuracy over alternative supervised and self-supervised adversarial learning methods.","PeriodicalId":117829,"journal":{"name":"2022 International Conference on Machine Learning and Cybernetics (ICMLC)","volume":" 33","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114051312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adversarial Robust Classification by Conditional Generative Model Inversion
Pub Date: 2022-01-12 | DOI: 10.1109/ICMLC56445.2022.9941288
Mitra Alirezaei, T. Tasdizen
Most adversarial attack defense methods rely on obfuscating gradients. These methods are easily circumvented by attacks that either do not use the gradient or that approximate and use the corrected gradient. Defenses that do not obfuscate gradients, such as adversarial training, exist, but these approaches generally make assumptions about the attack, such as its magnitude. We propose a classification model that does not obfuscate gradients and is robust by construction against black-box attacks, without assuming prior knowledge about the attack. Our method casts classification as an optimization problem in which we "invert" a conditional generator trained on unperturbed, natural images to find the class that generates the sample closest to the query image. We hypothesize that a potential source of brittleness against adversarial attacks is the high-to-low-dimensional nature of feed-forward classifiers; a generative model, by contrast, is typically a low-to-high-dimensional mapping. Since the range of images the model can generate for a given class is limited to its learned manifold, the "inversion" process cannot produce images arbitrarily close to adversarial examples, yielding a model that is robust by construction. While the method is related to Defense-GAN, the use of a conditional generative model and inversion in place of the feed-forward classifier is a critical difference. Unlike Defense-GAN, we show that our method does not obfuscate gradients. We demonstrate that our model is highly robust against black-box attacks and does not depend on prior knowledge of the attack strength.
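A schematic sketch of the inversion-based classification rule is given below, using a toy conditional generator; the paper's GAN architecture and optimization settings are not specified here.

```python
# Classification by conditional generator inversion: for each class, find the
# latent code whose generation best reconstructs the query, then pick the
# class with the smallest reconstruction error. (Toy generator; assumptions.)
import torch
import torch.nn as nn

n_classes, z_dim, x_dim = 3, 8, 16

class CondGenerator(nn.Module):
    """Toy conditional generator: maps (z, class embedding) to an 'image'."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(nn.Linear(2 * z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, x_dim))
    def forward(self, z, y):
        return self.net(torch.cat([z, self.embed(y)], dim=1))

def invert(G, x, y, steps=200, lr=0.1):
    """Optimize z so that G(z, y) is as close as possible to query x."""
    z = torch.zeros(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((G(z, y) - x) ** 2).mean()
        loss.backward()
        opt.step()
    return loss.item()

def classify(G, x):
    """Label = class whose learned manifold contains the closest sample."""
    losses = [invert(G, x, torch.tensor([c])) for c in range(n_classes)]
    return int(torch.tensor(losses).argmin())

G = CondGenerator()   # stands in for a generator trained on natural images
x = G(torch.randn(1, z_dim), torch.tensor([2])).detach()  # query from class 2
print("predicted class:", classify(G, x))                  # likely 2
```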
{"title":"Adversarial Robust Classification by Conditional Generative Model Inversion","authors":"Mitra Alirezaei, T. Tasdizen","doi":"10.1109/ICMLC56445.2022.9941288","DOIUrl":"https://doi.org/10.1109/ICMLC56445.2022.9941288","url":null,"abstract":"Most adversarial attack defense methods rely on obfuscating gradients. These methods are easily circumvented by attacks which either do not use the gradient or by attacks which approximate and use the corrected gradient Defenses that do not obfuscate gradients such as adversarial training exist, but these approaches generally make assumptions about the attack such as its magnitude. We propose a classification model that does not obfuscate gradients and is robust by construction against black-box attacks without assuming prior knowledge about the attack. Our method casts classification as an optimization problem where we \"invert\" a conditional generator trained on unperturbed, natural images to find the class that generates the closest sample to the query image. We hypothesize that a potential source of brittleness against adversarial attacks is the high-to-low-dimensional nature of feed-forward classifiers. On the other hand, a generative model is typically a low-to-high-dimensional mapping. Since the range of images that can be generated by the model for a given class is limited to its learned manifold, the \"inversion\" process cannot generate images that are arbitrarily close to adversarial examples leading to a robust model by construction. While the method is related to Defense-GAN, the use of a conditional generative model and inversion in our model instead of the feed-forward classifier is a critical difference. Unlike Defense-GAN, we show that our method does not obfuscate gradients. We demonstrate that our model is extremely robust against black-box attacks and does not depend on previous knowledge about the attack strength.","PeriodicalId":117829,"journal":{"name":"2022 International Conference on Machine Learning and Cybernetics (ICMLC)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131970268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding and Quantifying Adversarial Examples Existence in Linear Classification
Pub Date: 2019-10-27 | DOI: 10.1109/ICMLC56445.2022.9941315
Xupeng Shi, A. Ding
State-of-the-art deep neural networks (DNNs) are vulnerable to attacks by adversarial examples: a carefully designed small perturbation of the input, imperceptible to humans, can mislead a DNN. To understand the root cause of adversarial examples, we quantify the probability that an adversarial example exists for linear classifiers. Previous mathematical definitions of adversarial examples involve only the overall perturbation amount; we propose a more practically relevant definition of strong adversarial examples that also separately limits the perturbation along the signal direction. We show that linear classifiers can be made robust to strong adversarial example attacks in cases where no adversarially robust linear classifier exists under the previous definition. The results suggest that designing generally strong-adversarial-robust learning systems is feasible, but only by incorporating human knowledge of the underlying classification problem.
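For intuition, the standard margin computation that underlies this kind of existence analysis for a linear classifier is given below; the paper's "strong" definition additionally bounds the component of the perturbation along the signal direction, which is not captured by this classical bound.

```latex
% Minimal L2 margin computation for a linear classifier f(x) = sign(w^T x + b):
% the smallest prediction-flipping perturbation lies along w.
\[
  \delta^{*} = -\,\frac{(w^{\top} x + b)\, w}{\lVert w \rVert_{2}^{2}},
  \qquad
  \lVert \delta^{*} \rVert_{2} = \frac{\lvert w^{\top} x + b \rvert}{\lVert w \rVert_{2}},
\]
% so, under the perturbation-amount-only definition, an adversarial example
% within budget \varepsilon exists exactly when
% \lvert w^{\top} x + b \rvert / \lVert w \rVert_{2} \le \varepsilon.
```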
{"title":"Understanding and Quantifying Adversarial Examples Existence in Linear Classification","authors":"Xupeng Shi, A. Ding","doi":"10.1109/ICMLC56445.2022.9941315","DOIUrl":"https://doi.org/10.1109/ICMLC56445.2022.9941315","url":null,"abstract":"State-of-art deep neural networks (DNN) are vulnerable to attacks by adversarial examples: a carefully designed small perturbation to the input, that is imperceptible to human, can mislead DNN. To understand the root cause of adversarial examples, we quantify the probability of adversarial example existence for linear classifiers. Previous mathematical definition of adversarial examples only involves the overall perturbation amount, and we propose a more practical relevant definition of strong adversarial examples that separately limits the perturbation along the signal direction also. We show that linear classifiers can be made robust to strong adversarial examples attack in cases where no adversarial robust linear classifiers exist under the previous definition. The results suggest that designing general strong-adversarial-robust learning systems is feasible but only through incorporating human knowledge of the underlying classification problem.","PeriodicalId":117829,"journal":{"name":"2022 International Conference on Machine Learning and Cybernetics (ICMLC)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129079129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}