1D CNN Based Human Respiration Pattern Recognition using Ultra Wideband Radar
Seong-Hoon Kim, Gi-Tae Han
Pub Date: 2019-02-01 | DOI: 10.1109/ICAIIC.2019.8669000
A person's respiration status is one of the vital signs that can be used to check their health condition. Respiration has been measured in various ways in the medical and healthcare sectors. Conventionally, contact-type sensors were used to measure respiration; these have been used primarily in the medical sector because they can only be operated in limited environments. Recent studies have investigated detecting human respiration patterns using Ultra-Wideband (UWB) radar, a non-contact sensing approach. Previous studies evaluated the apnea pattern during sleep by analyzing respiration signals acquired by UWB radar with principal component analysis (PCA). However, various respiration patterns beyond apnea must be measured to accurately analyze an individual's health condition in the healthcare sector. Therefore, this study proposes a method for recognizing four respiration patterns from UWB radar signals using a 1D convolutional neural network (CNN). The proposed method extracts the eupnea, bradypnea, tachypnea, and apnea respiration patterns from UWB radar signals and composes a training dataset. The data were learned with a 1D CNN and the recognition accuracy was measured. The results revealed that the accuracy of the proposed method was up to 15% higher than that of conventional classification algorithms (i.e., PCA and Support Vector Machine (SVM)).
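A minimal sketch of a 1D-CNN classifier of the kind the abstract describes; the layer counts, kernel sizes, and 512-sample window length are illustrative assumptions, not values from the paper:

```python
import torch
import torch.nn as nn

class Respiration1DCNN(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Two Conv1d blocks over a single-channel respiration waveform.
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),     # length-independent pooling
            nn.Flatten(),
            nn.Linear(32, num_classes),  # eupnea/bradypnea/tachypnea/apnea
        )

    def forward(self, x):  # x: (batch, 1, samples)
        return self.classifier(self.features(x))

model = Respiration1DCNN()
logits = model(torch.randn(8, 1, 512))  # e.g., 512-sample radar windows
print(logits.shape)                     # torch.Size([8, 4])
```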
{"title":"1D CNN Based Human Respiration Pattern Recognition using Ultra Wideband Radar","authors":"Seong-Hoon Kim, Gi-Tae Han","doi":"10.1109/ICAIIC.2019.8669000","DOIUrl":"https://doi.org/10.1109/ICAIIC.2019.8669000","url":null,"abstract":"The respiration status of a person is one of the vital signs that can be used to check the health condition of the person. The respiration status has been measured in various ways in the medical and healthcare sectors. Contact type sensors were conventionally used to measure respiration. The contact type sensors have been used primarily in the medical sector, because they can be only used in a limited environment. Recent studies have evaluated the ways of detecting human respiration patterns using Ultra-Wideband (UWB) Radar, which relies on non-contact type sensors. Previous studies evaluated the apnea pattern during sleep by analyzing the respiration signals acquired by UWB Radar using a principal component analysis (PCA). However, it is necessary to measure various respiration patterns in addition to apnea in order to accurately analyze the health condition of an individual in the healthcare sector. Therefore, this study proposed a method to recognize four respiration patterns based on the 1D convolutional neural network from the respiration signals acquired from UWB Radar. The proposed method extracts the eupnea, bradypnea, tachypnea, and apnea respiration patterns from UWB Radar and composes a learning dataset. The proposed method learned data through 1D CNN and the recognition accuracy was measured. The results of this study revealed that the accuracy of the proposed method was up to 15% higher than that of the conventional classification algorithms (i.e., PCA and Support Vector Machine (SVM)).","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"193 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123001595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Learning Algorithm using Virtual Environment Data for Self-driving Car
Juntae Kim, G. Lim, Youngi Kim, Bokyeong Kim, Changseok Bae
Pub Date: 2019-02-01 | DOI: 10.1109/ICAIIC.2019.8669037
Recent progress in artificial intelligence research has enabled many attempts to implement self-driving cars. In the real world, however, acquiring training data for self-driving AI algorithms involves considerable risk and cost. This paper proposes an algorithm that collects training data from a driving game whose environment closely resembles the real world. In the data collection scheme, the proposed algorithm gathers both the driving game's screen images and the corresponding control key values. We use the data collected from the virtual game environment to train a deep neural network. Experimental results from applying the virtual driving game data to drive a real-world children's car show the effectiveness of the proposed algorithm.
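A hedged sketch of the screen-plus-keys collection loop the abstract outlines; grab_screen and read_pressed_keys are hypothetical stubs standing in for platform-specific capture code (e.g., mss for frames, pynput for keys), and the key set and sampling rate are assumptions:

```python
import time
import numpy as np

KEYS = ["w", "a", "s", "d"]  # assumed control set; the paper does not list keys

def grab_screen() -> np.ndarray:
    # Hypothetical stub: replace with a real capture call (e.g., mss).
    return np.zeros((66, 200, 3), dtype=np.uint8)

def read_pressed_keys() -> set:
    # Hypothetical stub: replace with a real key listener (e.g., pynput).
    return set()

def collect(n_frames: int, out_path: str = "dataset.npz") -> None:
    """Record paired (game frame, one-hot control keys) samples."""
    frames, labels = [], []
    for _ in range(n_frames):
        img = grab_screen()                # HxWx3 uint8 game frame
        pressed = read_pressed_keys()      # e.g., {"w", "a"}
        labels.append(np.array([k in pressed for k in KEYS], dtype=np.float32))
        frames.append(img)
        time.sleep(0.05)                   # ~20 fps sampling, assumed
    np.savez_compressed(out_path, x=np.stack(frames), y=np.stack(labels))
```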
{"title":"Deep Learning Algorithm using Virtual Environment Data for Self-driving Car","authors":"Juntae Kim, G. Lim, Youngi Kim, Bokyeong Kim, Changseok Bae","doi":"10.1109/ICAIIC.2019.8669037","DOIUrl":"https://doi.org/10.1109/ICAIIC.2019.8669037","url":null,"abstract":"Recent outstanding progresses in artificial intelligence researches enable many tries to implement self-driving cars. However, in real world, there are a lot of risks and cost problems to acquire training data for self-driving artificial intelligence algorithms. This paper proposes an algorithm to collect training data from a driving game, which has quite similar environment to the real world. In the data collection scheme, the proposed algorithm gathers both driving game screen image and control key value. We employ the collected data from virtual game environment to learn a deep neural network. Experimental result for applying the virtual driving game data to drive real world children’s car show the effectiveness of the proposed algorithm.","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134043081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fused Convolutional Neural Network for White Blood Cell Image Classification
Partha Pratim Banik, Rappy Saha, Ki-Doo Kim
Pub Date: 2019-02-01 | DOI: 10.1109/ICAIIC.2019.8669049
Blood cell image classification is an important part of medical diagnosis systems. In this paper, we propose a fused convolutional neural network (CNN) model to classify white blood cell (WBC) images. We use five convolutional layers, three max-pooling layers, and a fully connected network with a single hidden layer. We fuse the feature maps of two convolutional layers using max-pooling and feed the result to the fully connected layer. We compare our model's accuracy and computation time with those of a combined CNN-recurrent neural network (RNN) model, and also show that our model trains faster than the CNN-RNN model.
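One plausible reading of the described fusion, sketched below: the feature maps of two convolutional layers are max-pooled to a common grid, concatenated, and fed to a single-hidden-layer classifier. The channel counts, class count, and 4x4 pooling grid are assumptions:

```python
import torch
import torch.nn as nn

class FusedWBCNet(nn.Module):
    """Sketch: five conv layers, three max-pools, and a max-pooled fusion
    of two conv feature maps feeding a single-hidden-layer classifier."""
    def __init__(self, num_classes: int = 4):  # 4 common WBC types, assumed
        super().__init__()
        self.block1 = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.block2 = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Max-pool both feature maps to a common 4x4 grid before fusing.
        self.pool1 = nn.AdaptiveMaxPool2d(4)
        self.pool2 = nn.AdaptiveMaxPool2d(4)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear((64 + 64) * 4 * 4, 256), nn.ReLU(),  # one hidden layer
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        f1 = self.block1(x)   # earlier feature map
        f2 = self.block2(f1)  # later feature map
        fused = torch.cat([self.pool1(f1), self.pool2(f2)], dim=1)
        return self.classifier(fused)
```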
{"title":"Fused Convolutional Neural Network for White Blood Cell Image Classification","authors":"Partha Pratim Banik, Rappy Saha, Ki-Doo Kim","doi":"10.1109/ICAIIC.2019.8669049","DOIUrl":"https://doi.org/10.1109/ICAIIC.2019.8669049","url":null,"abstract":"Blood cell image classification is an important part for medical diagnosis system. In this paper, we propose a fused convolutional neural network (CNN) model to classify the images of white blood cell (WBC). We use five convolutional layer, three max-pooling layer and a fully connected network with single hidden layer. We fuse the feature maps of two convolutional layers by using the operation of max-pooling to give input to the fully connected neural network layer. We compare the result of our model accuracy and computational time with CNN-recurrent neural network (RNN) combined model. We also show that our model trains faster than CNN-RNN model.","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"207 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121456784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Guidewire Tip Tracking using U-Net with Shape and Motion Constraints
I. Ullah, Philip Chikontwe, Sang Hyun Park
Pub Date: 2019-02-01 | DOI: 10.1109/ICAIIC.2019.8669088
In recent years, research has explored using a micro-robot catheter in place of classic catheter-based cardiac surgery. Accurately controlling the micro-robot catheter requires accurate and reliable tracking of the guidewire tip. In this paper, we propose a deep convolutional neural network (CNN) based method to track the guidewire tip. To extract the very small tip region from large images in video sequences, we first segment small tip candidates using a segmentation CNN architecture, and then select the best candidate using shape and motion constraints. This segmentation-based tracking strategy makes the tracking process robust. Tracking of the guidewire tip in video sequences runs fully automatically in real time, i.e., 71 ms per image. Under two-fold cross-validation, the proposed method achieves an average Dice score of 88.07% and an IoU score of 85.07%.
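The candidate-selection step lends itself to a short sketch: given the segmentation mask, keep components whose area is plausible for a tip (shape constraint) and pick the one closest to the previous tip position (motion constraint). All thresholds below are illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy import ndimage

def select_tip(mask: np.ndarray, prev_xy, min_area=5, max_area=200,
               max_jump=40.0):
    """Pick the best tip candidate from a binary segmentation mask using
    shape (area bounds) and motion (distance to previous tip) constraints."""
    labeled, n = ndimage.label(mask)
    best, best_dist = None, np.inf
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labeled == i)
        area = len(xs)
        if not (min_area <= area <= max_area):     # shape constraint
            continue
        cx, cy = xs.mean(), ys.mean()
        dist = np.hypot(cx - prev_xy[0], cy - prev_xy[1])
        if dist <= max_jump and dist < best_dist:  # motion constraint
            best, best_dist = (cx, cy), dist
    return best  # None if no candidate satisfies both constraints
```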
{"title":"Guidewire Tip Tracking using U-Net with Shape and Motion Constraints","authors":"I. Ullah, Philip Chikontwe, Sang Hyun Park","doi":"10.1109/ICAIIC.2019.8669088","DOIUrl":"https://doi.org/10.1109/ICAIIC.2019.8669088","url":null,"abstract":"In recent years, research has been carried out using a micro-robot catheter instead of classic cardiac surgery performed using a catheter. To accurately control the micro-robot catheter, accurate and decisive tracking of the guidewire tip is required. In this paper, we propose a method based on the deep convolutional neural network (CNN) to track the guidewire tip. To extract a very small tip region from a large image in video sequences, we first segment small tip candidates using a segmentation CNN architecture, and then extract the best candidate using shape and motion constraints. The segmentation-based tracking strategy makes the tracking process robust and sturdy. The tracking of the guidewire tip in video sequences is performed fully-automated in real-time, i.e., 71 ms per image. For two-fold cross-validation, the proposed method achieves the average Dice score of 88.07% and IoU score of 85.07%.","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114236514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CNN Training for Face Photo based Gender and Age Group Prediction with Camera
Kyoungson Jhang, Junsoo Cho
Pub Date: 2019-02-01 | DOI: 10.1109/ICAIIC.2019.8669039
CNNs for camera-based age and gender prediction are usually trained with RGB color images. However, a CNN trained with RGB color images does not always produce good results when testing is performed with a live camera rather than with image files. Through experiments, we observe that, in camera-based testing, a CNN trained with grayscale images shows better gender and age-group prediction accuracy than a CNN trained with RGB color images.
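The grayscale training setup can be reproduced with a one-line preprocessing change; a sketch using torchvision, where the three-channel replication is a common compatibility trick assumed here rather than taken from the paper:

```python
import torchvision.transforms as T

# Grayscale training pipeline; replicating to 3 channels keeps the tensor
# shape compatible with RGB-shaped CNN inputs.
gray_tf = T.Compose([
    T.Resize((224, 224)),                    # input size is an assumption
    T.Grayscale(num_output_channels=3),
    T.ToTensor(),
])
```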
{"title":"CNN Training for Face Photo based Gender and Age Group Prediction with Camera","authors":"Kyoungson Jhang, Junsoo Cho","doi":"10.1109/ICAIIC.2019.8669039","DOIUrl":"https://doi.org/10.1109/ICAIIC.2019.8669039","url":null,"abstract":"It appears that CNN for camera-based age and gender prediction is usually trained with RGB color images. However, it is difficult to say that CNN trained with RGB color images always produces good results in an environment where testing is performed with camera rather than with image files. With experiments, we observe that in camera-based testing CNN trained with grayscale images shows better gender and age group prediction accuracy than CNN trained with RGB color images.","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127154531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Complete Multi-CPU/FPGA-based Design and Prototyping Methodology for Autonomous Vehicles: Multiple Object Detection and Recognition Case Study
Q. Cabanes, B. Senouci, A. Ramdane-Cherif
Pub Date: 2019-02-01 | DOI: 10.1109/ICAIIC.2019.8669047
Embedded smart systems are hardware/software (HW/SW) architectures integrated into new autonomous vehicles to increase their intelligence. A key example of such applications is camera-based automatic parking systems. In this paper, we introduce a fast-prototyping perspective within a complete design methodology for these embedded smart systems; one of our main objectives is to reduce development and prototyping time compared to the usual simulation approaches. Building on our previous work [1], a supervised machine learning approach, we propose a HW/SW algorithm implementation for object detection and recognition around autonomous vehicles. We validate our real-time approach with a quick prototype on top of a multi-CPU/FPGA platform (ZYNQ). The main contribution of this work is the definition of a complete design methodology for smart embedded vehicle applications comprising four main parts: specification and native software, hardware acceleration, machine learning software, and the real embedded system prototype. Toward full automation of our methodology, several steps are already automated and presented in this work. Our hardware acceleration of point cloud-based data processing tasks is 300 times faster than a pure software implementation.
{"title":"A Complete Multi-CPU/FPGA-based Design and Prototyping Methodology for Autonomous Vehicles: Multiple Object Detection and Recognition Case Study","authors":"Q. Cabanes, B. Senouci, A. Ramdane-Cherif","doi":"10.1109/ICAIIC.2019.8669047","DOIUrl":"https://doi.org/10.1109/ICAIIC.2019.8669047","url":null,"abstract":"Embedded smart systems are Hardware/Software (HW/SW) architectures integrated in new autonomous vehicles in order to increase their smartness. A key example of such applications are camera-based automatic parking systems. In this paper we introduce a fast prototyping perspective within a complete design methodology for these embedded smart systems. One of our main objective being to reduce development and prototyping time, compared to usual simulation approaches. Based on our previous work [1], a supervised machine learning approach, we propose a HW/SW algorithm implementation for objects detection and recognition around autonomous vehicles. We validate our real-time approach via a quick prototype on the top of a Multi-CPU/FPGA platform (ZYNQ). The main contribution of this current work is the definition of a complete design methodology for smart embedded vehicle applications which defines four main parts: specification & native software, hardware acceleration, machine learning software, and the real embedded system prototype. Toward a full automation of our methodology, several steps are already automated and presented in this work. Our hardware acceleration of point cloud-based data processing tasks is 300 times faster than a pure software implementation.","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125421938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ICAIIC 2019 Message from Organizing Chairs
Pub Date: 2019-02-01 | DOI: 10.1109/icaiic.2019.8669007
Hardness on Style Transfer Deep Learning for Rococo Painting Masterpieces
K. Kim, Dohyun Kim, Joongheon Kim
Pub Date: 2019-02-01 | DOI: 10.1109/ICAIIC.2019.8668965
This paper considers how generally style transfer can be applied to raw painting images. Experimental results show that the two previous approaches, style transfer using a pre-trained CNN and style transfer using a GAN, differ in algorithm and structure but share the same problem: they do not generalize across painting styles. A striking difference between the results for the Rococo painting style and those for the Impressionist painting style illustrates this problem. In particular, the awkward results produced when applying style transfer to the Rococo painting style exemplify this issue.
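For context, the "style" that the pre-trained-CNN branch of these methods matches is the Gram matrix of feature maps (Gatys et al.); a minimal sketch of that style loss, included only to make the compared objective concrete:

```python
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    # features: (batch, channels, height, width) CNN feature maps.
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)  # (batch, c, c), normalized

def style_loss(gen_feats: torch.Tensor, style_feats: torch.Tensor) -> torch.Tensor:
    # Mean squared difference between the Gram matrices of the generated
    # image's features and the style image's features.
    return torch.mean((gram_matrix(gen_feats) - gram_matrix(style_feats)) ** 2)
```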
{"title":"Hardness on Style Transfer Deep Learning for Rococo Painting Masterpieces","authors":"K. Kim, Dohyun Kim, Joongheon Kim","doi":"10.1109/ICAIIC.2019.8668965","DOIUrl":"https://doi.org/10.1109/ICAIIC.2019.8668965","url":null,"abstract":"This paper considers the general adaptation of the application of raw painting images via style transfer. Experimental results show that both the previous studies style transfer using pre-trained CNN and style transfer using GAN has only different algorithms or structure but same problem. That is the un-general application in various painting styles. A striking difference between experiment results in Rococo painting style and experiment results in Impressionism painting style speak for the above problem. In particular, the derivation of awkward results for the application of style transfer method in Rococo painting style represents this kind of problem.","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125169446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of Performance by Activation Functions on Deep Image Prior
Shohei Fujii, H. Hayashi
Pub Date: 2019-02-01 | DOI: 10.1109/ICAIIC.2019.8669063
In this paper, we compare the performance of activation functions in a deep image prior. The activation functions considered here are the standard rectified linear unit (ReLU), the leaky rectified linear unit (Leaky ReLU), and the randomized leaky rectified linear unit (RReLU). We use these functions in the deep image prior's denoising, super-resolution, and inpainting tasks. Our aim is to observe the effect of differences in the activation functions.
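Because the comparison hinges on swapping one activation in an otherwise identical network, a small factory makes the setup concrete; the LeakyReLU slope is an assumption:

```python
import torch.nn as nn

# The three activations compared; swapping them in an otherwise identical
# network isolates their effect.
ACTIVATIONS = {
    "relu": nn.ReLU,
    "leaky_relu": lambda: nn.LeakyReLU(0.2),  # slope is an assumption
    "rrelu": nn.RReLU,                        # slope randomized during training
}

def conv_block(c_in: int, c_out: int, act: str) -> nn.Sequential:
    # Typical deep-image-prior building block with a pluggable activation.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1),
        nn.BatchNorm2d(c_out),
        ACTIVATIONS[act](),
    )
```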
{"title":"Comparison of Performance by Activation Functions on Deep Image Prior","authors":"Shohei Fujii, H. Hayashi","doi":"10.1109/ICAIIC.2019.8669063","DOIUrl":"https://doi.org/10.1109/ICAIIC.2019.8669063","url":null,"abstract":"In this paper, we compare the performance of activation functions on a deep image prior. The activation functions considered here are the standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), and the randomized leaky rectified linear unit (RReLU). We use these functions for denoising, super-resolution, and inpainting of the deep image prior. Our aim is to observe the effect of differences in the activation functions.","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124993324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved MalGAN: Avoiding Malware Detector by Leaning Cleanware Features
Masataka Kawai, K. Ota, Mianxing Dong
Pub Date: 2019-02-01 | DOI: 10.1109/ICAIIC.2019.8669079
In recent years, research on malware detection using machine learning has attracted wide attention. At the same time, how to evade these detectors has emerged as a topic of interest. In this paper, we focus on evading malware detection using a Generative Adversarial Network (GAN). Previous GAN-based studies use the same feature quantities as the malware detector for learning. Moreover, existing learning algorithms use multiple malware samples, which limits evasion performance and is not realistic for attackers. To address this issue, we apply differentiated learning methods with different feature quantities and only one malware sample. Experimental results show that our method achieves better performance than existing ones.
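A sketch in the spirit of the original MalGAN generator, which perturbs a binary feature vector by only adding features so the malware's functionality-bearing features survive; the feature and noise dimensions are assumptions, and the hard binarization shown is what would be used at generation time (training would keep the continuous output so gradients flow):

```python
import torch
import torch.nn as nn

class FeatureGenerator(nn.Module):
    """MalGAN-style generator sketch: given a binary malware feature vector,
    propose extra features and OR them in, never removing original ones."""
    def __init__(self, n_features: int = 128, noise_dim: int = 16):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(n_features + noise_dim, 256), nn.ReLU(),
            nn.Linear(256, n_features), nn.Sigmoid(),
        )

    def forward(self, malware_feats: torch.Tensor) -> torch.Tensor:
        # malware_feats: (batch, n_features) binary {0, 1} vectors.
        z = torch.rand(malware_feats.size(0), self.noise_dim)
        add = (self.net(torch.cat([malware_feats, z], dim=1)) > 0.5).float()
        return torch.clamp(malware_feats + add, max=1.0)  # OR with original
```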
{"title":"Improved MalGAN: Avoiding Malware Detector by Leaning Cleanware Features","authors":"Masataka Kawai, K. Ota, Mianxing Dong","doi":"10.1109/ICAIIC.2019.8669079","DOIUrl":"https://doi.org/10.1109/ICAIIC.2019.8669079","url":null,"abstract":"In recent years, researches on malware detection using machine learning have been attracting wide attention. At the same time, how to avoid these detections is also regarded as an emerging topic. In this paper, we focus on the avoidance of malware detection based on Generative Adversarial Network (GAN). Previous GAN-based researches use the same feature quantities for learning malware detection. Moreover, existing learning algorithms use multiple malware, which affects the performance of avoidance and is not realistic on attackers. To settle this issue, we apply differentiated learning methods with the different feature quantities and only one malware. Experimental results show that our method can achieve better performance than existing ones.","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131712237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}