Latest Publications — 2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)

1D CNN Based Human Respiration Pattern Recognition using Ultra Wideband Radar
Seong-Hoon Kim, Gi-Tae Han
The respiration status of a person is one of the vital signs that can be used to check the person's health condition, and it has been measured in various ways in the medical and healthcare sectors. Contact-type sensors were conventionally used to measure respiration, but they have been used primarily in the medical sector because they can only be used in a limited environment. Recent studies have evaluated ways of detecting human respiration patterns using Ultra-Wideband (UWB) Radar, which relies on non-contact sensing. Previous studies evaluated the apnea pattern during sleep by analyzing the respiration signals acquired by UWB Radar using principal component analysis (PCA). However, various respiration patterns in addition to apnea must be measured in order to accurately analyze an individual's health condition in the healthcare sector. Therefore, this study proposes a method to recognize four respiration patterns from the respiration signals acquired by UWB Radar, based on a 1D convolutional neural network (CNN). The proposed method extracts the eupnea, bradypnea, tachypnea, and apnea respiration patterns from UWB Radar signals and constructs a training dataset. The data were learned with the 1D CNN and the recognition accuracy was measured. The results reveal that the accuracy of the proposed method is up to 15% higher than that of conventional classification algorithms (i.e., PCA and Support Vector Machine (SVM)).
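As an illustration of the classifier described above, here is a minimal 1D CNN sketch in PyTorch for the four respiration classes; the layer sizes, the 1024-sample input length, and the choice of framework are assumptions for illustration, not the paper's configuration.

```python
# Minimal sketch: 1D CNN over a radar respiration signal, assuming a
# fixed window of 1024 samples and four classes (eupnea, bradypnea,
# tachypnea, apnea). Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Respiration1DCNN(nn.Module):
    def __init__(self, n_classes=4, in_len=1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),                              # 1024 -> 256
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),                              # 256 -> 64
        )
        self.classifier = nn.Linear(32 * (in_len // 16), n_classes)

    def forward(self, x):                                 # x: (batch, 1, in_len)
        h = self.features(x)
        return self.classifier(h.flatten(1))

logits = Respiration1DCNN()(torch.randn(8, 1, 1024))      # -> (8, 4)
```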
Citations: 22
Deep Learning Algorithm using Virtual Environment Data for Self-driving Car
Juntae Kim, G. Lim, Youngi Kim, Bokyeong Kim, Changseok Bae
Recent outstanding progress in artificial intelligence research has enabled many attempts to implement self-driving cars. However, in the real world, acquiring training data for self-driving artificial intelligence algorithms involves considerable risk and cost. This paper proposes an algorithm that collects training data from a driving game whose environment is quite similar to the real world. In the data collection scheme, the proposed algorithm gathers both the driving game screen image and the control key value. We employ the data collected from the virtual game environment to train a deep neural network. Experimental results from applying the virtual driving game data to drive a real-world children's car show the effectiveness of the proposed algorithm.
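The collection scheme pairs each game frame with the control keys pressed at that moment. Below is a minimal sketch of such a loop; the mss and keyboard packages, the WASD key set, and the 0.1 s sampling interval are assumptions, not the authors' setup.

```python
# Minimal sketch: record (screen image, pressed-key vector) pairs from a
# running game. Assumes `pip install mss keyboard numpy`; key set and
# sampling rate are illustrative, not the paper's.
import time
import numpy as np
import mss
import keyboard

KEYS = ["w", "a", "s", "d"]  # assumed game controls

def collect(n_frames=100, out_path="train_data.npz"):
    frames, labels = [], []
    with mss.mss() as sct:
        monitor = sct.monitors[1]                       # primary screen
        for _ in range(n_frames):
            shot = np.array(sct.grab(monitor))[:, :, :3]  # drop alpha
            keys = [int(keyboard.is_pressed(k)) for k in KEYS]
            frames.append(shot)
            labels.append(keys)
            time.sleep(0.1)
    np.savez(out_path, frames=np.array(frames), labels=np.array(labels))

if __name__ == "__main__":
    collect()
```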
Citations: 13
Fused Convolutional Neural Network for White Blood Cell Image Classification
Partha Pratim Banik, Rappy Saha, Ki-Doo Kim
Blood cell image classification is an important part of a medical diagnosis system. In this paper, we propose a fused convolutional neural network (CNN) model to classify images of white blood cells (WBC). We use five convolutional layers, three max-pooling layers, and a fully connected network with a single hidden layer. We fuse the feature maps of two convolutional layers by using the max-pooling operation to form the input to the fully connected layer. We compare the accuracy and computational time of our model with a combined CNN-recurrent neural network (RNN) model, and show that our model trains faster than the CNN-RNN model.
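One plausible reading of the fusion step is to pool an earlier feature map down to the size of a later one and combine the two before the fully connected classifier; the sketch below does this with an element-wise max, and all layer sizes and the class count are assumptions, not the paper's architecture.

```python
# Minimal sketch: fuse two conv feature maps via max-pooling before the
# fully connected layers. Shapes and the 5-class output are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusedCNN(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.conv3 = nn.Conv2d(64, 64, 3, padding=1)
        self.fc = nn.Sequential(nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
                                nn.Linear(128, n_classes))

    def forward(self, x):                               # x: (B, 3, 64, 64)
        a = F.max_pool2d(F.relu(self.conv1(x)), 2)      # (B, 32, 32, 32)
        b = F.max_pool2d(F.relu(self.conv2(a)), 2)      # (B, 64, 16, 16)
        c = F.max_pool2d(F.relu(self.conv3(b)), 2)      # (B, 64, 8, 8)
        # fuse: pool the earlier map to c's size, then element-wise max
        fused = torch.max(F.max_pool2d(b, 2), c)        # (B, 64, 8, 8)
        return self.fc(fused.flatten(1))

logits = FusedCNN()(torch.randn(4, 3, 64, 64))          # -> (4, 5)
```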
Citations: 14
Guidewire Tip Tracking using U-Net with Shape and Motion Constraints
I. Ullah, Philip Chikontwe, Sang Hyun Park
In recent years, research has been carried out on micro-robot catheters as an alternative to classic cardiac surgery performed with a conventional catheter. To accurately control the micro-robot catheter, accurate and reliable tracking of the guidewire tip is required. In this paper, we propose a method based on a deep convolutional neural network (CNN) to track the guidewire tip. To extract a very small tip region from a large image in video sequences, we first segment small tip candidates using a segmentation CNN architecture, and then extract the best candidate using shape and motion constraints. The segmentation-based tracking strategy makes the tracking process robust. The guidewire tip is tracked in video sequences fully automatically in real time, i.e., 71 ms per image. Under two-fold cross-validation, the proposed method achieves an average Dice score of 88.07% and an IoU score of 85.07%.
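The shape and motion constraints can be illustrated with a simple candidate filter over the segmentation output: keep components whose area is tip-sized and pick the one closest to the previous tip position. The area thresholds and the scipy-based implementation below are assumptions, not the authors' exact criteria.

```python
# Minimal sketch: select the guidewire tip among segmented candidates
# using a size (shape) constraint and a distance-to-previous-tip
# (motion) constraint. Thresholds are illustrative assumptions.
import numpy as np
from scipy import ndimage

def select_tip(mask, prev_tip, min_area=5, max_area=200):
    """mask: binary (H, W) segmentation; prev_tip: (row, col) from last frame."""
    labeled, n = ndimage.label(mask)
    best, best_dist = None, np.inf
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labeled == i)
        if not (min_area <= ys.size <= max_area):       # shape constraint
            continue
        center = np.array([ys.mean(), xs.mean()])
        dist = np.linalg.norm(center - np.asarray(prev_tip))  # motion constraint
        if dist < best_dist:
            best, best_dist = center, dist
    return best   # None if no candidate satisfies both constraints
```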
Citations: 4
CNN Training for Face Photo based Gender and Age Group Prediction with Camera
Kyoungson Jhang, Junsoo Cho
CNNs for camera-based age and gender prediction are usually trained with RGB color images. However, a CNN trained with RGB color images does not always produce good results in an environment where testing is performed with a camera rather than with image files. Our experiments show that, in camera-based testing, a CNN trained with grayscale images achieves better gender and age-group prediction accuracy than a CNN trained with RGB color images.
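The grayscale variant amounts to converting each face crop to a single channel before training; a minimal torchvision sketch follows, where the crop size and transform pipeline are assumptions, not the authors' preprocessing.

```python
# Minimal sketch: grayscale preprocessing for training, assuming
# torchvision transforms and a 64x64 face crop (illustrative choices).
import torch.nn as nn
from torchvision import transforms

gray_tf = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),  # RGB -> 1 channel
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])
# the network's first conv layer then takes 1 input channel instead of 3
first_conv = nn.Conv2d(1, 32, kernel_size=3, padding=1)
```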
Citations: 10
A Complete Multi-CPU/FPGA-based Design and Prototyping Methodology for Autonomous Vehicles: Multiple Object Detection and Recognition Case Study
Q. Cabanes, B. Senouci, A. Ramdane-Cherif
Embedded smart systems are Hardware/Software (HW/SW) architectures integrated into new autonomous vehicles to increase their smartness. A key example of such applications is camera-based automatic parking systems. In this paper we introduce a fast prototyping perspective within a complete design methodology for these embedded smart systems. One of our main objectives is to reduce development and prototyping time compared with usual simulation approaches. Based on our previous work [1], a supervised machine learning approach, we propose a HW/SW algorithm implementation for object detection and recognition around autonomous vehicles. We validate our real-time approach via a quick prototype on top of a Multi-CPU/FPGA platform (ZYNQ). The main contribution of this work is the definition of a complete design methodology for smart embedded vehicle applications, comprising four main parts: specification & native software, hardware acceleration, machine learning software, and the real embedded system prototype. Toward full automation of our methodology, several steps are already automated and presented in this work. Our hardware acceleration of point-cloud-based data processing tasks is 300 times faster than a pure software implementation.
Citations: 3
ICAIIC 2019 Message from Organizing Chairs
{"title":"ICAIIC 2019 Message from Organizing Chairs","authors":"","doi":"10.1109/icaiic.2019.8669007","DOIUrl":"https://doi.org/10.1109/icaiic.2019.8669007","url":null,"abstract":"","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126199046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hardness on Style Transfer Deep Learning for Rococo Painting Masterpieces
K. Kim, Dohyun Kim, Joongheon Kim
This paper considers how generally style transfer applies to raw painting images. Experimental results show that the two previous approaches, style transfer using a pre-trained CNN and style transfer using a GAN, differ in algorithm and structure but share the same problem: they do not generalize across painting styles. A striking difference between the experimental results for the Rococo painting style and those for the Impressionism painting style illustrates this problem. In particular, the awkward results produced when the style transfer method is applied to the Rococo painting style exemplify it.
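For context, CNN-based style transfer of the kind evaluated here matches Gram-matrix statistics of features from a pre-trained network; a minimal sketch of that style loss follows. This is the generic Gatys-style formulation, not the authors' code.

```python
# Minimal sketch: Gram-matrix style loss used in CNN-based style
# transfer. `feat_*` are feature maps from a pre-trained CNN layer.
import torch

def gram_matrix(feat):
    """feat: (B, C, H, W) -> (B, C, C) normalized Gram matrix."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(feat_gen, feat_style):
    return torch.mean((gram_matrix(feat_gen) - gram_matrix(feat_style)) ** 2)
```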
Citations: 2
Comparison of Performance by Activation Functions on Deep Image Prior
Shohei Fujii, H. Hayashi
In this paper, we compare the performance of activation functions on the deep image prior. The activation functions considered here are the standard rectified linear unit (ReLU), the leaky rectified linear unit (Leaky ReLU), and the randomized leaky rectified linear unit (RReLU). We use these functions for denoising, super-resolution, and inpainting with the deep image prior. Our aim is to observe the effect of differences between the activation functions.
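The three activations can be compared directly with their PyTorch implementations; the slope values shown are PyTorch's defaults, which may differ from the authors' settings.

```python
# The three activations compared in the paper, via PyTorch modules.
import torch
import torch.nn as nn

x = torch.linspace(-3, 3, 7)
relu = nn.ReLU()                            # max(0, x)
leaky = nn.LeakyReLU(negative_slope=0.01)   # x if x > 0 else 0.01 * x
rrelu = nn.RReLU(lower=1/8, upper=1/3)      # negative slope drawn uniformly
                                            # from [1/8, 1/3] during training
print(relu(x), leaky(x), rrelu(x), sep="\n")
```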
Citations: 3
Improved MalGAN: Avoiding Malware Detector by Leaning Cleanware Features
Masataka Kawai, K. Ota, Mianxing Dong
In recent years, research on malware detection using machine learning has been attracting wide attention. At the same time, how to evade such detection is also emerging as a topic. In this paper, we focus on evading malware detection using a Generative Adversarial Network (GAN). Previous GAN-based research used the same feature quantities as those used for malware detection. Moreover, existing learning algorithms use multiple malware samples, which affects evasion performance and is not realistic for attackers. To settle this issue, we apply differentiated learning methods with different feature quantities and only one malware sample. Experimental results show that our method achieves better performance than existing ones.
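A MalGAN-style generator perturbs a binary malware feature vector so that features may only be added, preserving the sample's functionality while changing what the detector sees; the sketch below is a generic illustration with assumed sizes and architecture, not the paper's improved model.

```python
# Minimal sketch of a MalGAN-style generator over binary feature
# vectors. Feature count, noise size, and layers are assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, n_features=128, noise_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features + noise_dim, 256), nn.ReLU(),
            nn.Linear(256, n_features), nn.Sigmoid(),
        )

    def forward(self, malware, noise):
        out = self.net(torch.cat([malware, noise], dim=1))
        # binarize, then OR with the original vector: features can be
        # added but never removed, so functionality is preserved
        return torch.clamp(malware + (out > 0.5).float(), max=1.0)

adv = Generator()(torch.randint(0, 2, (4, 128)).float(), torch.randn(4, 16))
```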
Citations: 31