
Latest Publications — 2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)

Future Optical Camera Communication Based Applications and Opportunities for 5G and Beyond
M. Shahjalal, Moh. Khalid Hasan, M. Z. Chowdhury, Y. Jang
Optical camera communication (OCC) refers to wireless communication between optical sources and cameras (image sensors). Camera image sensors are used to receive data from light-emitting diodes. The technology can be implemented in any camera, including smartphone cameras, since smartphones have image-processing capability. In this paper we present some future OCC-based indoor and outdoor applications and describe the opportunities they offer for 5G and beyond communication systems. We also outline future work on real-time OCC features.
Citations: 7
Data Fusion Analysis with Optimal Weight in Smart Grid
Dengjun Zhu, Haiwei Yuan, Jinlong Yan, Yanping Qing, Weijie Yang
In recent years, Wireless Sensor Networks (WSNs) have been widely used in the Industrial Internet of Things (IIoT), especially in smart grids. The sensors not only extract the key attributes of the basic data on the operating state of various underground cables, but also remove redundant descriptions of the data in the data systems. Further, the sensors can deal with inconsistent information about the data systems. To ensure that the fused data reliably contains all the key information of the basic data and meets the standard requirements for underground cables, the sensing areas of the sensor nodes can be made to overlap. Such deployments, however, often suffer from weak scalability, network delay, and uneven energy consumption across nodes. This paper therefore focuses on optimal-weight data fusion analysis to improve the current situation. The method is more efficient for data fusion processing and data extraction.
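The abstract does not spell out the weighting scheme. A common "optimal weight" choice for fusing overlapping sensor readings is inverse-variance weighting, sketched below in plain Python; the sensor values and variances are purely illustrative, not from the paper:

```python
def optimal_weights(variances):
    """Inverse-variance weights: w_i proportional to 1/sigma_i^2.

    This choice minimizes the variance of the fused estimate when
    the sensor errors are independent.
    """
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    return [w / total for w in inv]

def fuse(readings, variances):
    """Weighted average of overlapping sensor readings."""
    w = optimal_weights(variances)
    return sum(wi * r for wi, r in zip(w, readings))

# Two sensors observing the same cable: the less noisy sensor
# (variance 1.0) dominates the fused value.
fused = fuse([20.0, 24.0], [1.0, 4.0])  # weights 0.8 and 0.2 -> 20.8
```

The weights always sum to one, so a perfectly agreeing set of sensors fuses to their common reading.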
Citations: 2
Fused Convolutional Neural Network for White Blood Cell Image Classification
Partha Pratim Banik, Rappy Saha, Ki-Doo Kim
Blood cell image classification is an important part of medical diagnosis systems. In this paper, we propose a fused convolutional neural network (CNN) model to classify images of white blood cells (WBCs). We use five convolutional layers, three max-pooling layers, and a fully connected network with a single hidden layer. We fuse the feature maps of two convolutional layers using the max-pooling operation and feed the result to the fully connected layer. We compare the accuracy and computation time of our model with a combined CNN-recurrent neural network (RNN) model, and show that our model trains faster than the CNN-RNN model.
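The fusion step is described only at a high level. One plausible reading — max-pool each of the two feature maps, then combine them element-wise before the fully connected layer — can be sketched with NumPy; the map sizes and the element-wise-maximum fusion rule here are assumptions for illustration, not the authors' exact design:

```python
import numpy as np

def maxpool2x2(fmap):
    """2x2 max pooling with stride 2 over a (H, W) feature map."""
    h, w = fmap.shape
    trimmed = fmap[:h - h % 2, :w - w % 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def fuse_feature_maps(fmap_a, fmap_b):
    """Pool each map, then fuse by element-wise maximum; the fused
    map would be flattened and fed to the fully connected layers."""
    return np.maximum(maxpool2x2(fmap_a), maxpool2x2(fmap_b))

# Two toy 4x4 feature maps fuse into a single 2x2 map.
fused = fuse_feature_maps(np.arange(16.0).reshape(4, 4),
                          np.full((4, 4), 6.0))
```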
Citations: 14
Guidewire Tip Tracking using U-Net with Shape and Motion Constraints
I. Ullah, Philip Chikontwe, Sang Hyun Park
In recent years, research has been carried out on micro-robot catheters as an alternative to classic catheter-based cardiac surgery. To accurately control the micro-robot catheter, accurate and reliable tracking of the guidewire tip is required. In this paper, we propose a method based on a deep convolutional neural network (CNN) to track the guidewire tip. To extract the very small tip region from a large image in a video sequence, we first segment small tip candidates using a segmentation CNN architecture and then select the best candidate using shape and motion constraints. This segmentation-based strategy makes the tracking process robust. Tracking of the guidewire tip in video sequences is fully automated and runs in real time, at 71 ms per image. Under two-fold cross-validation, the proposed method achieves an average Dice score of 88.07% and an IoU score of 85.07%.
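The two reported overlap metrics are standard for segmentation and simple to compute; a minimal sketch over binary masks (the toy masks are illustrative, not the paper's data):

```python
def dice_and_iou(pred, truth):
    """Overlap scores between two binary masks (flat 0/1 sequences).

    Dice = 2|A∩B| / (|A| + |B|);  IoU = |A∩B| / |A∪B|.
    """
    inter = sum(p and t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    dice = 2.0 * inter / (p_sum + t_sum)
    iou = inter / (p_sum + t_sum - inter)
    return dice, iou

# Toy 1-D masks: 2 overlapping pixels, 3 predicted, 3 true.
dice, iou = dice_and_iou([1, 1, 1, 0, 0], [0, 1, 1, 1, 0])
```

The two scores are monotonically related (IoU = Dice / (2 − Dice)), which is why a Dice of 88.07% sits just above the IoU of 85.07%.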
Citations: 4
CNN Training for Face Photo based Gender and Age Group Prediction with Camera
Kyoungson Jhang, Junsoo Cho
CNNs for camera-based age and gender prediction are usually trained with RGB color images. However, a CNN trained on RGB color images does not always produce good results when testing is performed with a camera rather than with image files. Our experiments show that, in camera-based testing, a CNN trained on grayscale images achieves better gender and age-group prediction accuracy than a CNN trained on RGB color images.
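The abstract does not state which grayscale conversion was used; the ITU-R BT.601 luma formula is one standard choice, sketched here as an assumption for illustration:

```python
def rgb_to_grayscale(pixel):
    """Weighted luma conversion (ITU-R BT.601): the weights sum to 1
    and reflect the eye's differing sensitivity to R, G, and B."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

# White maps to full intensity; green contributes the most.
gray_white = rgb_to_grayscale((255, 255, 255))
gray_green = rgb_to_grayscale((0, 255, 0))
```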
Citations: 10
A Complete Multi-CPU/FPGA-based Design and Prototyping Methodology for Autonomous Vehicles: Multiple Object Detection and Recognition Case Study
Q. Cabanes, B. Senouci, A. Ramdane-Cherif
Embedded smart systems are Hardware/Software (HW/SW) architectures integrated into new autonomous vehicles to increase their smartness. A key example of such applications is camera-based automatic parking. In this paper we introduce a fast-prototyping perspective within a complete design methodology for these embedded smart systems; one of our main objectives is to reduce development and prototyping time compared to usual simulation approaches. Building on our previous work [1], a supervised machine learning approach, we propose a HW/SW algorithm implementation for object detection and recognition around autonomous vehicles. We validate our real-time approach via a quick prototype on top of a Multi-CPU/FPGA platform (ZYNQ). The main contribution of this work is a complete design methodology for smart embedded vehicle applications comprising four main parts: specification and native software, hardware acceleration, machine learning software, and the real embedded system prototype. Several steps toward full automation of the methodology are already automated and presented here. Our hardware acceleration of point-cloud-based data processing tasks is 300 times faster than a pure software implementation.
Citations: 3
ICAIIC 2019 Message from Organizing Chairs
{"title":"ICAIIC 2019 Message from Organizing Chairs","authors":"","doi":"10.1109/icaiic.2019.8669007","DOIUrl":"https://doi.org/10.1109/icaiic.2019.8669007","url":null,"abstract":"","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126199046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hardness on Style Transfer Deep Learning for Rococo Painting Masterpieces
K. Kim, Dohyun Kim, Joongheon Kim
This paper considers how well style transfer generalizes when applied to raw painting images. Experimental results show that the two previously studied approaches — style transfer using a pre-trained CNN and style transfer using a GAN — differ in algorithm and structure but share the same problem: they do not generalize across painting styles. A striking difference between experimental results in the Rococo style and those in the Impressionist style demonstrates this problem; in particular, the awkward results obtained when applying style transfer to Rococo paintings exemplify it.
Citations: 2
Comparison of Performance by Activation Functions on Deep Image Prior
Shohei Fujii, H. Hayashi
In this paper, we compare the performance of activation functions on the deep image prior. The activation functions considered are the standard rectified linear unit (ReLU), the leaky rectified linear unit (Leaky ReLU), and the randomized leaky rectified linear unit (RReLU). We use these functions for denoising, super-resolution, and inpainting with the deep image prior. Our aim is to observe the effect of differences among the activation functions.
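The three variants differ only in how they treat negative inputs; a minimal sketch in plain Python. The RReLU bounds follow the common 1/8–1/3 convention, an assumption since the abstract does not state them:

```python
import random

def relu(x):
    """Standard ReLU: zero for negative inputs."""
    return max(0.0, x)

def leaky_relu(x, slope=0.01):
    """Leaky ReLU: a small fixed slope for negative inputs."""
    return x if x > 0 else slope * x

def rrelu(x, lower=1/8, upper=1/3, training=True):
    """Randomized Leaky ReLU: the negative slope is sampled uniformly
    during training and fixed to the interval's mean at test time."""
    slope = random.uniform(lower, upper) if training else (lower + upper) / 2
    return x if x > 0 else slope * x
```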
Citations: 3
Improved MalGAN: Avoiding Malware Detector by Leaning Cleanware Features
Masataka Kawai, K. Ota, Mianxing Dong
In recent years, research on malware detection using machine learning has attracted wide attention; at the same time, how to evade such detectors has become an emerging topic. In this paper, we focus on evading malware detection using a Generative Adversarial Network (GAN). Previous GAN-based research uses the same feature quantities as the detector for learning. Moreover, existing learning algorithms use multiple malware samples, which degrades evasion performance and is unrealistic for attackers. To settle this issue, we apply differentiated learning methods with different feature quantities and only one malware sample. Experimental results show that our method achieves better performance than existing ones.
Citations: 31