Adversarial Artificial Intelligence for Overhead Imagery Classification Models

Charles Rogers, John Bugg, C. Nyheim, Will Gebhardt, Brian Andris, Evan Heitman, C. Fleming
DOI: 10.1109/SIEDS.2019.8735608
Published in: 2019 Systems and Information Engineering Design Symposium (SIEDS)
Publication date: 2019-04-26
Citations: 3

Abstract

In overhead object detection, computers are increasingly replacing humans at spotting and identifying specific items within images through the use of machine learning (ML). These ML programs must be both accurate and robust. Accuracy means the results must be trusted enough to substitute for the manual deduction process. Robustness is the degree to which the network can handle discrepancies within the images. One way to gauge robustness is through the use of adversarial networks. Adversarial algorithms are trained using perturbations of the image to reduce the accuracy of an existing classification model. The greater the degree of perturbation a model can withstand, the more robust it is. In this paper, comparisons of existing deep neural network models and the advancement of adversarial AI are explored. While there is some published research about AI and adversarial networks, very little of it discusses this particular use for overhead imagery. This paper focuses on overhead imagery, specifically that of ships. Using a public Kaggle dataset, we developed multiple models to detect ships in overhead imagery, specifically ResNet50, DenseNet201, and InceptionV3. The goal of the adversarial work is to manipulate an image so that its contents are misclassified. This paper focuses specifically on producing perturbations that can be recreated in the physical world. This serves to account for physical conditions, whether intentional or not, that could reduce accuracy within our network. While there are military applications for this specific research, the general findings can be applied to all AI overhead image classification topics. This work will explore both the vulnerabilities of existing classifier neural net models and the visualization of these vulnerabilities.
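The core idea in the abstract — perturbing an input just enough that a trained classifier misclassifies it — can be illustrated with the Fast Gradient Sign Method (FGSM), a standard gradient-based attack. The paper does not state which attack the authors used, so this is an illustrative assumption, and it uses a toy NumPy logistic classifier rather than the authors' ResNet50, DenseNet201, or InceptionV3 models.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """FGSM step for a logistic classifier: move x in the direction
    that increases the loss, bounded by eps per feature -- the kind of
    small, structured perturbation used to probe model robustness."""
    p = sigmoid(w @ x + b)            # model confidence for class 1
    grad_x = (p - y_true) * w         # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)  # bounded adversarial step

# Toy "image": 4 features, correctly classified by the linear model.
w = np.array([1.0, -2.0, 0.5, 1.5])
b = 0.0
x = np.array([0.6, -0.4, 0.2, 0.3])   # clean input, true class 1
y = 1.0

clean_pred = bool(sigmoid(w @ x + b) > 0.5)
x_adv = fgsm_perturb(x, w, b, y, eps=0.8)
adv_pred = bool(sigmoid(w @ x_adv + b) > 0.5)
print(clean_pred, adv_pred)  # prediction flips after the perturbation
```

The same loop applies to deep classifiers: the gradient is taken through the whole network with respect to the input pixels, and the per-pixel bound `eps` controls how visible the perturbation is — smaller values that still flip the label indicate a less robust model.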