Video modification in drone and satellite imagery

Michael J. Reale, Daniel P. Murphy, Maria Cornacchia, Jamie Vazquez Madera
DOI: 10.1117/12.3013881
Journal: Defense + Commercial Sensing
Pages: 1305813 - 1305813-10
Published: 2024-06-06
Citations: 0

Abstract

The ability to create and detect synthetic video is becoming critically important to scene understanding. Techniques for synthetic manipulation and augmentation of data increase diversity within available datasets without requiring laborious labeling efforts. That is, the ability to create synthetic video enables augmentation of small realistic datasets on which to further train Artificial Intelligence and Machine Learning (AI/ML) algorithms. Thus, it may be desirable to add, remove, or modify vehicles in satellite and overhead video. In our previous work, we leveraged Generative Adversarial Networks (GANs) to transform cars into trucks (and vice versa) in static images, using an attention-based masking approach that helps the network transform the object rather than the background. We also demonstrated the benefits of numerous data augmentation procedures, including a new artificial dataset of vehicles from an aerial perspective and novel augmentation techniques suited to our network architectures. This work extends these techniques from still imagery to video. We employ three architectures: (1) a fully dynamic 3D convolutional discriminator network with static generators, (2) a fully dynamic 3D convolutional discriminator and generator network, and (3) an architecture that computes a "warp" between frames as input to a static generator. Additionally, to help enforce temporal consistency, we experiment with an interframe classifier that verifies whether two frames belong to the same video sequence. We run experiments on a real-world dataset, presenting promising results in terms of FID, KID, and metrics derived from a classifier trained on our dataset.
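The attention-based masking the authors describe blends the generator's output into the original image so that only the object region is transformed and the background passes through unchanged. The paper's exact formulation is not given here; the following is a minimal NumPy sketch of that blending step, with the function name and array layout our own assumptions:

```python
import numpy as np

def compose_with_attention(input_img, generated, attention_mask):
    """Blend a generator's output into the input using a soft attention mask.

    attention_mask has values in [0, 1]: near 1 where the object should be
    transformed (generator output dominates), near 0 where the original
    background should pass through unchanged.
    """
    return attention_mask * generated + (1.0 - attention_mask) * input_img
```

With an all-ones mask the output is purely the generated image; with an all-zeros mask the input is returned untouched, which is what keeps the background from being altered.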
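Architecture (3) computes a "warp" between frames for input to a static generator. The abstract does not define the warp representation; one common choice is a per-pixel displacement field applied by backward warping, sketched here in NumPy with nearest-neighbor sampling (the function name and flow layout are our assumptions, not the paper's):

```python
import numpy as np

def warp_frame(frame, flow):
    """Backward-warp a frame by a per-pixel displacement field.

    flow has shape (H, W, 2): flow[..., 0] is the x displacement,
    flow[..., 1] the y displacement. Output pixel (y, x) is sampled
    from frame[y + flow_y, x + flow_x], nearest neighbor, clamped
    to the image border.
    """
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
    return frame[src_y, src_x]
```

A zero flow field leaves the frame unchanged; a constant flow shifts the sampled content, which is the basic operation a warp-conditioned generator would consume.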
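The experiments are scored with FID and KID. KID is the squared maximum mean discrepancy between real and generated feature sets under a cubic polynomial kernel; a minimal NumPy version of the standard unbiased estimator is sketched below (feature extraction, e.g. from an Inception network, is assumed to have happened upstream and is not shown):

```python
import numpy as np

def poly_kernel(a, b):
    # Cubic polynomial kernel commonly used for KID: k(x, y) = (x.y / d + 1)^3
    d = a.shape[1]
    return (a @ b.T / d + 1.0) ** 3

def kid(feats_real, feats_fake):
    """Unbiased squared-MMD estimate between two feature sets (rows = samples)."""
    k_rr = poly_kernel(feats_real, feats_real)
    k_ff = poly_kernel(feats_fake, feats_fake)
    k_rf = poly_kernel(feats_real, feats_fake)
    m, n = feats_real.shape[0], feats_fake.shape[0]
    # Exclude diagonal terms for the unbiased within-set averages.
    mean_rr = (k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
    mean_ff = (k_ff.sum() - np.trace(k_ff)) / (n * (n - 1))
    return mean_rr + mean_ff - 2.0 * k_rf.mean()
```

Identical feature distributions score near zero, and the score grows as the generated features drift from the real ones; practical implementations usually average this estimate over several random subsets.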