Evaluating the Lidar/HSI direct method for physics-based scene modeling

Ryan N. Givens, K. Walli, M. Eismann
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), October 2014. DOI: 10.1109/AIPR.2014.7041906

Abstract

Recent work has been able to automate the process of generating three-dimensional, spectrally attributed scenes for use in physics-based modeling software using the Lidar/Hyperspectral Direct (LHD) method. The LHD method autonomously generates three-dimensional Digital Imaging and Remote Sensing Image Generation (DIRSIG) scenes from input high-resolution imagery, lidar data, and hyperspectral imagery and has been shown to do this successfully using both modeled and real datasets. While the output scenes look realistic and appear to match the input scenes under qualitative comparisons, a more quantitative approach is needed to evaluate the full utility of these autonomously generated scenes. This paper seeks to improve the evaluation of the spatial and spectral accuracy of autonomously generated three-dimensional scenes using the DIRSIG model. Two scenes are presented for this evaluation. The first is generated from a modeled dataset and the second is generated using data collected over a real-world site. DIRSIG-generated synthetic imagery over the recreated scenes is then compared to the original input imagery to evaluate how well the recreated scenes match the original scenes in spatial and spectral accuracy and to determine the ability of the recreated scenes to produce useful outputs for algorithm development.
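The abstract does not specify which quantitative metrics are used to compare the DIRSIG-rendered imagery against the original hyperspectral input. One standard choice for per-pixel spectral comparison in hyperspectral work is the spectral angle mapper (SAM), which is insensitive to overall illumination scaling. The sketch below is purely illustrative of that kind of metric, not the paper's actual evaluation procedure; the function names and the mean-over-pixels aggregation are assumptions.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra.

    0 means identical spectral shape; the metric ignores overall
    brightness scaling, so a and 2*a have angle 0.
    """
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip to guard against floating-point values just outside [-1, 1].
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def mean_sam(original, recreated):
    """Mean spectral angle over co-registered (rows, cols, bands) cubes."""
    orig = original.reshape(-1, original.shape[-1])
    rec = recreated.reshape(-1, recreated.shape[-1])
    angles = [spectral_angle(o, r) for o, r in zip(orig, rec)]
    return float(np.mean(angles))
```

A lower mean angle indicates that the recreated scene's per-pixel spectra more closely match the input hyperspectral imagery in shape; spatial accuracy would need a separate, geometry-based measure.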