Astronomical bodies detection with stacking of CoAtNets by fusion of RGB and depth images

Chinnala Balakrishna, Shepuri Srinivasulu
International Journal of Science and Research Archive · Journal Article · Published 2024-07-30
DOI: https://doi.org/10.30574/ijsra.2024.12.2.1234
Citations: 0

Abstract

Space situational awareness (SSA) systems require the detection of space objects that vary in size, shape, and type. Space images are challenging because of factors such as illumination and noise, which make the recognition task complex. Image fusion is an important area of image processing with a variety of applications, including RGB-D sensor fusion, remote sensing, medical diagnostics, and infrared and visible image fusion. In recent years, various image fusion algorithms have been developed that show superior performance in extracting information not available in single images. In this paper we compare various methods of RGB and depth image fusion for the space object classification task. Experiments were carried out, and performance was evaluated using fusion performance metrics. It was found that guided filter context enhancement (GFCE) outperformed other image fusion methods in terms of average gradient, spatial frequency, and entropy. Additionally, because of its ability to balance good performance and inference speed, GFCE was selected for the RGB and depth image fusion stage, ahead of the feature extraction and classification stages. The outcome of the fusion method is merged images, which were used to train a deep ensemble of CoAtNets to classify space objects into ten categories. Deep ensemble learning methods, including bagging, boosting, and stacking, were trained and evaluated for classification. It was found that the combination of fusion and stacking improved classification accuracy.
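The three no-reference quality measures named above (average gradient, spatial frequency, and entropy) can all be computed directly from a fused image. A minimal NumPy sketch, assuming an 8-bit grey-level input; the exact metric definitions below follow their common textbook forms and are not taken from the paper itself:

```python
import numpy as np

def average_gradient(img):
    # Mean magnitude of the horizontal/vertical intensity gradients;
    # higher values indicate a sharper, more detailed fused image.
    f = img.astype(float)
    gx = np.diff(f, axis=1)[:-1, :]   # horizontal differences, cropped to a common shape
    gy = np.diff(f, axis=0)[:, :-1]   # vertical differences, cropped to a common shape
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def spatial_frequency(img):
    # Row frequency and column frequency (RMS of first differences), combined.
    f = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def entropy(img, bins=256):
    # Shannon entropy of the grey-level histogram, in bits.
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```

A flat image scores zero on all three metrics, while a highly textured image scores higher, which is why these measures reward fusion methods that preserve detail.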
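The abstract does not spell out the internals of GFCE, but its core building block, the guided filter, can be written in a few lines. The sketch below, assuming NumPy and SciPy, shows the guided filter together with a generic two-scale base/detail fusion of a greyscale RGB channel and a depth map; the fusion weighting here is a common guided-filter fusion scheme, not necessarily the exact GFCE formulation:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-2):
    """Edge-preserving smoothing of p, guided by image I (both float arrays)."""
    w = 2 * r + 1
    mean_I  = uniform_filter(I, w)
    mean_p  = uniform_filter(p, w)
    corr_Ip = uniform_filter(I * p, w)
    corr_II = uniform_filter(I * I, w)
    var_I  = corr_II - mean_I * mean_I   # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p   # local covariance of guide and input
    a = cov_Ip / (var_I + eps)           # per-window linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, w) * I + uniform_filter(b, w)

def fuse(rgb_gray, depth, r=8, eps=1e-2):
    """Two-scale fusion: average the base layers, keep the stronger detail layer."""
    base_rgb   = guided_filter(rgb_gray, rgb_gray, r, eps)
    base_depth = guided_filter(depth, depth, r, eps)
    det_rgb   = rgb_gray - base_rgb
    det_depth = depth - base_depth
    detail = np.where(np.abs(det_rgb) >= np.abs(det_depth), det_rgb, det_depth)
    return 0.5 * (base_rgb + base_depth) + detail
```

The edge-preserving property of the guided filter is what lets such schemes inject depth detail without introducing the halo artifacts of plain Gaussian smoothing.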
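Stacking trains a meta-learner on the cross-validated predictions of several base learners, rather than simply averaging them as bagging does. The CoAtNet ensemble itself cannot be reproduced from the abstract, so the sketch below illustrates only the stacking mechanism, with lightweight scikit-learn classifiers standing in for the CoAtNet base models; the ten classes mirror the paper's ten object categories, but the data here is synthetic:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for fused-image features: ten classes,
# mirroring the paper's ten space-object categories.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Base learners (stand-ins for individually trained CoAtNets) feed a
# logistic-regression meta-learner fitted on their out-of-fold predictions.
stack = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_tr, y_tr)
accuracy = stack.score(X_te, y_te)
```

Because the meta-learner sees where each base model is reliable, stacking can exceed the accuracy of any single base model, which is consistent with the paper's finding that fusion plus stacking gave the best classification accuracy.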