Identifying muon rings in VERITAS data using convolutional neural networks trained on images classified with Muon Hunters 2

Kevin Flanagan, John Quinn, D. Wright, H. Dickinson, P. Wilcox, Michael Laraia, S. Serjeant
{"title":"Identifying muon rings in VERITAS data using convolutional neural networks trained on images classified with Muon Hunters 2","authors":"Kevin Flanagan, John Quinn, D. Wright, H. Dickinson, P. Wilcox, Michael Laraia, S. Serjeant","doi":"10.22323/1.395.0766","DOIUrl":null,"url":null,"abstract":"Muons from extensive air showers appear as rings in images taken with imaging atmospheric Cherenkov telescopes, such as VERITAS. These muon-ring images are used for the calibration of the VERITAS telescopes, however the calibration accuracy can be improved with a more efficient muon-identification algorithm. Convolutional neural networks (CNNs) are used in many state-ofthe-art image-recognition systems and are ideal for muon image identification, once trained on a suitable dataset with labels for muon images. However, by training a CNN on a dataset labelled by existing algorithms, the performance of the CNN would be limited by the suboptimal muonidentification efficiency of the original algorithms. Muon Hunters 2 is a citizen science project that asks users to label grids of VERITAS telescope images, stating which images contain muon rings. Each image is labelled 10 times by independent volunteers, and the votes are aggregated and used to assign a ‘muon’ or ‘non-muon’ label to the corresponding image. An analysis was performed using an expert-labelled dataset in order to determine the optimal vote percentage cut-offs for assigning labels to each image for CNN training. This was optimised so as to identify as many muon images as possible while avoiding false positives. The performance of this model greatly improves on existing muon identification algorithms, identifying approximately 30 times the number of muon images identified by the current algorithm implemented in VEGAS (VERITAS Gamma-ray Analysis Suite), and roughly 2.5 times the number identified by the Hough transform method, along with significantly outperforming a CNN trained on VEGAS-labelled data.","PeriodicalId":20473,"journal":{"name":"Proceedings of 37th International Cosmic Ray Conference — PoS(ICRC2021)","volume":"207 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2021-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of 37th International Cosmic Ray Conference — PoS(ICRC2021)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.22323/1.395.0766","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Muons from extensive air showers appear as rings in images taken with imaging atmospheric Cherenkov telescopes such as VERITAS. These muon-ring images are used for the calibration of the VERITAS telescopes; however, the calibration accuracy can be improved with a more efficient muon-identification algorithm. Convolutional neural networks (CNNs) are used in many state-of-the-art image-recognition systems and are ideal for muon-image identification, once trained on a suitable dataset with labels for muon images. However, if a CNN were trained on a dataset labelled by existing algorithms, its performance would be limited by the suboptimal muon-identification efficiency of those algorithms. Muon Hunters 2 is a citizen science project that asks volunteers to label grids of VERITAS telescope images, indicating which images contain muon rings. Each image is labelled 10 times by independent volunteers, and the votes are aggregated and used to assign a ‘muon’ or ‘non-muon’ label to the corresponding image. An analysis was performed using an expert-labelled dataset to determine the optimal vote-percentage cut-offs for assigning labels to each image for CNN training, optimised to identify as many muon images as possible while avoiding false positives. The resulting model greatly improves on existing muon-identification algorithms, identifying approximately 30 times as many muon images as the current algorithm implemented in VEGAS (VERITAS Gamma-ray Analysis Suite) and roughly 2.5 times as many as the Hough-transform method, while also significantly outperforming a CNN trained on VEGAS-labelled data.
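The abstract describes assigning training labels from aggregated volunteer votes using vote-percentage cut-offs. The sketch below illustrates that labelling step only; the function name, the example image IDs, and the cut-off values are illustrative assumptions, not the optimised values or code from the paper. A standard binary-classification CNN would then be trained on the images that receive a label.

```python
def label_from_votes(muon_votes, total_votes=10,
                     muon_cutoff=0.8, non_muon_cutoff=0.2):
    """Assign a training label from aggregated volunteer votes.

    The cut-off values here are placeholders; the paper optimises them
    against an expert-labelled dataset.
    """
    fraction = muon_votes / total_votes
    if fraction >= muon_cutoff:
        return "muon"          # confident positive -> training label 1
    if fraction <= non_muon_cutoff:
        return "non-muon"      # confident negative -> training label 0
    return None                # ambiguous: exclude from the training set


# Example: build a labelled training set from per-image vote counts.
# `vote_counts` (hypothetical) maps an image ID to its number of 'muon'
# votes out of the 10 independent classifications.
vote_counts = {"img_001": 9, "img_002": 1, "img_003": 5}
training_labels = {
    img_id: label
    for img_id, votes in vote_counts.items()
    if (label := label_from_votes(votes)) is not None
}
print(training_labels)  # {'img_001': 'muon', 'img_002': 'non-muon'}
```

Using two separate cut-offs (rather than a single 50% threshold) lets ambiguous images be dropped from training entirely, which matches the stated goal of maximising identified muon images while avoiding false positives.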