Hyper-YOLO: When Visual Object Detection Meets Hypergraph Computation

Yifan Feng, Jiangang Huang, Shaoyi Du, Shihui Ying, Jun-Hai Yong, Yipeng Li, Guiguang Ding, Rongrong Ji, Yue Gao
{"title":"Hyper-YOLO: When Visual Object Detection Meets Hypergraph Computation","authors":"Yifan Feng;Jiangang Huang;Shaoyi Du;Shihui Ying;Jun-Hai Yong;Yipeng Li;Guiguang Ding;Rongrong Ji;Yue Gao","doi":"10.1109/TPAMI.2024.3524377","DOIUrl":null,"url":null,"abstract":"We introduce Hyper-YOLO, a new object detection method that integrates hypergraph computations to capture the complex high-order correlations among visual features. Traditional YOLO models, while powerful, have limitations in their neck designs that restrict the integration of cross-level features and the exploitation of high-order feature interrelationships. To address these challenges, we propose the Hypergraph Computation Empowered Semantic Collecting and Scattering (HGC-SCS) framework, which transposes visual feature maps into a semantic space and constructs a hypergraph for high-order message propagation. This enables the model to acquire both semantic and structural information, advancing beyond conventional feature-focused learning. Hyper-YOLO incorporates the proposed Mixed Aggregation Network (MANet) in its backbone for enhanced feature extraction and introduces the Hypergraph-Based Cross-Level and Cross-Position Representation Network (HyperC2Net) in its neck. HyperC2Net operates across five scales and breaks free from traditional grid structures, allowing for sophisticated high-order interactions across levels and positions. This synergy of components positions Hyper-YOLO as a state-of-the-art architecture in various scale models, as evidenced by its superior performance on the COCO dataset. Specifically, Hyper-YOLO-N significantly outperforms the advanced YOLOv8-N and YOLOv9-T with 12% <inline-formula><tex-math>$\\text{AP}^{val}$</tex-math></inline-formula> and 9% <inline-formula><tex-math>$\\text{AP}^{val}$</tex-math></inline-formula> improvements.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 4","pages":"2388-2401"},"PeriodicalIF":18.6000,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on pattern analysis and machine intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10818703/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

We introduce Hyper-YOLO, a new object detection method that integrates hypergraph computations to capture the complex high-order correlations among visual features. Traditional YOLO models, while powerful, have limitations in their neck designs that restrict the integration of cross-level features and the exploitation of high-order feature interrelationships. To address these challenges, we propose the Hypergraph Computation Empowered Semantic Collecting and Scattering (HGC-SCS) framework, which transposes visual feature maps into a semantic space and constructs a hypergraph for high-order message propagation. This enables the model to acquire both semantic and structural information, advancing beyond conventional feature-focused learning. Hyper-YOLO incorporates the proposed Mixed Aggregation Network (MANet) in its backbone for enhanced feature extraction and introduces the Hypergraph-Based Cross-Level and Cross-Position Representation Network (HyperC2Net) in its neck. HyperC2Net operates across five scales and breaks free from traditional grid structures, allowing for sophisticated high-order interactions across levels and positions. This synergy of components positions Hyper-YOLO as a state-of-the-art architecture across model scales, as evidenced by its superior performance on the COCO dataset. Specifically, Hyper-YOLO-N significantly outperforms the advanced YOLOv8-N and YOLOv9-T, with improvements of 12% and 9% in $\text{AP}^{val}$, respectively.
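The abstract describes a collect-and-scatter pattern over a hypergraph built in semantic space, but gives no implementation details. As a rough illustration of how such high-order message propagation typically works, the sketch below builds hyperedges by grouping each semantic point with its k nearest neighbours and then performs one two-stage (vertex-to-hyperedge-to-vertex) propagation step with simple degree normalization. The function names (`knn_hypergraph`, `hypergraph_conv`) and the kNN construction are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def knn_hypergraph(x: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Illustrative hyperedge construction (an assumption, not the paper's code):
    each hyperedge groups a semantic point with its k nearest neighbours.
    x: (N, C) semantic features, e.g. flattened multi-scale feature maps.
    Returns H: (N, N) incidence matrix, column j = hyperedge around point j."""
    n = x.size(0)
    dist = torch.cdist(x, x)                       # (N, N) pairwise distances
    idx = dist.topk(k + 1, largest=False).indices  # each point plus its k neighbours
    H = torch.zeros(n, n)
    H[idx, torch.arange(n).unsqueeze(1)] = 1.0     # mark the members of each hyperedge
    return H

def hypergraph_conv(x: torch.Tensor, H: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    """One round of high-order message propagation with simplified degree
    normalization: collect vertex features onto hyperedges, then scatter
    hyperedge features back to vertices (X' = D_v^{-1} H D_e^{-1} H^T X Theta)."""
    Dv = H.sum(dim=1).clamp(min=1.0)               # vertex degrees
    De = H.sum(dim=0).clamp(min=1.0)               # hyperedge degrees
    edge_feat = (H.t() @ x) / De.unsqueeze(1)      # collect: average members into hyperedges
    x_out = (H @ edge_feat) / Dv.unsqueeze(1)      # scatter: average hyperedges back to vertices
    return x_out @ theta

# Minimal usage: 100 semantic points with 64 channels.
x = torch.randn(100, 64)
theta = torch.randn(64, 64)
H = knn_hypergraph(x, k=8)
x_high_order = hypergraph_conv(x, H, theta)        # (100, 64)
```

Because each hyperedge connects k + 1 vertices at once, a single collect-scatter round already mixes features beyond pairwise grid or graph neighbourhoods, which is the high-order interaction the abstract refers to.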