PelviNet: A Collaborative Multi-agent Convolutional Network for Enhanced Pelvic Image Registration.

Rguibi Zakaria, Hajami Abdelmajid, Zitouni Dya, Allali Hakim
{"title":"PelviNet: A Collaborative Multi-agent Convolutional Network for Enhanced Pelvic Image Registration.","authors":"Rguibi Zakaria, Hajami Abdelmajid, Zitouni Dya, Allali Hakim","doi":"10.1007/s10278-024-01249-w","DOIUrl":null,"url":null,"abstract":"<p><p>PelviNet introduces a groundbreaking multi-agent convolutional network architecture tailored for enhancing pelvic image registration. This innovative framework leverages shared convolutional layers, enabling synchronized learning among agents and ensuring an exhaustive analysis of intricate 3D pelvic structures. The architecture combines max pooling, parametric ReLU activations, and agent-specific layers to optimize both individual and collective decision-making processes. A communication mechanism efficiently aggregates outputs from these shared layers, enabling agents to make well-informed decisions by harnessing combined intelligence. PelviNet's evaluation centers on both quantitative accuracy metrics and visual representations to elucidate agents' performance in pinpointing optimal landmarks. Empirical results demonstrate PelviNet's superiority over traditional methods, achieving an average image-wise error of 2.8 mm, a subject-wise error of 3.2 mm, and a mean Euclidean distance error of 3.0 mm. These quantitative results highlight the model's efficiency and precision in landmark identification, crucial for medical contexts such as radiation therapy, where exact landmark identification significantly influences treatment outcomes. By reliably identifying critical structures, PelviNet advances pelvic image analysis and offers potential enhancements for broader medical imaging applications, marking a significant step forward in computational healthcare.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"957-966"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950488/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of imaging informatics in medicine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s10278-024-01249-w","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/9/9 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

PelviNet introduces a groundbreaking multi-agent convolutional network architecture tailored for enhancing pelvic image registration. This innovative framework leverages shared convolutional layers, enabling synchronized learning among agents and ensuring an exhaustive analysis of intricate 3D pelvic structures. The architecture combines max pooling, parametric ReLU activations, and agent-specific layers to optimize both individual and collective decision-making processes. A communication mechanism efficiently aggregates outputs from these shared layers, enabling agents to make well-informed decisions by harnessing combined intelligence. PelviNet's evaluation centers on both quantitative accuracy metrics and visual representations to elucidate agents' performance in pinpointing optimal landmarks. Empirical results demonstrate PelviNet's superiority over traditional methods, achieving an average image-wise error of 2.8 mm, a subject-wise error of 3.2 mm, and a mean Euclidean distance error of 3.0 mm. These quantitative results highlight the model's efficiency and precision in landmark identification, crucial for medical contexts such as radiation therapy, where exact landmark identification significantly influences treatment outcomes. By reliably identifying critical structures, PelviNet advances pelvic image analysis and offers potential enhancements for broader medical imaging applications, marking a significant step forward in computational healthcare.
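The architecture described above (a shared convolutional trunk with max pooling and parametric ReLU activations, agent-specific decision layers, and a communication step that aggregates the shared-layer outputs before each agent decides) can be sketched as follows. This is a minimal, illustrative PyTorch sketch under assumptions: the number of agents, layer widths, patch sizes, and the mean-based message aggregation are placeholders chosen for clarity, not the published PelviNet configuration.

```python
# Illustrative sketch only: layer counts, agent count, and the mean-based
# aggregation below are assumptions, not the published PelviNet design.
import torch
import torch.nn as nn


class SharedTrunk(nn.Module):
    """Shared 3D convolutional layers used by every agent (conv -> PReLU -> max pool)."""

    def __init__(self, in_channels: int = 1, features: int = 32):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv3d(in_channels, features, kernel_size=3, padding=1),
            nn.PReLU(),
            nn.MaxPool3d(kernel_size=2),
            nn.Conv3d(features, features * 2, kernel_size=3, padding=1),
            nn.PReLU(),
            nn.MaxPool3d(kernel_size=2),
        )

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        return self.layers(volume)


class AgentHead(nn.Module):
    """Agent-specific layers mapping shared features plus an aggregated message to a 3D landmark estimate."""

    def __init__(self, feature_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feature_dim * 2, 128),  # own features concatenated with the shared message
            nn.PReLU(),
            nn.Linear(128, 3),  # (x, y, z) landmark coordinate / displacement
        )

    def forward(self, own: torch.Tensor, message: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([own, message], dim=-1))


class MultiAgentLandmarkNet(nn.Module):
    """Agents share one convolutional trunk; a simple mean over their pooled
    features stands in for the communication mechanism before each agent decides."""

    def __init__(self, num_agents: int = 3, feature_dim: int = 64):
        super().__init__()
        self.trunk = SharedTrunk(features=feature_dim // 2)
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.heads = nn.ModuleList(AgentHead(feature_dim) for _ in range(num_agents))

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, num_agents, 1, D, H, W) -- one sub-volume per agent
        feats = [self.pool(self.trunk(patches[:, i])).flatten(1) for i in range(len(self.heads))]
        message = torch.stack(feats, dim=1).mean(dim=1)  # aggregated "combined intelligence"
        return torch.stack([head(f, message) for head, f in zip(self.heads, feats)], dim=1)


if __name__ == "__main__":
    net = MultiAgentLandmarkNet(num_agents=3)
    dummy = torch.randn(2, 3, 1, 32, 32, 32)  # batch of 2, three 32^3 patches per subject
    print(net(dummy).shape)  # -> torch.Size([2, 3, 3]): one 3D prediction per agent
```

The reported metrics (image-wise, subject-wise, and mean Euclidean distance errors in millimetres) would then be computed from the distance between each agent's predicted landmark and the ground-truth landmark, averaged per image or per subject.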
