Learning to Match 2D Keypoints Across Preoperative MR and Intraoperative Ultrasound

Hassan Rasheed, Reuben Dorent, Maximilian Fehrentz, Tina Kapur, William M. Wells III, Alexandra Golby, Sarah Frisken, Julia A. Schnabel, Nazim Haouchine
{"title":"学习匹配术前磁共振和术中超声的二维关键点","authors":"Hassan Rasheed, Reuben Dorent, Maximilian Fehrentz, Tina Kapur, William M. Wells III, Alexandra Golby, Sarah Frisken, Julia A. Schnabel, Nazim Haouchine","doi":"arxiv-2409.08169","DOIUrl":null,"url":null,"abstract":"We propose in this paper a texture-invariant 2D keypoints descriptor\nspecifically designed for matching preoperative Magnetic Resonance (MR) images\nwith intraoperative Ultrasound (US) images. We introduce a\nmatching-by-synthesis strategy, where intraoperative US images are synthesized\nfrom MR images accounting for multiple MR modalities and intraoperative US\nvariability. We build our training set by enforcing keypoints localization over\nall images then train a patient-specific descriptor network that learns\ntexture-invariant discriminant features in a supervised contrastive manner,\nleading to robust keypoints descriptors. Our experiments on real cases with\nground truth show the effectiveness of the proposed approach, outperforming the\nstate-of-the-art methods and achieving 80.35% matching precision on average.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":"39 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Learning to Match 2D Keypoints Across Preoperative MR and Intraoperative Ultrasound\",\"authors\":\"Hassan Rasheed, Reuben Dorent, Maximilian Fehrentz, Tina Kapur, William M. Wells III, Alexandra Golby, Sarah Frisken, Julia A. Schnabel, Nazim Haouchine\",\"doi\":\"arxiv-2409.08169\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We propose in this paper a texture-invariant 2D keypoints descriptor\\nspecifically designed for matching preoperative Magnetic Resonance (MR) images\\nwith intraoperative Ultrasound (US) images. We introduce a\\nmatching-by-synthesis strategy, where intraoperative US images are synthesized\\nfrom MR images accounting for multiple MR modalities and intraoperative US\\nvariability. We build our training set by enforcing keypoints localization over\\nall images then train a patient-specific descriptor network that learns\\ntexture-invariant discriminant features in a supervised contrastive manner,\\nleading to robust keypoints descriptors. 
Our experiments on real cases with\\nground truth show the effectiveness of the proposed approach, outperforming the\\nstate-of-the-art methods and achieving 80.35% matching precision on average.\",\"PeriodicalId\":501130,\"journal\":{\"name\":\"arXiv - CS - Computer Vision and Pattern Recognition\",\"volume\":\"39 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computer Vision and Pattern Recognition\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.08169\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computer Vision and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08169","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
We propose in this paper a texture-invariant 2D keypoint descriptor specifically designed for matching preoperative Magnetic Resonance (MR) images with intraoperative Ultrasound (US) images. We introduce a matching-by-synthesis strategy, in which intraoperative US images are synthesized from MR images, accounting for multiple MR modalities and intraoperative US variability. We build our training set by enforcing keypoint localization over all images, then train a patient-specific descriptor network that learns texture-invariant discriminative features in a supervised contrastive manner, leading to robust keypoint descriptors. Our experiments on real cases with ground truth show the effectiveness of the proposed approach, which outperforms state-of-the-art methods and achieves 80.35% matching precision on average.
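
Two steps in the abstract lend themselves to a short illustration: training descriptors with a supervised contrastive objective so that the same anatomical keypoint maps to similar features across MR and synthesized US, and scoring cross-modal matches by precision. The PyTorch sketch below is a minimal illustration under assumptions not stated in the abstract (an InfoNCE-style loss, L2-normalized descriptors, mutual nearest-neighbour matching); it is not the authors' implementation, and the function names and toy data are hypothetical.

```python
# Minimal sketch (not the authors' code) of: (1) a supervised contrastive loss that
# pulls descriptors of the same keypoint together across modalities, and (2) matching
# precision via mutual nearest neighbours. The network, patch extraction, and the
# MR-to-US synthesis step are out of scope; descriptors are assumed to be one
# D-dimensional vector per keypoint.

import torch
import torch.nn.functional as F


def supervised_contrastive_loss(desc_a, desc_b, temperature=0.07):
    """InfoNCE-style loss between two descriptor sets.

    desc_a: (N, D) descriptors of N keypoints in the MR image.
    desc_b: (N, D) descriptors of the same keypoints in the synthesized US image;
            row i of desc_b is the positive for row i of desc_a, other rows are negatives.
    """
    a = F.normalize(desc_a, dim=1)
    b = F.normalize(desc_b, dim=1)
    logits = a @ b.t() / temperature          # (N, N) cosine-similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    # Symmetric cross-entropy: penalize both MR->US and US->MR mismatches.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def mutual_nn_matches(desc_a, desc_b):
    """Return index pairs (i, j) that are mutual nearest neighbours."""
    sim = F.normalize(desc_a, dim=1) @ F.normalize(desc_b, dim=1).t()
    nn_ab = sim.argmax(dim=1)                 # best US match for each MR keypoint
    nn_ba = sim.argmax(dim=0)                 # best MR match for each US keypoint
    idx_a = torch.arange(sim.size(0))
    keep = nn_ba[nn_ab] == idx_a              # keep only mutually consistent matches
    return idx_a[keep], nn_ab[keep]


def matching_precision(desc_a, desc_b):
    """Fraction of mutual-NN matches that hit the ground-truth correspondence (row i <-> row i)."""
    idx_a, idx_b = mutual_nn_matches(desc_a, desc_b)
    if idx_a.numel() == 0:
        return 0.0
    return (idx_a == idx_b).float().mean().item()


if __name__ == "__main__":
    # Toy usage with random descriptors standing in for network outputs.
    torch.manual_seed(0)
    mr_desc = torch.randn(64, 128)
    us_desc = mr_desc + 0.1 * torch.randn(64, 128)   # noisy "cross-modal" copies
    print("loss:", supervised_contrastive_loss(mr_desc, us_desc).item())
    print("precision:", matching_precision(mr_desc, us_desc))
```

The symmetric form of the loss is one common design choice for cross-modal descriptor learning; the mutual nearest-neighbour filter mirrors how matching precision is typically reported, counting a match as correct only when both directions agree on the ground-truth correspondence.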