DDKG: A Dual Domain Knowledge Guidance strategy for localization and diagnosis of non-displaced femoral neck fractures
Jing Yang, Lianxin Wang, Chen Lin, Jiacheng Wang, Liansheng Wang
Medical Image Analysis, Volume 100, Article 103393 (published 2024-11-19)
DOI: 10.1016/j.media.2024.103393
Citations: 0
Abstract
X-ray is the primary tool for diagnosing fractures, crucial for determining their type, location, and severity. However, non-displaced femoral neck fractures (ND-FNF) can be challenging to identify due to subtle cracks and complex anatomical structures. Most deep learning-based methods for diagnosing ND-FNF rely on cropped images, necessitating manual annotation of the hip location, which increases annotation costs. To address this challenge, we propose Dual Domain Knowledge Guidance (DDKG), which harnesses spatial and semantic domain knowledge to guide the model in acquiring robust representations of ND-FNF across the whole X-ray image. Specifically, DDKG comprises two key modules: the Spatial Aware Module (SAM) and the Semantic Coordination Module (SCM). SAM employs limited positional supervision to guide the model in focusing on the hip joint region and reducing background interference. SCM integrates information from radiological reports, using prior knowledge from large language models to extract critical information related to ND-FNF and guide the model toward learning relevant visual representations. During inference, the model requires only the whole X-ray image for accurate diagnosis, with no additional information. The model was validated on datasets from four different centers, showing consistent accuracy and robustness. Codes and models are available at https://github.com/Yjing07/DDKG.
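The two guidance modules described in the abstract suggest a composite training objective: a standard classification loss, a spatial term steering attention toward the hip joint under limited positional supervision, and a semantic term aligning visual features with report-derived text embeddings. The sketch below is a toy NumPy illustration of that idea, not the authors' implementation; the specific loss forms (MSE for spatial guidance, cosine distance for alignment), the weight parameters `l_spatial` and `l_semantic`, and all function names are assumptions for exposition.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the true class.
    p = softmax(logits)
    return float(-np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-9)))

def spatial_guidance_loss(attn_map, hip_mask):
    # Penalize attention that strays from a coarse hip-region mask
    # (stand-in for SAM's limited positional supervision).
    return float(np.mean((attn_map - hip_mask) ** 2))

def semantic_alignment_loss(img_emb, txt_emb):
    # 1 - cosine similarity between visual features and
    # report-derived text embeddings (stand-in for SCM).
    a = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    b = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(a * b, axis=1)))

def ddkg_style_loss(logits, labels, attn_map, hip_mask,
                    img_emb, txt_emb, l_spatial=0.5, l_semantic=0.5):
    # Weighted sum of the three terms; weights are illustrative.
    return (cross_entropy(logits, labels)
            + l_spatial * spatial_guidance_loss(attn_map, hip_mask)
            + l_semantic * semantic_alignment_loss(img_emb, txt_emb))
```

Note that the spatial and semantic terms only shape training: at test time a classifier trained this way still takes just the whole X-ray, matching the abstract's claim that no extra information is needed at inference.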
About the journal
Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.