Automated radiology report generation has emerged as a crucial technology for improving clinical workflow efficiency and alleviating the documentation burden on radiologists. Current approaches predominantly employ encoder-decoder architectures, but they often overemphasize text generation while neglecting two critical issues: inherent biases in the textual data distribution that limit descriptions of abnormal regions, and inadequate cross-modal interaction. To address these challenges, we propose an innovative Image-Tag Adapter (ITAdapter) framework that dynamically balances visual and diagnostic information during decoding, with particular attention to optimizing feature selection for different types of generated words. The framework incorporates two key components: a Retrieval Knowledge Enhancer (RKE) that exploits the cross-modal retrieval capability of pre-trained CLIP models to obtain relevant clinical reports as diagnostic references, and an Image-Tag Adapter (ITA) that intelligently fuses visual information with diagnostic information from disease tags. For model optimization, we combine reinforcement learning with knowledge distillation to enable effective knowledge transfer through iterative training. Extensive experiments on the IU X-ray and MIMIC-CXR benchmark datasets demonstrate our method's effectiveness in generating more accurate and clinically relevant reports, achieving the highest performance scores: on IU X-ray, BLEU-1 = 0.536, BLEU-4 = 0.206, and METEOR = 0.220; on MIMIC-CXR, BLEU-1 = 0.411, BLEU-4 = 0.141, and METEOR = 0.152.
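
To make the retrieval idea behind the RKE concrete, the following is a minimal sketch of CLIP-based cross-modal retrieval of reference reports, assuming a plain top-k cosine-similarity search over a report corpus. The checkpoint name, the `retrieve_reference_reports` helper, and the flat similarity search are illustrative assumptions, not the paper's actual implementation.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Assumed off-the-shelf CLIP checkpoint; the paper does not specify one here.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def retrieve_reference_reports(image_path, report_corpus, top_k=3):
    """Return the top-k corpus reports most similar to the query image."""
    image = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        # Embed the query image and every candidate report in CLIP's joint space.
        img_inputs = processor(images=image, return_tensors="pt")
        img_emb = model.get_image_features(**img_inputs)            # (1, d)
        txt_inputs = processor(text=report_corpus, return_tensors="pt",
                               padding=True, truncation=True)       # CLIP truncates long reports
        txt_emb = model.get_text_features(**txt_inputs)              # (N, d)
    # Cosine similarity between the image embedding and all report embeddings.
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    scores = (img_emb @ txt_emb.T).squeeze(0)                        # (N,)
    top = scores.topk(min(top_k, len(report_corpus))).indices
    return [report_corpus[i] for i in top.tolist()]

# Hypothetical usage: the retrieved reports would serve as diagnostic references
# for the decoder rather than being returned to the user directly.
# refs = retrieve_reference_reports("chest_xray.png",
#                                   ["No acute cardiopulmonary process.",
#                                    "Right lower lobe opacity concerning for pneumonia."])
```

In the full framework, the retrieved references and the disease-tag information would be fused with the visual features by the ITA during decoding; that fusion step is not shown in this sketch.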
