
Latest publications from Medical image understanding and analysis: 26th annual conference, MIUA 2022, Cambridge, UK, July 27-29, 2022, proceedings

Weakly Supervised Captioning of Ultrasound Images.
Mohammad Alsharid, Harshita Sharma, Lior Drukker, Aris T Papageorgiou, J Alison Noble

Medical image captioning models generate text to describe the semantic contents of an image, aiding non-experts in understanding and interpretation. We propose a weakly supervised approach to improve the performance of image captioning models on small image-text datasets by leveraging a large anatomically-labelled image classification dataset. Our method generates pseudo-captions (weak labels) for caption-less but anatomically-labelled (class-labelled) images using an encoder-decoder sequence-to-sequence model. The augmented dataset is then used to train an image-captioning model in a weakly supervised manner. For fetal ultrasound, we demonstrate that the proposed augmentation approach outperforms the baseline on semantics- and syntax-based metrics, with nearly twice the improvement on BLEU-1 and ROUGE-L. Moreover, models trained with the proposed data augmentation are superior to those trained with existing regularization techniques. This work allows seamless automatic annotation of images that lack human-prepared descriptive captions for training image-captioning models. Using pseudo-captions in the training data is particularly useful for medical image captioning, where obtaining real captions demands significant time and effort from medical experts.
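To make the pipeline concrete, the sketch below illustrates the pseudo-captioning idea in PyTorch: a small label-to-caption sequence-to-sequence model (trained on the image-text subset) is run over class-labelled but caption-less images to produce weak captions that augment the captioning training set. This is an illustrative sketch under assumptions, not the authors' implementation; all names (PseudoCaptionGenerator, vocabulary size, label ids) are hypothetical.

```python
# Minimal sketch of pseudo-caption generation (illustrative assumptions only,
# not the authors' code): a small label-to-caption seq2seq produces weak
# captions for class-labelled but caption-less images.
import torch
import torch.nn as nn

class PseudoCaptionGenerator(nn.Module):
    """Encode an anatomical class label; greedily decode a pseudo-caption."""
    def __init__(self, n_labels, vocab_size, hidden=256, max_len=20):
        super().__init__()
        self.max_len = max_len
        self.label_emb = nn.Embedding(n_labels, hidden)   # "encoder": label -> initial state
        self.token_emb = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, labels, bos_id=1):
        h = self.label_emb(labels).unsqueeze(0)           # (1, B, H) GRU initial state
        tok = torch.full((labels.size(0), 1), bos_id, dtype=torch.long)
        tokens = []
        for _ in range(self.max_len):                     # greedy decoding, step by step
            step, h = self.decoder(self.token_emb(tok), h)
            tok = self.out(step[:, -1]).argmax(-1, keepdim=True)
            tokens.append(tok)
        return torch.cat(tokens, dim=1)                   # (B, max_len) pseudo-caption token ids

# After the generator has been fitted on the small dataset's (label, caption) pairs,
# caption-less images receive weak captions like this (sizes and ids are made up):
gen = PseudoCaptionGenerator(n_labels=13, vocab_size=500)
anatomy_ids = torch.tensor([0, 4, 7])                     # e.g. abdomen, head, femur classes
pseudo_captions = gen(anatomy_ids)                        # weak labels for the captioning model
```

In the weakly supervised setting described above, the pseudo-captioned images would simply be mixed with the genuinely captioned ones when training the downstream image-captioning model.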

{"title":"Weakly Supervised Captioning of Ultrasound Images.","authors":"Mohammad Alsharid,&nbsp;Harshita Sharma,&nbsp;Lior Drukker,&nbsp;Aris T Papageorgiou,&nbsp;J Alison Noble","doi":"10.1007/978-3-031-12053-4_14","DOIUrl":"https://doi.org/10.1007/978-3-031-12053-4_14","url":null,"abstract":"<p><p>Medical image captioning models generate text to describe the semantic contents of an image, aiding the non-experts in understanding and interpretation. We propose a weakly-supervised approach to improve the performance of image captioning models on small image-text datasets by leveraging a large anatomically-labelled image classification dataset. Our method generates pseudo-captions (weak labels) for caption-less but anatomically-labelled (class-labelled) images using an encoder-decoder sequence-to-sequence model. The augmented dataset is used to train an image-captioning model in a weakly supervised learning manner. For fetal ultrasound, we demonstrate that the proposed augmentation approach outperforms the baseline on semantics and syntax-based metrics, with nearly twice as much improvement in value on <i>BLEU-1</i> and <i>ROUGE-L</i>. Moreover, we observe that superior models are trained with the proposed data augmentation, when compared with the existing regularization techniques. This work allows seamless automatic annotation of images that lack human-prepared descriptive captions for training image-captioning models. Using pseudo-captions in the training data is particularly useful for medical image captioning when significant time and effort of medical experts is required to obtain real image captions.</p>","PeriodicalId":74147,"journal":{"name":"Medical image understanding and analysis : 26th annual conference, MIUA 2022, Cambridge, UK, July 27-29, 2022, proceedings. Medical Image Understanding and Analysis (Conference) (26th : 2022 : Cambridge, England)","volume":"13413 ","pages":"187-198"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7614238/pdf/EMS159395.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9736100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
STAMP: A Self-training Student-Teacher Augmentation-Driven Meta Pseudo-Labeling Framework for 3D Cardiac MRI Image Segmentation.
S M Kamrul Hasan, Cristian Linte

Medical image segmentation has benefitted significantly from deep learning architectures. Furthermore, semi-supervised learning (SSL) has led to a significant improvement in overall model performance by leveraging abundant unlabeled data. Nevertheless, one shortcoming of pseudo-label-based semi-supervised learning is pseudo-labeling bias, whose mitigation is the focus of this work. Here we propose a simple yet effective SSL framework for image segmentation: STAMP (Student-Teacher Augmentation-driven consistency regularization via Meta Pseudo-Labeling). The proposed method uses self-training (through meta pseudo-labeling) in concert with a Teacher network that instructs the Student network by generating pseudo-labels for unlabeled input data. Unlike pseudo-labeling methods, in which the Teacher network remains unchanged, meta pseudo-labeling allows the Teacher network to constantly adapt in response to the performance of the Student network on the labeled dataset, enabling the Teacher to identify more effective pseudo-labels to instruct the Student. Moreover, to improve generalization and reduce the error rate, we apply both strong and weak data augmentation policies to ensure the segmentor outputs a consistent probability distribution regardless of the augmentation level. Our extensive experimentation with varied quantities of labeled data in the training sets demonstrates the effectiveness of our model in segmenting the left atrial cavity from Gadolinium-enhanced magnetic resonance (GE-MR) images. By effectively exploiting unlabeled data with weak and strong augmentation, our proposed model yielded a statistically significant 2.6% improvement (p < 0.001) in Dice and a 4.4% improvement (p < 0.001) in Jaccard over other state-of-the-art SSL methods when using only 10% labeled data for training.
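Read as a training loop, the Teacher-Student interplay above has three steps: the Teacher pseudo-labels weakly augmented unlabeled volumes, the Student fits those pseudo-labels under strong augmentation (the consistency term), and the Teacher is then updated from the Student's loss on the labeled set. The PyTorch sketch below is a simplified illustration under assumed names and toy shapes; the tiny convolutional stack stands in for a real 3D segmentation network, and the scalar-feedback Teacher update is a crude stand-in for the full meta-gradient used in meta pseudo-labeling.

```python
# Simplified sketch of a meta pseudo-labeling step (assumptions, not the
# authors' implementation): Teacher pseudo-labels, Student consistency
# training, then a feedback-driven Teacher update.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_segmentor():
    # Stand-in for a real 3D segmentation backbone (e.g. a U-Net in practice).
    return nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv3d(8, 2, 3, padding=1))   # 2 classes: background / left atrium

teacher, student = make_segmentor(), make_segmentor()
opt_t = torch.optim.Adam(teacher.parameters(), lr=1e-4)
opt_s = torch.optim.Adam(student.parameters(), lr=1e-4)

def weak_aug(x):                                          # mild geometric change only
    return torch.flip(x, dims=[-1])

def strong_aug(x):                                        # heavier perturbation
    return torch.flip(x, dims=[-1]) + 0.1 * torch.randn_like(x)

unlabeled = torch.randn(2, 1, 16, 32, 32)                 # toy GE-MR volumes
labeled = torch.randn(2, 1, 16, 32, 32)
masks = torch.randint(0, 2, (2, 16, 32, 32))              # ground-truth segmentation

# 1) Teacher pseudo-labels the weakly augmented unlabeled batch.
with torch.no_grad():
    pseudo = teacher(weak_aug(unlabeled)).argmax(dim=1)

# 2) Student learns the pseudo-labels under strong augmentation, so its output
#    distribution stays consistent across augmentation strengths.
student_loss = F.cross_entropy(student(strong_aug(unlabeled)), pseudo)
opt_s.zero_grad(); student_loss.backward(); opt_s.step()

# 3) Meta step: the Teacher is adjusted according to how the updated Student
#    performs on labeled data (scalar feedback here; the actual method uses a
#    meta-gradient through the Student update).
with torch.no_grad():
    feedback = F.cross_entropy(student(labeled), masks)
teacher_loss = feedback * F.cross_entropy(teacher(weak_aug(unlabeled)), pseudo)
opt_t.zero_grad(); teacher_loss.backward(); opt_t.step()
```

A full implementation would also track Dice and Jaccard on a validation set and propagate the exact meta-gradient through the Student update, neither of which is shown in this sketch.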

{"title":"STAMP: A Self-training Student-Teacher Augmentation-Driven Meta Pseudo-Labeling Framework for 3D Cardiac MRI Image Segmentation.","authors":"S M Kamrul Hasan, Cristian Linte","doi":"10.1007/978-3-031-12053-4_28","DOIUrl":"10.1007/978-3-031-12053-4_28","url":null,"abstract":"<p><p>Medical image segmentation has significantly benefitted thanks to deep learning architectures. Furthermore, semi-supervised learning (SSL) has led to a significant improvement in overall model performance by leveraging abundant unlabeled data. Nevertheless, one shortcoming of pseudo-labeled based semi-supervised learning is pseudo-labeling bias, whose mitigation is the focus of this work. Here we propose a simple, yet effective SSL framework for image segmentation-<i>STAMP</i> (<i>Student-Teacher A</i>ugmentation-driven consistency regularization via <i>M</i>eta <i>P</i>seudo-Labeling). The proposed method uses self-training (through meta pseudo-labeling) in concert with a Teacher network that instructs the Student network by generating pseudo-labels given unlabeled input data. Unlike pseudo-labeling methods, for which the Teacher network remains unchanged, meta pseudo-labeling methods allow the Teacher network to constantly adapt in response to the performance of the Student network on the labeled dataset, hence enabling the Teacher to identify more effective pseudo-labels to instruct the Student. Moreover, to improve generalization and reduce error rate, we apply both strong and weak <i>data augmentation</i> policies, to ensure the segmentor outputs a consistent probability distribution regardless of the augmentation level. Our extensive experimentation with varied quantities of labeled data in the training sets demonstrates the effectiveness of our model in segmenting the left atrial cavity from Gadolinium-enhanced magnetic resonance (GE-MR) images. By exploiting unlabeled data with weak and strong augmentation effectively, our proposed model yielded a statistically significant 2.6% improvement <math><mo>(</mo> <mi>p</mi> <mo><</mo> <mn>0.001</mn> <mo>)</mo></math> in Dice and a 4.4% improvement <math><mo>(</mo> <mi>p</mi> <mo><</mo> <mn>0.001</mn> <mo>)</mo></math> in Jaccard over other state-of-the-art SSL methods using only 10% labeled data for training.</p>","PeriodicalId":74147,"journal":{"name":"Medical image understanding and analysis : 26th annual conference, MIUA 2022, Cambridge, UK, July 27-29, 2022, proceedings. Medical Image Understanding and Analysis (Conference) (26th : 2022 : Cambridge, England)","volume":"13413 ","pages":"371-386"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10134897/pdf/nihms-1892744.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9455789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0