Mammographic image metadata learning for model pretraining and explainable predictions

Lester Litchfield, M. Hill, N. Khan, R. Highnam

DOI: 10.1117/12.2626199
Published: 2022-07-13 (journal article, pages 1228616 - 1228616-6)
Venue (as listed): Breast imaging: 11th International Workshop, IWDM 2012, Philadelphia, PA, USA, July 8-11, 2012: proceedings. International Workshop on Breast Imaging (11th: 2012: Philadelphia, Pa.)
Purpose: To introduce a novel technique for pretraining deep neural networks on mammographic images, in which the network learns to predict multiple metadata attributes while simultaneously matching images from the same patient and study, and to demonstrate how this network can be used to produce explainable predictions.

Methods: We trained a neural network on a dataset of 85,558 raw mammographic images and seven types of metadata, using a combination of supervised and self-supervised learning techniques. We evaluated the performance of the model on a dataset of 4,678 raw mammographic images using classification accuracy and correlation. We also designed an ablation study to demonstrate how the model can produce explainable predictions.

Results: The model learned to predict all but one of the seven metadata fields, with classification accuracy ranging from 78% to 99% on the validation dataset. The model predicted which images came from the same patient with over 93% accuracy on a balanced dataset. When a simple X-ray system classifier was built on top of the first model, the representations learned on the initial X-ray system classification task showed by far the largest effect size under ablation, illustrating a method for producing explainable predictions.

Conclusions: It is possible to train a neural network to predict several kinds of mammogram metadata simultaneously. The representations the model learns for these tasks can be summed to produce an image representation that captures features unique to a patient and study. With such a model, ablation offers a promising way to enhance the explainability of deep learning predictions.
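The combined objective described in the abstract — several supervised metadata-classification heads on a shared encoder, plus a self-supervised term that pulls together images from the same patient and study — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the feature dimension, number of classes, margin value, and the use of a cosine-similarity margin loss for the matching term are all assumptions.

```python
import numpy as np

# Illustrative sketch (assumed, not the paper's implementation):
# a shared encoder output feeds seven metadata classification heads
# (supervised) plus a patient/study matching term (self-supervised),
# and the per-task losses are simply added.
rng = np.random.default_rng(1)
D, K = 16, 5  # feature dimension and classes per head (arbitrary choices)

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(logits, label):
    return -np.log(softmax(logits)[label] + 1e-12)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in encoder outputs: two images of the same patient, one of another.
f_a, f_b = rng.normal(size=D), rng.normal(size=D)
f_other = rng.normal(size=D)

# One linear head per metadata attribute (seven, as in the abstract).
heads = [rng.normal(size=(K, D)) for _ in range(7)]
labels = rng.integers(0, K, size=7)

# Supervised part: sum of cross-entropy losses over the metadata heads.
supervised = sum(cross_entropy(W @ f_a, y) for W, y in zip(heads, labels))

# Self-supervised part: same-patient pairs should score higher than
# cross-patient pairs by at least a margin (a common contrastive form).
margin = 0.5
matching = max(0.0, margin - cosine(f_a, f_b) + cosine(f_a, f_other))

total_loss = supervised + matching
```

In a real training loop the encoder and heads would be learned jointly by minimizing `total_loss` over batches; the sketch only shows how the supervised and self-supervised terms combine into one objective.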
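The conclusions describe summing per-task representations into one image representation and then ablating components to see which task's features a downstream prediction relies on. A minimal sketch of that idea, under the assumption that each task contributes an additive embedding and the downstream classifier is linear (the task names and dimensions are hypothetical):

```python
import numpy as np

# Hypothetical per-task embeddings for one image; in the paper these would
# come from the pretrained network's metadata-task representations.
rng = np.random.default_rng(0)
tasks = ["x_ray_system", "view", "laterality", "compression_force"]  # illustrative names
task_embeddings = {t: rng.normal(size=16) for t in tasks}

def combined_representation(embs):
    """Sum the per-task embeddings into one image representation."""
    return np.sum(list(embs.values()), axis=0)

# A fixed linear probe standing in for a downstream classifier.
w = rng.normal(size=16)
def score(rep):
    return float(w @ rep)

# Ablation: zero out one task's embedding at a time and measure how far
# the downstream score moves. The task with the largest effect size is
# the one the prediction depends on most -- the explainability signal
# the abstract describes.
base = score(combined_representation(task_embeddings))
effects = {}
for t in tasks:
    ablated = dict(task_embeddings)
    ablated[t] = np.zeros(16)
    effects[t] = abs(base - score(combined_representation(ablated)))
```

Because the representation is a sum and the probe is linear, ablating task `t` shifts the score by exactly `w @ task_embeddings[t]`, which makes the effect sizes directly comparable across tasks; a nonlinear classifier would need the same zeroing procedure but the effects would have to be measured empirically.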