Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9433876
A. Chakravarty, Avik Kar, Ramanathan Sethuraman, D. Sheet
The shortage of radiologists is inspiring the development of Deep Learning (DL) based solutions for detecting cardiac, thoracic and pulmonary pathologies in chest radiographs through multi-institutional collaborations. However, sharing training data across multiple sites is often impossible due to privacy, ownership and technical challenges. Although Federated Learning (FL) has emerged as a solution, large variations in disease prevalence and co-morbidity distributions across sites may hinder proper training. We propose a DL architecture with a Convolutional Neural Network (CNN) followed by a Graph Neural Network (GNN) to address this issue. The CNN-GNN model is trained with a modified Federated Averaging algorithm: the CNN weights are shared across all sites to extract robust features, while a separate GNN model is trained at each site to leverage the local co-morbidity dependencies for multi-label disease classification. The CheXpert dataset is partitioned across five sites to simulate the FL setup. Federated training showed no significant drop in performance over centralized training. The site-specific GNN models also demonstrated their efficacy in modelling local disease co-occurrence statistics, leading to an average area under the ROC curve of 0.79, a 1.74% improvement.
Title: "Federated Learning for Site Aware Chest Radiograph Screening". Published in the 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI).
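As a concrete illustration of the training scheme described above, here is a minimal sketch of one federated round in plain Python. The per-site model layout (`cnn`/`gnn` weight dictionaries with scalar weights) is a hypothetical stand-in for the paper's actual CNN-GNN architecture; it only shows the modified FedAvg idea of averaging the shared CNN weights while each site keeps its own GNN.

```python
def federated_round(site_models):
    # FedAvg step restricted to the shared backbone: average the CNN
    # weights across sites, then broadcast the average back to every site.
    n = len(site_models)
    cnn_avg = {
        name: sum(m["cnn"][name] for m in site_models) / n
        for name in site_models[0]["cnn"]
    }
    for m in site_models:
        m["cnn"] = dict(cnn_avg)  # site-specific "gnn" weights stay local
    return site_models

# Two toy "sites", each with one shared CNN weight and one local GNN weight.
sites = [
    {"cnn": {"conv1": 1.0}, "gnn": {"edge_w": 0.1}},
    {"cnn": {"conv1": 3.0}, "gnn": {"edge_w": 0.9}},
]
sites = federated_round(sites)
print(sites[0]["cnn"]["conv1"])                       # 2.0 at both sites
print(sites[0]["gnn"]["edge_w"], sites[1]["gnn"]["edge_w"])  # unchanged: 0.1 0.9
```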
Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9434004
Herman Verinaz-Jadan, P. Song, Carmel L. Howe, Peter Quicke, Amanda J. Foust, P. Dragotti
Light Field Microscopy (LFM) is an imaging technique that captures 3D spatial information in a single 2D image. LFM is attractive because of its relatively simple implementation and fast acquisition rate. However, classic 3D reconstruction typically suffers from high computational cost, low lateral resolution, and reconstruction artifacts. In this work, we propose a new physics-based learning approach to improve the performance of the reconstruction under realistic conditions, these being lack of training data, background noise, and high data dimensionality. First, we propose a novel description of the system using a linear convolutional neural network. This description is complemented by a method that compacts the number of views of the acquired light field. Then, this model is used to solve the inverse problem under two scenarios. If labelled data is available, we train an end-to-end network that uses the Learned Iterative Shrinkage and Thresholding Algorithm (LISTA). If no labelled data is available, we propose an unsupervised technique that uses only unlabelled data to train LISTA by making use of Wasserstein Generative Adversarial Networks (WGANs). We experimentally show that our approach performs better than classic strategies in terms of artifact reduction and image quality.
Title: "Deep Learning For Light Field Microscopy Using Physics-Based Models".
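The LISTA update mentioned above can be sketched as a single unrolled shrinkage-thresholding step, x ← soft_threshold(W_e·y + S·x, θ), where W_e, S and θ are learned rather than derived from the forward model. The toy matrices below are illustrative placeholders, not the paper's learned operators:

```python
def soft_threshold(x, theta):
    # Proximal operator of the L1 norm: shrink each entry toward zero by theta.
    return [max(abs(v) - theta, 0.0) * (1.0 if v >= 0 else -1.0) for v in x]

def lista_step(y, x, W_e, S, theta):
    # One unrolled LISTA iteration: x <- soft_threshold(W_e y + S x, theta).
    pre = [
        sum(W_e[i][j] * y[j] for j in range(len(y)))
        + sum(S[i][j] * x[j] for j in range(len(x)))
        for i in range(len(x))
    ]
    return soft_threshold(pre, theta)

# Toy 2D example: identity encoder, no lateral connections.
I2 = [[1.0, 0.0], [0.0, 1.0]]
Z2 = [[0.0, 0.0], [0.0, 0.0]]
x1 = lista_step([1.0, -0.2], [0.0, 0.0], I2, Z2, 0.5)
# The large entry survives shrinkage; the below-threshold entry is zeroed.
```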
Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9434059
Angshuman Paul, Thomas C. Shen, Yifan Peng, Zhiyong Lu, R. Summers
A trained radiologist may learn the visual presentation of a new disease by looking at a few relevant image examples in research articles. However, training a machine learning model in this manner is an arduous task, not only due to the small number of labeled training images but also due to their low resolution. We design a few-shot learning method that can diagnose new diseases from chest x-rays using only a few relevant labeled x-ray images from the published literature. Our method uses prior knowledge about other diseases for feature extraction from x-rays of new diseases. We formulate a classifier that is initially trained with a few labeled feature vectors corresponding to low-resolution images from PubMed Central. The classifier is subsequently re-trained using unlabeled feature vectors corresponding to high-resolution x-ray images. Experiments on publicly available datasets show the superiority of the proposed method over several state-of-the-art few-shot learning techniques for chest x-ray diagnosis.
Title: "Learning Few-Shot Chest X-Ray Diagnosis Using Images From The Published Scientific Literature".
Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9433946
Yutong Yan, Pierre-Henri Conze, G. Quellec, M. Lamard, B. Cochener, G. Coatrieux
Manually segmenting masses from native mammograms is a very time-consuming and error-prone task. Therefore, an integrated computer-aided diagnosis (CAD) system is required to assist radiologists with automatic and precise breast mass delineation. In this work, we present a two-stage multi-scale pipeline that provides accurate mass delineations from high-resolution full mammograms. First, we propose an extended deep detector integrating a multi-scale fusion strategy for automated mass localization. Second, a convolutional encoder-decoder network using nested and dense skip connections is used to finely delineate candidate masses. Experiments on the public DDSM-CBIS and INbreast datasets reveal strong robustness against the diversity of size, shape and appearance of masses, with an average Dice of 80.44% on INbreast. This shows promising accuracy as an automated full-image mass segmentation system, towards better interaction-free CAD.
Title: "Two-Stage Multi-Scale Mass Segmentation From Full Mammograms".
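The Dice score reported above is the standard overlap metric for comparing a predicted segmentation mask against the ground truth. A minimal computation over flattened binary masks (illustrative values, not the paper's data):

```python
def dice(pred, target):
    # Dice = 2|A ∩ B| / (|A| + |B|) for binary masks given as flat 0/1 lists.
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0  # both empty: perfect match

pred   = [1, 1, 0, 0]
target = [1, 0, 1, 0]
print(dice(pred, target))  # 0.5
```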
Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9434117
Aurélie Leborgne, F. Ber, Laetitia Degiorgis, L. Harsan, Stella Marc-Zwecker, V. Noblet
Functional Magnetic Resonance Imaging (fMRI) is an imaging technique that makes it possible to explore brain function in vivo. Many methods dedicated to analyzing these data are based on graph modeling, with each node corresponding to a brain region and the edges representing their functional links. The objective of this work is to investigate the value of frequent-pattern mining in graphs for comparing these data between two populations. Results are presented in the context of characterizing a mouse model of Alzheimer’s disease in comparison with a group of control mice.
Title: "Analysis Of Brain Functional Connectivity By Frequent Pattern Mining In Graphs. Application To The Characterization Of Murine Models".
Computed tomography urography is routinely performed to evaluate the kidneys. 3D kidney segmentation and reconstruction from urographic images gives physicians an intuitive way to visualize, accurately diagnose and treat kidney diseases, and is particularly useful for surgical planning and outcome analysis before and after kidney surgery. While 3D fully convolutional networks have achieved considerable success in medical image segmentation, they struggle on unseen clinical data and cannot be adapted to different modalities with a single training procedure. This study proposes an unsupervised domain adaptation, or translation, method with 2D networks to deeply learn urographic images for accurate kidney segmentation. We tested the proposed method on clinical urography data. The experimental results demonstrate that our method resolves the domain shift problem of kidney segmentation and achieves comparable or better results than supervised learning based segmentation methods.
Title: "Accurate 3d Kidney Segmentation Using Unsupervised Domain Translation And Adversarial Networks". Authors: Wankang Zeng, Wenkang Fan, Rongzhen Chen, Zhuohui Zheng, Song Zheng, Jianhui Chen, Rong Liu, Q. Zeng, Zengqin Liu, Yinran Chen, Xióngbiao Luó. Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9434099
Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9433765
Golara Javadi, S. Samadi, Sharareh Bayat, Samira Sojoudi, Antonio Hurtado, Silvia D. Chang, Peter C. Black, P. Mousavi, P. Abolmaesumi
Ultrasound imaging is a common tool used in prostate biopsy. The challenges of a systematic, non-targeted approach are a high rate of false negatives and a lack of patient specificity. Intraprostatic pathology information for individual patients is not available during the biopsy procedure, and even after histopathology analysis of the biopsy cores, the report only represents a statistical distribution of cancer within each core. Labeling data based on these noisy labels creates challenges for network training, where networks inevitably overfit to noisy data. To overcome this problem, we argue that it is critical to build a clean dataset. In this paper, we address the challenges associated with using statistical labels and alleviate this issue by taking advantage of confident learning to estimate uncertainty in the data labels. We then find the label errors, clean the labels, and evaluate the cleaned data using a metric based on the involvement of cancer in the core.
Title: "Characterizing The Uncertainty Of Label Noise In Systematic Ultrasound-Guided Prostate Biopsy".
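The label-cleaning step can be illustrated with a much-simplified version of confident learning: estimate a per-class confidence threshold from out-of-sample predicted probabilities, then flag samples whose given label conflicts with a confident prediction for another class. The thresholding rule below is a toy approximation of the idea, not the authors' implementation:

```python
def find_label_issues(labels, probs):
    """Flag likely label errors. Toy confident-learning rule: the threshold
    for each class is the mean predicted probability among samples that
    carry that class as their given label."""
    classes = sorted(set(labels))
    thresh = {
        c: sum(p[c] for p, l in zip(probs, labels) if l == c)
           / sum(1 for l in labels if l == c)
        for c in classes
    }
    issues = []
    for i, (p, l) in enumerate(zip(probs, labels)):
        best = max(classes, key=lambda c: p[c])
        # Flag when another class is predicted confidently enough.
        if best != l and p[best] >= thresh[best]:
            issues.append(i)
    return issues

labels = [0, 0, 1, 1]                      # given (possibly noisy) labels
probs = [[0.9, 0.1], [0.2, 0.8],           # out-of-sample predicted
         [0.1, 0.9], [0.8, 0.2]]           # class probabilities
print(find_label_issues(labels, probs))    # [1, 3]
```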
Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9434162
Ranjeet Ranjan Jha, Hritik Gupta, S. Pathak, W. Schneider, B. V. R. Kumar, A. Bhavsar, A. Nigam
In addition to the more traditional diffusion tensor imaging (DTI), reconstruction techniques like HARDI have been proposed over time; they have a comparatively higher scanning time due to the increased number of measurements, but are significantly better at estimating fiber structures. To make HARDI-based analysis faster, we propose an approach to reconstruct additional HARDI volumes in q-space. The proposed GAN-based architecture leverages several modules, including a multi-context module and a feature inter-dependency module, along with several losses, such as L1, adversarial, and total variation loss, to learn the transformation. The method is backed by encouraging quantitative and visual results.
Title: "Enhancing HARDI Reconstruction from Undersampled Data Via Multi-Context and Feature Inter-Dependency GAN".
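Of the losses listed above, total variation is easy to make concrete: it penalizes differences between neighboring pixels, which discourages high-frequency artifacts in the reconstruction. A minimal anisotropic version over a nested-list image (a generic sketch, not the paper's exact loss):

```python
def tv_loss(img):
    # Anisotropic total variation: sum of absolute differences between
    # vertically and horizontally adjacent pixels of a 2D image.
    h, w = len(img), len(img[0])
    tv = 0.0
    for i in range(h):
        for j in range(w):
            if i + 1 < h:
                tv += abs(img[i + 1][j] - img[i][j])
            if j + 1 < w:
                tv += abs(img[i][j + 1] - img[i][j])
    return tv

flat  = [[1.0, 1.0], [1.0, 1.0]]   # constant image: zero TV
noisy = [[1.0, 0.0], [0.0, 1.0]]   # checkerboard: maximal TV for this size
print(tv_loss(flat), tv_loss(noisy))  # 0.0 4.0
```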
Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9433849
Ben Huyge, Jonathan G. Sanctorum, Nathanael Six, J. D. Beenhouwer, Jan Sijbers
One of the most commonly used correction methods in X-ray imaging is flat field correction, which corrects for systematic inconsistencies, such as differences in detector pixel response. In conventional X-ray imaging, flat fields are acquired by exposing the detector without any object in the X-ray beam. However, in edge illumination X-ray CT, which is an emerging phase contrast imaging technique, two masks are used to measure the refraction of the X-rays. These masks remain in place while the flat fields are acquired and thus influence the intensity of the flat fields. This influence is studied theoretically and validated experimentally using Monte Carlo simulations of an edge illumination experiment in GATE.
Title: "Analysis Of Flat Fields In Edge Illumination Phase Contrast Imaging".
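The conventional flat field correction that the abstract builds on normalizes each raw projection by a flat field image, after subtracting a dark (offset) image from both. A minimal per-pixel sketch with toy 1x2 "images":

```python
def flat_field_correct(raw, flat, dark):
    # Standard flat field correction, applied per pixel:
    #   corrected = (raw - dark) / (flat - dark)
    return [
        [(r - d) / (f - d) for r, f, d in zip(rr, fr, dr)]
        for rr, fr, dr in zip(raw, flat, dark)
    ]

raw  = [[60.0, 110.0]]   # projection with object in the beam
flat = [[110.0, 210.0]]  # exposure without the object
dark = [[10.0, 10.0]]    # detector offset, no exposure
print(flat_field_correct(raw, flat, dark))  # [[0.5, 0.5]]
```

In the edge illumination setting studied above, the masks remain in the beam during flat field acquisition, which is precisely why the flat field intensities, and hence this normalization, are affected.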
Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9433801
Suemin Lee, I. Bajić
Deep Neural Networks (DNNs) have become ubiquitous in medical image processing and analysis. Among them, U-Nets are very popular in various image segmentation tasks. Yet, little is known about how information flows through these networks and whether they are indeed properly designed for the tasks they are being proposed for. In this paper, we employ information-theoretic tools in order to gain insight into information flow through U-Nets. In particular, we show how mutual information between input/output and an intermediate layer can be a useful tool to understand information flow through various portions of a U-Net, assess its architectural efficiency, and even propose more efficient designs.
Title: "Information Flow Through U-Nets".
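Mutual information between discrete variables, the quantity used above to trace information flow, can be estimated directly from empirical joint frequencies. A minimal sketch for discretized (binned) values; estimating it for real layer activations would additionally require a binning or other density-estimation step, which is not shown here:

```python
from math import log2
from collections import Counter

def mutual_information(xs, ys):
    # I(X;Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) p(y)) ),
    # with all probabilities taken as empirical frequencies.
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# Perfectly dependent binary variables share 1 bit of information.
print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # 1.0
# Independent variables share 0 bits.
print(mutual_information([0, 1, 0, 1], [0, 0, 1, 1]))  # 0.0
```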