A Low Rank and Sparse Paradigm Free Mapping Algorithm For Deconvolution of FMRI Data
2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9433821
Eneko Uruñuela, Stefano Moia, C. Caballero-Gaudes
Current deconvolution algorithms for functional magnetic resonance imaging (fMRI) data are hindered by widespread signal changes arising from motion or physiological processes (e.g. deep breaths) that can be interpreted incorrectly as neuronal-related hemodynamic events. This work proposes a novel deconvolution approach that simultaneously estimates global signal fluctuations and neuronal-related activity with no prior information about the timings of the blood oxygenation level-dependent (BOLD) events by means of a low rank plus sparse decomposition algorithm. The performance of the proposed method is evaluated on simulated and experimental fMRI data, and compared with state-of-the-art sparsity-based deconvolution approaches and with a conventional analysis that is aware of the temporal model of the neuronal-related activity. We demonstrate that the novel low-rank and sparse paradigm free mapping algorithm can estimate global signal fluctuations related to motion in our task, while estimating the neuronal-related activity with high fidelity.
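To make the low rank plus sparse idea concrete, below is a minimal numpy sketch of a generic robust-PCA-style decomposition of a voxels-by-time fMRI matrix, solved by alternating singular-value thresholding and element-wise soft thresholding. It is illustrative only: the function names and regularization values are assumptions, and the paper's PFM formulation additionally models the hemodynamic response and uses its own solver.

```python
# Minimal robust-PCA-style sketch: split a voxels-by-time matrix Y into a
# low-rank part L (widespread/global fluctuations) and a sparse part S
# (transient events) by alternating singular-value and soft thresholding.
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def low_rank_plus_sparse(Y, lam_rank=1.0, lam_sparse=0.1, n_iter=50):
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(n_iter):
        # Low-rank update: threshold the singular values of the residual.
        U, sv, Vt = np.linalg.svd(Y - S, full_matrices=False)
        L = (U * soft_threshold(sv, lam_rank)) @ Vt
        # Sparse update: element-wise soft thresholding of the residual.
        S = soft_threshold(Y - L, lam_sparse)
    return L, S

# Toy usage: 200 voxels, 120 time points.
Y = np.random.randn(200, 120)
L, S = low_rank_plus_sparse(Y)
```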
Unsupervised Representation Learning From Pathology Images With Multi-Directional Contrastive Predictive Coding
2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9434140
Jacob Carse, F. Carey, S. McKenna
Digital pathology tasks have benefited greatly from modern deep learning algorithms. However, their need for large quantities of annotated data has been identified as a key challenge. This need for data can be countered by using unsupervised learning in situations where data are abundant but access to annotations is limited. Feature representations learned from un-annotated data using contrastive predictive coding (CPC) have been shown to enable classifiers to obtain state-of-the-art performance from relatively small amounts of annotated computer vision data. We present a modification to the CPC framework for use with digital pathology patches. This is achieved by introducing an alternative mask for building the latent context and using a multi-directional PixelCNN autoregressor. To demonstrate our proposed method, we learn feature representations from the Patch Camelyon histology dataset. We show that our proposed modification can yield improved deep classification of histology patches.
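As background, the CPC objective is an InfoNCE contrastive loss in which a context vector must identify the embedding of its own target patch among the other targets in the batch. The PyTorch sketch below shows only that loss; the multi-directional PixelCNN context network and the alternative masking scheme proposed in the paper are not reproduced, and all names here are illustrative.

```python
# InfoNCE sketch: each context vector should score its own target embedding
# higher than every other target in the batch (positives on the diagonal).
import torch
import torch.nn.functional as F

def info_nce(context, targets, temperature=0.1):
    # context, targets: (batch, dim) outputs of the context and encoder networks.
    context = F.normalize(context, dim=1)
    targets = F.normalize(targets, dim=1)
    logits = context @ targets.t() / temperature
    labels = torch.arange(context.size(0), device=context.device)
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))
```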
3D Pathological Signs Detection And Scoring On CPA CT Lung Scans
2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9433907
Afonso Nunes, S. Desai, T. Semple, Anand Shah, E. Angelini
Chronic Pulmonary Aspergillosis (CPA) is a complex type of fungal infection caused by the Aspergillus fungus that mostly affects people with pre-existing lung lesions or weakened immune systems. Pleural thickening, fungal balls and cavities visualized on CT scans are used to score the extent and severity of CPA in a qualitative manner. This work focuses on the use of deep learning to improve current standards in localising and scoring CPA signs for longitudinal follow-up. We propose an original framework fully implemented in 3D, combining imaging and time series encoding, to provide activation maps, CPA severity scores and 5-year mortality prediction.
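As a rough illustration of the kind of model described, here is a schematic PyTorch sketch of a 3D CNN backbone with two heads, one regressing a CPA severity score and one classifying 5-year mortality. The layer sizes, names, and the omission of the time-series encoding branch are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class CPA3DNet(nn.Module):
    """Schematic 3D CNN with a severity-regression head and a mortality head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.severity_head = nn.Linear(32, 1)    # CPA severity score (regression)
        self.mortality_head = nn.Linear(32, 2)   # 5-year mortality (classification)

    def forward(self, ct_volume):
        feats = self.backbone(ct_volume)
        return self.severity_head(feats), self.mortality_head(feats)

severity, mortality_logits = CPA3DNet()(torch.randn(2, 1, 64, 64, 64))
```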
SMOCAM: Smooth Conditional Attention Mask For 3D Regression Models
2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9433972
Salamata Konate, Léo Lebrat, Rodrigo Santa Cruz, P. Bourgeat, V. Doré, J. Fripp, Andrew Bradley, C. Fookes, Olivier Salvado
Despite the pervasive growth of deep neural networks in medical image analysis, methods to monitor and assess network outputs, such as segmentation or regression, remain limited. In this paper, we introduce SMOCAM (SMOoth Conditional Attention Mask), an optimization method that reveals the specific regions of the input image that a trained neural network's prediction takes into account. We developed SMOCAM explicitly to perform saliency analysis for complex regression tasks in 3D medical imagery. Our formulation optimises a 3D attention mask at a given layer of a convolutional neural network (CNN). Unlike previous attempts, our method is relatively fast (40 s per output) and is suitable for large data such as 3D MRI. We applied SMOCAM to a CNN that predicts brain morphometry from 3D MRI and was trained on more than 5000 3D brain MRIs. We show that SMOCAM highlights the neural network's limitations on underrepresented cases and on cases with large volume asymmetry.
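The general mechanism can be sketched as follows: learn a mask over one layer's feature map so that the masked prediction stays close to the original prediction while the mask remains small. The PyTorch sketch below uses forward hooks and a simple squared-error plus mask-sparsity loss; the exact objective, smoothness term, and optimization schedule of SMOCAM are assumptions here and may differ from the paper.

```python
# Generic layer-mask saliency sketch: optimize a sigmoid mask on one layer's
# feature map so the masked prediction matches the unmasked one while the
# mask stays small. Loss terms and schedule are illustrative assumptions.
import torch

def smooth_attention_mask(model, layer, x, steps=200, lr=0.1, lam=1e-3):
    model.eval()
    captured = {}

    def capture(module, inp, out):
        captured["shape"] = out.shape

    handle = layer.register_forward_hook(capture)
    with torch.no_grad():
        target = model(x)                 # reference prediction, no mask applied
    handle.remove()

    mask_logits = torch.zeros(captured["shape"], requires_grad=True)

    def apply_mask(module, inp, out):
        return out * torch.sigmoid(mask_logits)

    handle = layer.register_forward_hook(apply_mask)
    optimizer = torch.optim.Adam([mask_logits], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        pred = model(x)
        loss = ((pred - target) ** 2).mean() + lam * torch.sigmoid(mask_logits).mean()
        loss.backward()
        optimizer.step()
    handle.remove()
    return torch.sigmoid(mask_logits).detach()
```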
Deep Self-Reconstruction Sparse Canonical Correlation Analysis For Brain Imaging Genetics
2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9434077
Meiling Wang, Wei Shao, Shuo Huang, Daoqiang Zhang
Brain imaging genetics is an emerging research field that explores the underlying genetic architecture of brain structure and function measured by different imaging modalities. As a bi-multivariate technique for brain imaging genetics, sparse canonical correlation analysis (SCCA) is well suited to identifying complex multi-SNP-multi-QT associations. However, most current SCCA-based brain imaging genetics studies face three main challenges in calculating accurate bi-multivariate relationships and selecting relevant features: nonlinearity, high dimensionality (e.g., across all 4005 network edges between 90 brain regions), and a small number of subjects. We propose a novel deep self-reconstruction sparse canonical correlation analysis (DS-SCCA) to address these challenges in brain imaging genetics. Specifically, we employ a deep network, i.e., multiple stacked layers of nonlinear transformations, as the kernel function, and learn a self-reconstruction matrix to reconstruct the original data at the top layer of the network. The parameters of our model are learned iteratively using a parametric approach, the augmented Lagrangian method, and stochastic gradient descent. Experimental results on the ADNI dataset demonstrate that our method produces improved cross-validation performance and biologically meaningful results.
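For reference, the sparse CCA core that DS-SCCA builds on can be sketched with alternating soft-thresholded power iterations on the cross-covariance matrix (Witten-style). The numpy sketch below shows only this linear, sparse part; the deep self-reconstruction network, the augmented Lagrangian updates, and all parameter values are not represented, and the names are illustrative.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_cca(X, Y, lam_u=0.1, lam_v=0.1, n_iter=100):
    """Plain linear SCCA by alternating soft-thresholded power iterations.
    Generic sketch of the sparse CCA core only, not the deep DS-SCCA model."""
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)
    C = Xc.T @ Yc                          # cross-covariance (p x q)
    v = np.random.randn(C.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        u = soft(C @ v, lam_u)
        u /= np.linalg.norm(u) + 1e-12
        v = soft(C.T @ u, lam_v)
        v /= np.linalg.norm(v) + 1e-12
    return u, v

# Toy usage: 50 subjects, 100 genetic features, 200 imaging features.
u, v = sparse_cca(np.random.randn(50, 100), np.random.randn(50, 200))
```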
Enhancing Reliability Of Structural Brain Connectivity With Outlier Adjusted Tractogram Filtering
2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9434020
V. Sairanen, Mario Ocampo-Pineda, C. Granziera, S. Schiavi, Alessandro Daducci
Diffusion-weighted magnetic resonance imaging tractography is used to represent brain structures, but it has limited specificity. Tractogram filtering has been proposed to address this by utilizing, e.g., microstructural information to determine which streamlines are essential with respect to the original measurements. However, filtered results can be biased if the measurements are unreliable due to partial voluming or artifacts, e.g., from subject motion. We propose augmenting filtering methods with outlier information to adjust for such unreliability. We implemented this in the Convex Optimization modelling for Microstructure Informed Tractography (COMMIT) framework to conduct experiments on data from a synthetic fiber phantom and the Human Connectome Project. Our results demonstrate that the newly augmented COMMIT provides more precise estimations of intra-axonal signal fractions than the original algorithm when diffusion-weighted images are affected by artifacts. Furthermore, we argue this approach could be highly beneficial for clinical studies with limited resolution and numerous unreliable measurements.
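COMMIT-style filtering fits non-negative streamline contributions to the measured diffusion signal; one simple way to inject outlier information into such a fit is to down-weight unreliable measurements. The sketch below shows a generic weighted non-negative least squares (rows scaled by the square root of a reliability weight). It illustrates the idea only, not the COMMIT implementation, and the matrix and weight names are assumptions.

```python
# Generic weighted NNLS sketch: solve min ||W^(1/2)(Ax - y)||_2 with x >= 0,
# where low weights down-weight unreliable (outlier-affected) measurements.
import numpy as np
from scipy.optimize import nnls

def outlier_weighted_nnls(A, y, weights):
    w = np.sqrt(weights)
    x, _ = nnls(A * w[:, None], y * w)
    return x

# Toy usage: 300 measurements, 40 streamline/compartment coefficients.
A = np.abs(np.random.randn(300, 40))
y = A @ np.abs(np.random.randn(40))
weights = np.ones(300)
weights[::50] = 0.1          # down-weight a few "outlier" measurements
x = outlier_weighted_nnls(A, y, weights)
```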
Switchable Deep Beamformer For Ultrasound Imaging Using AdaIN
2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9433757
Shujaat Khan, Jaeyoung Huh, Jong-Chul Ye
In ultrasound (US) imaging, various adaptive beamforming methods have been proposed to improve the resolution and contrast-to-noise ratio of delay and sum (DAS) beamformers. Unfortunately, they often require computationally expensive calculations, and their performance degrades when the underlying model is not sufficiently accurate. Moreover, ultrasound images usually require various types of post-filtering, such as deblurring and despeckling, which further increase the complexity of the system. Deep learning-based solutions provide a quick remedy to these issues; however, with current technology a separate beamformer must be trained and stored for each application, demanding significant scanner resources. To address this problem, here we propose a switchable deep beamformer that can produce various types of output, such as DAS, speckle removal, and deconvolution, using a single network with a simple switch. In particular, the switch is implemented through Adaptive Instance Normalization (AdaIN) layers, so that distinct outputs can be generated by merely changing the AdaIN code. Experimental results using B-mode focused ultrasound confirm the efficacy of the proposed methods.
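The switch itself is an Adaptive Instance Normalization layer: features are instance-normalized and then scaled and shifted by (gamma, beta) produced from a task code, so changing the code changes the behaviour of an otherwise shared network. The PyTorch sketch below shows a standard AdaIN layer; where it sits inside the beamformer network follows the paper and is not reproduced here, and the dimensions are illustrative.

```python
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    """Adaptive Instance Normalization: instance-normalize the features, then
    scale and shift them with (gamma, beta) derived from a task code."""
    def __init__(self, code_dim, num_channels):
        super().__init__()
        self.to_gamma_beta = nn.Linear(code_dim, 2 * num_channels)

    def forward(self, feats, code):
        # feats: (batch, channels, H, W); code: (batch, code_dim)
        mean = feats.mean(dim=(2, 3), keepdim=True)
        std = feats.std(dim=(2, 3), keepdim=True) + 1e-5
        gamma, beta = self.to_gamma_beta(code).chunk(2, dim=1)
        gamma = gamma[:, :, None, None]
        beta = beta[:, :, None, None]
        return gamma * (feats - mean) / std + beta

# Switching the 8-dimensional code switches the output of the shared network.
out = AdaIN(code_dim=8, num_channels=64)(torch.randn(2, 64, 32, 32), torch.randn(2, 8))
```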
Deep Fisher Vector Coding For Whole Slide Image Classification
2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9433836
Amir Akbarnejad, Nilanjan Ray, G. Bigras
Adopting machine learning methods for histological sections is a challenging task given the huge size of whole slide images (WSIs), especially at high-power resolution. In this paper we propose a novel WSI classification method which efficiently predicts a WSI's label. The proposed method considers each WSI as a population of patches and computes a statistic from samples drawn from that population. This statistic can be computed efficiently, and our test time on a WSI is about one tenth of that of existing methods. Moreover, our pooling strategy on the WSI is more general than that of previous works. Further, the assumptions of our method are quite general, and therefore, it is applicable to any WSI classification task. The experiments show that the performance of our method is competitive in two different tasks, while, unlike some of the competing methods, it does not consider any prior clinical knowledge about the label to be predicted.
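For context, a classical (non-deep) Fisher vector statistic over sampled patch features can be computed with a diagonal-covariance GMM; the sketch below shows only the mean-gradient part of that encoding, using scikit-learn. It is a generic illustration of pooling sampled patches into a fixed-length WSI descriptor; the paper's deep Fisher vector coding and its end-to-end training are not shown, and the feature dimensions are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector_means(patch_features, gmm):
    """Classical Fisher vector (mean-gradient part only) of sampled patch
    features under a diagonal-covariance GMM. Generic sketch, not the paper's
    learned deep Fisher vector coding."""
    X = np.asarray(patch_features)                    # (num_patches, dim)
    N = X.shape[0]
    gamma = gmm.predict_proba(X)                      # (N, K) soft assignments
    sigma = np.sqrt(gmm.covariances_)                 # (K, dim) for 'diag'
    fv = []
    for k in range(gmm.n_components):
        diff = (X - gmm.means_[k]) / sigma[k]
        fv.append((gamma[:, k:k+1] * diff).sum(0) / (N * np.sqrt(gmm.weights_[k])))
    return np.concatenate(fv)                         # (K * dim,) WSI descriptor

# Toy usage: fit the GMM on patch features sampled across training WSIs,
# then encode the sampled patches of one WSI into a single descriptor.
train_feats = np.random.randn(5000, 64)
gmm = GaussianMixture(n_components=8, covariance_type="diag").fit(train_feats)
wsi_descriptor = fisher_vector_means(np.random.randn(200, 64), gmm)
```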
A COVID-19 Patient Severity Stratification Using a 3D Convolutional Strategy on CT Scans
2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9434154
Jefferson Rodríguez, David Romo-Bucheli, F. Sierra, Diana Valenzuela, C. Valenzuela, Lina Vasquez, Paúl Camacho, Daniela S. Mantilla, F. Martínez
This work introduces a 3D deep learning methodology to stratify patients according to the severity of lung infection caused by COVID-19 disease on computed tomography (CT) images. A set of volumetric attention maps was also obtained to explain the results and support the diagnostic tasks. The validation of the approach was carried out on a dataset of 350 patients, diagnosed by the RT-PCR assay as either negative (control: 175) or positive (COVID-19: 175). Additionally, the patients were graded (0-25) by two expert radiologists according to the extent of lobar involvement. These gradings were used to define 5 COVID-19 severity categories. The model yields an average 60% accuracy on the multi-severity classification task. Additionally, a set of Mann-Whitney U significance tests was conducted to compare the severity groups. Results show that the predicted severity scores differ significantly (p < 0.01) between all compared severity groups.
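The grouping and statistical comparison described above can be sketched in a few lines: map the 0-25 gradings to five severity bins and compare model scores between groups with a Mann-Whitney U test. The bin edges below are an assumption for illustration; the abstract does not list them.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Map 0-25 lobar-involvement gradings to 5 severity categories.
# The bin edges are an illustrative assumption, not taken from the paper.
def severity_category(grading, edges=(0, 5, 10, 15, 20, 25)):
    return int(np.digitize(grading, edges[1:-1]))     # returns 0..4

# Pairwise Mann-Whitney U test between predicted scores of two severity groups.
group_a = np.random.rand(40)   # e.g. model scores for one severity category
group_b = np.random.rand(40)   # e.g. model scores for another category
stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U={stat:.1f}, p={p_value:.4f}")
```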
Restoration Of Cataract Fundus Images Via Unsupervised Domain Adaptation
2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9433795
Heng Li, Haofeng Liu, Yan Hu, Risa Higashita, Yitian Zhao, H. Qi, Jiang Liu
Cataract is the leading cause of preventable blindness in the world. The degraded image quality of cataract fundus images increases the risk of misdiagnosis and the uncertainty in preoperative planning. Unfortunately, the absence of annotated data, which should consist of cataract images and the corresponding clear ones from the same patients after surgery, limits the development of restoration algorithms for cataract images. In this paper, we propose an end-to-end unsupervised restoration method for cataract images to enhance the clinical observation of the cataract fundus. The proposed method begins by constructing an annotated source domain through simulating cataract-like images. A restoration model for cataract images is then designed based on the pix2pix framework and trained via unsupervised domain adaptation to generalize the restoration mapping from simulated data to real data. In the experiments, the proposed method is validated in an ablation study and compared with previous methods, against which it performs favorably. The code for this paper will be released at https://github.com/liamheng/Restoration-of-Cataract-Images-via-Domain-Adaptation.
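The first step of the pipeline, building an annotated source domain by simulating cataract-like images, can be sketched as a simple degradation of clear fundus images (blur, reduced contrast, and a uniform haze). This is a hedged stand-in: the actual simulation model used in the paper may include different terms, and the parameter values here are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_cataract(fundus, blur_sigma=3.0, contrast=0.6, haze=0.25):
    """Degrade a clear fundus image (float array in [0, 1], H x W x 3) with
    blur, reduced contrast and a uniform haze, as a stand-in for cataract-like
    opacity. Illustrative assumption only; the paper's simulation may differ."""
    blurred = gaussian_filter(fundus, sigma=(blur_sigma, blur_sigma, 0))
    degraded = contrast * blurred + haze
    return np.clip(degraded, 0.0, 1.0)

# Each (clear, cataract_like) pair forms an annotated sample in the source domain.
clear = np.random.rand(256, 256, 3)
cataract_like = simulate_cataract(clear)
```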