DEEP NETWORK-BASED FEATURE SELECTION FOR IMAGING GENETICS: APPLICATION TO IDENTIFYING BIOMARKERS FOR PARKINSON'S DISEASE.
Mansu Kim, Ji Hye Won, Jisu Hong, Junmo Kwon, Hyunjin Park, Li Shen
Proceedings. IEEE International Symposium on Biomedical Imaging. Pub Date: 2020-04-01. Epub Date: 2020-05-22. DOI: 10.1109/isbi45749.2020.9098471
Imaging genetics is a methodology for discovering associations between imaging and genetic variables. Many studies have adopted sparse models such as sparse canonical correlation analysis (SCCA) for imaging genetics. These methods are limited to modeling linear imaging-genetics relationships and cannot capture non-linear, high-level relationships between the explored variables. Deep learning approaches remain underexplored in imaging genetics, compared to their great successes in many other biomedical domains such as image segmentation and disease classification. In this work, we propose a deep learning model to select genetic features that explain the imaging features well. Our empirical study on simulated and real datasets demonstrated that our method outperformed the widely used SCCA method and selected important genetic features in a robust fashion. These promising results indicate that our deep learning model has the potential to reveal new biomarkers and improve mechanistic understanding of the studied brain disorders.
AUTOMATIC LABELING OF CORTICAL SULCI USING SPHERICAL CONVOLUTIONAL NEURAL NETWORKS IN A DEVELOPMENTAL COHORT.
Lingyan Hao, Shunxing Bao, Yucheng Tang, Riqiang Gao, Prasanna Parvathaneni, Jacob A Miller, Willa Voorhies, Jewelia Yao, Silvia A Bunge, Kevin S Weiner, Bennett A Landman, Ilwoo Lyu
Proceedings. IEEE International Symposium on Biomedical Imaging, pages 412-415. Pub Date: 2020-04-01. Epub Date: 2020-05-22. DOI: 10.1109/isbi45749.2020.9098414
In this paper, we present an automatic labeling framework for sulci in the human lateral prefrontal cortex (PFC). We adapt an existing spherical U-Net architecture with our recent surface data augmentation technique to improve sulcal labeling accuracy in a developmental cohort. Specifically, our framework consists of the following key components: (1) augmented geometrical features generated during cortical surface registration, (2) a spherical U-Net architecture to efficiently fit the augmented features, and (3) post-refinement of sulcal labeling by optimizing spatial coherence via a graph-cut technique. We validate our method on 30 healthy subjects with manual labeling of sulcal regions within the PFC. In the experiments, we demonstrate significantly improved labeling performance in mean Dice overlap (0.7749) compared to multi-atlas (0.6410) and standard spherical U-Net (0.7011) approaches (p < 0.05). Additionally, the proposed method achieves a full set of sulcal labels in 20 seconds in this developmental cohort.
LOCALLY ADAPTIVE HALF-MAX METHODS FOR AIRWAY LUMEN-AREA AND WALL-THICKNESS AND THEIR REPEAT CT SCAN REPRODUCIBILITY.
Syed Ahmed Nadeem, Eric A Hoffman, Alejandro P Comellas, Punam K Saha
Proceedings. IEEE International Symposium on Biomedical Imaging. Pub Date: 2020-04-01. Epub Date: 2020-05-22. DOI: 10.1109/isbi45749.2020.9098558
Quantitative computed tomography (CT)-based characterization of bronchial metrics is increasingly being used to investigate chronic obstructive pulmonary disease (COPD)-related phenotypes. Automated methods for airway measurement benefit large multi-site studies by reducing cost and subjectivity errors. Critical challenges for CT-based analysis of airway morphology relate to locating lumen and wall transitions in the presence of varying scales and intensity contrasts from proximal to distal sites. This paper introduces locally adaptive half-max methods to locate airway lumen and wall transitions and compute cross-sectional lumen area and wall thickness. The method also uses a consistency analysis of wall thickness to avoid adjoining-structure artifacts. Experimental results show that computed bronchial measures at individual anatomic airway tree locations are reproducible across repeat CT scans, with intra-class correlation coefficient (ICC) values exceeding 0.9 and 0.8 for lumen area and wall thickness, respectively. Observed ICC values for derived morphologic measures, e.g., lumen-area compactness (ICC > 0.67) and tapering (ICC > 0.47), are relatively lower.
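The half-max principle the paper builds on can be illustrated on a single intensity profile: an edge is placed where the intensity crosses halfway between a low reference value (e.g., lumen air) and a high one (e.g., wall tissue). The sketch below is a minimal, non-adaptive version; the paper's locally adaptive variant chooses the reference levels per site, which is not reproduced here, and the HU values in the example are invented.

```python
import numpy as np

def half_max_crossing(profile, lo, hi):
    """Return the interpolated index where `profile` first crosses the
    half-max level between reference intensities `lo` and `hi`."""
    level = lo + 0.5 * (hi - lo)
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        if (a - level) * (b - level) <= 0 and a != b:
            # linear interpolation between samples i and i+1
            return i + (level - a) / (b - a)
    return None  # no crossing found along this ray

# Synthetic ray from dark lumen (-900 HU) into a bright airway wall (0 HU):
ray = np.array([-900., -900., -850., -400., -50., 0., 0.])
edge = half_max_crossing(ray, lo=-900., hi=0.)  # crossing of the -450 HU level
```

Sub-voxel interpolation like this is what makes half-max edges reproducible across scans even when the wall spans only a few voxels.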
Architectural configurations, atlas granularity and functional connectivity with diagnostic value in Autism Spectrum Disorder.
Cooper J Mellema, Alex Treacher, Kevin P Nguyen, Albert Montillo
Proceedings. IEEE International Symposium on Biomedical Imaging, pages 1022-1025. Pub Date: 2020-04-01. Epub Date: 2020-05-22. DOI: 10.1109/ISBI45749.2020.9098555
Currently, the diagnosis of Autism Spectrum Disorder (ASD) depends on a subjective, time-consuming evaluation of behavioral tests by an expert clinician. Non-invasive functional MRI (fMRI) characterizes brain connectivity and may be used to inform diagnoses and democratize medicine. However, successful construction of predictive models, such as deep learning models, from fMRI requires addressing key choices about the model's architecture, including the number of layers and the number of neurons per layer. Meanwhile, deriving functional connectivity (FC) features from fMRI requires choosing an atlas with an appropriate level of granularity. Once an accurate diagnostic model has been built, it is vital to determine which features are predictive of ASD and whether similar features are learned across atlas granularity levels. Identifying new important features extends our understanding of the biological underpinnings of ASD, while identifying features that corroborate past findings and extend across atlas levels instills model confidence. To identify aptly suited architectural configurations, probability distributions of the configurations of high- versus low-performing models are compared. To determine the effect of atlas granularity, connectivity features are derived from atlases with three levels of granularity, and important features are ranked with permutation feature importance. Results show that the highest performing models use 2-4 hidden layers and 16-64 neurons per layer, depending on granularity. Connectivity features identified as important across all three atlas granularity levels include FC to the supplementary motor gyrus and language association cortex, regions whose abnormal development is associated with deficits in social and sensory processing common in ASD. Importantly, the cerebellum, often not included in functional analyses, is also identified as a region whose abnormal connectivity is highly predictive of ASD. These results identify important regions to include in future studies of ASD, assist in the selection of network architectures, and help identify appropriate levels of granularity to facilitate the development of accurate diagnostic models of ASD.
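Permutation feature importance, used above to rank connectivity features, has a compact model-agnostic form: shuffle one feature column and measure how much the score drops. The toy model and data below are illustrative, not from the paper.

```python
import numpy as np

def permutation_importance(model, X, y, score_fn, rng=None):
    """Importance of each feature = drop in score when that feature's
    column is shuffled across samples, destroying its information."""
    rng = rng or np.random.default_rng(0)
    base = score_fn(model(X), y)
    importances = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])  # in-place shuffle of column j
        importances.append(base - score_fn(model(Xp), y))
    return np.array(importances)

# Toy classifier: predicts the sign of feature 0; feature 1 is constant noise.
accuracy = lambda pred, y: np.mean(pred == y)
X = np.array([[1., 5.], [-1., 5.], [2., 5.], [-2., 5.]] * 10)
y = (X[:, 0] > 0).astype(int)
model = lambda Z: (Z[:, 0] > 0).astype(int)
imp = permutation_importance(model, X, y, accuracy)
```

Because the procedure only needs model predictions and a score, it applies unchanged to deep networks over FC features at any atlas granularity.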
LEARNING TO DETECT BRAIN LESIONS FROM NOISY ANNOTATIONS.
Davood Karimi, Jurriaan M Peters, Abdelhakim Ouaalam, Sanjay P Prabhu, Mustafa Sahin, Darcy A Krueger, Alexander Kolevzon, Charis Eng, Simon K Warfield, Ali Gholipour
Proceedings. IEEE International Symposium on Biomedical Imaging, pages 1910-1914. Pub Date: 2020-04-01. Epub Date: 2020-05-22. DOI: 10.1109/isbi45749.2020.9098599
Supervised training of deep neural networks in medical imaging applications relies heavily on expert-provided annotations. These annotations, however, are often imperfect, as voxel-by-voxel labeling of structures on 3D images is difficult and laborious. In this paper, we focus on one common type of label imperfection, namely false negatives. Focusing on brain lesion detection, we propose a method to train a convolutional neural network (CNN) to segment lesions while simultaneously improving the quality of the training labels by identifying false negatives and adding them to the training labels. To identify lesions missed by annotators in the training data, our method makes use of (1) the CNN predictions, (2) prediction uncertainty estimated during training, and (3) prior knowledge about lesion size and features. On a dataset of 165 scans of children with tuberous sclerosis complex from five centers, our method achieved better lesion detection and segmentation accuracy than both the baseline CNN trained on the noisy labels and several alternative techniques.
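The label-refinement idea can be caricatured in a few lines: connected components that the network predicts with high confidence, that exceed a plausible lesion size, and that touch no annotated voxel become candidate false negatives to add to the labels. This is a simplified 2-D sketch; the paper additionally uses prediction uncertainty and richer lesion priors, and the thresholds here are invented.

```python
import numpy as np
from collections import deque

def connected_components(binary):
    """4-connected component labeling of a 2-D boolean array via BFS."""
    comps, n = np.zeros(binary.shape, dtype=int), 0
    for r, c in zip(*np.nonzero(binary)):
        if comps[r, c]:
            continue
        n += 1
        comps[r, c] = n
        queue = deque([(r, c)])
        while queue:
            i, j = queue.popleft()
            for a, b in ((i+1, j), (i-1, j), (i, j+1), (i, j-1)):
                if (0 <= a < binary.shape[0] and 0 <= b < binary.shape[1]
                        and binary[a, b] and not comps[a, b]):
                    comps[a, b] = n
                    queue.append((a, b))
    return comps, n

def flag_missed_lesions(prob, labels, thresh=0.9, min_size=5):
    """Flag confidently predicted, sufficiently large components that
    overlap no annotated lesion as candidate false negatives."""
    comps, n = connected_components(prob >= thresh)
    flagged = np.zeros(labels.shape, dtype=bool)
    for k in range(1, n + 1):
        comp = comps == k
        if comp.sum() >= min_size and not (comp & (labels > 0)).any():
            flagged |= comp
    return flagged

prob = np.zeros((6, 6))
prob[0:2, 0:3] = 0.95   # blob A: overlaps an annotation, left alone
prob[3:6, 4:6] = 0.95   # blob B: unannotated, becomes a candidate
labels = np.zeros((6, 6), dtype=int)
labels[0, 0] = 1
flagged = flag_missed_lesions(prob, labels)
```

In the paper this refinement runs during training, so the labels and the network improve together rather than in a single post hoc pass.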
Multi-scale Unrolled Deep Learning Framework for Accelerated Magnetic Resonance Imaging.
Ukash Nakarmi, Joseph Y Cheng, Edgar P Rios, Morteza Mardani, John M Pauly, Leslie Ying, Shreyas S Vasanawala
Proceedings. IEEE International Symposium on Biomedical Imaging, pages 1056-1059. Pub Date: 2020-04-01. Epub Date: 2020-05-22. DOI: 10.1109/isbi45749.2020.9098684
Accelerating data acquisition in magnetic resonance imaging (MRI) has been of perennial interest due to its prohibitively slow data acquisition process. Recent trends in accelerating MRI employ data-centric deep learning frameworks because of their fast inference times and 'one-parameter-fits-all' principle, unlike traditional model-based acceleration techniques. Unrolled deep learning frameworks that combine deep priors with model knowledge are more robust than naive deep learning frameworks. In this paper, we propose a novel multi-scale unrolled deep learning framework that learns deep image priors through a multi-scale CNN and combines them with an unrolled framework to enforce data consistency and model knowledge. Essentially, this framework combines the best of both learning paradigms: model-based and data-centric. The proposed method is verified in several experiments on numerous datasets.
FREE-BREATHING CARDIOVASCULAR MRI USING A PLUG-AND-PLAY METHOD WITH LEARNED DENOISER.
Sizhuo Liu, Edward Reehorst, Philip Schniter, Rizwan Ahmad
Proceedings. IEEE International Symposium on Biomedical Imaging, pages 1748-1751. Pub Date: 2020-04-01. Epub Date: 2020-05-22. DOI: 10.1109/isbi45749.2020.9098453
Cardiac magnetic resonance imaging (CMR) is a noninvasive imaging modality that provides a comprehensive evaluation of the cardiovascular system. The clinical utility of CMR is hampered by long acquisition times, however. In this work, we propose and validate a plug-and-play (PnP) method for CMR reconstruction from undersampled multicoil data. To fully exploit the rich image structure inherent in CMR, we pair the PnP framework with a deep learning (DL)-based denoiser that is trained using spatiotemporal patches from high-quality, breath-held cardiac cine images. The resulting "PnP-DL" method iterates over data consistency and denoising subroutines. We compare the reconstruction performance of PnP-DL to that of compressed sensing (CS) using eight breath-held and ten real-time (RT) free-breathing cardiac cine datasets. We find that, for breath-held datasets, PnP-DL offers more than one dB advantage over commonly used CS methods. For RT free-breathing datasets, where ground truth is not available, PnP-DL receives higher scores in qualitative evaluation. The results highlight the potential of PnP-DL to accelerate RT CMR.
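The alternation between data consistency and denoising that both of the preceding MRI papers rely on is compact to write down. Below is a single-coil Cartesian toy version: a gradient step on the k-space fidelity term followed by a call to whatever denoiser is plugged in (the learned spatiotemporal CNN in the paper; any stand-in here). Function names and the step size are illustrative.

```python
import numpy as np

def pnp_recon(y, mask, denoise, n_iter=10, step=1.0):
    """Plug-and-play proximal-gradient loop for single-coil Cartesian MRI:
    a data-consistency gradient step on ||M F x - y||^2, then the
    plugged-in denoiser. F is the unitary 2-D FFT, M the sampling mask."""
    F = lambda z: np.fft.fft2(z, norm="ortho")
    Fi = lambda z: np.fft.ifft2(z, norm="ortho")
    x = Fi(y)                                # zero-filled initial guess
    for _ in range(n_iter):
        grad = Fi(mask * (mask * F(x) - y))  # A^H (A x - y), with A = M F
        x = denoise(x - step * grad)
    return x
```

A sanity check on the loop: with a fully sampled mask and an identity denoiser, the gradient vanishes at the zero-filled start and the reconstruction equals the ground truth, which is a useful test before swapping in a learned denoiser.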
BENDING LOSS REGULARIZED NETWORK FOR NUCLEI SEGMENTATION IN HISTOPATHOLOGY IMAGES.
Haotian Wang, Min Xian, Aleksandar Vakanski
Proceedings. IEEE International Symposium on Biomedical Imaging, pages 258-262. Pub Date: 2020-04-01. Epub Date: 2020-05-22. DOI: 10.1109/isbi45749.2020.9098611
Separating overlapped nuclei is a major challenge in histopathology image analysis. Recently published approaches have achieved promising overall performance on public datasets; however, their performance in segmenting overlapped nuclei is limited. To address the issue, we propose a bending loss regularized network for nuclei segmentation. The proposed bending loss assigns high penalties to contour points with large curvature and small penalties to contour points with small curvature. Minimizing the bending loss avoids generating contours that encompass multiple nuclei. The proposed approach is validated on the MoNuSeg dataset using five quantitative metrics, and it outperforms six state-of-the-art approaches on the following metrics: Aggregate Jaccard Index, Dice, Recognition Quality, and Panoptic Quality.
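The curvature idea behind a bending penalty can be sketched with discrete turning angles on a polygonal contour: a vertex where the contour turns sharply, as happens where one contour pinches between two wrapped nuclei, receives a large penalty. This is an illustrative discretization only, not the paper's exact loss formulation.

```python
import numpy as np

def bending_penalty(contour):
    """Turning angle at each vertex of a closed 2-D polygonal contour:
    0 where the contour runs straight, up to pi at a sharp cusp."""
    p = np.asarray(contour, dtype=float)
    v1 = p - np.roll(p, 1, axis=0)   # edge arriving at each vertex
    v2 = np.roll(p, -1, axis=0) - p  # edge leaving each vertex
    cos = np.sum(v1 * v2, axis=1) / (np.linalg.norm(v1, axis=1) *
                                     np.linalg.norm(v2, axis=1))
    return np.arccos(np.clip(cos, -1.0, 1.0))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
angles = bending_penalty(square)  # identical right-angle turns at each corner
```

Summing a penalty that grows with this angle over the contour discourages the high-curvature pinch points that appear when a single predicted contour encloses multiple nuclei.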
STAN: SMALL TUMOR-AWARE NETWORK FOR BREAST ULTRASOUND IMAGE SEGMENTATION.
Bryar Shareef, Min Xian, Aleksandar Vakanski
Proceedings. IEEE International Symposium on Biomedical Imaging, pages 1469-1473. Pub Date: 2020-04-01. Epub Date: 2020-05-22. DOI: 10.1109/isbi45749.2020.9098691
Breast tumor segmentation provides accurate tumor boundaries and serves as a key step toward further cancer quantification. Although deep learning-based approaches have been proposed and have achieved promising results, existing approaches have difficulty detecting small breast tumors. The capacity to detect small tumors is particularly important for finding early-stage cancers using computer-aided diagnosis (CAD) systems. In this paper, we propose a novel deep learning architecture, the Small Tumor-Aware Network (STAN), to improve the performance of segmenting tumors of different sizes. The new architecture integrates both rich context information and high-resolution image features. We validate the proposed approach using seven quantitative metrics on two public breast ultrasound datasets. The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
Braided Networks for Scan-Aware MRI Brain Tissue Segmentation.
Mahmoud Mostapha, B. Mailhé, Xiao Chen, P. Ceccaldi, Y. Yoo, M. Nadar
Proceedings. IEEE International Symposium on Biomedical Imaging, pages 136-139. Pub Date: 2020-04-01. DOI: 10.1109/isbi45749.2020.9098601
Recent advances in supervised deep learning, mainly using convolutional neural networks, have enabled the fast acquisition of high-quality brain tissue segmentations from structural magnetic resonance brain images (MRI). However, the robustness of such deep learning models is limited by existing training datasets acquired with homogeneous MRI acquisition protocols. Moreover, current models fail to utilize commonly available, relevant non-imaging information (i.e., meta-data). In this paper, the notion of a braided block is introduced as a generalization of convolutional or fully connected layers for learning from paired data (meta-data, images). For robust MRI tissue segmentation, a braided 3D U-Net architecture is implemented as a combination of such braided blocks, with scanner information, MRI sequence parameters, geometrical information, and task-specific prior information used as meta-data. When applied to a large (> 16,000 scans) and highly heterogeneous (wide range of MRI protocols) dataset, our method generates highly accurate segmentation results (Dice scores > 0.9) within seconds. (The concepts and information presented in this paper are based on research results that are not commercially available.)
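The abstract does not spell out the braided block, but one plausible minimal reading of "a generalization of convolutional or fully connected layers for paired (meta-data, images)" is a pair of branches that each update from a summary of the other. Everything below, including the shapes, the pooling choice, and the tanh mixing, is a hypothetical sketch, not the authors' layer.

```python
import numpy as np

def braided_step(feat, meta, Wm, Wf):
    """One hypothetical braided mixing step.
    feat: (B, H, W, C) image features; meta: (B, M) meta-data vectors.
    The meta branch receives a pooled image summary, and the image branch
    is modulated by the updated meta vector, so information flows both ways."""
    pooled = feat.mean(axis=(1, 2))                 # (B, C) image summary
    new_meta = np.tanh(meta @ Wm + pooled)          # meta branch sees the image
    new_feat = np.tanh(feat + (new_meta @ Wf)[:, None, None, :])  # and vice versa
    return new_feat, new_meta

rng = np.random.default_rng(0)
feat = rng.standard_normal((2, 4, 4, 3))
meta = rng.standard_normal((2, 5))
Wm, Wf = rng.standard_normal((5, 3)), rng.standard_normal((3, 3))
new_feat, new_meta = braided_step(feat, meta, Wm, Wf)
```

Stacking such blocks inside a U-Net would let scanner and sequence parameters condition the segmentation at every resolution, which is one way the described scan-awareness could be realized.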