ESTIMATING REPRODUCIBLE FUNCTIONAL NETWORKS ASSOCIATED WITH TASK DYNAMICS USING UNSUPERVISED LSTMS.
Pub Date: 2020-04-01. Epub Date: 2020-05-22. DOI: 10.1109/isbi45749.2020.9098377
Nicha C Dvornek, Pamela Ventola, James S Duncan
We propose a method for estimating more reproducible functional networks that are more strongly associated with dynamic task activity by using recurrent neural networks with long short-term memory (LSTM). The LSTM model is trained in an unsupervised manner to learn to generate the functional magnetic resonance imaging (fMRI) time-series data in regions of interest. The learned functional networks can then be used for further analysis, e.g., correlation analysis to determine functional networks that are strongly associated with an fMRI task paradigm. We test our approach and compare it to other methods for decomposing functional networks from fMRI activity on two related but separate datasets that employ a biological motion perception task. We demonstrate that the functional networks learned by the LSTM model are more strongly associated with the task activity and dynamics compared to other approaches. Furthermore, the patterns of network association are more closely replicated across subjects within the same dataset as well as across datasets. More reproducible functional networks are essential for better characterizing the neural correlates of a target task.
{"title":"ESTIMATING REPRODUCIBLE FUNCTIONAL NETWORKS ASSOCIATED WITH TASK DYNAMICS USING UNSUPERVISED LSTMS.","authors":"Nicha C Dvornek, Pamela Ventola, James S Duncan","doi":"10.1109/isbi45749.2020.9098377","DOIUrl":"https://doi.org/10.1109/isbi45749.2020.9098377","url":null,"abstract":"<p><p>We propose a method for estimating more reproducible functional networks that are more strongly associated with dynamic task activity by using recurrent neural networks with long short term memory (LSTMs). The LSTM model is trained in an unsupervised manner to learn to generate the functional magnetic resonance imaging (fMRI) time-series data in regions of interest. The learned functional networks can then be used for further analysis, e.g., correlation analysis to determine functional networks that are strongly associated with an fMRI task paradigm. We test our approach and compare to other methods for decomposing functional networks from fMRI activity on 2 related but separate datasets that employ a biological motion perception task. We demonstrate that the functional networks learned by the LSTM model are more strongly associated with the task activity and dynamics compared to other approaches. Furthermore, the patterns of network association are more closely replicated across subjects within the same dataset as well as across datasets. More reproducible functional networks are essential for better characterizing the neural correlates of a target task.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/isbi45749.2020.9098377","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39335064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
6-MONTH INFANT BRAIN MRI SEGMENTATION GUIDED BY 24-MONTH DATA USING CYCLE-CONSISTENT ADVERSARIAL NETWORKS.
Pub Date: 2020-04-01. Epub Date: 2020-05-22. DOI: 10.1109/isbi45749.2020.9098515
Toan Duc Bui, Li Wang, Weili Lin, Gang Li, Dinggang Shen
Due to the extremely low intensity contrast between the white matter (WM) and the gray matter (GM) at around 6 months of age (the isointense phase), manual annotation is difficult and the number of training labels is therefore highly limited. Consequently, it remains challenging to automatically segment isointense infant brain MRI. Meanwhile, the intensity contrast of images in the later, adult-like phase, such as 24 months of age, is relatively better, and such images can be easily segmented by well-developed tools, e.g., FreeSurfer. The question, therefore, is how we could employ these high-contrast images (such as 24-month-old images) to guide the segmentation of 6-month-old images. Motivated by this, we propose a method that exploits 24-month-old images for reliable tissue segmentation of 6-month-old images. Specifically, we design a 3D-cycleGAN-Seg architecture to generate synthetic images of the isointense phase by transferring appearances between the two time points. To guarantee tissue segmentation consistency between 6-month-old and 24-month-old images, we employ features from generated segmentations to guide the training of the generator network. To further improve the quality of synthetic images, we propose a feature matching loss that computes the cosine distance between unpaired segmentation features of the real and fake images. The transferred 24-month-old images are then used to jointly train the segmentation model on the 6-month-old images. Experimental results demonstrate superior performance of the proposed method compared with existing deep learning-based methods.
{"title":"6-MONTH INFANT BRAIN MRI SEGMENTATION GUIDED BY 24-MONTH DATA USING CYCLE-CONSISTENT ADVERSARIAL NETWORKS.","authors":"Toan Duc Bui, Li Wang, Weili Lin, Gang Li, Dinggang Shen","doi":"10.1109/isbi45749.2020.9098515","DOIUrl":"10.1109/isbi45749.2020.9098515","url":null,"abstract":"<p><p>Due to the extremely low intensity contrast between the white matter (WM) and the gray matter (GM) at around 6 months of age (the isointense phase), it is difficult for manual annotation, hence the number of training labels is highly limited. Consequently, it is still challenging to automatically segment isointense infant brain MRI. Meanwhile, the contrast of intensity images in the early adult phase, such as 24 months of age, is a relatively better, which can be easily segmented by the well-developed tools, e.g., FreeSurfer. Therefore, the question is how could we employ these high-contrast images (such as 24-month-old images) to guide the segmentation of 6-month-old images. Motivated by the above purpose, we propose a method to explore the 24-month-old images for a reliable tissue segmentation of 6-month-old images. Specifically, we design a 3D-cycleGAN-Seg architecture to generate synthetic images of the isointense phase by transferring appearances between the two time-points. To guarantee the tissue segmentation consistency between 6-month-old and 24-month-old images, we employ features from generated segmentations to guide the training of the generator network. To further improve the quality of synthetic images, we propose a feature matching loss that computes the cosine distance between unpaired segmentation features of the real and fake images. Then, the transferred of 24-month-old images is used to jointly train the segmentation model on the 6-month-old images. Experimental results demonstrate a superior performance of the proposed method compared with the existing deep learning-based methods.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8375399/pdf/nihms-1564559.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39335063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MULTI-ECHO RECOVERY WITH FIELD INHOMOGENEITY COMPENSATION USING STRUCTURED LOW-RANK MATRIX COMPLETION.
Pub Date: 2020-04-01. Epub Date: 2020-05-22. DOI: 10.1109/isbi45749.2020.9098418
Stephen Siemonsma, Stanley Kruger, Arvind Balachandrasekaran, Merry Mani, Mathews Jacob
Echo-planar imaging (EPI), which is the main workhorse of functional MRI, suffers from field inhomogeneity-induced geometric distortions. The amount of distortion is proportional to the readout duration, which restricts the maximum achievable spatial resolution. The spatially varying nature of the T2* decay makes it challenging for EPI schemes with a single echo time to obtain good sensitivity to functional activations in different brain regions. Despite the use of parallel MRI and multislice acceleration, the number of different echo times that can be acquired in a reasonable TR is limited. The main focus of this work is to introduce a rosette-based acquisition scheme and a structured low-rank reconstruction algorithm to overcome the above challenges. The proposed scheme exploits the exponential structure of the time series to recover distortion-free images from several echoes simultaneously.
{"title":"MULTI-ECHO RECOVERY WITH FIELD INHOMOGENEITY COMPENSATION USING STRUCTURED LOW-RANK MATRIX COMPLETION.","authors":"Stephen Siemonsma, Stanley Kruger, Arvind Balachandrasekaran, Merry Mani, Mathews Jacob","doi":"10.1109/isbi45749.2020.9098418","DOIUrl":"https://doi.org/10.1109/isbi45749.2020.9098418","url":null,"abstract":"<p><p>Echo-planar imaging (EPI), which is the main workhorse of functional MRI, suffers from field inhomogeneity-induced geometric distortions. The amount of distortion is proportional to the readout duration, which restricts the maximum achievable spatial resolution. The spatially varying nature of the <math> <mrow><msubsup><mi>T</mi> <mn>2</mn> <mo>*</mo></msubsup> </mrow> </math> decay makes it challenging for EPI schemes with a single echo time to obtain good sensitivity to functional activations in different brain regions. Despite the use of parallel MRI and multislice acceleration, the number of different echo times that can be acquired in a reasonable TR is limited. The main focus of this work is to introduce a rosette-based acquisition scheme and a structured low-rank reconstruction algorithm to overcome the above challenges. The proposed scheme exploits the exponential structure of the time series to recover distortion-free images from several echoes simultaneously.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":" ","pages":"1074-1077"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/isbi45749.2020.9098418","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39561163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TENSOR-BASED GRADING: A NOVEL PATCH-BASED GRADING APPROACH FOR THE ANALYSIS OF DEFORMATION FIELDS IN HUNTINGTON'S DISEASE.
Pub Date: 2020-04-01. Epub Date: 2020-05-22. DOI: 10.1109/isbi45749.2020.9098692
Kilian Hett, Hans Johnson, Pierrick Coupé, Jane S Paulsen, Jeffrey D Long, Ipek Oguz
The improvements in magnetic resonance imaging have led to the development of numerous techniques to better detect structural alterations caused by neurodegenerative diseases. Among these, the patch-based grading framework has been proposed to model local patterns of anatomical changes. This approach is attractive because of its low computational cost and its competitive performance. Other studies have proposed to analyze the deformations of brain structures using tensor-based morphometry, which is a highly interpretable approach. In this work, we propose to combine the advantages of these two approaches by extending the patch-based grading framework with a new tensor-based grading method that models patterns of local deformation using a log-Euclidean metric. We evaluate our new method in a study of the putamen for the classification of patients with pre-manifest Huntington's disease versus healthy controls. Our experiments show a substantial increase in classification accuracy (87.5 ± 0.5 vs. 81.3 ± 0.6) compared to existing patch-based grading methods, and show that the proposed measure is a good complement to putamen volume, a primary imaging-based marker for the study of Huntington's disease.
{"title":"TENSOR-BASED GRADING: A NOVEL PATCH-BASED GRADING APPROACH FOR THE ANALYSIS OF DEFORMATION FIELDS IN HUNTINGTON'S DISEASE.","authors":"Kilian Hett, Hans Johnson, Pierrick Coupé, Jane S Paulsen, Jeffrey D Long, Ipek Oguz","doi":"10.1109/isbi45749.2020.9098692","DOIUrl":"https://doi.org/10.1109/isbi45749.2020.9098692","url":null,"abstract":"<p><p>The improvements in magnetic resonance imaging have led to the development of numerous techniques to better detect structural alterations caused by neurodegenerative diseases. Among these, the patch-based grading framework has been proposed to model local patterns of anatomical changes. This approach is attractive because of its low computational cost and its competitive performance. Other studies have proposed to analyze the deformations of brain structures using tensor-based morphometry, which is a highly interpretable approach. In this work, we propose to combine the advantages of these two approaches by extending the patch-based grading framework with a new tensor-based grading method that enables us to model patterns of local deformation using a log-Euclidean metric. We evaluate our new method in a study of the putamen for the classification of patients with pre-manifest Huntington's disease and healthy controls. Our experiments show a substantial increase in classification accuracy (87.5 ± 0.5 vs. 81.3 ± 0.6) compared to the existing patch-based grading methods, and a good complement to putamen volume, which is a primary imaging-based marker for the study of Huntington's disease.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":" ","pages":"1091-1095"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/isbi45749.2020.9098692","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39698927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DIAGNOSTIC IMAGE QUALITY ASSESSMENT AND CLASSIFICATION IN MEDICAL IMAGING: OPPORTUNITIES AND CHALLENGES.
Pub Date: 2020-04-01. Epub Date: 2020-05-22. DOI: 10.1109/isbi45749.2020.9098735
Jeffrey J Ma, Ukash Nakarmi, Cedric Yue Sik Kin, Christopher M Sandino, Joseph Y Cheng, Ali B Syed, Peter Wei, John M Pauly, Shreyas S Vasanawala
Magnetic Resonance Imaging (MRI) suffers from several artifacts, the most common of which are motion artifacts. These artifacts often yield images that are of non-diagnostic quality. To detect such artifacts, images are prospectively evaluated by experts for their diagnostic quality, which necessitates patient revisits and rescans whenever non-diagnostic-quality scans are encountered. This motivates the need to develop an automated framework capable of assessing medical image quality and detecting diagnostic and non-diagnostic images. In this paper, we explore several convolutional neural network-based frameworks for medical image quality assessment and investigate several challenges therein.
{"title":"DIAGNOSTIC IMAGE QUALITY ASSESSMENT AND CLASSIFICATION IN MEDICAL IMAGING: OPPORTUNITIES AND CHALLENGES.","authors":"Jeffrey J Ma, Ukash Nakarmi, Cedric Yue Sik Kin, Christopher M Sandino, Joseph Y Cheng, Ali B Syed, Peter Wei, John M Pauly, Shreyas S Vasanawala","doi":"10.1109/isbi45749.2020.9098735","DOIUrl":"10.1109/isbi45749.2020.9098735","url":null,"abstract":"<p><p>Magnetic Resonance Imaging (MRI) suffers from several artifacts, the most common of which are motion artifacts. These artifacts often yield images that are of non-diagnostic quality. To detect such artifacts, images are prospectively evaluated by experts for their diagnostic quality, which necessitates patient-revisits and rescans whenever non-diagnostic quality scans are encountered. This motivates the need to develop an automated framework capable of accessing medical image quality and detecting diagnostic and non-diagnostic images. In this paper, we explore several convolutional neural network-based frameworks for medical image quality assessment and investigate several challenges therein.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":" ","pages":"337-340"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7710391/pdf/nihms-1648203.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38333301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CALIBRATIONLESS PARALLEL MRI USING MODEL BASED DEEP LEARNING (C-MODL).
Pub Date: 2020-04-01. Epub Date: 2020-05-22. DOI: 10.1109/isbi45749.2020.9098490
Aniket Pramanik, Hemant Aggarwal, Mathews Jacob
We introduce a fast model-based deep learning approach for calibrationless parallel MRI reconstruction. The proposed scheme is a non-linear generalization of structured low-rank (SLR) methods that self-learn linear annihilation filters from the same subject. It pre-learns non-linear annihilation relations in the Fourier domain from exemplar data. The pre-learning strategy significantly reduces the computational complexity, making the proposed scheme three orders of magnitude faster than SLR schemes. The proposed framework also allows the use of a complementary spatial-domain prior; the hybrid regularization scheme offers improved performance over the calibrated image-domain MoDL approach. The calibrationless strategy minimizes potential mismatches between calibration data and the main scan, while eliminating the need for a fully sampled calibration region.
{"title":"CALIBRATIONLESS PARALLEL MRI USING MODEL BASED DEEP LEARNING (C-MODL).","authors":"Aniket Pramanik, Hemant Aggarwal, Mathews Jacob","doi":"10.1109/isbi45749.2020.9098490","DOIUrl":"10.1109/isbi45749.2020.9098490","url":null,"abstract":"<p><p>We introduce a fast model based deep learning approach for calibrationless parallel MRI reconstruction. The proposed scheme is a non-linear generalization of structured low rank (SLR) methods that self learn linear annihilation filters from the same subject. It pre-learns non-linear annihilation relations in the Fourier domain from exemplar data. The pre-learning strategy significantly reduces the computational complexity, making the proposed scheme three orders of magnitude faster than SLR schemes. The proposed framework also allows the use of a complementary spatial domain prior; the hybrid regularization scheme offers improved performance over calibrated image domain MoDL approach. The calibrationless strategy minimizes potential mismatches between calibration data and the main scan, while eliminating the need for a fully sampled calibration region.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":" ","pages":"1428-1431"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7877806/pdf/nihms-1667588.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25368270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AUTOMATIC BRAIN ORGAN SEGMENTATION WITH 3D FULLY CONVOLUTIONAL NEURAL NETWORK FOR RADIATION THERAPY TREATMENT PLANNING.
Pub Date: 2020-04-01. Epub Date: 2020-05-22. DOI: 10.1109/isbi45749.2020.9098485
Hongyi Duanmu, Jinkoo Kim, Praitayini Kanakaraj, Andrew Wang, John Joshua, Jun Kong, Fusheng Wang
3D organ contouring is an essential step in radiation therapy treatment planning, both for organ dose estimation and for optimizing plans to reduce organs-at-risk doses. Manual contouring is time-consuming, and its inter-clinician variability adversely affects outcome studies. The organs involved also vary dramatically in size, with up to two orders of magnitude difference in volume. In this paper, we present BrainSegNet, a novel 3D fully convolutional neural network (FCNN) based approach for automatic segmentation of brain organs. BrainSegNet takes a multi-resolution-path approach and uses a weighted loss function to address the major challenge of large variability in organ sizes. We evaluated our approach on a dataset of 46 brain CT image volumes with corresponding expert organ contours as reference. Compared with LiviaNet and V-Net, BrainSegNet achieves superior performance in segmenting tiny or thin organs, such as the chiasm, optic nerves, and cochlea, and it outperforms these methods in segmenting large organs as well. BrainSegNet can reduce the manual contouring time for a volume from an hour to less than two minutes, and it holds high potential to improve the efficiency of the radiation therapy workflow.
{"title":"AUTOMATIC BRAIN ORGAN SEGMENTATION WITH 3D FULLY CONVOLUTIONAL NEURAL NETWORK FOR RADIATION THERAPY TREATMENT PLANNING.","authors":"Hongyi Duanmu, Jinkoo Kim, Praitayini Kanakaraj, Andrew Wang, John Joshua, Jun Kong, Fusheng Wang","doi":"10.1109/isbi45749.2020.9098485","DOIUrl":"https://doi.org/10.1109/isbi45749.2020.9098485","url":null,"abstract":"<p><p>3D organ contouring is an essential step in radiation therapy treatment planning for organ dose estimation as well as for optimizing plans to reduce organs-at-risk doses. Manual contouring is time-consuming and its inter-clinician variability adversely affects the outcomes study. Such organs also vary dramatically on sizes - up to two orders of magnitude difference in volumes. In this paper, we present BrainSegNet, a novel 3D fully convolutional neural network (FCNN) based approach for automatic segmentation of brain organs. BrainSegNet takes a multiple resolution paths approach and uses a weighted loss function to solve the major challenge of the large variability in organ sizes. We evaluated our approach with a dataset of 46 Brain CT image volumes with corresponding expert organ contours as reference. Compared with those of LiviaNet and V-Net, BrainSegNet has a superior performance in segmenting tiny or thin organs, such as chiasm, optic nerves, and cochlea, and outperforms these methods in segmenting large organs as well. BrainSegNet can reduce the manual contouring time of a volume from an hour to less than two minutes, and holds high potential to improve the efficiency of radiation therapy workflow.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":" ","pages":"758-762"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/isbi45749.2020.9098485","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38270335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SCREENING FOR BARRETT'S ESOPHAGUS WITH PROBE-BASED CONFOCAL LASER ENDOMICROSCOPY VIDEOS.
Pub Date: 2020-04-01. Epub Date: 2020-05-22. DOI: 10.1109/isbi45749.2020.9098630
J Vince Pulido, Shan Guleria, Lubaina Ehsan, Tilak Shah, Sana Syed, Don E Brown
Histologic diagnosis of Barrett's esophagus and esophageal malignancy via probe-based confocal laser endomicroscopy (pCLE) allows for real-time examination of epithelial architecture and targeted biopsy sampling. Although pCLE demonstrates high specificity, its sensitivity remains low. This study employs deep learning architectures to improve the accuracy of pCLE in diagnosing esophageal cancer and its precursors. pCLE videos are curated and annotated as belonging to one of three classes: squamous, Barrett's (intestinal metaplasia without dysplasia), or dysplasia. We introduce two novel video architectures, AttentionPooling and Multi-Module AttentionPooling deep networks, that outperform other models and demonstrate a high degree of explainability.
{"title":"SCREENING FOR BARRETT'S ESOPHAGUS WITH PROBE-BASED CONFOCAL LASER ENDOMICROSCOPY VIDEOS.","authors":"J Vince Pulido, Shan Guleria, Lubaina Ehsan, Tilak Shah, Sana Syed, Don E Brown","doi":"10.1109/isbi45749.2020.9098630","DOIUrl":"10.1109/isbi45749.2020.9098630","url":null,"abstract":"<p><p>Histologic diagnosis of Barrett's esophagus and esophageal malignancy via probe-based confocal laser endomicroscopy (pCLE) allows for real-time examination of epithelial architecture and targeted biopsy sampling. Although pCLE demonstrates high specificity, sensitivity remains low. This study employs deep learning architectures in order to improve the accuracy of pCLE in diagnosing esophageal cancer and its precursors. pCLE videos are curated and annotated as belonging to one of the three classes: squamous, Barrett's (intestinal metaplasia without dysplasia), or dysplasia. We introduce two novel video architectures, AttentionPooling and Multi-Module AttentionPooling deep networks, that outperform other models and demonstrate a high degree of explainability.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":" ","pages":"1659-1663"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/isbi45749.2020.9098630","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39020905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of Consistency in Structural and Functional Connectivity of Human Brain.
Pub Date: 2020-04-01. Epub Date: 2020-05-22. DOI: 10.1109/isbi45749.2020.9098412
Yusuf Osmanlıoğlu, Jacob A Alappatt, Drew Parker, Ragini Verma
Analysis of structural and functional connectivity of the brain has become a fundamental approach in neuroscientific research. Despite several studies reporting consistent similarities as well as differences between structural and resting-state (rs) functional connectomes, a comparative investigation of connectomic consistency between the two modalities is still lacking. Nonetheless, connectomic analyses comprising both connectivity types necessitate extra attention, as the consistency of connectivity differs across modalities, possibly affecting the interpretation of the results. In this study, we present a comprehensive analysis of consistency in structural and rs-functional connectomes obtained from longitudinal diffusion MRI and rs-fMRI data of a single healthy subject. We contrast the consistency of deterministic and probabilistic tracking with that of full, positive, and negative functional connectivities across various connectome generation schemes, using correlation as a measure of consistency.
{"title":"Analysis of Consistency in Structural and Functional Connectivity of Human Brain.","authors":"Yusuf Osmanlıoğlu, Jacob A Alappatt, Drew Parker, Ragini Verma","doi":"10.1109/isbi45749.2020.9098412","DOIUrl":"https://doi.org/10.1109/isbi45749.2020.9098412","url":null,"abstract":"<p><p>Analysis of structural and functional connectivity of brain has become a fundamental approach in neuroscientific research. Despite several studies reporting consistent similarities as well as differences for structural and resting state (rs) functional connectomes, a comparative investigation of connectomic consistency between the two modalities is still lacking. Nonetheless, connectomic analysis comprising both connectivity types necessitate extra attention as consistency of connectivity differs across modalities, possibly affecting the interpretation of the results. In this study, we present a comprehensive analysis of consistency in structural and rs-functional connectomes obtained from longitudinal diffusion MRI and rs-fMRI data of a single healthy subject. We contrast consistency of deterministic and probabilistic tracking with that of full, positive, and negative functional connectivities across various connectome generation schemes, using correlation as a measure of consistency.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":" ","pages":"1694-1697"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/isbi45749.2020.9098412","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38377178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Diagnosis of Autism Spectrum Disorder and Disentangling its Heterogeneous Functional Connectivity Patterns Using Capsule Networks.
Pub Date: 2020-04-01. Epub Date: 2020-05-22. DOI: 10.1109/isbi45749.2020.9098524
Zhicheng Jiao, Hongming Li, Yong Fan
Functional connectivity (FC) analysis is an appealing tool to aid diagnosis and elucidate the neurophysiological underpinnings of autism spectrum disorder (ASD). Many machine learning methods have been developed to distinguish ASD patients from healthy controls based on FC measures and to identify abnormal FC patterns of ASD. In particular, several studies have demonstrated that deep learning models can achieve better performance for ASD diagnosis than conventional machine learning methods. Although promising classification performance has been achieved by existing machine learning methods, they do not explicitly model the heterogeneity of ASD and are therefore incapable of disentangling its heterogeneous FC patterns. To achieve an improved diagnosis and a better understanding of ASD, we adopt capsule networks (CapsNets) to build classifiers that distinguish ASD patients from healthy controls based on FC measures and to stratify ASD patients into groups with distinct FC patterns. Evaluation results on a large multi-site dataset demonstrate that our method not only obtained better classification performance than state-of-the-art machine learning methods, but also identified clinically meaningful subgroups of ASD patients based on the vectorized classification outputs of the CapsNet classification model.
{"title":"Improving Diagnosis of Autism Spectrum Disorder and Disentangling its Heterogeneous Functional Connectivity Patterns Using Capsule Networks.","authors":"Zhicheng Jiao, Hongming Li, Yong Fan","doi":"10.1109/isbi45749.2020.9098524","DOIUrl":"https://doi.org/10.1109/isbi45749.2020.9098524","url":null,"abstract":"<p><p>Functional connectivity (FC) analysis is an appealing tool to aid diagnosis and elucidate the neurophysiological underpinnings of autism spectrum disorder (ASD). Many machine learning methods have been developed to distinguish ASD patients from healthy controls based on FC measures and identify abnormal FC patterns of ASD. Particularly, several studies have demonstrated that deep learning models could achieve better performance for ASD diagnosis than conventional machine learning methods. Although promising classification performance has been achieved by the existing machine learning methods, they do not explicitly model heterogeneity of ASD, incapable of disentangling heterogeneous FC patterns of ASD. To achieve an improved diagnosis and a better understanding of ASD, we adopt capsule networks (CapsNets) to build classifiers for distinguishing ASD patients from healthy controls based on FC measures and stratify ASD patients into groups with distinct FC patterns. Evaluation results based on a large multi-site dataset have demonstrated that our method not only obtained better classification performance than state-of-the-art alternative machine learning methods, but also identified clinically meaningful subgroups of ASD patients based on their vectorized classification outputs of the CapsNets classification model.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":" ","pages":"1331-1334"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/isbi45749.2020.9098524","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38652517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}