Pub Date: 2023-04-01 | Epub Date: 2023-09-01 | DOI: 10.1109/isbi53787.2023.10230719
Wenhui Zhu, Peijie Qiu, Mohammad Farazi, Keshav Nandakumar, Oana M Dumitrascu, Yalin Wang
Real-world non-mydriatic retinal fundus photography is prone to artifacts, imperfections, and low quality when certain ocular or systemic co-morbidities exist. Artifacts may result in inaccuracy or ambiguity in clinical diagnoses. In this paper, we propose a simple but effective end-to-end framework for enhancing poor-quality retinal fundus images. Leveraging optimal transport theory, we propose an unpaired image-to-image translation scheme for transporting low-quality images to their high-quality counterparts. We theoretically prove that a generative adversarial network (GAN) with a single generator and discriminator is sufficient for this task. Furthermore, to mitigate information inconsistency between the low-quality images and their enhancements, we propose an information consistency mechanism that maximally maintains structural consistency (optic discs, blood vessels, lesions) between the source and enhanced domains. Extensive experiments on the EyeQ dataset demonstrate the superiority of our method both perceptually and quantitatively.
Title: "OPTIMAL TRANSPORT GUIDED UNSUPERVISED LEARNING FOR ENHANCING LOW-QUALITY RETINAL IMAGES." Proceedings, IEEE International Symposium on Biomedical Imaging, 2023. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10513403/pdf/nihms-1880846.pdf
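The information-consistency idea above, keeping vessel, optic-disc, and lesion structure intact while the GAN enhances appearance, can be illustrated with a toy penalty on the difference between gradient maps of the input and its enhancement. This is a hedged sketch using an assumed finite-difference edge extractor, not the paper's actual mechanism:

```python
import numpy as np

def gradient_map(img):
    """Finite-difference gradient magnitude: a crude stand-in for the
    edge/vessel structure of a fundus image."""
    gy = np.diff(img, axis=0, append=img[-1:, :])
    gx = np.diff(img, axis=1, append=img[:, -1:])
    return np.sqrt(gx**2 + gy**2)

def structure_consistency_loss(low_q, enhanced):
    """Mean absolute difference between the gradient maps of the
    low-quality input and its enhancement (illustrative penalty)."""
    return float(np.mean(np.abs(gradient_map(low_q) - gradient_map(enhanced))))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
# An enhancement identical to the input incurs zero structural penalty.
loss_same = structure_consistency_loss(img, img)
# Shuffling all pixels destroys structure and incurs a larger penalty.
shuffled = rng.permutation(img.ravel()).reshape(64, 64)
loss_diff = structure_consistency_loss(img, shuffled)
```

In a real enhancement network this term would be one loss component alongside the adversarial objective, so that brightening or de-hazing is rewarded while geometric drift of anatomy is penalized.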
Pub Date: 2023-04-01 | Epub Date: 2023-09-01 | DOI: 10.1109/isbi53787.2023.10230623
S Pachade, S Datta, Y Dong, S Salazar-Marioni, R Abdelkhaleq, A Niktabe, K Roberts, S A Sheth, L Giancardo
Scarcity of labels for medical images is a significant barrier to training representation-learning approaches based on deep neural networks. This limitation also applies to imaging data collected during routine clinical care and stored in picture archiving and communication systems (PACS), as these data rarely have the high-quality labels required for medical image computing tasks attached. However, medical images extracted from PACS are commonly coupled with descriptive radiology reports that contain significant information and could be leveraged to pre-train imaging models, which could then serve as starting points for task-specific fine-tuning. In this work, we perform a head-to-head comparison of three self-supervised strategies for pre-training the same imaging model on 3D brain computed tomography angiogram (CTA) images, with large vessel occlusion (LVO) detection as the downstream task. These strategies evaluate two natural language processing (NLP) approaches, one that extracts 100 explicit radiology concepts (Rad-SpatialNet) and one that creates general-purpose radiology report embeddings (DistilBERT). In addition, we experiment with learning radiology concepts directly or via a recent self-supervised learning approach (CLIP) that learns by ranking the distance between language and image vector embeddings. The LVO detection task was selected because it requires 3D imaging data, is clinically important, and requires the algorithm to learn outputs not explicitly stated in the radiology report. Pre-training was performed on an unlabeled dataset of 1,542 3D CTA-report pairs, and the downstream task was tested on a labeled dataset of 402 subjects. We find that pre-training with CLIP-based strategies improves the imaging model's LVO detection performance compared to a model trained only on the labeled data. The best performance was achieved by pre-training with the explicit radiology concepts and the CLIP strategy.
Title: "SELF-SUPERVISED LEARNING WITH RADIOLOGY REPORTS, A COMPARATIVE ANALYSIS OF STRATEGIES FOR LARGE VESSEL OCCLUSION AND BRAIN CTA IMAGES." Proceedings, IEEE International Symposium on Biomedical Imaging, 2023. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10498780/pdf/
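The CLIP-style objective the abstract mentions, learning by ranking distances between language and image embeddings, can be sketched as a symmetric cross-entropy over a cosine-similarity matrix in which matched image/report pairs sit on the diagonal. This is a generic illustration of the published CLIP loss, not the authors' exact training code; the temperature value is an assumption:

```python
import numpy as np

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over cosine similarities, as in CLIP.
    Row i of img_emb is assumed to match row i of txt_emb."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (N, N) similarity matrix

    def log_softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

    diag = np.arange(len(logits))
    loss_i2t = -log_softmax(logits)[diag, diag].mean()   # image -> report
    loss_t2i = -log_softmax(logits.T)[diag, diag].mean() # report -> image
    return float((loss_i2t + loss_t2i) / 2)

rng = np.random.default_rng(1)
emb = rng.normal(size=(8, 32))
loss_aligned = clip_style_loss(emb, emb)        # correctly matched pairs
loss_shuffled = clip_style_loss(emb, emb[::-1]) # mismatched pairs
```

Matched pairs drive the loss toward zero while shuffled pairings inflate it, which is the ranking behavior that lets report text supervise the 3D CTA encoder without manual labels.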
Pub Date: 2023-04-01 | Epub Date: 2023-09-01 | DOI: 10.1109/isbi53787.2023.10230488
Hao Zheng, Hongming Li, Yong Fan
To achieve fast, robust, and accurate reconstruction of human cortical surfaces from 3D magnetic resonance images (MRIs), we develop a novel deep learning-based framework, referred to as SurfNN, that simultaneously reconstructs both the inner (between white matter and gray matter) and outer (pial) surfaces from MRIs. Unlike existing deep learning-based cortical surface reconstruction methods, which either reconstruct the surfaces separately or neglect the interdependence between the inner and outer surfaces, SurfNN reconstructs both surfaces jointly by training a single network to predict a midthickness surface that lies at the center of the inner and outer cortical surfaces. The input of SurfNN consists of a 3D MRI and an initialization of the midthickness surface, represented both implicitly as a 3D distance map and explicitly as a triangular mesh with spherical topology; its output includes the inner, outer, and midthickness surfaces. The method has been evaluated on a large-scale MRI dataset and demonstrated competitive cortical surface reconstruction performance.
Title: "SurfNN: Joint Reconstruction of Multiple Cortical Surfaces from Magnetic Resonance Images." Proceedings, IEEE International Symposium on Biomedical Imaging, 2023. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10544794/pdf/
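The geometric idea behind a midthickness-centered representation, that the inner and outer surfaces sit symmetrically about the predicted midthickness, can be sketched by offsetting mesh vertices along their normals. This is a toy geometric illustration only (the vertices, normals, and thickness values are invented, and SurfNN itself predicts these quantities with a network):

```python
import numpy as np

def offset_surfaces(vertices, normals, half_thickness):
    """Place inner and outer surface vertices symmetrically about a
    midthickness mesh, given unit vertex normals and a per-vertex
    half thickness (in mm)."""
    inner = vertices - normals * half_thickness[:, None]
    outer = vertices + normals * half_thickness[:, None]
    return inner, outer

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])   # midthickness vertices
norms = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])   # unit normals
half = np.array([1.25, 1.5])                           # half cortical thickness
inner, outer = offset_surfaces(verts, norms, half)
```

By construction the midthickness is the average of the two reconstructed surfaces, which is exactly the coupling that lets one network output keep the inner and outer surfaces consistent with each other.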
Pub Date: 2022-04-28 | DOI: 10.1109/ISBI52829.2022.9761400
Robail Yasrab, Zeyu Fu, Lior Drukker, Lok Hin Lee, He Zhao, Aris T Papageorghiou, Alison J Noble
This study presents a novel approach to automatic detection and segmentation of the Crown Rump Length (CRL) and Nuchal Translucency (NT), two essential measurements in the first-trimester ultrasound (US) scan. The proposed method automatically localises a standard plane within a video clip, as defined by the UK Fetal Abnormality Screening Programme. A Nested Hourglass (NHG)-based network performs semantic pixel-wise segmentation to extract NT and CRL structures. Our results show that the NHG network is faster (19.52% fewer GFLOPs than FCN32) and offers high pixel agreement (mean-IoU = 80.74) with expert manual annotations.
Title: "End-to-end First Trimester Fetal Ultrasound Video Automated CRL and NT Segmentation." Proceedings, IEEE International Symposium on Biomedical Imaging, 2022. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7614066/pdf/EMS159390.pdf
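The pixel-agreement metric quoted above, mean-IoU, is the per-class intersection-over-union between predicted and expert masks, averaged over classes. A minimal sketch of the standard metric (the toy masks below are invented for illustration):

```python
import numpy as np

def mean_iou(pred, gt, n_classes):
    """Mean intersection-over-union across classes for integer label maps.
    Classes absent from both maps are skipped."""
    ious = []
    for c in range(n_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))

# Toy example: ground truth marks the top two rows as foreground,
# the prediction over-segments by one row.
gt = np.zeros((4, 4), dtype=int); gt[:2] = 1
pred = np.zeros((4, 4), dtype=int); pred[:3] = 1
score = mean_iou(pred, gt, 2)   # (IoU_bg + IoU_fg) / 2 = (0.5 + 2/3) / 2
```

For the NT/CRL task the classes would be background plus the two fetal structures, and the reported 80.74 is this quantity expressed as a percentage.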
Pub Date: 2022-04-26 | DOI: 10.1109/ISBI52829.2022.9761585
Elizaveta Savochkina, Lok Hin Lee, He Zhao, Lior Drukker, Aris T Papageorghiou, J Alison Noble
In this paper we develop a multi-modal video analysis algorithm to predict where a sonographer should look next. Our approach uses video and expert knowledge, defined by gaze-tracking data acquired during routine first-trimester fetal ultrasound scanning. Specifically, we propose a spatio-temporal convolutional LSTMU-Net neural network (cLSTMU-Net) for video saliency prediction with stochastic augmentation. The architecture consists of a U-Net-based encoder-decoder network and a cLSTM to account for temporal information. We compare the performance of the cLSTMU-Net with spatial-only architectures on the task of predicting gaze in first-trimester ultrasound videos. Our study dataset consists of 115 clinically acquired first-trimester US videos totalling 45,666 video frames. We adopt a Random Augmentation (RA) strategy from a stochastic augmentation policy search to improve model performance and reduce over-fitting. The proposed cLSTMU-Net, using video clips of 6 frames, outperforms the baseline approach on all saliency metrics: KLD, SIM, NSS, and CC (2.08, 0.28, 4.53, and 0.42 versus 2.16, 0.27, 4.34, and 0.39).
Title: "First Trimester video Saliency Prediction using CLSTMU-NET with Stochastic Augmentation." Proceedings, IEEE International Symposium on Biomedical Imaging, 2022. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7614063/pdf/EMS159391.pdf
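Two of the saliency metrics reported above are straightforward to state: KLD treats the normalized maps as distributions (lower is better), and CC is the Pearson correlation between maps (higher is better). A minimal sketch of these standard definitions, with invented random maps standing in for gaze and prediction:

```python
import numpy as np

def kld(pred, gt, eps=1e-7):
    """KL divergence from the predicted saliency map to the ground-truth
    fixation map, after normalizing each to sum to 1 (lower is better)."""
    p = pred / (pred.sum() + eps)
    g = gt / (gt.sum() + eps)
    return float(np.sum(g * np.log(g / (p + eps) + eps)))

def cc(pred, gt):
    """Pearson correlation coefficient between the two maps (higher is better)."""
    return float(np.corrcoef(pred.ravel(), gt.ravel())[0, 1])

rng = np.random.default_rng(2)
gaze_map = rng.random((16, 16))
k_perfect = kld(gaze_map, gaze_map)              # ~0 for a perfect prediction
c_perfect = cc(gaze_map, gaze_map)               # 1 for a perfect prediction
k_random = kld(rng.random((16, 16)), gaze_map)   # worse (larger) for noise
```

SIM and NSS follow the same pattern (histogram intersection and fixation-normalized scanpath saliency, respectively); the numbers in the abstract are averages of such per-frame scores.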
Pub Date: 2022-03-01 | Epub Date: 2022-04-26 | DOI: 10.1109/isbi52829.2022.9761629
Xiaofeng Liu, Fangxu Xing, Georges El Fakhri, Jonghye Woo
Unsupervised domain adaptation (UDA) between two significantly disparate domains to learn high-level semantic alignment is a crucial yet challenging task. To this end, we propose exploiting low-level edge information to facilitate the adaptation as a precursor task, which has a smaller cross-domain gap than semantic segmentation. The precise contour then provides spatial information to guide the semantic adaptation. More specifically, we propose a multi-task framework that learns a contouring adaptation network along with a semantic segmentation adaptation network, taking both a magnetic resonance imaging (MRI) slice and its initial edge map as input. The two networks are jointly trained with source-domain labels, and adversarial learning at the feature and edge-map levels is carried out for cross-domain alignment. In addition, self-entropy minimization is incorporated to further enhance segmentation performance. We evaluated our framework on the BraTS2018 database for cross-modality brain tumor segmentation, showing the validity and superiority of our approach compared with competing methods.
Title: "SELF-SEMANTIC CONTOUR ADAPTATION FOR CROSS MODALITY BRAIN TUMOR SEGMENTATION." Proceedings, IEEE International Symposium on Biomedical Imaging, 2022. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9387767/pdf/nihms-1779009.pdf
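The self-entropy minimization term mentioned above has a compact definition: the mean entropy of the network's predicted class distribution, which pushes predictions on unlabeled target-domain images toward confident (low-entropy) outputs. A minimal sketch of that standard loss (the example logits are invented):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_entropy(logits):
    """Mean entropy of the predicted class distribution; minimizing it
    sharpens predictions without needing target-domain labels."""
    p = softmax(logits)
    return float(-np.sum(p * np.log(p + 1e-12), axis=-1).mean())

confident = np.array([[10.0, 0.0, 0.0]])   # nearly one-hot prediction
uncertain = np.array([[0.0, 0.0, 0.0]])    # uniform over 3 classes
h_conf = self_entropy(confident)
h_unif = self_entropy(uncertain)           # log(3) for a uniform distribution
```

In the UDA setting this term is computed per pixel on target-domain slices and added to the adversarial and source-supervised losses.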
Pub Date: 2022-03-01 | Epub Date: 2022-04-26 | DOI: 10.1109/isbi52829.2022.9761543
Haoteng Tang, Lei Guo, Xiyao Fu, Benjamin Qu, Paul M Thompson, Heng Huang, Liang Zhan
Brain networks have been extensively studied in neuroscience to better understand human behavior and to identify and characterize distributed brain abnormalities in neurological and psychiatric conditions. Several deep graph learning models have been proposed for brain network analysis, yet most current models lack interpretability, which makes it hard to gain heuristic biological insights from the results. In this paper, we propose a new explainable graph learning model, named hierarchical brain embedding (HBE), to extract brain network representations based on the network community structure, yielding interpretable hierarchical patterns. We apply our new method to predict aggressivity, rule-breaking, and other standardized behavioral scores from functional brain networks derived using independent component analysis (ICA) from 1,000 young healthy subjects scanned by the Human Connectome Project. Our results show that the proposed HBE outperforms several state-of-the-art graph learning methods in predicting behavioral measures, and demonstrates similar hierarchical brain network patterns associated with clinical symptoms.
Title: "HIERARCHICAL BRAIN EMBEDDING USING EXPLAINABLE GRAPH LEARNING." Proceedings, IEEE International Symposium on Biomedical Imaging, 2022. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9851647/pdf/nihms-1864928.pdf
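The community-structured representation that HBE builds on can be illustrated with the simplest possible coarsening step: pooling node features within each community to obtain a community-level embedding. This is a hedged, minimal stand-in for hierarchical graph coarsening, not the HBE model itself; the features and community labels are invented:

```python
import numpy as np

def community_pool(node_features, communities):
    """Average node features within each community, producing one
    embedding row per community (a single level of a hierarchy)."""
    labels = np.unique(communities)
    return np.stack([node_features[communities == c].mean(axis=0)
                     for c in labels])

# 4 brain-network nodes with 2-d features, grouped into 2 communities.
feat = np.array([[1.0, 0.0],
                 [3.0, 0.0],
                 [0.0, 2.0],
                 [0.0, 4.0]])
comm = np.array([0, 0, 1, 1])
pooled = community_pool(feat, comm)   # one row per community
```

Stacking such pooling levels yields the hierarchical, community-aligned patterns that make the learned embedding inspectable, since each intermediate row maps back to a named group of brain regions.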
Pub Date: 2022-03-01 | DOI: 10.1109/isbi52829.2022.9761423
Seung Yeon Shin, Sungwon Lee, Ronald M Summers
We present a new graph-based method for small bowel path tracking based on cylindrical constraints. A distinctive characteristic of the small bowel, compared to other organs, is that parts of it are in contact with one another along its course; together with the indistinct appearance of the bowel wall, this makes path tracking difficult. When tracking relies on low-level features such as wall detection, the tracked path easily crosses over the walls. To circumvent this, a series of cylinders fitted along the course of the small bowel is used to guide the tracking in more reliable directions. The cylinders are implemented as soft constraints through a new cost function. The proposed method is evaluated against ground-truth paths that are fully connected from start to end of the small bowel in 10 abdominal CT scans. It showed clear improvements over the baseline method in tracking the path without making an error: improvements of 6.6% and 17.0% in tracked length were observed for two different settings of the small bowel segmentation.
Title: "GRAPH-BASED SMALL BOWEL PATH TRACKING WITH CYLINDRICAL CONSTRAINTS." Proceedings, IEEE International Symposium on Biomedical Imaging, 2022. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10134031/pdf/nihms-1887665.pdf
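The "soft cylindrical constraint" idea can be sketched as a penalty that is zero while a candidate path point stays inside a fitted cylinder and grows once it leaves, discouraging the path from crossing a wall into an adjacent bowel segment. This is a hedged illustration of one plausible penalty shape (quadratic outside the radius), not the paper's actual cost function:

```python
import numpy as np

def axis_distance(p, a, b):
    """Perpendicular distance from point p to the axis through
    cylinder endpoints a and b."""
    d = (b - a) / np.linalg.norm(b - a)
    v = p - a
    return float(np.linalg.norm(v - np.dot(v, d) * d))

def cylinder_penalty(p, a, b, radius):
    """Soft cylindrical constraint: zero inside the cylinder, growing
    quadratically with the distance beyond its radius."""
    return max(0.0, axis_distance(p, a, b) - radius) ** 2

# A cylinder of radius 1 along the z-axis from (0,0,0) to (0,0,10).
a = np.array([0.0, 0.0, 0.0])
b = np.array([0.0, 0.0, 10.0])
inside = cylinder_penalty(np.array([0.5, 0.0, 5.0]), a, b, 1.0)
outside = cylinder_penalty(np.array([3.0, 0.0, 5.0]), a, b, 1.0)
```

Summing such terms along candidate edges of the path graph biases the shortest-path search toward directions consistent with the fitted cylinders.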
Pub Date: 2022-03-01 | DOI: 10.1109/isbi52829.2022.9761576
Jianfeng Wu, Yi Su, Eric M Reiman, Richard J Caselli, Kewei Chen, Paul M Thompson, Junwen Wang, Yalin Wang
Alzheimer's disease (AD) affects more than 1 in 9 people aged 65 and older and is becoming an urgent public health concern as the global population ages. Tau tangles are the specific protein pathological hallmark of AD and play a crucial role in the dementia-related structural deformations observed in magnetic resonance imaging (MRI) scans. Hippocampal volume loss is closely related to the development of AD, and apolipoprotein E (APOE) genotype also significantly affects the risk of developing AD. However, few studies have integrated genotypes, MRI, and tau deposition to infer multimodal relationships. In this paper, we propose a federated Chow test model to study the synergistic effects of APOE and tau on hippocampal morphometry. Our experimental results demonstrate that the model can detect differences in tau deposition and hippocampal atrophy among cohorts with different genotypes; the subiculum and cornu ammonis 1 (CA1 subfield) were identified as hippocampal subregions where atrophy is strongly associated with abnormal tau in the homozygote cohort. Our model provides novel insight into the neural mechanisms underlying the individual impact of APOE and tau deposition on brain imaging.
Title: "INVESTIGATING THE EFFECT OF TAU DEPOSITION AND APOE ON HIPPOCAMPAL MORPHOMETRY IN ALZHEIMER'S DISEASE: A FEDERATED CHOW TEST MODEL." Proceedings, IEEE International Symposium on Biomedical Imaging, 2022. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9491515/pdf/nihms-1788723.pdf
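The Chow test underlying the model asks whether two cohorts (here, different APOE genotypes) share one linear relationship or require separate fits. A minimal, non-federated sketch of the classical statistic for simple regression with an intercept; the cohorts and data below are synthetic:

```python
import numpy as np

def chow_stat(x1, y1, x2, y2):
    """Chow test statistic: compares the residual sum of squares of one
    pooled linear fit against separate per-cohort fits. Large values
    indicate the cohorts follow different models."""
    def rss(x, y):
        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return float(r @ r), X.shape[1]

    s_pool, k = rss(np.concatenate([x1, x2]), np.concatenate([y1, y2]))
    s1, _ = rss(x1, y1)
    s2, _ = rss(x2, y2)
    n = len(x1) + len(x2)
    return ((s_pool - s1 - s2) / k) / ((s1 + s2) / (n - 2 * k))

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 30)
y_a = 2.0 * x + rng.normal(scale=0.05, size=x.size)
y_b = -2.0 * x + rng.normal(scale=0.05, size=x.size)  # opposite relationship
y_c = 2.0 * x + rng.normal(scale=0.05, size=x.size)   # same model as y_a
stat_split = chow_stat(x, y_a, x, y_b)   # large: one pooled fit is poor
stat_same = chow_stat(x, y_a, x, y_c)    # small: pooling is fine
```

The "federated" variant in the paper computes such per-cohort sufficient statistics at each site without sharing raw subject data; the sketch above shows only the underlying test.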
Pub Date: 2022-03-01 | DOI: 10.1109/isbi52829.2022.9761579
Md Ashequr Rahman, Zitong Yu, Abhinav K Jha
In medical imaging, it is widely recognized that image quality should be objectively evaluated based on performance in clinical tasks. For signal-detection tasks, the ideal observer (IO) is optimal but challenging to compute in clinically realistic settings. Markov chain Monte Carlo (MCMC)-based strategies have demonstrated the ability to compute the IO using pre-computed projections of an anatomical database. To evaluate image quality in clinically realistic scenarios, observer performance should be measured for a realistic patient distribution, which implies that the anatomical database should also be derived from a realistic population. In this manuscript, we propose to advance the MCMC-based approach towards these goals. We then use the proposed approach to study the effect of anatomical database size on IO computation for the task of detecting perfusion defects in simulated myocardial perfusion SPECT images. Our preliminary results provide evidence that the size of the anatomical database affects the computation of the IO.
Title: "Ideal-Observer Computation with anthropomorphic phantoms using Markov chain Monte Carlo." Proceedings, IEEE International Symposium on Biomedical Imaging, 2022. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9648621/pdf/nihms-1819092.pdf
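The quantity the ideal observer computes is the likelihood ratio of the image under signal-present versus signal-absent hypotheses, marginalized over the anatomical background ensemble. The sketch below is a drastically simplified stand-in for the MCMC machinery: plain Monte Carlo averaging over a synthetic Gaussian background ensemble, with a signal-known-exactly white-noise model; every name, shape, and number is illustrative, not from the paper:

```python
import numpy as np

def log_likelihood_ratio(img, signal, backgrounds, sigma):
    """Monte Carlo log likelihood ratio for a known signal in white
    Gaussian noise, marginalized over sampled backgrounds b:
        LR(g) = E_b[p(g | b + s)] / E_b[p(g | b)]."""
    def log_p(means):
        return -np.sum((img - means) ** 2, axis=-1) / (2.0 * sigma**2)

    lp1, lp0 = log_p(backgrounds + signal), log_p(backgrounds)
    m = max(lp1.max(), lp0.max())  # shared log-sum-exp shift for stability
    return float(np.log(np.exp(lp1 - m).mean())
                 - np.log(np.exp(lp0 - m).mean()))

rng = np.random.default_rng(4)
backgrounds = rng.normal(size=(200, 16))   # stand-in anatomical ensemble
signal = np.full(16, 2.0)                  # known "defect" signal
sigma = 0.5
img_present = backgrounds[0] + signal + rng.normal(scale=sigma, size=16)
img_absent = backgrounds[0] + rng.normal(scale=sigma, size=16)
llr_present = log_likelihood_ratio(img_present, signal, backgrounds, sigma)
llr_absent = log_likelihood_ratio(img_absent, signal, backgrounds, sigma)
```

The paper's question then becomes how many background samples (database entries) are needed before such marginal averages, and hence IO performance estimates, stabilize; MCMC replaces the naive average when the background posterior cannot be sampled directly.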