The Winner of Age Challenge: Going One Step Further From Keypoint Detection to Scleral Spur Localization
Xing Tao, Chenglang Yuan, Cheng Bian, Yuexiang Li, Kai Ma, Dong Ni, Yefeng Zheng
Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9433822
Primary angle-closure glaucoma (PACG) is a major subtype of glaucoma that is responsible for half of glaucoma-related blindness worldwide. Early detection of PACG is critical to provide timely treatment and prevent potentially irreversible vision loss. Clinically, the diagnosis of PACG is based on the evaluation of the anterior chamber angle (ACA) with anterior segment optical coherence tomography (AS-OCT). To this end, the Angle closure Glaucoma Evaluation (AGE) challenge (https://age.grand-challenge.org/), held at MICCAI 2019, aims to encourage researchers to develop automated systems for angle-closure classification and scleral spur (SS) localization. We participated in the competition and won the championship on both tasks. In this paper, we share some of the ideas adopted in our competition entry, which significantly improve the accuracy of scleral spur localization. There is an extensive literature on keypoint detection for tasks such as human-body keypoint and facial-landmark detection. However, our experiments show that these methods fail on scleral spur localization, owing to the gap between natural and medical images. We therefore propose a set of constraints that encourage a two-stage keypoint detection framework to exploit diverse information from the AS-OCT, including image-level knowledge and the contextual information around the SS, for accurate SS localization. Extensive experiments demonstrate the effectiveness of the proposed constraints.
{"title":"The Winner of Age Challenge: Going One Step Further From Keypoint Detection to Scleral Spur Localization","authors":"Xing Tao, Chenglang Yuan, Cheng Bian, Yuexiang Li, Kai Ma, Dong Ni, Yefeng Zheng","doi":"10.1109/ISBI48211.2021.9433822","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433822","url":null,"abstract":"Primary angle-closure glaucoma (PACG) is a major sub-type of glaucoma that is responsible for half of the glaucoma-related blindness worldwide. The early detection of PACG is very important, so as to provide timely treatment and prevent potential irreversible vision loss. Clinically, the diagnosis of PACG is based on the evaluation of anterior chamber angle (ACA) with anterior segment optical coherence tomography (AS-OCT). To this end, the Angle closure Glaucoma Evaluation (AGE) challenge1 held on MICCAI 2019 aims to encourage researchers to develop automated systems for angle closure classification and scleral spur (SS) localization. We participated in the competition and won the championship on both tasks. In this paper, we share some ideas adopted in our entry of the competition, which significantly improve the accuracy of scleral spur localization. There are extensive literatures on keypoint detection for the tasks such as human body keypoint and facial landmark detection. However, they are proven to fail on dealing with scleral spur localization in the experiments, due to the gap between natural and medical images. In this regard, we propose a set of constraints to encourage a two-stage keypoint detection framework to spontaneously exploit diverse information, including the image-level knowledge and contextual information around SS, from the AS-OCT for the accurate SS localization. Extensive experiments are conducted to demonstrate the effectiveness of the proposed constraints.1https://age.grand-challenge.org/","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"159 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121878561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fully Automatic Cardiac Segmentation And Quantification For Pulmonary Hypertension Analysis Using Mice Cine Mr Images
B. Zufiria, Maialen Stephens, Maria Jesús Sánchez, J. Ruíz-Cabello, Karen López-Linares, I. Macía
Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9433855
Pulmonary Hypertension (PH) induces anatomical changes in the cardiac muscle that can be quantitatively assessed using Magnetic Resonance (MR) imaging. Yet, the extraction of biomarkers relies on the segmentation of the affected structures, which in many cases is performed manually by physicians. Previous approaches have shown successful automatic segmentation of different heart structures in human cardiac MR images. Nevertheless, segmentation of mouse images, although essential for preclinical studies, is rarely addressed. The aim of this work is therefore to develop an automatic tool, based on a convolutional neural network, for the simultaneous segmentation of four cardiac structures in healthy and pathological mice, in order to precisely evaluate biomarkers that may correlate with PH. The obtained automatic segmentations are comparable to manual segmentations, and they improve the distinction between control and pathological cases, especially for biomarkers derived from the right ventricle.
{"title":"Fully Automatic Cardiac Segmentation And Quantification For Pulmonary Hypertension Analysis Using Mice Cine Mr Images","authors":"B. Zufiria, Maialen Stephens, Maria Jesús Sánchez, J. Ruíz-Cabello, Karen López-Linares, I. Macía","doi":"10.1109/ISBI48211.2021.9433855","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433855","url":null,"abstract":"Pulmonary Hypertension (PH) induces anatomical changes in the cardiac muscle that can be quantitativly assessed using Magnetic Resonance (MR). Yet, the extraction of biomarkers relies on the segmentation of the affected structures, which in many cases is performed manually by physicians. Previous approaches have shown successful automatic segmentation results for different heart structures from human cardiac MR images. Nevertheless, the segmentation from mice images is rarely addressed, but it is essential for preclinical studies. Thus, the aim of this work is to develop an automatic tool based on a convolutional neural network for the segmentation of 4 cardiac structures at once in healthy and pathological mice to precisely evaluate biomarkers that may correlate to PH. The obtained automatic segmentations are comparable to manual segmentations, and they improve the distinction between control and pathological cases, especially regarding biomarkers from the right ventricle.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"252 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122057757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MVC-NET: Multi-View Chest Radiograph Classification Network With Deep Fusion
Xiongfeng Zhu, Qianjin Feng
Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9434000
Chest radiography is a critical imaging modality for assessing thoracic diseases. Automated radiograph classification algorithms have enormous potential to support clinical diagnosis. Most algorithms rely solely on a single-view radiograph to make a prediction; however, both frontal and lateral images are valuable information sources for disease diagnosis. In this paper, we present the multi-view chest radiograph classification network (MVC-Net), which fuses paired frontal and lateral views at both the feature and the decision level. Specifically, back projection transposition (BPT) explicitly incorporates the spatial information of the two orthogonal X-rays at the feature level, and a mimicry loss enables the cross-view predictions to mimic each other at the decision level. Experimental results on 13 pathologies from the MIMIC-CXR dataset show that MVC-Net yields the highest average AUROC of 0.810, outperforming various baseline methods. The code is available at https://github.com/fzfs/Multi-view-Chest-X-ray-Classification.
{"title":"MVC-NET: Multi-View Chest Radiograph Classification Network With Deep Fusion","authors":"Xiongfeng Zhu, Qianjin Feng","doi":"10.1109/ISBI48211.2021.9434000","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434000","url":null,"abstract":"Chest radiography is a critical imaging modality to access thorax diseases. Automated radiograph classification algorithms have enormous potential to support clinical assistant diagnosis. Most algorithms focus solely on the single-view radiograph to make a prediction. However, both frontal and lateral images are valuable information sources for disease diagnosis. In this paper, we present multi-view chest radiograph classification network (MVC-Net) to fuse paired frontal and lateral views at both the feature and decision level. Specifically, back projection transposition(BPT) explicitly incorporates the spatial information from two orthogonal X-rays at feature level, and mimicry loss enables cross-view predictions to mimic from each other at decision level. The experimental results on 13 pathologies from MIMIC-CXR dataset show that MVC-Net yields the highest average AUROC score of 0.810, which gives better classification metrics as compared with various baseline methods. The code is available at https://github.com/fzfs/Multi-view-Chest-X-ray-Classification.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116737388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Detection of Plis De Passage in the Superior Temporal Sulcus using Surface Profiling and Ensemble SVM
Tianqi Song, C. Bodin, O. Coulon
Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9433937
Cortical folding, an essential characteristic of the cerebral cortex, varies across individuals. Plis de passage (PPs), namely annectant gyri buried inside folds, can explain part of this variability. However, no systematic method for automatically detecting all PPs is yet available. In this paper, we present a method to automatically detect PPs on the cortex. We first extract geometric information of localized areas on the cortical surface via surface profiling. Then, an ensemble support vector machine (SVM) is developed to identify the PPs. Experimental results show the effectiveness and robustness of our method.
3d Unsupervised Kidney Graft Segmentation Based On Deep Learning And Multi-Sequence Mri
Léo Milecki, S. Bodard, J. Correas, M. Timsit, M. Vakalopoulou
Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9433854
Image segmentation is one of the most popular problems in medical image analysis. Recently, with the success of deep neural networks, these powerful methods have provided state-of-the-art performance on various segmentation tasks. However, one of their main challenges is the large number of annotations they need for training, which are costly to obtain in medical applications. In this paper, we propose an unsupervised deep learning method for the segmentation of kidney grafts. Our method is composed of two stages: the detection of the area of interest, and a segmentation model that, through an iterative process, provides accurate kidney graft segmentation without the need for annotations. The proposed framework operates in 3D to exploit all the available information and extract meaningful representations from Dynamic Contrast-Enhanced and T2 MRI sequences. Our method reports a Dice score of 89.8±3.1%, a 95th-percentile Hausdorff distance of 5.8±0.4 mm, and a kidney volume difference of 5.9±5.7% on a test dataset of 29 kidney-transplant patients.
{"title":"3d Unsupervised Kidney Graft Segmentation Based On Deep Learning And Multi-Sequence Mri","authors":"Léo Milecki, S. Bodard, J. Correas, M. Timsit, M. Vakalopoulou","doi":"10.1109/ISBI48211.2021.9433854","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433854","url":null,"abstract":"Image segmentation is one of the most popular problems in medical image analysis. Recently, with the success of deep neural networks, these powerful methods provide state of the art performance on various segmentation tasks. However, one of the main challenges relies on the high number of annotations that they need to be trained, which is crucial in medical applications. In this paper, we propose an unsupervised method based on deep learning for the segmentation of kidney grafts. Our method is composed of two different stages, the detection of the area of interest and the segmentation model that is able, through an iterative process, to provide accurate kidney draft segmentation without the need for annotations. The proposed framework works in the 3D space to explore all the available information and extract meaningful representations from Dynamic Contrast-Enhanced and T2 MRI sequences. Our method reports a dice of 89.8±3.1%, Hausdorff distance at percentile 95% of 5.8±0.4lmm and percentage of kidney volume difference of 5.9±5.7% on a test dataset of 29 patients subject to a kidney transplant.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128613439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis Of Lymph Node Tumor Features In Pet/Ct For Segmentation
D. L. F. Cabrera, Éloïse Grossiord, N. Gogin, D. Papathanassiou, Nicolas Passat
Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9433791
In the context of breast cancer, the detection and segmentation of cancerous lymph nodes in PET/CT imaging is of crucial importance, in particular for staging. To guide such image analysis procedures, dedicated descriptors can be considered, especially region-based features. In this article, we focus on choosing which features should be embedded for lymph node tumor segmentation from PET/CT. The study is divided into two steps. First, we investigate the relevance of various features within a Random Forest framework. Second, we validate the expected relevance of the best-scored features by involving them in a U-Net segmentation architecture. We handle the region-based definition of these features through a hierarchical modeling of the PET images. This analysis identifies a set of features that can significantly improve and guide the segmentation of lymph nodes in PET/CT.
{"title":"Analysis Of Lymph Node Tumor Features In Pet/Ct For Segmentation","authors":"D. L. F. Cabrera, Éloïse Grossiord, N. Gogin, D. Papathanassiou, Nicolas Passat","doi":"10.1109/ISBI48211.2021.9433791","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433791","url":null,"abstract":"In the context of breast cancer, the detection and segmentation of cancerous lymph nodes in PET/CT imaging is of crucial importance, in particular for staging issues. In order to guide such image analysis procedures, some dedicated descriptors can be considered, especially region-based features. In this article, we focus on the issue of choosing which features should be embedded for lymph node tumor segmentation from PET/CT. This study is divided into two steps. We first investigate the relevance of various features by considering a Random Forest framework. In a second time, we validate the expected relevance of the best scored features by involving them in a U-Net segmentation architecture. We handle the region-based definition of these features thanks to a hierarchical modeling of the PET images. This analysis emphasizes a set of features that can significantly improve / guide the segmentation of lymph nodes in PET/CT.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124588258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhanced-Quality Gan (EQ-GAN) on Lung CT Scans: Toward Truth and Potential Hallucinations
Martin Jammes-Floreani, A. Laine, E. Angelini
Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9433996
Lung Computed Tomography (CT) scans are extensively used to screen for lung diseases. Strategies such as large slice spacing and low-dose acquisition are often preferred to reduce radiation exposure and therefore the risk to patients' health. The trade-off is a significant degradation of image quality and/or resolution. In this work we investigate a generative adversarial network (GAN) for enhanced-quality (EQ) lung CT imaging. Our EQ-GAN is trained on a high-quality lung CT cohort to recover the visual quality of scans degraded by blur and noise. The capability of the trained GAN to generate EQ CT scans is further illustrated on two test cohorts. Results confirm gains in visual quality metrics; remarkable visual enhancement of vessels, airways, and lung parenchyma; and other enhancement patterns that require further investigation. We also compared automatic lung lobe segmentation on original versus EQ scans. Average Dice scores vary between lobes and can be as low as 0.3, and EQ scans enable segmentation of some lobes that are missed in the original scans. This paves the way to using EQ as pre-processing for lung lobe segmentation, to further research evaluating the impact of EQ on the robustness of airway and vessel segmentation, and to investigating anatomical details revealed in EQ scans.
{"title":"Enhanced-Quality Gan (EQ-GAN) on Lung CT Scans: Toward Truth and Potential Hallucinations","authors":"Martin Jammes-Floreani, A. Laine, E. Angelini","doi":"10.1109/ISBI48211.2021.9433996","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433996","url":null,"abstract":"Lung Computed Tomography (CT) scans are extensively used to screen lung diseases. Strategies such as large slice spacing and low-dose CT scans are often preferred to reduce radiation exposure and therefore the risk for patients’ health. The counterpart is a significant degradation of image quality and/or resolution. In this work we investigate a generative adversarial network (GAN) for lung CT image enhanced-quality (EQ). Our EQ-GAN is trained on a high-quality lung CT cohort to recover the visual quality of scans degraded by blur and noise. The capability of our trained GAN to generate EQ CT scans is further illustrated on two test cohorts. Results confirm gains in visual quality metrics, remarkable visual enhancement of vessels, airways and lung parenchyma, as well as other enhancement patterns that require further investigation. We also compared automatic lung lobe segmentation on original versus EQ scans. Average Dice scores vary between lobes, can be as low as 0.3 and EQ scans enable segmentation of some lobes missed in the original scans. This paves the way to using EQ as pre-processing for lung lobe segmentation, further research to evaluate the impact of EQ to add robustness to airway and vessel segmentation, and to investigate anatomical details revealed in EQ scans.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"20 12","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113968412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ghost-Light-3dnet: Efficient Network For Heart Segmentation
Bin Cai, Erkang Cheng, Pengpeng Liang, Chi Xiong, Zhiyong Sun, Qiang Zhang, Bo Song
Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9433974
Accurate 3D whole-heart segmentation provides detailed morphological and pathological information that can help doctors deliver more effective patient-specific treatments. 3D CNNs play an important role in accurate volumetric segmentation, but they typically have a large number of parameters and floating-point operations (FLOPs), which leads to heavy and complex computation. In this paper, we introduce an efficient 3D network (Ghost-Light-3DNet) for heart segmentation. Our solution is characterized by two key components. First, inspired by GhostNet in 2D, we extend the Ghost module to 3D, which can generate more feature maps from cheap operations. Second, a sequential separable convolution with a residual module is applied as a lightweight plug-and-play component to further reduce network parameters and FLOPs. The proposed method is validated on the MM-WHS heart segmentation Challenge 2017 datasets. Compared to a state-of-the-art solution using a 3D U-Net-like architecture, our Ghost-Light-3DNet achieves comparable segmentation accuracy with 2.18x fewer parameters and 4.48x fewer FLOPs, respectively.
{"title":"Ghost-Light-3dnet: Efficient Network For Heart Segmentation","authors":"Bin Cai, Erkang Cheng, Pengpeng Liang, Chi Xiong, Zhiyong Sun, Qiang Zhang, Bo Song","doi":"10.1109/ISBI48211.2021.9433974","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433974","url":null,"abstract":"Accurate 3D whole heart segmentation provides more details of the morphological and pathological information that could help doctors with more effective patient-specific treatments. 3D CNN network has been recognized as an important role in accurate volumetric segmentation. Typically, 3D CNN network has a large number of parameters as well as the floating point operations (FLOPs), which leads to heavy and complex computation. In this paper, we introduce an efficient 3D network (Ghost-Light-3DNet) for heart segmentation. Our solution is characterized by two key components: First, inspired by GhostNet in 2D, we extend the Ghost module to 3D which can generate more feature maps from cheap operations. Second, a sequential separable conv with residual module is applied as a light plug-and-play component to further reduce network parameters and FLOPs. For evaluation, the proposed method is validated on the MM-WHS heart segmentation Challenge 2017 datasets. Compared to state-of-the-art solution using 3D UNet-like architecture, our Ghost-Light-3DNet achieves comparable segmentation accuracy with the 2. 18x fewer parameters and 4. 48x less FLOPs, respectively.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"16 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113976361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unsupervised Detection Of Disturbances In 2d Radiographs
Laura Estacio, M. Ehlke, A. Tack, Eveling Castro Gutierrez, H. Lamecker, R. Mora, S. Zachow
Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9434091
We present a method based on a generative model for the detection of disturbances such as prostheses, screws, zippers, and metal objects in 2D radiographs. The generative model is trained in an unsupervised fashion using clinical radiographs as well as simulated data, none of which contain disturbances. Our approach employs a latent-space consistency loss, which has the benefit of identifying similarities and enforces the reconstruction of X-rays without disturbances. To detect images with disturbances, an anomaly score is computed that also employs the Fréchet distance between the input X-ray and its reconstruction by our generative model. Validation was performed on clinical pelvis radiographs. We achieved an AUC of 0.77 and 0.83 with clinical and synthetic data, respectively. The results demonstrate the good accuracy of our method for detecting outliers, as well as the advantage of utilizing synthetic data.
{"title":"Unsupervised Detection Of Disturbances In 2d Radiographs","authors":"Laura Estacio, M. Ehlke, A. Tack, Eveling Castro Gutierrez, H. Lamecker, R. Mora, S. Zachow","doi":"10.1109/ISBI48211.2021.9434091","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434091","url":null,"abstract":"We present a method based on a generative model for detection of disturbances such as prosthesis, screws, zippers, and metals in 2D radiographs. The generative model is trained in an unsupervised fashion using clinical radiographs as well as simulated data, none of which contain disturbances. Our approach employs a latent space consistency loss which has the benefit of identifying similarities, and is enforced to reconstruct X-rays without disturbances. In order to detect images with disturbances, an anomaly score is computed also employing the Frechet distance between the input X-ray and the reconstructed one using our generative model. Validation was performed using clinical pelvis radiographs. We achieved an AUC of 0.77 and 0.83 with clinical and synthetic data, respectively. The results demonstrated a good accuracy of our method for detecting outliers as well as the advantage of utilizing synthetic data.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114777047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Annotation-Efficient 3d U-Nets For Brain Plasticity Network Mapping
L. Gjesteby, Tzofi Klinghoffer, Meagan Ash, Matthew A. Melton, K. Otto, Damon G. Lamb, S. Burke, L. Brattain
Pub Date: 2021-04-13. DOI: 10.1109/ISBI48211.2021.9434142
A fundamental challenge in machine learning-based segmentation of large-scale brain microscopy images is the time and domain expertise humans require to generate ground truth for model training. Weakly supervised and semi-supervised approaches can greatly reduce the burden of human annotation. Here we present a study of three-dimensional U-Nets with varying levels of supervision for neuronal nuclei segmentation in light-sheet microscopy volumes. We leverage automated blob detection with classical algorithms to generate noisy labels on a large volume, and our experiments show that weak supervision, with or without additional fine-tuning, can outperform resource-limited fully supervised learning. These methods are extended to analyze coincidence between multiple fluorescent stains in cleared brain tissue. This is an initial step toward automated whole-brain analysis of plasticity-related gene expression.
{"title":"Annotation-Efficient 3d U-Nets For Brain Plasticity Network Mapping","authors":"L. Gjesteby, Tzofi Klinghoffer, Meagan Ash, Matthew A. Melton, K. Otto, Damon G. Lamb, S. Burke, L. Brattain","doi":"10.1109/ISBI48211.2021.9434142","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434142","url":null,"abstract":"A fundamental challenge in machine learning-based segmentation of large-scale brain microscopy images is the time and domain expertise required by humans to generate ground truth for model training. Weakly supervised and semi-supervised approaches can greatly reduce the burden of human annotation. Here we present a study of three-dimensional U-Nets with varying levels of supervision to perform neuronal nuclei segmentation in light-sheet microscopy volumes. We leverage automated blob detection with classical algorithms to generate noisy labels on a large volume, and our experiments show that weak supervision, with or without additional fine-tuning, can outperform resource-limited fully supervised learning. These methods are extended to analyze coincidence between multiple fluorescent stains in cleared brain tissue. This is an initial step towards automated whole-brain analysis of plasticity-related gene expression.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114787613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}