Managing Class Imbalance in Multi-Organ CT Segmentation in Head and Neck Cancer Patients
Pub Date: 2021-04-13 · DOI: 10.1109/ISBI48211.2021.9433991
Samuel Cros, Eugene Vorontsov, S. Kadoury
Radiotherapy planning for head and neck cancer patients requires an accurate delineation of several organs at risk (OAR) from planning CT images in order to determine a dose plan that reduces toxicity and spares normal tissue. However, training a single deep neural network for multiple organs is highly sensitive to class imbalance and to the variability in size between structures within the head and neck region. In this paper, we propose a single-class segmentation model for each OAR in order to handle class imbalance during training across output classes (one class per structure), where there exists a severe disparity between the 12 OAR. Based on a U-Net architecture, we present a transfer learning approach between similar OAR to leverage common learned features, as well as a simple weight averaging strategy that initializes a model as the average of multiple models, each trained on a separate organ. Experiments performed on an internal dataset of 200 H&N cancer patients treated with external beam radiotherapy show that the proposed model significantly improves on the baseline multi-organ segmentation model, which attempts to train several OAR simultaneously. The proposed model yields an overall Dice score of 0.75 ± 0.12 by using both transfer learning across OAR and a weight averaging strategy, indicating that reasonable segmentation performance can be achieved by leveraging additional data from surrounding structures, limiting the uncertainty in ground-truth annotations.
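As an illustration of the weight averaging strategy described above, the minimal sketch below averages the parameter tensors of several single-organ models to initialize a shared model. It assumes all per-organ networks share an identical U-Net architecture; `make_unet` and the checkpoint names are hypothetical placeholders.

```python
import torch

def average_state_dicts(state_dicts):
    """Element-wise average of matching parameter tensors across models."""
    avg = {}
    for key, ref in state_dicts[0].items():
        if ref.is_floating_point():
            avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
        else:
            # Integer buffers (e.g. BatchNorm step counters) cannot be
            # meaningfully averaged; keep the first model's value.
            avg[key] = ref.clone()
    return avg

# Hypothetical usage (checkpoint paths and `make_unet` are placeholders):
# organ_sds = [torch.load(p) for p in ["parotid.pt", "larynx.pt", "cord.pt"]]
# model = make_unet()
# model.load_state_dict(average_state_dicts(organ_sds))
```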
{"title":"Managing Class Imbalance in Multi-Organ CT Segmentation in Head and Neck Cancer Patients","authors":"Samuel Cros, Eugene Vorontsov, S. Kadoury","doi":"10.1109/ISBI48211.2021.9433991","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433991","url":null,"abstract":"Radiotherapy planning of head and neck cancer patients requires an accurate delineation of several organs at risk (OAR) from planning CT images in order to determine a dose plan which reduces toxicity and salvages normal tissue. However training a single deep neural network for multiple organs is highly sensitive to class imbalance and variability in size between several structures within the head and neck region. In this paper, we propose a single-class segmentation model for each OAR in order to handle class imbalance issues during training across output classes (one class per structure), where there exists a severe disparity between 12 OAR. Based on a U-net architecture, we present a transfer learning approach between similar OAR to leverage common learned features, as well as a simple weight averaging strategy to initialize a model as the average of multiple models, each trained on a separate organ. Experiments performed on an internal dataset of 200 H & N cancer patients treated with external beam radiotherapy, show the proposed model presents a significant improvement compared to the baseline multi-organ segmentation model, which attempts to simultaneously train several OAR. The proposed model yields an overall Dice score of $0.75 pm 0.12$, by using both transfer learning across OAR and a weight averaging strategy, indicating that a reasonable segmentation performance can be achieved by leveraging additional data from surrounding structures, limiting the uncertainty in ground-truth annotations.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131763319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Time Of Arrival Delineation In Echo Traces For Reflection Ultrasound Tomography
Pub Date: 2021-04-13 · DOI: 10.1109/ISBI48211.2021.9433846
B. R. Chintada, R. Rau, O. Goksel
Ultrasound Computed Tomography (USCT) is an imaging method for mapping acoustic properties in soft tissues, e.g., for the diagnosis of breast cancer. A group of USCT methods rely on a passive reflector behind the imaged tissue and function by delineating this reflector in echo traces, e.g., to infer time-of-flight measurements for reconstructing local speed-of-sound maps. In this work, we study various echo features and delineation methods to robustly identify reflector profiles in echoes. We compared and evaluated the methods on a multi-static dataset of a realistic breast phantom. Based on our results, RANSAC-based outlier removal followed by active-contours-based delineation using a new proposed “edge” feature, which detects the first arrival time of the echo, performs robustly even in complex media; in particular, it performs 2.1 times better than alternative approaches at locations where diffraction effects are prominent.
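The abstract gives no implementation details, but a minimal sketch of two key ingredients, a first-arrival "edge" feature and RANSAC outlier rejection over per-channel arrival picks, could look as follows. The noise-window length, threshold factor, and the straight-line reflector model are illustrative assumptions, and the paper's active-contour refinement is omitted.

```python
import numpy as np

def first_arrival_edge(trace, win=5, noise_len=100, k=4.0):
    """Pick the first-arrival sample of one echo trace using an "edge"
    feature: the forward difference of a smoothed envelope."""
    env = np.convolve(np.abs(trace), np.ones(win) / win, mode="same")
    edge = np.diff(env, prepend=env[0])
    thresh = k * edge[:noise_len].std()   # assumes the trace starts with noise
    hits = np.flatnonzero(edge > thresh)
    return hits[0] if hits.size else -1   # -1: no arrival detected

def ransac_fit(x, y, n_iter=500, tol=3.0, seed=0):
    """Fit a straight reflector profile y = a*x + b, rejecting outlier picks."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.abs(y - (a * x + b)) < tol
        if inliers.sum() > best.sum():
            best = inliers
    a, b = np.polyfit(x[best], y[best], 1)  # least-squares refit on inliers
    return a, b, best

# picks = np.array([first_arrival_edge(t) for t in traces])  # one per channel
```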
{"title":"Time Of Arrival Delineation In Echo Traces For Reflection Ultrasound Tomography","authors":"B. R. Chintada, R. Rau, O. Goksel","doi":"10.1109/ISBI48211.2021.9433846","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433846","url":null,"abstract":"Ultrasound Computed Tomography (USCT) is an imaging method to map acoustic properties in soft tissues, e.g., for the diagnosis of breast cancer. A group of USCT methods rely on a passive reflector behind the imaged tissue, and they function by delineating such reflector in echo traces, e.g., to infer time-of-flight measurements for reconstructing local speed-of-sound maps. In this work, we study various echo features and delineation methods to robustly identify reflector profiles in echos. We compared and evaluated the methods on a multi-static data set of a realistic breast phantom. Based on our results, a RANSAC based outlier removal followed by an active contours based delineation using a new “edge” feature we propose that detects the first arrival times of echo performs robustly even in complex media; in particular 2.1 times superior to alternative approaches at locations where diffraction effects are prominent.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133453363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Slice Profile Estimation From 2D MRI Acquisition Using Generative Adversarial Networks
Pub Date: 2021-04-13 · DOI: 10.1109/ISBI48211.2021.9434137
Shuo Han, A. Carass, M. Schär, P. Calabresi, Jerry L Prince
To save time and maintain an adequate signal-to-noise ratio, magnetic resonance (MR) images are often acquired with better in-plane than through-plane resolution in 2D acquisitions. To improve image quality, recent work has focused on using deep learning to super-resolve the through-plane resolution. To create training data, images can be degraded in an in-plane direction to match the through-plane resolution. To do this correctly, the slice selection profile (SSP) should be known, but this is rarely possible since precise details of signal excitation are usually unknown. Therefore, estimating the SSP of an image volume is desirable. In this work, we first show that a relative SSP can be estimated from the difference between in- and through-plane image patches. We further propose an algorithm that uses generative adversarial networks (GAN) to estimate the SSP. In this algorithm, the GAN’s generator blurs in-plane patches in one direction using an estimated relative SSP, then downsamples them. The GAN’s discriminator distinguishes the generator’s output from real through-plane patches. The proposed method was validated using numerical simulations as well as phantom and brain scans. To our knowledge, it is the first work to estimate the SSP from a single MR image. The code is available at https://github.com/shuohan/espreso.
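A minimal PyTorch sketch of the generator idea, blurring in-plane patches with a learnable, normalized 1D kernel and then downsampling, might look like this. The actual architecture is in the linked repository; the kernel length, scale factor, and the softmax parameterization here are assumptions. The discriminator would be a standard patch classifier fed these outputs and real through-plane patches.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SSPGenerator(nn.Module):
    """Blur 2D patches along one axis with a learnable slice profile,
    then downsample that axis to the through-plane spacing."""
    def __init__(self, kernel_len=21, scale=4):
        super().__init__()
        # Softmax over logits keeps the estimated profile non-negative
        # and normalized to unit sum.
        self.logits = nn.Parameter(torch.zeros(kernel_len))
        self.scale = scale

    def forward(self, x):                        # x: (B, 1, H, W); blur along W
        k = F.softmax(self.logits, dim=0).view(1, 1, 1, -1)
        x = F.conv2d(x, k, padding=(0, k.shape[-1] // 2))
        return x[..., ::self.scale]              # downsample the blurred axis

# gen = SSPGenerator()
# fake_through_plane = gen(torch.randn(8, 1, 64, 64))  # (8, 1, 64, 16)
```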
{"title":"Slice Profile Estimation From 2D MRI Acquisition Using Generative Adversarial Networks","authors":"Shuo Han, A. Carass, M. Schär, P. Calabresi, Jerry L Prince","doi":"10.1109/ISBI48211.2021.9434137","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434137","url":null,"abstract":"To save time and maintain an adequate signal-to-noise ratio, magnetic resonance (MR) images are often acquired with better in-plane than through-plane resolutions in 2D acquisition. To improve image quality, recent work has focused on using deep learning to super-resolve the through-plane resolution. To create training data, images can be degraded in an in-plane direction to match the through-plane resolution. To do this correctly, the slice selection profile (SSP) should be known, but this is rarely possible since precise details of signal excitation are usually unknown. Therefore, estimating the SSP of an image volume is desired. In this work, we first show that a relative SSP can be estimated from the difference between in- and through-plane image patches. We further propose an algorithm that uses generative adversarial networks (GAN) to estimate the SSP. In this algorithm, the GAN’s generator blurs in-plane patches in one direction using an estimated relative SSP then downsamples them. The GAN’s discriminator distinguishes the generator’s output from real through-plane patches. The proposed method was validated using numerical simulations and phantom and brain scans. To our knowledge, it is the first work to estimate the SSP from a single MR image. The code is available at https://github.com/shuohan/espreso.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"418 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131853053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Focal-Balanced Attention U-Net with Dynamic Thresholding by Spatial Regression for Segmentation of Aortic Dissection in CT Imagery
Pub Date: 2021-04-13 · DOI: 10.1109/ISBI48211.2021.9434028
Tsung-Han Lee, Li-Ting Huang, Paul Kuo, Chien-Kuo Wang, Jiun-In Guo
Aortic dissection (AD) has a reported mortality of 50% within the first 48 hours, increasing by 1-2% per hour. Rapid diagnosis of the intimal flap is therefore critical for the emergency treatment of patients. To accurately present the affected part of the aorta and reduce the time doctors need for diagnosis, image segmentation is the most effective form of presentation. We use a U-Net model in this study and focus on the AD (including the ascending, arch, and descending parts) during the detection process. Furthermore, we design a site and area regression (SAR) module. With the help of its accurate predictions, we achieve a slice-level sensitivity and specificity of 99.1% and 93.2%, respectively.
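For clarity on how slice-level figures like these are typically computed, here is a hedged sketch: a slice counts as positive when it contains at least one segmented pixel. This is a common convention, not necessarily the paper's exact protocol.

```python
import numpy as np

def slice_level_metrics(pred_masks, gt_masks):
    """pred_masks, gt_masks: (n_slices, H, W) binary arrays.
    A slice is positive if any pixel in it is labeled."""
    pred_pos = pred_masks.reshape(len(pred_masks), -1).any(axis=1)
    gt_pos = gt_masks.reshape(len(gt_masks), -1).any(axis=1)
    tp = np.sum(pred_pos & gt_pos)
    tn = np.sum(~pred_pos & ~gt_pos)
    fp = np.sum(pred_pos & ~gt_pos)
    fn = np.sum(~pred_pos & gt_pos)
    sensitivity = tp / (tp + fn)   # assumes at least one positive slice
    specificity = tn / (tn + fp)   # assumes at least one negative slice
    return sensitivity, specificity
```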
{"title":"Focal-Balanced Attention U-Net with Dynamic Thresholding by Spatial Regression for Segmentation of Aortic Dissection in CT Imagery","authors":"Tsung-Han Lee, Li-Ting Huang, Paul Kuo, Chien-Kuo Wang, Jiun-In Guo","doi":"10.1109/ISBI48211.2021.9434028","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434028","url":null,"abstract":"An aortic dissection has been reported a mortality of 50% within the first 48 hours and an increase of 1-2% per hour. Therefore, rapid diagnosis of intimal flap would be very important for the emergency treatment of patients. In order to accurately present the affected part of AD and reduce the time for doctors to diagnose, image segmentation is the most effective way of presentation. We used the U-Net model in this study and focus on AD (including ascending, arch, and descending part) in the detection process. Furthermore, we design the site and area regression (SAR) module. With this help of accurate prediction, we achieved slice-level sensitivity and specificity of 99.1 % and 93.2%, respectively.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134497378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deformable Mri To Transrectal Ultrasound Registration For Prostate Interventions With Shape-Based Deep Variational Auto-Encoders
Pub Date: 2021-04-13 · DOI: 10.1109/ISBI48211.2021.9434101
Sh. Shakeri, W. Le, C. Ménard, S. Kadoury
Prostate cancer is one of the most prevalent cancers in men, and its diagnosis is confirmed through biopsies analyzed with histopathology. A diagnostic T2-weighted MRI is often registered to intra-operative transrectal ultrasound (TRUS) for effective targeting of suspicious lesions during image-guided biopsy procedures or needle-based therapeutic interventions such as brachytherapy. However, this process remains challenging and time-consuming in an interventional environment. The present work proposes an automated 3D deformable MRI-to-TRUS registration pipeline that combines deep variational auto-encoders with a non-rigid iterative closest point registration approach. A convolutional FC-ResNet segmentation model is first trained on 3D TRUS images to extract prostate boundaries during the procedure. Matched MRI-TRUS 3D segmentations are then used to generate a vector representation of the gland’s surface mesh in both modalities, used as input to a 10-layer dense variational autoencoder that constrains the predicted deformations based on a latent representation of the deformation modes. At each iteration of the registration process, the warped image is regularized using the autoencoder’s reconstruction loss, ensuring plausible anatomical deformations. Based on a 5-fold cross-validation strategy with 45 patients undergoing HDR brachytherapy, the method yields a Dice score of 85.0 ± 2.6 and a target registration error of 3.9 ± 1.4 mm, outperforming the state of the art with minimal intra-procedural disruptions.
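A rough sketch of the central idea, non-rigid ICP over surface points with an autoencoder reconstruction loss as a plausibility regularizer, is shown below. The `vae` callable, the per-vertex displacement parameterization, and the weighting are assumptions; the paper's pipeline is more elaborate.

```python
import torch

def register(src_pts, tgt_pts, vae, n_iter=200, lam=0.1, lr=1e-2):
    """Optimize per-vertex displacements so the warped source surface
    matches the target (ICP-style), regularized by a pretrained shape
    autoencoder. src_pts: (N, 3); tgt_pts: (M, 3)."""
    disp = torch.zeros_like(src_pts, requires_grad=True)
    opt = torch.optim.Adam([disp], lr=lr)
    for _ in range(n_iter):
        warped = src_pts + disp
        # Symmetric nearest-neighbour (chamfer) distance as the data term.
        d = torch.cdist(warped, tgt_pts)
        data_term = d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
        # Plausibility: displacements should be reconstructible by the
        # (assumed pretrained) autoencoder over vectorized deformations.
        recon = vae(disp.flatten()).view_as(disp)
        reg = ((recon - disp) ** 2).mean()
        loss = data_term + lam * reg
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (src_pts + disp).detach()
```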
{"title":"Deformable Mri To Transrectal Ultrasound Registration For Prostate Interventions With Shape-Based Deep Variational Auto-Encoders","authors":"Sh. Shakeri, W. Le, C. Ménard, S. Kadoury","doi":"10.1109/ISBI48211.2021.9434101","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434101","url":null,"abstract":"Prostate cancer is one of the most prevalent cancers in men, where diagnosis is confirmed through biopsies analyzed with histopathology. A diagnostic T2-w MRI is often registered to intra-operative transrectal ultrasound (TRUS) for effective targeting of suspicious lesions during image-guided biopsy procedures or needle-based therapeutic interventions such as brachytherapy. However, this process remains challenging and time-consuming in an interventional environment. The present work proposes an automated 3D deformable MRI to TRUS registration pipeline that leverages both deep variational auto-encoders with a non-rigid iterative closest point registration approach. A convolutional FC-ResNet segmentation model is first trained from 3D TRUS images to extract prostate boundaries during the procedure. Matched MRI-TRUS 3D segmentations are then used to generate a vector representation of the gland’s surface mesh between modalities, used as input to a 10layer dense variational autoencoder model to constrain the predicted deformations based on a latent representation of the deformation modes. At each iteration of the registration process, the warped image is regularized using the autoencoder’s reconstruction loss, ensuring plausible anatomical deformations. Based on a 5-fold cross-validation strategy with 45 patients undergoing HDR brachytherapy, the method yields a Dice score of 85.0 ± 2.6 with a target registration error of 3.9 ± 1.4 mm, with the proposed method yielding results outperforming the state-of-the-art, with minimal intra-procedural disruptions.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134571219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A More Interpretable Classifier For Multiple Sclerosis
Pub Date: 2021-04-13 · DOI: 10.1109/ISBI48211.2021.9434074
Valentine Wargnier-Dauchelle, T. Grenier, F. Durand-Dubief, F. Cotton, M. Sdika
Over the past years, deep learning has proved its effectiveness in medical imaging for diagnosis and segmentation. Nevertheless, to be fully integrated in clinics, these methods must both reach good performance and convince practitioners of their interpretability. Thus, an interpretable model should base its decisions on clinically relevant information, as a domain expert would. To this end, we propose a more interpretable classifier for the most widespread autoimmune neuroinflammatory disease: multiple sclerosis. This disease is characterized by brain lesions visible on magnetic resonance imaging (MRI), on which diagnosis is based. Using Integrated Gradients attributions, we show that using brain tissue probability maps instead of raw MR images as the deep network input yields a more accurate and more interpretable classifier whose decisions are largely based on lesions.
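Integrated Gradients itself is a standard attribution method and easy to reproduce: average the gradients along the straight-line path from a baseline to the input, then scale by the input difference. A minimal PyTorch sketch (Riemann approximation of the path integral, zero baseline by default) looks like this; it is a generic implementation, not the paper's exact setup.

```python
import torch

def integrated_gradients(model, x, baseline=None, steps=50, target=None):
    """Attribution = (x - baseline) * mean of gradients along the path
    from baseline to x. Call with the model in eval() mode."""
    if baseline is None:
        baseline = torch.zeros_like(x)
    total = torch.zeros_like(x)
    for alpha in torch.linspace(0, 1, steps):
        xi = (baseline + alpha * (x - baseline)).requires_grad_(True)
        out = model(xi)
        # Select the class logit of interest (or squeeze a single output).
        score = out[:, target] if target is not None else out.squeeze()
        score.sum().backward()
        total += xi.grad
    return (x - baseline) * total / steps
```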
{"title":"A More Interpretable Classifier For Multiple Sclerosis","authors":"Valentine Wargnier-Dauchelle, T. Grenier, F. Durand-Dubief, F. Cotton, M. Sdika","doi":"10.1109/ISBI48211.2021.9434074","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434074","url":null,"abstract":"Over the past years, deep learning proved its effectiveness in medical imaging for diagnosis or segmentation. Nevertheless, to be fully integrated in clinics, these methods must both reach good performances and convince area practitioners about their interpretability. Thus, an interpretable model should make its decision on clinical relevant information as a domain expert would. With this purpose, we propose a more interpretable classifier focusing on the most widespread autoimmune neuroinflammatory disease: multiple sclerosis. This disease is characterized by brain lesions visible on MRI (Magnetic Resonance Images) on which diagnosis is based. Using Integrated Gradients attributions, we show that the utilization of brain tissue probability maps instead of raw MR images as deep network input reaches a more accurate and interpretable classifier with decision highly based on lesions.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"726 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133847116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zebrafish Histotomography Noise Removal In Projection And Reconstruction Domains
Pub Date: 2021-04-13 · DOI: 10.1109/ISBI48211.2021.9433914
A. Adishesha, D. Vanselow, P. L. Rivière, Xiaolei Huang, K. Cheng
X-ray “histotomography,” built on the basic principles of CT, can be used to create 3D images of zebrafish at resolutions one thousand times greater than CT, enabling the visualization of cell nuclei and other subcellular structures in 3D. Noise in the scans, caused either by natural X-ray phenomena or by other distortions, can lower accuracy in tasks related to the detection and segmentation of anatomically significant objects. We evaluate the use of supervised encoder-decoder models for noise removal in projection- and reconstruction-domain images in the absence of clean training targets. We propose a Noise2Noise architecture with a U-Net backbone, along with a structural similarity index loss added to help maintain and sharpen pathologically relevant details. We empirically show that our technique outperforms existing methods, with an average peak signal-to-noise ratio (PSNR) gain of 14.50 dB and 15.05 dB for noise removal in the reconstruction domain when trained without and with clean targets, respectively. Using the same network architecture, we obtain an average gain in structural similarity index (SSIM) in the projection domain of 0.213 when trained without clean targets and 0.259 with clean targets. Additionally, by comparing reconstructions from denoised projections with those from original projections, we establish that noise removal in the projection domain is beneficial for improving the quality of reconstructed scans.
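A hedged sketch of a Noise2Noise training step with an added SSIM term follows: the network predicts one noisy realization of a scan from another, so no clean target is ever needed. The single-window SSIM here is a simplification of the usual windowed formulation, and the loss weight `w` is an assumption.

```python
import torch

def ssim_loss(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    """1 - SSIM, computed globally over the tensors (a simplification
    of the standard sliding-window SSIM)."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    ssim = ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
    return 1 - ssim

def noise2noise_step(model, noisy_a, noisy_b, opt, w=0.1):
    """One update: predict noisy realization B from noisy realization A
    of the same scan; combine MSE with the SSIM term."""
    opt.zero_grad()
    pred = model(noisy_a)
    loss = torch.nn.functional.mse_loss(pred, noisy_b) + w * ssim_loss(pred, noisy_b)
    loss.backward()
    opt.step()
    return loss.item()
```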
{"title":"Zebrafish Histotomography Noise Removal In Projection And Reconstruction Domains","authors":"A. Adishesha, D. Vanselow, P. L. Rivière, Xiaolei Huang, K. Cheng","doi":"10.1109/ISBI48211.2021.9433914","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433914","url":null,"abstract":"X-ray “Histotomography” built on the basic principles of CT can be used to create 3D images of zebrafish at resolutions one thousand times greater than CT, enabling the visualization of cell nuclei and other subcellular structures in 3D. Noise in the scans caused either through natural Xray phenomena or other distortions can lead to low accuracy in tasks related to detection and segmentation of anatomically significant objects. We evaluate the use of supervised Encoder-Decoder models for noise removal in projection and reconstruction domain images in absence of clean training targets. We propose the use of a Noise-2-Noise architecture with U-Net backbone along with structural similarity index loss as an addendum to help maintain and sharpen pathologically relevant details. We empirically show that our technique outperforms existing methods, with an average peak signal to noise ratio (PSNR) gain of 14. 50dB and 15. 05dB for noise removal in the reconstruction domain when trained without and with clean targets respectively. Using the same network architecture, we obtain a gain in structural similarity index (SSIM) in the projection domain by an average of 0.213 when trained without clean targets and 0.259 with clean targets. Additionally, by comparing reconstructions from denoised projections with those from original projections, we establish that noise removal in the projection domain is beneficial to improve the quality of reconstructed scans.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114425193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cu-Segnet: Corneal Ulcer Segmentation Network
Tingting Wang, Weifang Zhu, Meng Wang, Zhongyue Chen, Xinjian Chen
Pub Date: 2021-04-13 · DOI: 10.1109/ISBI48211.2021.9433934
Corneal ulcer is a commonly occurring illness of the cornea. Segmenting corneal ulcers in slit-lamp images is challenging due to the different sizes and shapes of point-flaky mixed corneal ulcers and flaky corneal ulcers. These differences introduce inconsistency and affect prediction accuracy. To address this problem, we propose a corneal ulcer segmentation network (CU-SegNet) to segment corneal ulcers in fluorescein staining images. In CU-SegNet, the encoder-decoder structure is adopted as the main framework, and two novel modules, a multi-scale global pyramid feature aggregation (MGPA) module and a multi-scale adaptive-aware deformation (MAD) module, are proposed and embedded into the skip connections and the top of the encoder path, respectively. MGPA helps high-level features supplement local high-resolution semantic information, while MAD guides the network to focus on multi-scale deformation features and adaptively aggregate contextual information. The proposed network is evaluated on the public SUSTech-SYSU dataset, achieving a Dice coefficient of 89.14%.
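The abstract does not detail MGPA's internals; as a hedged illustration, a generic pyramid-pooling aggregation block in its spirit (pool at several scales, project, upsample, fuse) can be sketched as follows. The channel split and scale choices are assumptions, not the CU-SegNet specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidAggregation(nn.Module):
    """Pool features at several scales, project each pooled map with a
    1x1 conv, upsample back, concatenate with the input, and fuse."""
    def __init__(self, channels, scales=(1, 2, 4)):
        super().__init__()
        branch_ch = channels // len(scales)
        self.scales = scales
        self.proj = nn.ModuleList(
            [nn.Conv2d(channels, branch_ch, kernel_size=1) for _ in scales])
        self.fuse = nn.Conv2d(channels + branch_ch * len(scales),
                              channels, kernel_size=3, padding=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [x]
        for s, proj in zip(self.scales, self.proj):
            p = F.adaptive_avg_pool2d(x, (max(h // s, 1), max(w // s, 1)))
            feats.append(F.interpolate(proj(p), size=(h, w),
                                       mode="bilinear", align_corners=False))
        return self.fuse(torch.cat(feats, dim=1))

# y = PyramidAggregation(64)(torch.randn(1, 64, 32, 32))  # -> (1, 64, 32, 32)
```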
{"title":"Cu-Segnet: Corneal Ulcer Segmentation Network","authors":"Tingting Wang, Weifang Zhu, Meng Wang, Zhongyue Chen, Xinjian Chen","doi":"10.1109/ISBI48211.2021.9433934","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433934","url":null,"abstract":"Corneal ulcer is a common-occurring illness in cornea. It is a challenge to segment corneal ulcer in slit-lamp image due to the different sizes and shapes of point-flaky mixed corneal ulcer and flaky corneal ulcer. These differences introduce inconsistency and effect the prediction accuracy. To address this problem, we propose a corneal ulcer segmentation network (CU-SegNet) to segment corneal ulcer in fluorescein staining image. In CU-SegNet, the encoder-decoder structure is adopted as main framework, and two novel modules including multi-scale global pyramid feature aggregation (MGPA) module and multi-scale adaptive-aware deformation (MAD) module are proposed and embedded into the skip connection and the top of encoder path, respectively. MGPA helps high-level features supplement local high-resolution semantic information, while MAD can guide the network to focus on multi-scale deformation features and adaptively aggregate contextual information. The proposed network is evaluated on the public SUSTech-SYSU dataset. The Dice coefficient of the proposed method is 89.14%.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114438505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unequivocal Cardiac Phase Sorting From Alternating Ramp-And Pulse-Illuminated Microscopy Image Sequences
Pub Date: 2021-04-13 · DOI: 10.1109/ISBI48211.2021.9433858
Olivia Mariani, François Marelli, C. Jaques, Alexander Ernst, M. Liebling
In vivo microscopy is an important tool for studying developing organs such as the heart of the zebrafish embryo, but it is often limited by slow image frame acquisition speed. While collections of still images of the beating heart at arbitrary phases can be sorted to obtain a virtual heartbeat, the presence of identical heart configurations at two or more heartbeat phases can derail this approach. Here, we propose a dual-illumination method that encodes movement in alternate frames to disambiguate heartbeat phases in the still frames. We alternately acquire images with ramp and pulse illumination, then sort all successive image pairs based on the ramp-illuminated data while using the pulse-illuminated images for display and analysis. We characterized our method on synthetic data, showed its applicability on experimental data, and found that an exposure time of about 7% of the heartbeat or more is necessary to reliably encode the movement in a single heartbeat with a single redundant node. Our method opens the possibility of using sorting algorithms without prior information on the phase, even when the movement presents redundant frames.
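As a hedged sketch of the pairing-and-sorting idea (not the paper's exact algorithm), the following splits an interleaved ramp/pulse stack into pairs, orders the pairs with a greedy nearest-neighbour walk over the ramp frames, and returns the pulse frames in that order. The ordering heuristic and the assumption that ramp frames come first are illustrative.

```python
import numpy as np

def sort_pairs_by_ramp(frames):
    """frames: (2N, H, W) interleaved sequence, ramp frame first in each
    pair. Returns the pulse frames reordered into a virtual heartbeat."""
    ramp, pulse = frames[0::2], frames[1::2]
    flat = ramp.reshape(len(ramp), -1).astype(np.float64)
    # Pairwise Euclidean distances between ramp frames, O(N^2) memory.
    g = flat @ flat.T
    sq = np.diag(g)
    dist = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2 * g, 0.0))
    # Greedy nearest-neighbour walk: each step picks the most similar
    # unvisited ramp frame, approximating an ordering by cardiac phase.
    order = [0]
    remaining = set(range(1, len(ramp)))
    while remaining:
        last = order[-1]
        nxt = min(remaining, key=lambda j: dist[last, j])
        order.append(nxt)
        remaining.remove(nxt)
    return pulse[np.array(order)]
```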
{"title":"Unequivocal Cardiac Phase Sorting From Alternating Ramp-And Pulse-Illuminated Microscopy Image Sequences","authors":"Olivia Mariani, François Marelli, C. Jaques, Alexander Ernst, M. Liebling","doi":"10.1109/ISBI48211.2021.9433858","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433858","url":null,"abstract":"In vivo microscopy is an important tool to study developing organs such as the heart of the zebrafish embryo but is often limited by slow image frame acquisition speed. While collections of still images of the beating heart at arbitrary phases can be sorted to obtain a virtual heartbeat, the presence of identical heart configurations at two or more heartbeat phases can derail this approach. Here, we propose a dual illumination method to encode movement in alternate frames to disambiguate heartbeat phases in the still frames. We propose to alternately acquire images with a ramp and pulse illumination then sort all successive image pairs based on the ramp-illuminated data but use the pulse-illuminated images for display and analysis. We characterized our method on synthetic data, and show its applicability on experimental data and found that an exposure time of about 7% of the heartbeat or more is necessary to encode the movement reliably in a single heartbeat with a single redundant node. Our method opens the possibility to use sorting algorithms without prior information on the phase, even when the movement presents redundant frames.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"137 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114597675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Disentangling The Spatio-Temporal Heterogeneity of Alzheimer’s Disease Using A Deep Predictive Stratification Network
Pub Date: 2021-04-13 · DOI: 10.1109/ISBI48211.2021.9433903
Andrew Zhen, Minjeong Kim, Guorong Wu
Alzheimer’s disease (AD) is clinically heterogeneous in presentation and progression, demonstrating variable topographic distributions of clinical phenotypes, progression rates, and underlying neurodegeneration mechanisms. Although significant efforts have been made to disentangle the massive heterogeneity in AD by identifying latent clusters with similar imaging or phenotype patterns, such unsupervised clustering techniques often yield sub-optimal stratification results that do not agree with clinical manifestations. To address this limitation, we present a novel deep predictive stratification network (DPS-Net) that learns the best feature representations from neuroimages, allowing us to identify latent fine-grained clusters (aka subtypes) with greater neuroscientific insight. The driving force of DPS-Net is a series of clinical outcomes from different cognitive domains (such as language and memory), which we consider as the benchmark to alleviate the heterogeneity of neurodegeneration pathways in the AD population. Since subject-specific longitudinal change is more relevant to disease progression, we propose to identify the latent subtypes from longitudinal neuroimaging data. Because AD manifests as a disconnection syndrome, we applied our data-driven subtyping approach to longitudinal structural connectivity networks from the ADNI database. Our deep neural network identified more separated and more clinically supported subtypes than the conventional unsupervised methods used for the subtyping task, indicating its great applicability to future neuroimaging studies.
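A hedged sketch of the outcome-guided subtyping idea, learning a latent representation trained to predict clinical scores and then clustering the latent codes, could look like this. The architecture, loss, and use of k-means are illustrative assumptions, not the DPS-Net specification.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class OutcomeGuidedEncoder(nn.Module):
    """Encode longitudinal connectivity features into a latent space
    supervised by clinical outcome prediction; subtypes are obtained
    afterwards by clustering the latent codes."""
    def __init__(self, in_dim, latent_dim, n_outcomes):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.head = nn.Linear(latent_dim, n_outcomes)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.head(z)

def fit_subtypes(model, features, outcomes, n_subtypes=3, epochs=100, lr=1e-3):
    """features: (n_subjects, in_dim); outcomes: (n_subjects, n_outcomes),
    both CPU float tensors. Returns a cluster label per subject."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        _, pred = model(features)
        loss = nn.functional.mse_loss(pred, outcomes)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        z, _ = model(features)
    return KMeans(n_clusters=n_subtypes, n_init=10).fit_predict(z.numpy())
```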
{"title":"Disentangling The Spatio-Temporal Heterogeneity of Alzheimer’s Disease Using A Deep Predictive Stratification Network","authors":"Andrew Zhen, Minjeong Kim, Guorong Wu","doi":"10.1109/ISBI48211.2021.9433903","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433903","url":null,"abstract":"Alzheimer’s disease (AD) is clinically heterogeneous in presentation and progression, demonstrating variable topographic distributions of clinical phenotypes, progression rate, and underlying neuro-degeneration mechanisms. Although striking efforts have been made to disentangle the massive heterogeneity in AD by identifying latent clusters with similar imaging or phenotype patterns, such unsupervised clustering techniques often yield sub-optimal stratification results that do not agree with clinical manifestations. To address this limitation, we present a novel deep predictive stratification network (DPS-Net) to learn the best feature representations from neuroimages, which allows us to identify latent fine-grained clusters (aka subtypes) with greater neuroscientific insight. The driving force of DPS-Net is a series of clinical outcomes from different cognitive domains (such as language and memory), which we consider as the benchmark to alleviate the heterogeneity issue of neurodegeneration pathways in the AD population. Since subject-specific longitudinal change is more relevant to disease progression, we propose to identify the latent subtypes from longitudinal neuroimaging data. Because AD manifests disconnection syndrome, we have applied our datadriven subtyping approach to longitudinal structural connectivity networks from the ADNI database. Our deep neural network identified more separated and clinically backed subtypes than conventional unsupervised methods used to solve the subtyping task– indicating its great applicability in future neuroimaging studies.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115356495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}