Managing Class Imbalance in Multi-Organ CT Segmentation in Head and Neck Cancer Patients
Samuel Cros, Eugene Vorontsov, S. Kadoury
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9433991
Radiotherapy planning for head and neck cancer patients requires accurate delineation of several organs at risk (OAR) in planning CT images in order to determine a dose plan that reduces toxicity and spares normal tissue. However, training a single deep neural network for multiple organs is highly sensitive to class imbalance and to variability in size among structures within the head and neck region. In this paper, we propose a single-class segmentation model for each OAR to handle class imbalance during training across output classes (one class per structure), where there is severe disparity among the 12 OAR. Based on a U-Net architecture, we present a transfer learning approach between similar OAR to leverage common learned features, as well as a simple weight-averaging strategy that initializes a model as the average of multiple models, each trained on a separate organ. Experiments performed on an internal dataset of 200 head and neck cancer patients treated with external beam radiotherapy show that the proposed model significantly improves on the baseline multi-organ segmentation model, which attempts to train several OAR simultaneously. The proposed model yields an overall Dice score of 0.75 ± 0.12 by using both transfer learning across OAR and the weight-averaging strategy, indicating that reasonable segmentation performance can be achieved by leveraging additional data from surrounding structures, limiting the uncertainty in ground-truth annotations.
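The weight-averaging initialization described in the abstract above, initializing one network as the parameter-wise average of several per-organ models, can be sketched as follows. This is a minimal numpy sketch; the dict-of-arrays checkpoint format and all names are illustrative, not taken from the paper.

```python
import numpy as np

def average_weights(models):
    """Average parameter arrays across models sharing one architecture.

    `models` is a list of dicts mapping parameter names to numpy arrays,
    standing in for per-organ network checkpoints.
    """
    keys = models[0].keys()
    return {k: np.mean([m[k] for m in models], axis=0) for k in keys}

# Two toy "single-organ" checkpoints with identical parameter shapes.
model_a = {"conv1.weight": np.ones((3, 3)), "conv1.bias": np.zeros(3)}
model_b = {"conv1.weight": 3 * np.ones((3, 3)), "conv1.bias": np.ones(3)}

init = average_weights([model_a, model_b])
```

In a real pipeline the averaged dict would be loaded as the starting point for fine-tuning on the target organ.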
Time Of Arrival Delineation In Echo Traces For Reflection Ultrasound Tomography
B. R. Chintada, R. Rau, O. Goksel
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9433846
Ultrasound Computed Tomography (USCT) is an imaging method for mapping acoustic properties in soft tissues, e.g., for the diagnosis of breast cancer. A group of USCT methods rely on a passive reflector behind the imaged tissue and work by delineating this reflector in echo traces, e.g., to infer time-of-flight measurements for reconstructing local speed-of-sound maps. In this work, we study various echo features and delineation methods to robustly identify reflector profiles in echoes. We compared and evaluated the methods on a multi-static dataset of a realistic breast phantom. Based on our results, RANSAC-based outlier removal followed by an active-contour delineation using a new “edge” feature we propose, which detects the first arrival times of echoes, performs robustly even in complex media; in particular, it performs 2.1 times better than alternative approaches at locations where diffraction effects are prominent.
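The RANSAC outlier-removal step named in the abstract above can be illustrated on a one-dimensional arrival-time profile. This is a generic minimal sketch, not the authors' implementation: it robustly fits a line to per-channel first-arrival times and flags gross delineation errors as outliers.

```python
import numpy as np

def ransac_line(x, y, n_iter=200, tol=1.0, seed=0):
    """Fit y ≈ a*x + b robustly; return the inlier mask of the best consensus."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])      # candidate line from 2 samples
        b = y[i] - a * x[i]
        mask = np.abs(y - (a * x + b)) < tol   # consensus set
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# Synthetic arrival-time profile: linear trend plus two gross outliers.
x = np.arange(20, dtype=float)
t = 0.5 * x + 10.0
t[[5, 12]] += 25.0        # spurious delineations
inliers = ransac_line(x, t)
```

The surviving inliers would then seed a refinement step such as the active-contour delineation the paper describes.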
Slice Profile Estimation From 2D MRI Acquisition Using Generative Adversarial Networks
Shuo Han, A. Carass, M. Schär, P. Calabresi, Jerry L. Prince
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9434137
To save time and maintain an adequate signal-to-noise ratio, magnetic resonance (MR) images are often acquired with better in-plane than through-plane resolution in 2D acquisitions. To improve image quality, recent work has focused on using deep learning to super-resolve the through-plane direction. To create training data, images can be degraded in an in-plane direction to match the through-plane resolution. To do this correctly, the slice selection profile (SSP) should be known, but this is rarely possible since precise details of signal excitation are usually unknown. Therefore, estimating the SSP of an image volume is desirable. In this work, we first show that a relative SSP can be estimated from the difference between in-plane and through-plane image patches. We further propose an algorithm that uses generative adversarial networks (GAN) to estimate the SSP. In this algorithm, the GAN’s generator blurs in-plane patches in one direction using an estimated relative SSP and then downsamples them. The GAN’s discriminator distinguishes the generator’s output from real through-plane patches. The proposed method was validated using numerical simulations as well as phantom and brain scans. To our knowledge, this is the first work to estimate the SSP from a single MR image. The code is available at https://github.com/shuohan/espreso.
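The degradation model applied by the generator in the abstract above (blur one in-plane axis with the SSP, then downsample) can be sketched directly. This is an illustrative forward model under a toy triangular profile; the function name and kernel are assumptions, not taken from the authors' code.

```python
import numpy as np

def degrade_inplane(patch, ssp, factor):
    """Blur one in-plane axis with a slice-selection profile, then downsample.

    Convolves each column (axis 0) of `patch` with the 1-D kernel `ssp`
    and keeps every `factor`-th row, mimicking a lower through-plane
    resolution.
    """
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, ssp, mode="same"), axis=0, arr=patch)
    return blurred[::factor]

ssp = np.array([0.25, 0.5, 0.25])           # toy triangular profile, sums to 1
patch = np.tile(np.arange(8.0), (8, 1)).T   # linear ramp along axis 0
low_res = degrade_inplane(patch, ssp, factor=2)
```

In the paper's GAN, the kernel would be a learned estimate of the relative SSP rather than a fixed triangle, and the discriminator would compare such degraded patches against real through-plane patches.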
Focal-Balanced Attention U-Net with Dynamic Thresholding by Spatial Regression for Segmentation of Aortic Dissection in CT Imagery
Tsung-Han Lee, Li-Ting Huang, Paul Kuo, Chien-Kuo Wang, Jiun-In Guo
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9434028
Aortic dissection (AD) has a reported mortality of 50% within the first 48 hours, increasing by 1-2% per hour. Rapid diagnosis of the intimal flap is therefore critical for the emergency treatment of patients. To accurately present the affected part of the aorta and reduce the time physicians need for diagnosis, image segmentation is the most effective form of presentation. We used a U-Net model in this study and focused on AD (including the ascending, arch, and descending parts) in the detection process. Furthermore, we designed a site and area regression (SAR) module. With the help of its accurate predictions, we achieved slice-level sensitivity and specificity of 99.1% and 93.2%, respectively.
Deformable MRI To Transrectal Ultrasound Registration For Prostate Interventions With Shape-Based Deep Variational Auto-Encoders
Sh. Shakeri, W. Le, C. Ménard, S. Kadoury
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9434101
Prostate cancer is one of the most prevalent cancers in men, and diagnosis is confirmed through biopsies analyzed with histopathology. A diagnostic T2-weighted MRI is often registered to intra-operative transrectal ultrasound (TRUS) for effective targeting of suspicious lesions during image-guided biopsy procedures or needle-based therapeutic interventions such as brachytherapy. However, this process remains challenging and time-consuming in an interventional environment. The present work proposes an automated 3D deformable MRI-to-TRUS registration pipeline that combines deep variational auto-encoders with a non-rigid iterative closest point registration approach. A convolutional FC-ResNet segmentation model is first trained on 3D TRUS images to extract prostate boundaries during the procedure. Matched MRI-TRUS 3D segmentations are then used to generate a vector representation of the gland’s surface mesh in each modality, used as input to a 10-layer dense variational autoencoder that constrains the predicted deformations through a latent representation of the deformation modes. At each iteration of the registration process, the warped image is regularized using the autoencoder’s reconstruction loss, ensuring plausible anatomical deformations. Based on a 5-fold cross-validation strategy with 45 patients undergoing HDR brachytherapy, the method yields a Dice score of 85.0 ± 2.6 with a target registration error of 3.9 ± 1.4 mm, outperforming the state of the art with minimal intra-procedural disruption.
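The iterative-closest-point core of the registration pipeline in the abstract above can be illustrated in its simplest rigid form: alternately match each source point to its nearest target point and solve the best rotation/translation by Procrustes analysis. This is a generic textbook sketch, without the non-rigid deformation or the VAE shape prior the paper adds.

```python
import numpy as np

def icp_rigid(src, dst, n_iter=20):
    """Minimal rigid ICP aligning point set `src` (N, d) onto `dst` (M, d)."""
    cur = src.copy()
    for _ in range(n_iter):
        # Nearest-neighbour correspondences.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        match = dst[d2.argmin(axis=1)]
        # Procrustes: best rotation between the centred point sets (SVD).
        mu_c, mu_m = cur.mean(0), match.mean(0)
        U, _, Vt = np.linalg.svd((cur - mu_c).T @ (match - mu_m))
        R = (U @ Vt).T
        if np.linalg.det(R) < 0:   # avoid reflections
            Vt[-1] *= -1
            R = (U @ Vt).T
        cur = (cur - mu_c) @ R.T + mu_m
    return cur

# Target: points on a square; source: the same points rotated and shifted.
dst = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.], [0.5, 0.5]])
theta = 0.3
Rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta), np.cos(theta)]])
src = dst @ Rot.T + np.array([0.2, -0.1])
aligned = icp_rigid(src, dst)
```

In the paper's non-rigid variant, the per-iteration update would instead be a deformation field whose plausibility is scored by the autoencoder's reconstruction loss.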
A More Interpretable Classifier For Multiple Sclerosis
Valentine Wargnier-Dauchelle, T. Grenier, F. Durand-Dubief, F. Cotton, M. Sdika
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9434074
Over the past years, deep learning has proved its effectiveness in medical imaging for diagnosis and segmentation. Nevertheless, to be fully integrated into the clinic, these methods must both reach good performance and convince practitioners of their interpretability. An interpretable model should therefore base its decision on clinically relevant information, as a domain expert would. With this purpose, we propose a more interpretable classifier for the most widespread autoimmune neuroinflammatory disease: multiple sclerosis. This disease is characterized by brain lesions visible on magnetic resonance imaging (MRI), on which diagnosis is based. Using Integrated Gradients attributions, we show that using brain tissue probability maps instead of raw MR images as the deep network input yields a more accurate and interpretable classifier whose decisions are largely based on lesions.
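The Integrated Gradients attributions used in the abstract above average the model's gradient along a straight path from a baseline to the input, then scale by the input difference. The sketch below uses a toy linear "classifier" with an analytic gradient so the completeness property (attributions summing to the output difference) is easy to check; the function names are illustrative.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Riemann (midpoint) approximation of Integrated Gradients.

    `grad_fn(p)` returns the gradient of the model output w.r.t. input `p`.
    For well-behaved models the attributions sum to f(x) - f(baseline).
    """
    alphas = (np.arange(steps) + 0.5) / steps
    path = baseline + alphas[:, None] * (x - baseline)
    avg_grad = np.mean([grad_fn(p) for p in path], axis=0)
    return (x - baseline) * avg_grad

# Toy "classifier": f(x) = w·x, whose gradient is the constant vector w.
w = np.array([2.0, -1.0, 0.5])
x = np.array([1.0, 2.0, 3.0])
attr = integrated_gradients(lambda p: w, x, np.zeros(3))
```

For a CNN classifier as in the paper, `grad_fn` would be backpropagation through the network, and the attribution map would be inspected voxel-wise against the lesion masks.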
Cu-Segnet: Corneal Ulcer Segmentation Network
Tingting Wang, Weifang Zhu, Meng Wang, Zhongyue Chen, Xinjian Chen
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9433934
Corneal ulcer is a commonly occurring illness of the cornea. Segmenting corneal ulcers in slit-lamp images is challenging due to the different sizes and shapes of point-flaky mixed corneal ulcers and flaky corneal ulcers. These differences introduce inconsistency and affect prediction accuracy. To address this problem, we propose a corneal ulcer segmentation network (CU-SegNet) to segment corneal ulcers in fluorescein staining images. In CU-SegNet, an encoder-decoder structure is adopted as the main framework, and two novel modules, a multi-scale global pyramid feature aggregation (MGPA) module and a multi-scale adaptive-aware deformation (MAD) module, are proposed and embedded into the skip connections and the top of the encoder path, respectively. MGPA helps high-level features supplement local high-resolution semantic information, while MAD guides the network to focus on multi-scale deformation features and adaptively aggregate contextual information. The proposed network is evaluated on the public SUSTech-SYSU dataset, achieving a Dice coefficient of 89.14%.
Two-Stream Attention Spatio-Temporal Network For Classification Of Echocardiography Videos
Zishun Feng, J. Sivak, Ashok K. Krishnamurthy
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9433773
There is considerable interest in AI systems that can assist cardiologists in diagnosing echocardiograms and can also be used to train residents in classifying them. Prior work has focused on the analysis of a single frame. Classifying echocardiograms at the video level is challenging due to intra-frame and inter-frame noise. We propose a two-stream deep network that learns from spatial context and optical flow for the classification of echocardiography videos. Each stream contains two parts: a Convolutional Neural Network (CNN) for spatial features and a bi-directional Long Short-Term Memory (LSTM) network with attention for temporal features. The features from the two streams are fused for classification. We evaluate our method on a dataset of 170 videos (80 normal and 90 abnormal) manually labeled by trained cardiologists. Our method provides an overall accuracy of 91.18%, with a sensitivity of 94.11% and a specificity of 88.24%.
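The attention step in the temporal stream of the abstract above amounts to scoring each frame's feature vector, softmaxing the scores, and pooling frames by the resulting weights. A minimal numpy sketch of such attention pooling, with a random scoring vector standing in for the learned attention parameters:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # shift for numerical stability
    return e / e.sum()

def attention_pool(frame_feats, w):
    """Attention pooling over per-frame features (T, D).

    Scores each frame with vector `w`, softmaxes the scores, and returns
    the weighted sum of frame features plus the attention weights.
    """
    alpha = softmax(frame_feats @ w)
    return alpha @ frame_feats, alpha

T, D = 6, 4
rng = np.random.default_rng(1)
feats = rng.normal(size=(T, D))          # stand-in for CNN+LSTM outputs
pooled, alpha = attention_pool(feats, rng.normal(size=D))
```

In the paper's network the frame features would come from the CNN and bi-directional LSTM, and one such pooled vector per stream would be fused before the final classifier.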
Biological Cell Tracking And Lineage Inference Via Random Finite Sets
Tran Thien Dat Nguyen, Changbeom Shim, Wooil Kim
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9433957
Automatic cell tracking has long been a challenging problem due to the uncertainty of cell dynamics and the observation process, where detection probability and clutter rate are unknown and time-varying. This is compounded when cell lineages must also be inferred. In this paper, we propose a novel biological cell tracking method based on the Labeled Random Finite Set (RFS) approach to study cell migration patterns. Our method tracks cells with lineage using a Generalised Labeled Multi-Bernoulli (GLMB) filter with object spawning, together with a robust Cardinalised Probability Hypothesis Density (CPHD) filter to address the unknown and time-varying detection probability and clutter rate. The proposed method is capable of quantifying the certainty of its tracking solutions. The capability of the algorithm for population dynamics inference is demonstrated on a migration sequence of breast cancer cells.
Information Flow Through U-Nets
Suemin Lee, I. Bajić
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9433801
Deep Neural Networks (DNNs) have become ubiquitous in medical image processing and analysis. Among them, U-Nets are very popular in various image segmentation tasks. Yet little is known about how information flows through these networks and whether they are indeed properly designed for the tasks for which they are proposed. In this paper, we employ information-theoretic tools to gain insight into information flow through U-Nets. In particular, we show how the mutual information between the input/output and an intermediate layer can be a useful tool for understanding information flow through various portions of a U-Net, assessing its architectural efficiency, and even proposing more efficient designs.

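A common way to estimate the mutual information quantities discussed in the abstract above is to bin samples of two variables and evaluate I(X;Y) from the joint histogram. A minimal sketch of this estimator (the binning scheme and sample sizes are illustrative; the paper's own estimator may differ):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) in bits for paired 1-D samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal of X
    py = pxy.sum(axis=0, keepdims=True)     # marginal of Y
    nz = pxy > 0                            # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
mi_self = mutual_information(x, x)                        # x determines itself
mi_indep = mutual_information(x, rng.normal(size=5000))   # near zero
```

In the U-Net setting, `x` and `y` would be (flattened or per-unit) activations of the input/output and an intermediate layer collected over a dataset, and more scalable estimators are typically needed for high-dimensional feature maps.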