Learning to Recover Sharp Detail from Simulated Low-Dose CT Studies
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9434011
P. Cole, A. Pyrros, Oluwasanmi Koyejo
Radiology exams require exposing a patient to a variable dose of radiation. Importantly, the amount of radiation used during the exam directly determines the level of noise in the resulting image, while increased amounts of radiation pose health risks to patients. This results in a tradeoff, as radiologists need a high-quality image to make a diagnosis. In this work, we propose a method to recover image fidelity given a noisy, low-dose sample. Using a two-part criterion consisting of a pixel-wise loss and an adversarial loss, we are able to recover the structure and fine detail of the normal-dose sample. To evaluate the denoising method, we implement simulations of realistic low-dose noise for a computed tomography exam, which may be of independent interest. Quantitative and qualitative results highlight the performance of our approach compared to existing baselines.
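The two-part criterion can be pictured with a short PyTorch sketch of a pixel-wise plus adversarial objective. The `denoiser`, `critic`, and `lambda_adv` names and the L1/BCE choices are hypothetical placeholders for illustration, not the authors' implementation, since the paper's exact architecture and weighting are not given here.

```python
import torch
import torch.nn as nn

pixel_loss = nn.L1Loss()              # pixel-wise fidelity term
adv_loss = nn.BCEWithLogitsLoss()     # adversarial (real/fake) term
lambda_adv = 0.01                     # hypothetical weight balancing the two terms

def generator_step(denoiser, critic, low_dose, normal_dose):
    """One training step for the denoiser under a two-part criterion."""
    restored = denoiser(low_dose)                         # estimate of the normal-dose image
    fidelity = pixel_loss(restored, normal_dose)          # keep structure close to the target
    logits = critic(restored)                             # discriminator score for the restoration
    realism = adv_loss(logits, torch.ones_like(logits))   # encourage "looks like normal dose"
    return fidelity + lambda_adv * realism
```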
{"title":"Learning To Recover Sharp Detail From Simulated Low-Dose Ct Studies","authors":"P. Cole, A. Pyrros, Oluwasanmi Koyejo","doi":"10.1109/ISBI48211.2021.9434011","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434011","url":null,"abstract":"Radiology exams require exposing a patient to a variable dosage of radiation. Importantly, the amount of radiation used during the exam directly corresponds to the level of noise in the resulting image, and increased amounts of radiation can pose health risks to patients. This results in a tradeoff, as radiologists need a high-quality image to make a diagnosis. In this work, we propose a method to recover image fidelity given a noisy, or low-dose, sample. Using a two-part criterion that consists of a pixel-wise loss and an adversarial loss, we are able to recover the structure and fine detail of the normal-dose sample. To evaluate the denoising method, we implement simulations of realistic low-dose noise for a computed tomography exam, which may be of independent interest. Quantitative and qualitative results highlight the performance of our approach as compared to existing baselines.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"163 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121263237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
2D-3D Hierarchical Feature Fusion Network for Segmentation of Bone Structure in Knee MR Image
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9433777
Hui Wang, Demin Yao, Jiayi Chen, Yanjing Liu, Wensheng Li, Yonghong Shi
Automatic segmentation of knee bone structures is an important task in the orthopedic diagnosis of knee disease from MR images. Inspired by how doctors examine the knee in the sagittal plane of an MR image, we propose to first compute the sagittal maximum intensity projection (MIP) of the MR image and then construct a high-precision 2D-3D hierarchical feature fusion network for automatic knee segmentation based on a convolutional encoder-decoder architecture. It includes: 1) a 2D bypass network extracting global features from the MIP; 2) a 3D backbone network computing local details from the MR volume; 3) a feature fusion module integrating the 2D global context and 3D local details hierarchically. In particular, the global features serve as anchor points and are fused with the local details at each level of the encoding path to enrich the context of the local details and improve segmentation accuracy. Our method is validated on the SKI10 dataset. The average Dice coefficients for the femur, femoral cartilage, tibia, and tibial cartilage are 0.978, 0.848, 0.979, and 0.848, respectively, and the segmentation performance is far better than that of state-of-the-art methods.
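As a rough illustration of the sagittal MIP input, the projection can be taken as the voxel-wise maximum along the sagittal axis of the volume. The axis index and array layout below are assumptions for illustration; the paper does not spell out its preprocessing code.

```python
import numpy as np

def sagittal_mip(mr_volume: np.ndarray, sagittal_axis: int = 0) -> np.ndarray:
    """Maximum intensity projection of a 3D MR volume along the (assumed) sagittal axis.

    mr_volume: 3D array, e.g. shape (sagittal, coronal, axial).
    Returns the 2D image that the 2D bypass network would consume.
    """
    return mr_volume.max(axis=sagittal_axis)

# Example with a random stand-in volume of shape (160, 384, 384)
volume = np.random.rand(160, 384, 384).astype(np.float32)
mip = sagittal_mip(volume)   # shape (384, 384)
```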
{"title":"2d-3d Hierarchical Feature Fusion Network For Segmentation Of Bone Structure In Knee Mr Image","authors":"Hui Wang, Demin Yao, Jiayi Chen, Yanjing Liu, Wensheng Li, Yonghong Shi","doi":"10.1109/ISBI48211.2021.9433777","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433777","url":null,"abstract":"Automatic segmentation of knee bone structures is an important task in orthopedics diagnosis of knee disease based on MRI images. Inspired by doctors’ diagnosis of knee in sagittal plane of MR image, we propose to first calculate the sagittal maximum intensity projection (MIP) of MR image, then construct a high precision 2D-3D hierarchical feature fusion network for automatic segmentation of knee based on convolutional encoding and decoding architecture. It includes: 1) A 2D bypass network extracting global features based on MIP; 2) A 3D backbone network calculating local details based on MR volume; 3) A feature fusion module integrating 2D global context and 3D local details hierarchically. Particularly, the global features as anchor points will be fused with the local details at each level of the encoding path to enrich the context of local details and improve the segmentation accuracy. Our method is verified on SKI10 dataset. The average dice coefficients of femur, femoral cartilage, tibia and tibia cartilage are 0.978, 0.848, 0.979 and 0.848, respectively, and the segmentation performance is far better than the state-of-the-art methods.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116451874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Constructing Reliable Network of Biomarker Covariance by Joint Data Harmonization and Graph Learning
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9433967
Minjeong Kim, Guorong Wu
Networks of biomarker covariance based on neuropathological events or the degree of neurodegeneration are important for understanding genetic influence and trophic reinforcement in brain development and aging. It is common to quantify the covariance of inter-subject biomarker profiles with linear correlation metrics such as Pearson's correlation. Due to the heterogeneity and noise in observed neurobiological data, however, it is difficult to construct a reliable covariance network using such gross statistical measurements. To this end, we propose a graph learning approach that infers brain connectivity from harmonized inter-subject biomarker profiles. Specifically, we progressively estimate the brain network until the region-to-region connectivities reach the largest consensus of biomarker covariance across individuals. A better understanding of the network topology allows us to harmonize the neurobiological data effectively, which in turn facilitates the graph inference. Since the network of biomarker covariance represents region-wise associations over the entire population, we further promote diversity by adaptively penalizing the predominant influence of groups of biomarker profiles that exhibit statistically correlated patterns. We applied our method to cortical thickness from MRI and amyloid-beta burden from PET images, which are biomarkers of Alzheimer's disease (AD). Our approach achieves enhanced statistical power and replicability in identifying network alterations between cognitively normal (CN) and AD cohorts.
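For contrast with the proposed graph learning, the conventional baseline the abstract refers to builds the covariance network directly from Pearson correlations of region-wise biomarker profiles. A minimal sketch of that baseline follows; the threshold value and the subjects-by-regions layout are assumptions for illustration.

```python
import numpy as np

def pearson_covariance_network(profiles: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """Baseline region-to-region network from inter-subject biomarker profiles.

    profiles: array of shape (n_subjects, n_regions), e.g. cortical thickness per region.
    Returns a binary adjacency matrix over regions.
    """
    corr = np.corrcoef(profiles, rowvar=False)   # (n_regions, n_regions) Pearson correlations
    np.fill_diagonal(corr, 0.0)                  # ignore self-connections
    return (np.abs(corr) > threshold).astype(np.uint8)

# Example with random stand-in data: 100 subjects, 68 cortical regions
adjacency = pearson_covariance_network(np.random.rand(100, 68))
```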
{"title":"Constructing Reliable Network Of Biomarker Covariance By Joint Data Harmonization And Graph Learning","authors":"Minjeong Kim, Guorong Wu","doi":"10.1109/ISBI48211.2021.9433967","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433967","url":null,"abstract":"Networks of biomarker covariance based on neuropathological events or neuro-degeneration degree is important to understand genetic influence and trophic reinforcement in the brain development/aging process. It is a common to quantiry the covariance of inter-subject biomarker profiles by linear correlation metrics such as Pearson’s correlation. Due to the heterogeneity and noise in the observed neurobiological data, however, it is difficult to construct a reliable covariance network using gross statistical measurement. To this, we propose a graph learning approach to infer the brain connectivity based on the harmonized inter-subject biomarker profiles. Specifically, we progressively estimate brain network until region-to-region connectivities reach the largest consensus of biomarker covariance across individuals. A better understanding of the network topology allows us to harmonize the neurobiological data effectively which eventually facilitates the graph inference. Since the network of biomarker covariance represents the region-wise associations in the entire population, we further promote diversity by adaptively penalizing the predominant influence from a group of biomarker profiles exhibiting statistically correlated patterns. We applied our method to the cortical thickness from MRI and amyloid-beta burden from PET images, which are biomarkers in Alzheimer’s disease (AD). Enhanced statistical power and replicability have been achieved by our approach in identifying network alterations between cognitive normal (CN) and AD cohorts.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127081909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DA-GAN: Learning Structured Noise Removal in Ultrasound Volume Projection Imaging for Enhanced Spine Segmentation
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9434136
Zixun Huang, Rui Zhao, Frank H. F. Leung, K. Lam, S. Ling, Juan Lyu, Sunetra Banerjee, T. Lee, De Yang, Y. Zheng
Ultrasound volume projection imaging (VPI) has been shown to be appealing from a clinical perspective because of its harmlessness, flexibility, and efficiency in scoliosis assessment. However, hardware limitations degrade the resulting images with strong structured noise. Owing to the unavailability of reference data and the unpredictable degradation model, VPI image recovery is a challenging problem. In this paper, we propose a novel framework to learn structured noise removal from unpaired samples. We introduce an attention mechanism into the generative adversarial network to enhance learning by focusing on the salient corrupted patterns. We also present a dual adversarial learning strategy and integrate the denoiser with a segmentation model to produce a task-oriented noiseless estimate. Experimental results show that the proposed method improves both the visual quality and the segmentation accuracy on spine images.
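The attention mechanism described above can be pictured as a gating module that re-weights generator features with a learned spatial mask so the network focuses on the corrupted patterns. The module below is a generic spatial attention gate written purely for illustration, not the DA-GAN architecture from the paper; the channel sizes are placeholders.

```python
import torch
import torch.nn as nn

class SpatialAttentionGate(nn.Module):
    """Re-weights feature maps with a learned per-pixel mask in [0, 1]."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, kernel_size=1),
            nn.Sigmoid(),                      # per-pixel attention weights
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return features * self.mask(features)  # emphasize salient (corrupted) regions

# Example: gate a batch of 64-channel feature maps
gate = SpatialAttentionGate(64)
out = gate(torch.randn(2, 64, 128, 128))
```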
{"title":"DA-GAN: Learning Structured Noise Removal In Ultrasound Volume Projection Imaging For Enhanced Spine Segmentation","authors":"Zixun Huang, Rui Zhao, Frank H. F. Leung, K. Lam, S. Ling, Juan Lyu, Sunetra Banerjee, T. Lee, De Yang, Y. Zheng","doi":"10.1109/ISBI48211.2021.9434136","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434136","url":null,"abstract":"Ultrasound volume projection imaging (VPI) has shown to be appealing from a clinical perspective, because of its harmlessness, flexibility, and efficiency in scoliosis assessment. However, the limitations in hardware devices degrade the resultant image content with strong structured noise. Owing to the unavailability of reference data and the unpredictable degradation model, VPI image recovery is a challenging problem. In this paper, we propose a novel framework to learn the structured noise removal from unpaired samples. We introduce the attention mechanism into the generative adversarial network to enhance the learning by focusing on the salient corrupted patterns. We also present a dual adversarial learning strategy and integrate the denoiser with a segmentation model to produce the task-oriented noiseless estimation. Experimental results show that the proposed method can improve both the visual quality and the segmentation accuracy on spine images.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127207869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
XPGAN: X-Ray Projected Generative Adversarial Network for Improving COVID-19 Image Classification
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9434159
Tran Minh Quan, Huynh Minh Thanh, Ta Duc Huy, Nguyen Do Trung Chanh, Nguyen Thi Hong Anh, Phan Hoan Vu, N. H. Nam, Tran Quy Tuong, Vu Minh Dien, B. Giang, Bui Huu Trung, S. Q. Truong
This work aims to fight the ongoing COVID-19 pandemic by developing a method to classify suspected COVID-19 cases. Driven by the urgency of the vastly increased numbers of patients and deaths worldwide, we rely on situationally pragmatic chest X-ray scans and state-of-the-art deep learning techniques to build robust diagnosis for mass screening, early detection, and timely isolation decisions. The proposed solution, the X-ray Projected Generative Adversarial Network (XPGAN), addresses the most fundamental issue in training such a deep neural network: limited human-annotated datasets. By leveraging a generative adversarial network, we synthesize a large number of chest X-ray images with prior categories, including COVID-19, from more accurate 3D computed tomography data, and jointly train a model with a few hundred positive samples. As a result, XPGAN outperforms vanilla DenseNet121 models and other competing baselines trained on the same frontal chest X-ray images.
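The core idea of turning 3D CT into 2D X-ray-like images can be illustrated with a crude digitally reconstructed radiograph: integrate (here, average) attenuation along the anterior-posterior axis. This parallel-projection stand-in is for intuition only; the axis choice and intensity normalization are assumptions, and XPGAN itself learns the synthesis with a GAN rather than using such a fixed projection.

```python
import numpy as np

def frontal_projection(ct_volume: np.ndarray, ap_axis: int = 1) -> np.ndarray:
    """Crude X-ray-like frontal view of a CT volume (parallel-beam approximation).

    ct_volume: 3D array of Hounsfield units, assumed axes (axial, anterior-posterior, lateral).
    Returns a 2D image scaled to [0, 1].
    """
    attenuation = np.clip(ct_volume + 1000.0, 0, None)   # shift HU so air is ~0
    projection = attenuation.mean(axis=ap_axis)          # integrate along the AP direction
    projection -= projection.min()
    return projection / (projection.max() + 1e-8)

# Example with a random stand-in volume of shape (300, 512, 512)
xray_like = frontal_projection(np.random.uniform(-1000, 400, size=(300, 512, 512)))
```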
{"title":"XPGAN: X-Ray Projected Generative Adversarial Network For Improving Covid-19 Image Classification","authors":"Tran Minh Quan, Huynh Minh Thanh, Ta Duc Huy, Nguyen Do Trung Chanh, Nguyen Thi Hong Anh, Phan Hoan Vu, N. H. Nam, Tran Quy Tuong, Vu Minh Dien, B. Giang, Bui Huu Trung, S. Q. Truong","doi":"10.1109/ISBI48211.2021.9434159","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434159","url":null,"abstract":"This work aims to fight against the current outbreak pandemic by developing a method to classify suspected infected COVID-19 cases. Driven by the urgency, due to the vastly increased number of patients and deaths worldwide, we rely on situationally pragmatic chest X-ray scans and state-of-the-art deep learning techniques to build a robust diagnosis for massive screening, early detection, and in-time isolation decision making. The proposed solution, X-ray Projected Generative Adversarial Network (XPGAN), addresses the most fundamental issue in training such a deep neural network on limited human-annotated datasets. By leveraging the generative adversarial network, we can synthesize a large amount of chest X-ray images with prior categories from more accurate 3D Computed Tomography data, including COVID-19, and jointly train a model with a few hundreds of positive samples. As a result, XPGAN outperforms the vanilla DenseNet121 models and other competing baselines trained on the same frontal chest X-ray images.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126088813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Multi-Class Organelle Segmentation for Cellular FIB-SEM Images
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9434075
C. Meyer, V. Mallouh, D. Spehner, É. Baudrier, P. Schultz, B. Naegel
Focused Ion Beam milling combined with Scanning Electron Microscopy (FIB-SEM) is an electron microscopy technique that can acquire 3D isotropic images of biological structures at the nanometric scale. Automated image segmentation is required for morphological analysis of huge image stacks and to avoid time-consuming manual intervention. Current methods are either specific to particular data and organelles or lack accuracy. We propose a robust multi-class semantic segmentation method for FIB-SEM images based on deep neural networks. We evaluate and compare the proposed method on two FIB-SEM images for the segmentation of mitochondria, the cell membrane, and the endoplasmic reticulum, and achieve results close to inter-expert variability.
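A common way to quantify multi-class segmentation quality of this kind, and to compare it against inter-expert variability, is a per-class Dice score between predicted and reference label maps. The class indices below are placeholders; the paper's exact evaluation protocol is not reproduced here.

```python
import numpy as np

def per_class_dice(pred: np.ndarray, reference: np.ndarray, n_classes: int) -> list[float]:
    """Dice coefficient per class for integer label volumes of identical shape."""
    scores = []
    for c in range(1, n_classes):                 # skip class 0 (background)
        p, r = pred == c, reference == c
        denom = p.sum() + r.sum()
        scores.append(2.0 * np.logical_and(p, r).sum() / denom if denom else 1.0)
    return scores

# Example: classes 0=background, 1=mitochondria, 2=membrane, 3=endoplasmic reticulum
pred = np.random.randint(0, 4, size=(64, 256, 256))
ref = np.random.randint(0, 4, size=(64, 256, 256))
print(per_class_dice(pred, ref, n_classes=4))
```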
{"title":"Automatic Multi Class Organelle Segmentation For Cellular Fib-Sem Images","authors":"C. Meyer, V. Mallouh, D. Spehner, É. Baudrier, P. Schultz, B. Naegel","doi":"10.1109/ISBI48211.2021.9434075","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434075","url":null,"abstract":"Focused Ion Beam milling combined with Scanning Electron Microscopy (FIB-SEM) technique is an electron microscopy imaging method that offers the possibility of acquiring 3D isotropic images of biological structures at the nanometric scale. Automated image segmentation is required for morphological analysis of huge image stacks and to save time consuming manual intervention. Current methods are either specific to data and organelles or lack accuracy. We propose a robust multi-class semantic segmentation method for FIBSEM images, based on deep neural networks. We evaluate and compare our proposed method on two FIB-SEM images, for the segmentation of mitochondria, cell membrane and endoplasmic reticulum. We achieve results close to inter-expert variability.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125334966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Study of Precentral-Postcentral Connections on HCP Data Using Probabilistic Tractography and Fiber Clustering
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9434093
C. Román, N. López-López, J. Houenou, C. Poupon, J. F. Mangin, C. Hernández, P. Guevara
The study and description of the superficial white matter is essential for understanding human brain function and studying pathogenesis. However, characterizing these fibers remains an incomplete task because of their high inter-subject variability and small size. In this work, superficial white matter bundles were identified with fiber clustering applied to probabilistic tractography of 100 subjects from the Human Connectome Project (HCP), aligned with a non-linear registration. The method starts with an intra-subject clustering, followed by a segmentation of fibers connecting the precentral (PrC) and postcentral (PoC) regions based on a ROI atlas. Because of the large number of fibers, they were randomly separated into groups. An inter-subject clustering was applied to the fibers of each group, and two further clustering levels were applied to select the most reproducible bundles. Seven bundles per hemisphere connecting the PrC and PoC regions were obtained. Compared with bundles from previous atlases, they generally show more coverage and include some bundles not found in those atlases.
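Fiber clustering pipelines of this kind typically depend on a pairwise distance between streamlines; a common choice is the symmetric mean closest-point distance between two polylines. The sketch below shows that distance for illustration only; the paper's actual clustering criterion and parameters are not specified here.

```python
import numpy as np

def mean_closest_point_distance(fiber_a: np.ndarray, fiber_b: np.ndarray) -> float:
    """Symmetric mean closest-point distance between two streamlines.

    fiber_a, fiber_b: arrays of 3D points with shapes (Na, 3) and (Nb, 3).
    """
    # Pairwise Euclidean distances between all points of the two fibers
    d = np.linalg.norm(fiber_a[:, None, :] - fiber_b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Example with two random stand-in streamlines of 21 points each
print(mean_closest_point_distance(np.random.rand(21, 3), np.random.rand(21, 3)))
```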
{"title":"Study Of Precentral-Postcentral Connections On Hcp Data Using Probabilistic Tractography And Fiber Clustering","authors":"C. Román, N. López-López, J. Houenou, C. Poupon, J. F. Mangin, C. Hernández, P. Guevara","doi":"10.1109/ISBI48211.2021.9434093","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434093","url":null,"abstract":"The study of the superficial white matter and its description is essential for the understanding of human brain function and the study of pathogenesis. However, the study of these fibers is still an incomplete task due to the high inter-subject variability and the size of this kind of fibers. In this work, a superficial white matter bundle identification based on fiber clustering was performed using probabilistic tractography on 100 subjects from the The Human Connectome Project (HCP) data, aligned with a non-linear registration. The method starts with an intra-subject clustering, followed by a segmentation of fibers connecting the precentral (PrC) and postcentral (PoC) regions, based on a ROI atlas. Due to the high amount of fibers, they were randomly separated into groups. An inter-subject clustering was applied on the fibers of each group, and then two clustering levels were applied to select the most reproducible bundles. Seven bundles per hemisphere were obtained, connecting the PrC and PoC regions. These were compared with bundles from previous atlases, showing in general more coverage and some bundles not found in previous atlases.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126364950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Geometric Deep Learning on Anatomical Meshes for the Prediction of Alzheimer's Disease
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9433948
Ignacio Sarasua, Jonwong Lee, C. Wachinger
Geometric deep learning can find representations that are optimal for a given task and therefore improve the performance over pre-defined representations. While current work has mainly focused on point representations, meshes also contain connectivity information and are therefore a more comprehensive characterization of the underlying anatomical surface. In this work, we evaluate four recent geometric deep learning approaches that operate on mesh representations. These approaches can be grouped into template-free and template-based approaches, where the template-based methods need a more elaborate pre-processing step with the definition of a common reference template and correspondences. We compare the different networks for the prediction of Alzheimer’s disease based on the meshes of the hippocampus. Our results show advantages for template-based methods in terms of accuracy, number of learnable parameters, and training speed. While the template creation may be limiting for some applications, neuroimaging has a long history of building templates with automated tools readily available. Overall, working with meshes is more involved than working with simplistic point clouds, but they also offer new avenues for designing geometric deep learning architectures.
{"title":"Geometric Deep Learning on Anatomical Meshes for the Prediction of Alzheimer’s Disease","authors":"Ignacio Sarasua, Jonwong Lee, C. Wachinger","doi":"10.1109/ISBI48211.2021.9433948","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433948","url":null,"abstract":"Geometric deep learning can find representations that are optimal for a given task and therefore improve the performance over pre-defined representations. While current work has mainly focused on point representations, meshes also contain connectivity information and are therefore a more comprehensive characterization of the underlying anatomical surface. In this work, we evaluate four recent geometric deep learning approaches that operate on mesh representations. These approaches can be grouped into template-free and template-based approaches, where the template-based methods need a more elaborate pre-processing step with the definition of a common reference template and correspondences. We compare the different networks for the prediction of Alzheimer’s disease based on the meshes of the hippocampus. Our results show advantages for template-based methods in terms of accuracy, number of learnable parameters, and training speed. While the template creation may be limiting for some applications, neuroimaging has a long history of building templates with automated tools readily available. Overall, working with meshes is more involved than working with simplistic point clouds, but they also offer new avenues for designing geometric deep learning architectures.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127828841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Diffuse Beamforming for Specular Reflectors: A Pixel-Level Reflection Tuned Apodization Scheme for Ultrasound Imaging
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9433990
Gayathri Malamal, Mahesh Raveendranatha Panicker
In typical ultrasound beamforming, apodization schemes assume a geometric-delay-driven diffuse reflection model and are not robust to specular reflections. Conversely, beamforming schemes designed exclusively to emphasize specularity suppress diffuse reflections and speckle. This results in separate beamforming modes for normal tissue scanning and for specular reflectors such as needles. However, most tissue reflections are composed of both diffuse and specular components, so a synergistic approach is important. Towards this, a novel approach called reflection tuned apodization (RTA) using coherent plane-wave compounding is proposed, in which the apodization window is aligned appropriately for each pixel by analyzing the reflections across the transmitted plane-wave angles. A reflection similarity measure estimated over the plane-wave angles differentiates and characterizes the tissue reflections. Beamforming results with the proposed RTA on experimental data show a remarkable improvement in the visibility of specular regions, without suppressing diffuse reflections and speckle, compared to the conventional apodization approach.
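One simple way to picture a per-pixel similarity measure across transmit angles is a coherence-factor-style ratio between the coherent and incoherent sums of the angle-wise beamformed values. This is an illustrative stand-in, not the RTA measure defined in the paper, and the data layout (angles by pixels) is an assumption.

```python
import numpy as np

def angle_coherence(per_angle_pixels: np.ndarray) -> np.ndarray:
    """Coherence-factor-style similarity across plane-wave transmit angles.

    per_angle_pixels: complex array of shape (n_angles, H, W) holding, for each
    transmit angle, the delayed (beamformed) value at every pixel.
    Returns a map in [0, 1]; values near 1 indicate reflections consistent across angles.
    """
    n_angles = per_angle_pixels.shape[0]
    coherent = np.abs(per_angle_pixels.sum(axis=0)) ** 2
    incoherent = (np.abs(per_angle_pixels) ** 2).sum(axis=0)
    return coherent / (n_angles * incoherent + 1e-12)

# Example: 15 plane-wave angles over a 256 x 256 pixel grid (random stand-in data)
rf = np.random.randn(15, 256, 256) + 1j * np.random.randn(15, 256, 256)
similarity = angle_coherence(rf)
```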
{"title":"Towards Diffuse Beamforming For Specular Reflectors: A Pixel-Level Reflection Tuned Apodization Scheme For Ultrasound Imaging","authors":"Gayathri Malamal, Mahesh Raveendranatha Panicker","doi":"10.1109/ISBI48211.2021.9433990","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433990","url":null,"abstract":"In the case of typical beamforming in ultrasound imaging, apodization schemes assume a geometric delay driven diffuse reflection model and are not robust for specular reflections. Conversely, the beamforming schemes exclusive to emphasizing specularity suppress the diffuse reflections and speckles. This results in separate beamforming modes for normal tissue scanning and specular reflectors like needles. However, most tissue reflections compose of both diffuse and specular components and a synergistic approach is important. Towards this, a novel approach called reflection tuned apodization (RTA) using coherent plane-wave compounding is proposed, where the apodization window is aligned appropriately by analyzing the reflections from the transmitted plane wave angles for each pixel. A reflection similarity measure is estimated from the plane wave angles to differentiate and characterize the tissue reflections. The beamforming results with the proposed RTA on experimental data show a remarkable improvement in the visibility of specular regions without the suppression of diffuse reflections and speckles compared to the conventional apodization approach.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127883028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Biological Cell Tracking and Lineage Inference via Random Finite Sets
Pub Date: 2021-04-13 | DOI: 10.1109/ISBI48211.2021.9433957
Tran Thien Dat Nguyen, Changbeom Shim, Wooil Kim
Automatic cell tracking has long been a challenging problem due to the uncertainty of cell dynamics and the observation process, in which the detection probability and clutter rate are unknown and time-varying. This is compounded when cell lineages are also to be inferred. In this paper, we propose a novel biological cell tracking method based on the Labeled Random Finite Set (RFS) approach to study cell migration patterns. Our method tracks cells with lineage by using a Generalised Labelled Multi-Bernoulli (GLMB) filter with object spawning, together with a robust Cardinalised Probability Hypothesis Density (CPHD) filter to address the unknown and time-varying detection probability and clutter rate. The proposed method can quantify the certainty level of its tracking solutions. The capability of the algorithm for inferring population dynamics is demonstrated on a migration sequence of breast cancer cells.
{"title":"Biological Cell Tracking And Lineage Inference Via Random Finite Sets","authors":"Tran Thien Dat Nguyen, Changbeom Shim, Wooil Kim","doi":"10.1109/ISBI48211.2021.9433957","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9433957","url":null,"abstract":"Automatic cell tracking has long been a challenging problem due to the uncertainty of cell dynamic and observation process, where detection probability and clutter rate are unknown and time-varying. This is compounded when cell lineages are also to be inferred. In this paper, we propose a novel biological cell tracking method based on the Labeled Random Finite Set (RFS) approach to study cell migration patterns. Our method tracks cells with lineage by using a Generalised Label Multi-Bernoulli (GLMB) filter with objects spawning, and a robust Cardinalised Probability Hypothesis Density (CPHD) to address unknown and time-varying detection probability and clutter rate. The proposed method is capable of quantifying the certainty level of the tracking solutions. The capability of the algorithm on population dynamic inference is demonstrated on a migration sequence of breast cancer cells.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129098570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}