T. Richard, Yan Chastagnier, V. Szabo, K. Chalard, B. Summa, Jean-Marc Thiery, T. Boubekeur, Noura Faraj
We introduce a novel multi-modal 3D image registration framework based on user-guided 3D deformation of both a volume's shape and its intensity values. Being able to apply deformations in 3D gives access to a wide range of new interactions, allowing for the registration of images from any acquisition method and of any organ, complete or partial. Our framework uses a state-of-the-art 3D volume rendering method for real-time feedback on both the registration accuracy and the image deformation. We propose a novel methodological variation to accurately display 3D segmented voxel grids, a requirement in a registration context for visualizing a segmented atlas. Our pipeline is implemented in open-source software (available via GitHub) and was used directly by biologists to register mouse brain autofluorescence acquisitions onto the Allen Brain Atlas. This mapping allows them to retrieve regions of interest properly identified on the segmented atlas within acquired brain datasets, and therefore to extract high-resolution images of only those areas, avoiding the creation of overly large images.
"Multi-modal 3D Image Registration Using Interactive Voxel Grid Deformation and Rendering." Eurographics Workshop on Visual Computing for Biomedicine, 2022, pp. 93–97. doi:10.2312/vcbm.20221191
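The need to resample a deformed, segmented voxel grid without corrupting its labels can be illustrated with a minimal NumPy sketch (an illustrative construction, not the paper's renderer): intensity volumes are interpolated trilinearly, while label volumes must use nearest-neighbor sampling so that no non-existent label values appear at region boundaries.

```python
import numpy as np

def warp_nearest(vol, disp):
    """Resample a label volume under a displacement field (nearest neighbor).

    vol:  (D, H, W) integer label grid
    disp: (3, D, H, W) displacement, in voxels, for each output voxel
    """
    pos = np.indices(vol.shape).astype(float) + disp
    idx = np.rint(pos).astype(int)
    for axis, size in enumerate(vol.shape):
        idx[axis] = np.clip(idx[axis], 0, size - 1)   # clamp to the grid
    return vol[tuple(idx)]

def warp_linear(vol, disp):
    """Resample an intensity volume under a displacement field (trilinear)."""
    pos = np.indices(vol.shape).astype(float) + disp
    lo = np.floor(pos).astype(int)
    frac = pos - lo
    out = np.zeros(vol.shape, dtype=float)
    for corner in range(8):                           # 8 corners of each cell
        offs = [(corner >> a) & 1 for a in range(3)]
        w = np.ones(vol.shape)
        idx = []
        for a, o in enumerate(offs):
            w *= frac[a] if o else (1.0 - frac[a])
            idx.append(np.clip(lo[a] + o, 0, vol.shape[a] - 1))
        out += w * vol[tuple(idx)]
    return out
```

Because `warp_nearest` only ever copies existing voxel values, the warped segmentation contains no labels absent from the input, which is the property a segmented-atlas display depends on.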
Accurate delineations of anatomically relevant structures are required for cancer treatment planning. Despite its accuracy, manual labeling is time-consuming and tedious; hence, the potential of automatic approaches, such as deep learning models, is being investigated. A promising trend in deep learning tumor segmentation is cross-modal domain adaptation, where knowledge learned on one source distribution (e.g., one modality) is transferred to another distribution. Yet, artificial intelligence (AI) engineers developing such models need to thoroughly assess the robustness of their approaches, which demands a deep understanding of the models' behavior. In this paper, we propose a web-based visual analytics application that supports the visual assessment of the predictive performance of deep learning models built for cross-modal brain tumor segmentation. Our application supports the multi-level comparison of multiple models, drilling down from entire cohorts of patients to individual slices; facilitates the analysis of the relationship between image-derived features and model performance; and enables the comparative exploration of the predictive outcomes of the models. All this is realized in an interactive interface with multiple linked views. We present three use cases, analyzing differences between deep learning segmentation approaches, the influence of tumor size, and the relationship of other data set characteristics to performance. From these scenarios, we discovered that tumor size, i.e., volume in 3D data and pixel count in 2D data, strongly affects model performance, as samples with small tumors often yield poorer results. Our approach is able to reveal the best algorithms and their optimal configurations, supporting AI engineers in obtaining more insights for the development of their segmentation models.
"Visual Analytics to Assess Deep Learning Models for Cross-Modal Brain Tumor Segmentation." Eurographics Workshop on Visual Computing for Biomedicine, 2022, pp. 111–115. doi:10.2312/vcbm.20221193
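The reported relationship between tumor size and segmentation quality can be probed with a small sketch (hypothetical data, not the paper's models or data set): compute a per-sample Dice score and relate it to the ground-truth tumor pixel count.

```python
import numpy as np

def dice(pred, gt):
    """Dice overlap between two binary masks (1.0 when both are empty)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def size_vs_dice(preds, gts):
    """Per-sample (tumor pixel count, Dice score) pairs and their correlation."""
    sizes = np.array([g.sum() for g in gts], dtype=float)
    scores = np.array([dice(p, g) for p, g in zip(preds, gts)])
    r = np.corrcoef(sizes, scores)[0, 1]   # Pearson correlation size vs. Dice
    return sizes, scores, r
```

A positive correlation over a cohort would reproduce the paper's observation that samples with small tumors tend to yield poorer scores.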
We propose PACO, a visual analytics framework to support the prediction, analysis, and communication of COVID-19 hospitalization outcomes. Although several real-world data sets about COVID-19 are openly available, most current research focuses on the detection of the disease. Until now, no previous work exists on combining insights from medical image data with knowledge extracted from clinical data to predict the likelihood of an intensive care unit (ICU) visit, ventilation, or death. Moreover, the available literature has not yet focused on communicating such results to the broader society. To support the prediction, analysis, and communication of the outcomes of COVID-19 hospitalizations on the basis of a publicly available data set comprising both electronic health data and medical image data [SSP*21], we conduct the following three steps: (1) automated segmentation of the available X-ray images and processing of clinical data, (2) development of a model for the prediction of disease outcomes and a comparison to state-of-the-art prediction scores for both data sources, i.e., medical images and clinical data, and (3) communication of outcomes to two different groups (i.e., clinical experts and the general population) through interactive dashboards. Preliminary results indicate that the prediction, analysis, and communication of hospitalization outcomes is a significant topic in the context of COVID-19 prevention.
"Predicting, Analyzing and Communicating Outcomes of COVID-19 Hospitalizations with Medical Images and Clinical Data." Eurographics Workshop on Visual Computing for Biomedicine, 2022, pp. 129–133. doi:10.2312/vcbm.20221196
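Step (2), predicting an outcome from combined image-derived and clinical features, can be sketched with a plain-NumPy logistic regression; this is an illustrative stand-in, as the abstract does not specify the exact model used.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Fit a logistic regression by gradient descent.

    X: (n, d) feature matrix, e.g. image-derived and clinical features
       concatenated per patient; y: (n,) binary outcome (e.g. ICU visit).
    """
    X = np.c_[np.ones(len(X)), X]              # prepend a bias column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)       # gradient of the log loss
    return w

def predict(w, X):
    """Outcome probability for new feature vectors."""
    X = np.c_[np.ones(len(X)), X]
    return 1.0 / (1.0 + np.exp(-X @ w))
```

In a framework like the one described, the resulting probabilities would feed the dashboards, with the clinical-expert view exposing the feature weights and the general-population view showing only the risk level.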
R. Raidou, B. Sommer, M. Meuschke, S. Voß, P. Eulzer, G. Janiga, C. Arens, R. Wickenhöfer, B. Preim, K. Lawonn
Simulations of human blood flow and airflow are playing an increasing role in personalized medicine. Comparing flow data of different treatment scenarios or before and after an intervention is important to assess treatment options and success. However, existing visualization tools are either designed for the evaluation of a single data set or limit the comparison to a few partial aspects, such as scalar fields defined on the vessel wall or internal flow patterns. Therefore, we present COMFIS, a system for the comparative visual analysis of two simulated medical flow data sets, e.g., before and after an intervention. We combine various visualization and interaction methods for comparing different aspects of the underlying, often time-dependent data. These include comparative views of different scalar fields defined on the vessel/mucous wall, comparative depictions of the underlying volume data, and comparisons of flow patterns. We evaluated COMFIS with CFD engineers and medical experts, who were able to efficiently find interesting data insights that help to assess treatment options.
"COMFIS - Comparative Visualization of Simulated Medical Flow Data." Eurographics Workshop on Visual Computing for Biomedicine, 2022, pp. 29–40. doi:10.2312/vcbm.20221185
C. Magg, L. Toussaint, L. Muren, D. Indelicato, R. Raidou
Pediatric brain tumor radiotherapy research investigates how radiation influences the development and function of a patient's brain. To better understand how brain growth is affected by the treatment, the brain structures of the patient need to be explored and analyzed pre- and post-treatment. In this way, anatomical changes are observed over a long period and assessed as potential early markers of cognitive or functional damage. In this early work, we propose an automated approach for the visual assessment of growth prediction of brain structures in pediatric brain tumor radiotherapy patients. Our approach reduces the need for re-segmentation and the time required for it. As a basis, we employ pre-treatment Computed Tomography (CT) scans with manual delineations (i.e., segmentation masks) of specific brain structures of interest. These pre-treatment masks are used as initialization to predict the corresponding masks on multiple post-treatment follow-up Magnetic Resonance (MR) images, using an active contour model approach. To quantify the accuracy of the automatically predicted post-treatment masks, a support vector regressor (SVR) with features related to geometry, intensity, and gradients is trained on the pre-treatment data. Finally, a distance transform is employed to calculate the distances between pre- and post-treatment data and to visualize the predicted growth of a brain structure, along with its respective accuracy. Although segmentations of larger structures are more accurately predicted, the growth behavior of all structures is learned correctly, as indicated by the SVR results. This suggests that our pipeline is a positive initial step toward the visual assessment of brain structure growth prediction.
CCS Concepts: • Applied computing → Life and medical sciences; • Human-centered computing → Visualization
"Visual Assessment of Growth Prediction in Brain Structures after Pediatric Radiotherapy." Eurographics Workshop on Visual Computing for Biomedicine, 2021, pp. 31–35. doi:10.2312/vcbm.20211343
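The final step, measuring how far a predicted post-treatment contour lies from the pre-treatment one, can be sketched with a brute-force 2D boundary distance (the paper uses a distance transform on 3D masks; this toy version conveys the idea on small binary images):

```python
import numpy as np

def boundary(mask):
    """Coordinates of mask pixels whose 4-neighborhood leaves the mask."""
    m = mask.astype(bool)
    pad = np.pad(m, 1)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
    return np.argwhere(m & ~interior)

def growth_distances(pre, post):
    """For each post-treatment boundary pixel, the distance to the nearest
    pre-treatment boundary pixel; visualized as growth per contour point."""
    b_pre, b_post = boundary(pre), boundary(post)
    d = np.linalg.norm(b_post[:, None, :] - b_pre[None, :, :], axis=-1)
    return d.min(axis=1)
```

Color-coding each post-treatment contour point by its distance value gives the kind of growth map the pipeline visualizes, with the SVR-estimated accuracy shown alongside.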
B. Behrendt, David Pleuss-Engelhardt, M. Gutberlet, B. Preim
Four-dimensional phase-contrast magnetic resonance imaging (4D PC-MRI) allows for the non-invasive acquisition of time-resolved blood flow measurements, providing a valuable aid to clinicians and researchers seeking a better understanding of the interrelation between pathologies of the cardiovascular system and changes in blood flow patterns. Such research requires extensive analysis and comparison of blood flow data within and between patient cohorts representing different age groups, genders, and pathologies. However, a direct comparison between large numbers of datasets is not feasible due to the complexity of the data. In this paper, we present a novel approach to normalize aortic 4D PC-MRI datasets to enable qualitative and quantitative comparisons. We define normalized coordinate systems for the vessel surface as well as the intravascular volume, allowing for the computation of quantitative measures between datasets for both hemodynamic surface parameters and flow or pressure fields. To support the understanding of the geometric deformations involved in this process, individual transformations can not only be toggled on or off, but also smoothly transitioned between anatomically faithful and fully abstracted states. In an informal interview with an expert radiologist, we confirm the usefulness of our technique. We also report on initial findings from exploring a database of 138 datasets from both patients and healthy volunteers.
CCS Concepts: • Human-centered computing → Visualization toolkits; Information visualization
"2.5D Geometric Mapping of Aortic Blood Flow Data for Cohort Visualization." Eurographics Workshop on Visual Computing for Biomedicine, 2021, pp. 91–100. doi:10.2312/vcbm.20211348
Joost Wooning, M. Benmahdjoub, T. Walsum, R. Marroquim
{"title":"AR-Assisted Craniotomy Planning for Tumour Resection","authors":"Joost Wooning, M. Benmahdjoub, T. Walsum, R. Marroquim","doi":"10.2312/vcbm.20211353","DOIUrl":"https://doi.org/10.2312/vcbm.20211353","url":null,"abstract":"","PeriodicalId":88872,"journal":{"name":"Eurographics Workshop on Visual Computing for Biomedicine","volume":"66 1","pages":"135-144"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90061574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diffusion MRI (dMRI) tractography permits the non-invasive reconstruction of major white matter tracts and is therefore widely used in neurosurgical planning and in neuroscience. However, it is affected by various sources of uncertainty. In this work, we consider the model uncertainty that arises in crossing fiber tractography from having to select between alternative mathematical models for the estimation of multiple fiber orientations in a given voxel. This type of model uncertainty is a source of instability in dMRI tractography that has not received much attention so far. We develop a mathematical framework to quantify it, based on computing posterior probabilities of competing models given the local dMRI data. Moreover, we explore a novel strategy for crossing fiber tractography, which computes tracking directions from a consensus of multiple mathematical models, each one contributing with a weight proportional to its probability. Experiments on different white matter tracts in multiple subjects indicate that reducing model uncertainty in this way increases the accuracy of crossing fiber tractography.
CCS Concepts: • Applied computing → Life and medical sciences; • Mathematics of computing → Probabilistic algorithms; • Human-centered computing → Visualization techniques
"Reducing Model Uncertainty in Crossing Fiber Tractography." Eurographics Workshop on Visual Computing for Biomedicine, 2021, pp. 55–64. doi:10.2312/vcbm.20211345
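The posterior-weighted consensus idea can be sketched as follows; BIC-based approximate posteriors and the sign-alignment step are illustrative stand-ins, since the abstract does not give the exact probability computation:

```python
import numpy as np

def model_posteriors(bic):
    """Approximate posterior model probabilities from per-model BIC values,
    assuming equal priors: posterior ∝ exp(-BIC / 2)."""
    bic = np.asarray(bic, dtype=float)
    w = np.exp(-0.5 * (bic - bic.min()))   # shift for numerical stability
    return w / w.sum()

def consensus_direction(dirs, weights):
    """Weighted consensus of per-model fiber directions.

    dirs: (M, 3) unit vectors. Fiber orientations are axial (d and -d are
    equivalent), so each direction is sign-aligned to the first before
    averaging; degenerate near-orthogonal cases are not handled here.
    """
    dirs = np.asarray(dirs, dtype=float)
    aligned = dirs * np.sign(dirs @ dirs[0])[:, None]
    v = (np.asarray(weights)[:, None] * aligned).sum(axis=0)
    return v / np.linalg.norm(v)
```

Tracking then steps along the consensus direction instead of committing to a single selected model, which is the stabilizing effect the paper measures.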
Kai Ostendorf, D. Mastrodicasa, K. Bäumler, M. Codari, V. Turner, M. Willemink, D. Fleischmann, B. Preim, G. Mistelbauer
Current blood vessel rendering usually depicts only the surface of vascular structures and does not visualize any interior structures. While this approach is suitable for most applications, certain cardiovascular diseases, such as aortic dissection, would benefit from a more comprehensive visualization. In this work, we investigate different shading styles for the visualization of the aortic inner and outer wall, including the dissection flap. Finding suitable shading algorithms, techniques, and appropriate parameters is time-consuming when practitioners fine-tune them manually. Therefore, we build a shading pipeline using well-known shading algorithms such as Blinn-Phong, Oren-Nayar, Cook-Torrance, Toon, and extended Lit-Sphere shading, combined with techniques such as the Fresnel effect and screen-space ambient occlusion. We interviewed six experts from various domains to find the best combinations of shading styles for presets that maximize user experience and applicability in clinical settings.
"Shading Style Assessment for Vessel Wall and Lumen Visualization." Eurographics Workshop on Visual Computing for Biomedicine, 2021, pp. 107–111. doi:10.2312/vcbm.20211350
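As a concrete reference point for the first of the listed models, a minimal Blinn-Phong evaluation at one surface point looks like this (a textbook sketch, not the authors' pipeline code):

```python
import numpy as np

def blinn_phong(n, l, v, base, shininess=32.0, ambient=0.1):
    """Blinn-Phong shading for one surface point.

    n, l, v: surface normal, direction to light, direction to viewer
    base:    RGB base color in [0, 1]
    Assumes l and v are not exactly opposite (the half vector would vanish).
    """
    n, l, v = (x / np.linalg.norm(x) for x in (n, l, v))
    h = (l + v) / np.linalg.norm(l + v)          # half vector
    diff = max(n.dot(l), 0.0)                    # Lambertian term
    spec = max(n.dot(h), 0.0) ** shininess       # specular highlight
    return np.clip(ambient * base + diff * base + spec, 0.0, 1.0)
```

In a study like this one, each candidate style (Oren-Nayar, Cook-Torrance, Toon, Lit-Sphere) replaces the diffuse and specular terms above, while Fresnel and ambient-occlusion factors modulate the result per pixel.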
In this paper, we propose an averaging method for expert segmentation proposals of microbial organisms, resulting in a smooth, natural-looking segmentation ground truth. The approach exploits a geometrical property of the majority of the organisms, star-shapedness, and is based on contour averaging in polar space. It is robust and computationally efficient, where robustness is due to the absence of tuneable parameters. Moreover, the algorithm preserves the uncertainty (in terms of the standard deviation) of the experts' opinions, which allows us to introduce an uncertainty-aware metric for estimating segmentation quality. This metric emphasizes the influence of ground truth regions with low variance. We study the performance of the proposed averaging method on time-lapse microscopy data of Corynebacterium glutamicum, and the uncertainty-aware metric on synthetic data.
CCS Concepts: • Applied computing → Imaging; • Computing methodologies → Image processing
"Polar Space Based Shape Averaging for Star-shaped Biological Objects." Eurographics Workshop on Visual Computing for Biomedicine, 2021, pp. 13–17. doi:10.2312/vcbm.20211340
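Polar-space contour averaging for star-shaped objects can be sketched as follows (a simplified construction: common centroid, radius-per-angle resampling, per-angle mean and standard deviation; the paper's exact center choice may differ):

```python
import numpy as np

def average_contours(contours, n_angles=360):
    """Average expert contours of a star-shaped object in polar space.

    contours: list of (N_i, 2) point sets outlining the same object.
    Returns the averaged contour points and the per-angle standard
    deviation of the radius (the preserved expert disagreement).
    """
    center = np.mean(np.vstack(contours), axis=0)   # shared center
    angles = np.linspace(-np.pi, np.pi, n_angles, endpoint=False)
    radii = []
    for c in contours:
        d = np.asarray(c, dtype=float) - center
        th = np.arctan2(d[:, 1], d[:, 0])
        r = np.hypot(d[:, 0], d[:, 1])
        order = np.argsort(th)
        # periodic interpolation of radius as a function of angle
        radii.append(np.interp(angles, th[order], r[order], period=2 * np.pi))
    mean_r = np.mean(radii, axis=0)
    std_r = np.std(radii, axis=0)
    pts = center + np.c_[mean_r * np.cos(angles), mean_r * np.sin(angles)]
    return pts, std_r
```

Star-shapedness guarantees that the radius is a single-valued function of the angle, which is exactly what makes this per-angle averaging well defined without any tuneable parameters.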