Paths Through Spatial Networks
Alex Godwin
Pub Date: 2022-10-01 | DOI: 10.1109/VIS54862.2022.00015
Spatial networks present unique challenges to understanding topological structure. Each node occupies a location in physical space (e.g., longitude and latitude), and each link may indicate either a fixed, well-described path (e.g., along streets) or only a logical connection. Placing these elements on a map maintains these physical relationships but makes it more difficult to identify topological features of interest such as clusters, cliques, and paths. While some systems provide coordinated representations of a spatial network, it is almost universally assumed that the network itself is the sole mechanism for movement across space. In this paper, I present a novel approach for exploring spatial networks by orienting them along a point or path in physical space that guides the parameters of a force-directed layout. By specifying a path across the topography, networks can be spatially filtered independently of their topology. Initial case studies indicate promising results for exploring spatial networks in transportation and energy distribution.
{"title":"Paths Through Spatial Networks","authors":"Alex Godwin","doi":"10.1109/VIS54862.2022.00015","DOIUrl":"https://doi.org/10.1109/VIS54862.2022.00015","url":null,"abstract":"Spatial networks present unique challenges to understanding topo-logical structure. Each node occupies a location in physical space (e.g., longitude and latitude) and each link may either indicate a fixed and well-described path (e.g., along streets) or a logical connection only. Placing these elements in a map maintains these physical relationships while making it more difficult to identify topological features of interest such as clusters, cliques, and paths. While some systems provide coordinated representations of a spatial network, it is almost universally assumed that the network itself is the sole mechanism for movement across space. In this paper, I present a novel approach for exploring spatial networks by orienting them along a point or path in physical space that provides the guide for parameters in a force-directed layout. By specifying a path across topography, networks can be spatially filtered independently of the topology of the network. Initial case studies indicate promising results for exploring spatial networks in transportation and energy distribution.","PeriodicalId":190244,"journal":{"name":"2022 IEEE Visualization and Visual Analytics (VIS)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114137943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visualizing Rule-based Classifiers for Clinical Risk Prognosis
D. Antweiler, G. Fuchs
Pub Date: 2022-10-01 | DOI: 10.1109/VIS54862.2022.00020
Deteriorating conditions in hospital patients are a major factor in clinical patient mortality. Currently, timely detection depends on clinical experience, expertise, and attention. However, healthcare trends toward larger patient cohorts, more data, and the desire for better, more personalized care are pushing the existing, simple scoring systems to their limits. Data-driven approaches can extract decision rules from available medical coding data; these rules offer good interpretability, which is key for successful adoption in practice. Before deployment, models need to be scrutinized by domain experts to identify errors and check them against existing medical knowledge. We propose a visual analytics system to support healthcare professionals in inspecting and enhancing rule-based classifiers through the identification of similarities and contradictions, as well as the modification of rules. This work was developed iteratively in close collaboration with medical professionals. We discuss how our tool supports the inspection and assessment of rule-based classifiers in the clinical coding domain and propose possible extensions.
{"title":"Visualizing Rule-based Classifiers for Clinical Risk Prognosis","authors":"D. Antweiler, G. Fuchs","doi":"10.1109/VIS54862.2022.00020","DOIUrl":"https://doi.org/10.1109/VIS54862.2022.00020","url":null,"abstract":"Deteriorating conditions in hospital patients are a major factor in clinical patient mortality. Currently, timely detection is based on clinical experience, expertise, and attention. However, healthcare trends towards larger patient cohorts, more data, and the desire for better and more personalized care are pushing the existing, simple scoring systems to their limits. Data-driven approaches can extract decision rules from available medical coding data, which offer good interpretability and thus are key for successful adoption in practice. Before deployment, models need to be scrutinized by domain experts to identify errors and check them against existing medical knowledge. We propose a visual analytics system to support health-care professionals in inspecting and enhancing rule-based classifier through identification of similarities and contradictions, as well as modification of rules. This work was developed iteratively in close collaboration with medical professionals. We discuss how our tool supports the inspection and assessment of rule-based classifiers in the clinical coding domain and propose possible extensions.","PeriodicalId":190244,"journal":{"name":"2022 IEEE Visualization and Visual Analytics (VIS)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121943929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Volume Puzzle: visual analysis of segmented volume data with multivariate attributes
Marco Agus, A. Aboulhassan, K. Al Thelaya, G. Pintore, E. Gobbetti, C. Calì, J. Schneider
Pub Date: 2022-10-01 | DOI: 10.1109/VIS54862.2022.00035
A variety of application domains, including materials science, neuroscience, and connectomics, commonly use segmented volume data for exploratory visual analysis. In many cases, segmented objects are characterized by multivariate attributes expressing specific geometric or physical features. Objects with similar characteristics, determined by selected attribute configurations, can create distinctive spatial patterns whose detection and study are of fundamental importance. This task is notoriously difficult, especially when the number of attributes per segment is large. In this work, we propose an interactive framework that combines a state-of-the-art direct volume renderer for categorical volumes with techniques for analyzing the attribute space and for automatically creating 2D transfer functions. We show, in particular, how dimensionality reduction, kernel density estimation, and topological techniques such as Morse analysis, combined with scatter and density plots, allow the efficient design of two-dimensional color maps that highlight spatial patterns. The capabilities of our framework are demonstrated on synthetic and real-world data from several domains.
{"title":"Volume Puzzle: visual analysis of segmented volume data with multivariate attributes","authors":"Marco Agus, A. Aboulhassan, K. Al Thelaya, G. Pintore, E. Gobbetti, C. Calì, J. Schneider","doi":"10.1109/VIS54862.2022.00035","DOIUrl":"https://doi.org/10.1109/VIS54862.2022.00035","url":null,"abstract":"A variety of application domains, including material science, neuroscience, and connectomics, commonly use segmented volume data for exploratory visual analysis. In many cases, segmented objects are characterized by multivariate attributes expressing specific geometric or physical features. Objects with similar characteristics, determined by selected attribute configurations, can create peculiar spatial patterns, whose detection and study is of fundamental importance. This task is notoriously difficult, especially when the number of attributes per segment is large. In this work, we propose an interactive framework that combines a state-of-the-art direct volume renderer for categorical volumes with techniques for the analysis of the attribute space and for the automatic creation of 2D transfer function. We show, in particular, how dimensionality reduction, kernel-density estimation, and topological techniques such as Morse analysis combined with scatter and density plots allow the efficient design of two-dimensional color maps that highlight spatial patterns. The capabilities of our framework are demonstrated on synthetic and real-world data from several domains.","PeriodicalId":190244,"journal":{"name":"2022 IEEE Visualization and Visual Analytics (VIS)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114508198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intentable: A Mixed-Initiative System for Intent-Based Chart Captioning
Ji-Won Choi, Jaemin Jo
Pub Date: 2022-10-01 | DOI: 10.1109/VIS54862.2022.00017
We present Intentable, a mixed-initiative caption authoring system that allows the author to steer an automatic caption generation process to reflect their intent, e.g., a finding that the author gained from the visualization and wants the caption to convey. We first derive a grammar for specifying the intent, i.e., a caption recipe, and build a neural network that generates caption sentences given a recipe. Our quantitative evaluation revealed that our intent-based generation system not only allows the author to engage in the generation process but also produces more fluent captions than previous end-to-end approaches without user intervention. Finally, we demonstrate the versatility of our system with examples such as context adaptation, unit conversion, and sentence reordering.
{"title":"Intentable: A Mixed-Initiative System for Intent-Based Chart Captioning","authors":"Ji-Won Choi, Jaemin Jo","doi":"10.1109/VIS54862.2022.00017","DOIUrl":"https://doi.org/10.1109/VIS54862.2022.00017","url":null,"abstract":"We present Intentable, a mixed-initiative caption authoring system that allows the author to steer an automatic caption generation pro-cess to reflect their intent, e.g., the finding that the author gained from visualization and thus wants to write a caption for. We first derive a grammar for specifying the intent, i.e., a caption recipe, and build a neural network that generates caption sentences given a recipe. Our quantitative evaluation revealed that our intent-based generation system not only allows the author to engage in the generation process but also produces more fluent captions than the previous end-to-end approaches without user intervention. Finally, we demonstrate the versatility of our system, such as context adaptation, unit conversion, and sentence reordering.","PeriodicalId":190244,"journal":{"name":"2022 IEEE Visualization and Visual Analytics (VIS)","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128200587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parametric Dimension Reduction by Preserving Local Structure
Chien-Hsun Lai, M. Kuo, Yun-Hsuan Lien, Kuan-An Su, Yu-Shuen Wang
Pub Date: 2022-10-01 | DOI: 10.1109/VIS54862.2022.00024
We extend a well-known dimension reduction method, t-distributed stochastic neighbor embedding (t-SNE), from non-parametric to parametric by training neural networks. The main advantage of a parametric technique is that it generalizes to new data, which is beneficial for streaming data visualization. While previous parametric methods require either pre-training the network with a restricted Boltzmann machine or intermediate results obtained from the traditional non-parametric t-SNE, we found that recent network training techniques enable direct optimization of the t-SNE objective function. Accordingly, our method achieves high embedding quality while retaining generalization. Owing to mini-batch network training, our parametric dimension reduction method is also highly efficient. For evaluation, we compared our method to several baselines on a variety of datasets. Experimental results demonstrate the feasibility of our method. The source code is available at https://github.com/a07458666/parametric_dr.
{"title":"Parametric Dimension Reduction by Preserving Local Structure","authors":"Chien-Hsun Lai, M. Kuo, Yun-Hsuan Lien, Kuan-An Su, Yu-Shuen Wang","doi":"10.1109/VIS54862.2022.00024","DOIUrl":"https://doi.org/10.1109/VIS54862.2022.00024","url":null,"abstract":"We extend a well-known dimension reduction method, t-distributed stochastic neighbor embedding (t-SNE), from non-parametric to parametric by training neural networks. The main advantage of a parametric technique is the generalization of handling new data, which is beneficial for streaming data visualization. While previous parametric methods either require a network pre-training by the restricted Boltzmann machine or intermediate results obtained from the traditional non-parametric t-SNE, we found that recent network training skills can enable a direct optimization for the t-SNE objective function. Accordingly, our method achieves high embedding quality while enjoying generalization. Due to mini-batch network training, our parametric dimension reduction method is highly efficient. For evaluation, we compared our method to several baselines on a variety of datasets. Experiment results demonstrate the feasibility of our method. The source code is available at https://github.com/a07458666/parametric_dr.","PeriodicalId":190244,"journal":{"name":"2022 IEEE Visualization and Visual Analytics (VIS)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134522186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The role of extended reality for planning coronary artery bypass graft surgery
M. Vardhan, Harvey Shi, David Urick, M. Patel, J. Leopold, A. Randles
Pub Date: 2022-10-01 | DOI: 10.1109/VIS54862.2022.00032
Immersive visual displays are becoming more common in the diagnostic imaging and pre-procedural planning of complex cardiology revascularization surgeries. One such procedure is coronary artery bypass grafting (CABG) surgery, a gold-standard treatment for patients with advanced coronary heart disease. Treatment planning for CABG surgery can be aided by extended reality (XR) displays, which are known to offer advantageous visualization of spatially heterogeneous and complex tasks. Despite the benefits of XR, it remains unknown whether clinicians benefit from the higher visual immersion that XR offers. To assess the impact of increased immersion as well as the latent factor of geometric complexity, a quantitative user evaluation (n = 14) was performed with clinicians of advanced cardiology training, who simulated CABG placement on sixteen 3D arterial tree models derived from six patients at two levels of anatomic complexity. These arterial models were rendered in 3D/XR and 2D display modes with the same tactile interaction input device. The findings of this study reveal that, compared to a monoscopic 2D display, the greater visual immersion of 3D/XR does not significantly alter clinician accuracy in the task of bypass graft placement. Latent factors such as arterial complexity and clinical experience both influence the accuracy of graft placement. In addition, an anatomically less complex model
{"title":"The role of extended reality for planning coronary artery bypass graft surgery","authors":"M. Vardhan, Harvey Shi, David Urick, M. Patel, J. Leopold, A. Randles","doi":"10.1109/VIS54862.2022.00032","DOIUrl":"https://doi.org/10.1109/VIS54862.2022.00032","url":null,"abstract":"Immersive visual displays are becoming more common in the diagnostic imaging and pre-procedural planning of complex cardiology revascularization surgeries. One such procedure is coronary artery bypass grafting (CABG) surgery, which is a gold standard treat-ment for patients with advanced coronary heart disease. Treatment planning of the CABG surgery can be aided by extended reality (XR) displays as they are known for offering advantageous visual-ization of spatially heterogeneous and complex tasks. Despite the benefits of XR, it remains unknown whether clinicians will benefit from higher visual immersion offered by XR. In order to assess the impact of increased immersion as well as the latent factor of geometrical complexity, a quantitative user evaluation $(mathrm{n}=14)$ was performed with clinicians of advanced cardiology training simulating CABG placement on sixteen 3D arterial tree models derived from 6 patients two levels of anatomic complexity. These arterial models were rendered on 3D/XR and 2D display modes with the same tactile interaction input device. The findings of this study reveal that compared to a monoscopic 2D display, the greater visual immersion of 3D/XR does not significantly alter clinician accuracy in the task of bypass graft placement. Latent factors such as arterial complexity and clinical experience both influence the accuracy of graft placement. In addition, an anatomically less complex model","PeriodicalId":190244,"journal":{"name":"2022 IEEE Visualization and Visual Analytics (VIS)","volume":"222 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115532395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Explaining Website Reliability by Visualizing Hyperlink Connectivity
Seongmin Lee, Sadia Afroz, Haekyu Park, Zijie J. Wang, Omar Shaikh, Vibhor Sehgal, Ankit Peshin, Duen Horng Chau
Pub Date: 2022-10-01 | DOI: 10.1109/VIS54862.2022.00014
As information on the Internet continues to grow exponentially, understanding and assessing the reliability of a website is becoming increasingly important. Misinformation has far-ranging repercussions, from sowing mistrust in media to undermining democratic elections. While some research investigates how to alert people to misinformation on the web, much less research has been conducted on explaining how websites engage in spreading false information. To fill this research gap, we present MisVis, a web-based interactive visualization tool that helps users assess a website's reliability by showing how it engages in spreading false information on the World Wide Web. MisVis visualizes the hyperlink connectivity of the website and summarizes key characteristics of the Twitter accounts that mention the site. A large-scale user study with 139 participants demonstrates that MisVis helps users assess and understand false information on the web and that node-link diagrams can be used to communicate with non-experts. MisVis is available at the public demo link: https://poloclub.github.io/MisVis.
{"title":"Explaining Website Reliability by Visualizing Hyperlink Connectivity","authors":"Seongmin Lee, Sadia Afroz, Haekyu Park, Zijie J. Wang, Omar Shaikh, Vibhor Sehgal, Ankit Peshin, Duen Horng Chau","doi":"10.1109/VIS54862.2022.00014","DOIUrl":"https://doi.org/10.1109/VIS54862.2022.00014","url":null,"abstract":"As the information on the Internet continues growing exponentially, understanding and assessing the reliability of a website is becoming increasingly important. Misinformation has far-ranging repercussions, from sowing mistrust in media to undermining democratic elections. While some research investigates how to alert people to misinformation on the web, much less research has been conducted on explaining how websites engage in spreading false information. To fill the research gap, we present MisVis, a web-based interactive visualization tool that helps users assess a website's reliability by understanding how it engages in spreading false information on the World Wide Web. MisVis visualizes the hyperlink connectivity of the website and summarizes key characteristics of the Twitter accounts that mention the site. A large-scale user study with 139 participants demonstrates that MisVis facilitates users to assess and understand false information on the web and node-link diagrams can be used to communicate with non-experts. MisVis is available at the public demo link: https://poloclub.github.io/MisVis.","PeriodicalId":190244,"journal":{"name":"2022 IEEE Visualization and Visual Analytics (VIS)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114483092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}