Mixed-Initiative Approach to Extract Data from Pictures of Medical Invoice
Seokweon Jung, Kiroong Choe, Seokhyeon Park, Hyung-Kwon Ko, Youngtaek Kim, Jinwook Seo
2021 IEEE 14th Pacific Visualization Symposium (PacificVis)
Pub Date: 2021-04-01 | DOI: 10.1109/PacificVis52677.2021.00022
Extracting data from pictures of medical records is a common task in the insurance industry, as patients often send medical invoices photographed with smartphone cameras. However, the overall process remains difficult to fully automate because of low image quality and the wide variation of invoice templates in use. In this paper, we propose a mixed-initiative pipeline for extracting data from pictures of medical invoices, in which deep-learning-based automatic prediction models and task-specific heuristics work together under the mediation of a user. In a user study with 12 participants, we confirmed that our mixed-initiative approach can compensate for the drawbacks of a fully automated approach within an acceptable completion time. We further discuss findings, limitations, and future work for designing a mixed-initiative system that extracts data from pictures of complicated tables.
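The mediation idea described above can be pictured as confidence-based routing: automatic predictions are accepted when the model is confident, and low-confidence fields are deferred to the user. This is a minimal sketch of that general pattern; all names and the threshold are hypothetical, not the authors' actual pipeline.

```python
def route_fields(predictions, threshold=0.9, ask_user=None):
    """predictions: list of (field_name, value, confidence) tuples.

    Accept high-confidence model predictions automatically; defer the
    rest to the user (mixed initiative). Returns the accepted fields
    and the list of fields that needed human review.
    """
    accepted, deferred = {}, []
    for name, value, conf in predictions:
        if conf >= threshold:
            accepted[name] = value          # trust the model
        else:
            deferred.append((name, value))  # route to the user
    if ask_user is not None:
        for name, suggestion in deferred:
            # the user confirms or corrects the model's suggestion
            accepted[name] = ask_user(name, suggestion)
    return accepted, deferred

preds = [("total", "12,300", 0.97), ("date", "2021-04-01", 0.55)]
auto, manual = route_fields(preds, ask_user=lambda name, suggestion: suggestion)
```

Here the lambda stands in for an interactive confirmation step; in a real system the deferred fields would be shown in a review UI.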
Louvain-based Multi-level Graph Drawing
Seok-Hee Hong, P. Eades, Marnijati Torkel, James Wood, Kunsoo Park
Pub Date: 2021-04-01 | DOI: 10.1109/PacificVis52677.2021.00028
Multi-level graph drawing is a popular approach for visualizing large and complex graphs: it recursively coarsens a graph and then uncoarsens the drawing using layout refinement. In this paper, we leverage the Louvain community detection algorithm for the multi-level graph drawing paradigm. More specifically, we present a Louvain-based multi-level graph drawing algorithm and compare it with algorithms based on other community detection methods, such as Label Propagation and Infomap clustering. Experiments show that the Louvain-based multi-level algorithm performs best in terms of efficiency (i.e., fastest runtime), while the Label Propagation- and Infomap-based multi-level algorithms perform better in terms of effectiveness (i.e., better visualization quality metrics).
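The coarsen/uncoarsen scheme described above can be sketched as a small skeleton. A real implementation would use Louvain communities for the clustering step; here a trivial "pair adjacent nodes" placeholder stands in, purely to show the pipeline shape, and the refinement step is omitted.

```python
import random

def trivial_clusters(nodes, edges):
    # Placeholder for Louvain: greedily merge each edge's endpoints
    # into one cluster, then give leftover nodes their own cluster.
    cluster, next_id = {}, 0
    for u, v in edges:
        if u not in cluster and v not in cluster:
            cluster[u] = cluster[v] = next_id
            next_id += 1
    for n in nodes:
        if n not in cluster:
            cluster[n] = next_id
            next_id += 1
    return cluster

def coarsen(nodes, edges, cluster):
    # One coarse node per cluster; coarse edges between distinct clusters.
    coarse_nodes = sorted(set(cluster.values()))
    coarse_edges = {(min(cluster[u], cluster[v]), max(cluster[u], cluster[v]))
                    for u, v in edges if cluster[u] != cluster[v]}
    return coarse_nodes, sorted(coarse_edges)

def layout(nodes, seed=0):
    # Stand-in for a force-directed layout of the coarse graph.
    rng = random.Random(seed)
    return {n: (rng.random(), rng.random()) for n in nodes}

def uncoarsen(cluster, coarse_pos, jitter=0.01, seed=0):
    # Place each original node near its cluster's coarse position;
    # a layout-refinement pass (omitted) would then polish the result.
    rng = random.Random(seed)
    return {n: (coarse_pos[c][0] + rng.uniform(-jitter, jitter),
                coarse_pos[c][1] + rng.uniform(-jitter, jitter))
            for n, c in cluster.items()}

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3)]
cl = trivial_clusters(nodes, edges)
cn, ce = coarsen(nodes, edges, cl)
pos = uncoarsen(cl, layout(cn))
```

In a multi-level algorithm this coarsen step is applied recursively until the graph is small, and the uncoarsen/refine step is applied on the way back up.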
Sublinear-Time Attraction Force Computation for Large Complex Graph Drawing
A. Meidiana, Seok-Hee Hong, Shijun Cai, Marnijati Torkel, P. Eades
Pub Date: 2021-04-01 | DOI: 10.1109/PacificVis52677.2021.00027
Recent work in graph visualization reduces the runtime of the repulsion force computation of force-directed algorithms using sampling; however, it fails to reduce the runtime of the attraction force computation to sublinear in the number of edges. We present new sublinear-time algorithms for the attraction force computation of force-directed algorithms and integrate them with sublinear-time repulsion force computation. Extensive experiments show that our algorithms, operated as part of a fully sublinear-time force computation framework, compute graph layouts on average 80% faster than an existing linear-time force computation algorithm, with significantly better quality on edge crossing and shape-based metrics.
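The general edge-sampling idea can be sketched as follows: instead of iterating over all m edges per iteration, draw a fixed-size sample of k edges and scale the resulting spring forces by m/k, keeping the per-iteration cost independent of m. This illustrates the technique in general, not the authors' specific algorithms.

```python
import random

def sampled_attraction(pos, edges, k, strength=0.1, rng=None):
    """Estimate attraction forces from a random sample of k edges.

    pos: node -> (x, y); edges: list of (u, v). Scaling by len(edges)/k
    makes the sampled sum an unbiased estimate of the full force sum.
    """
    rng = rng or random.Random(0)
    forces = {n: [0.0, 0.0] for n in pos}
    sample = [edges[rng.randrange(len(edges))] for _ in range(k)]
    scale = len(edges) / k
    for u, v in sample:
        dx = pos[v][0] - pos[u][0]
        dy = pos[v][1] - pos[u][1]
        fx, fy = strength * scale * dx, strength * scale * dy
        forces[u][0] += fx; forces[u][1] += fy   # pull u toward v
        forces[v][0] -= fx; forces[v][1] -= fy   # and v toward u
    return forces

pos = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.0, 1.0)}
edges = [(0, 1), (0, 2), (1, 2)]
f = sampled_attraction(pos, edges, k=2)
```

Because every sampled edge applies equal and opposite forces, the forces sum to zero regardless of the sample, which is a cheap sanity check for such estimators.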
Papers101: Supporting the Discovery Process in the Literature Review Workflow for Novice Researchers
Kiroong Choe, Seokweon Jung, Seokhyeon Park, Hwajung Hong, Jinwook Seo
Pub Date: 2021-04-01 | DOI: 10.1109/PacificVis52677.2021.00037
A literature review is a critical task in performing research. However, even browsing an academic database and choosing must-read items can be daunting for novice researchers. In this paper, we introduce Papers101, an interactive system that supports novice researchers' discovery of papers relevant to their research topics. Prior to system design, we performed a formative study to investigate which difficulties novice researchers often face and how experienced researchers address them. We found that novice researchers have difficulty identifying appropriate search terms, choosing which papers to read first, and ensuring that they have examined enough candidates. From this, we identified key requirements for a system dedicated to novices: prioritizing search results, unifying the contexts of multiple search results, and refining and validating search queries. Accordingly, Papers101 provides an opinionated perspective on selecting important metadata among papers. It also visualizes how the priority among papers develops along with the user's knowledge discovery process. Finally, we demonstrate the potential usefulness of our system with a case study on a metadata collection of papers from the visualization and HCI communities.
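The kind of "opinionated" prioritization described above can be imagined as a weighted score over paper metadata. The fields, weights, and caps below are hypothetical illustrations of the idea, not the actual Papers101 ranking.

```python
def score(paper, query_terms, w_cite=1.0, w_recent=2.0, w_match=5.0):
    """Combine citation count, recency, and title/query match into one score."""
    cite = min(paper["citations"], 100) / 100            # cap runaway citation counts
    recent = max(0, paper["year"] - 2000) / 25           # crude recency signal
    title = paper["title"].lower()
    match = sum(t.lower() in title for t in query_terms) / max(len(query_terms), 1)
    return w_cite * cite + w_recent * recent + w_match * match

papers = [
    {"title": "Graph Drawing Basics", "year": 2005, "citations": 300},
    {"title": "Deep Learning for Graph Layout", "year": 2020, "citations": 40},
]
ranked = sorted(papers, key=lambda p: score(p, ["graph", "layout"]), reverse=True)
```

With the query terms "graph layout", the recent, fully matching paper outranks the heavily cited but only partially matching one, which is the kind of trade-off such a weighting encodes.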
A Machine Learning Approach for Predicting Human Preference for Graph Layouts
Shijun Cai, Seok-Hee Hong, Jialiang Shen, Tongliang Liu
Pub Date: 2021-03-01 | DOI: 10.1109/PacificVis52677.2021.00009
Understanding which graph layouts humans prefer, and why, is important yet challenging due to the highly complex visual perception and cognition systems of the human brain. In this paper, we present the first machine learning approach for predicting human preference for graph layouts. In general, data sets with human preference labels are limited and insufficient for training deep networks. To address this, we train our deep learning model using transfer learning, e.g., exploiting quality metrics such as shape-based metrics, edge crossing, and stress, which have been shown to correlate with human preference for graph layouts. Experimental results using ground-truth human preference data sets show that our model can successfully predict human preference for graph layouts. To the best of our knowledge, this is the first approach for predicting qualitative evaluations of graph layouts using human preference experiment data.
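One of the quality metrics named above, stress, measures how well pairwise layout distances match graph-theoretic distances; a model can be pre-trained on such metrics before fine-tuning on scarce preference labels. Below is a minimal, standard formulation of normalized stress for illustration; it is not taken from the paper.

```python
import math

def stress(pos, graph_dist):
    """Normalized stress of a layout.

    pos: node -> (x, y); graph_dist: dict mapping (u, v) to the
    graph-theoretic (shortest-path) distance between u and v.
    Lower is better; 0 means layout distances match exactly.
    """
    total = 0.0
    for (u, v), d in graph_dist.items():
        layout_d = math.dist(pos[u], pos[v])
        total += (layout_d - d) ** 2 / d ** 2   # squared residual, normalized by d^2
    return total

# A path 0-1-2 drawn perfectly on a line has zero stress.
pos = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (2.0, 0.0)}
gd = {(0, 1): 1, (1, 2): 1, (0, 2): 2}
```

Distorting any node position away from the ideal spacing makes the stress strictly positive, so the metric cleanly separates "faithful" from "distorted" layouts.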
Context-Responsive Labeling in Augmented Reality
Thomas Köppel, E. Gröller, Hsiang-Yun Wu
Pub Date: 2021-02-15 | DOI: 10.1109/PacificVis52677.2021.00020
Route planning and navigation are common tasks that often require additional information on points of interest. Augmented Reality (AR) enables mobile users to view text labels that provide a composite view with additional information in a real-world environment. Nonetheless, displaying all labels for points of interest on a mobile device leads to unwanted overlaps between pieces of information, so a context-responsive strategy for properly arranging labels is needed. Such a technique should remove overlaps, show the right level of detail, and maintain label coherence. This is necessary because the viewing angle in an AR system may change rapidly with users' behavior. Coherence plays an essential role in retaining user experience and knowledge, as well as in avoiding motion sickness. In this paper, we develop an approach that systematically manages label visibility and levels of detail, and eliminates unexpected incoherent movement. We introduce three label management strategies: (1) occlusion management, (2) level-of-detail management, and (3) coherence management that balances the usage of the mobile phone screen. A greedy approach is developed for fast occlusion handling in AR. A level-of-detail scheme is adopted to arrange the various types of labels. A 3D scene manipulation is then built to simultaneously suppress the incoherent behaviors induced by viewing angle changes. Finally, we present the feasibility and applicability of our approach through one synthetic and two real-world scenarios, followed by a qualitative user study.
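A greedy occlusion-handling step of the kind mentioned above can be sketched simply: process labels in priority order and keep a label only if its screen rectangle does not intersect any already-kept rectangle. This is the generic greedy pattern, not the authors' exact algorithm, and the priority values here are made up.

```python
def overlaps(a, b):
    """Axis-aligned rectangle intersection; rects are (x0, y0, x1, y1)."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def greedy_place(labels):
    """labels: list of (priority, rect). Higher priority wins conflicts."""
    placed = []
    for _, rect in sorted(labels, key=lambda l: -l[0]):
        if not any(overlaps(rect, r) for r in placed):
            placed.append(rect)   # label fits: keep it
        # else: suppressed by a higher-priority label
    return placed

labels = [(3, (0, 0, 2, 1)), (2, (1, 0, 3, 1)), (1, (4, 0, 5, 1))]
kept = greedy_place(labels)
```

A single pass over n labels with pairwise checks is quadratic in the worst case; for real-time AR a spatial index would typically replace the inner `any` scan.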
Mapper Interactive: A Scalable, Extendable, and Interactive Toolbox for the Visual Exploration of High-Dimensional Data
Youjia Zhou, N. Chalapathi, Archit Rathore, Yaodong Zhao, Bei Wang
Pub Date: 2020-11-06 | DOI: 10.1109/PacificVis52677.2021.00021
The mapper algorithm is a popular tool from topological data analysis for extracting topological summaries of high-dimensional datasets. In this paper, we present Mapper Interactive, a web-based framework for the interactive analysis and visualization of high-dimensional point cloud data. It implements the mapper algorithm in an interactive, scalable, and easily extendable way, thus supporting practical data analysis. In particular, its command-line API can compute mapper graphs for 1 million points of 256 dimensions in about 3 minutes (4 times faster than the vanilla implementation). Its visual interface allows on-the-fly computation and manipulation of the mapper graph based on user-specified parameters and supports the addition of new analysis modules with a few lines of code. Mapper Interactive makes the mapper algorithm accessible to nonspecialists and accelerates topological analytics workflows.
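For readers unfamiliar with the mapper algorithm itself, here is a bare-bones 1-D sketch: cover the range of a lens (filter) function with overlapping intervals, cluster the points falling in each interval, and connect clusters that share points. The clustering below is a crude "split at large gaps" stand-in that assumes `points` is sorted ascending; Mapper Interactive's actual implementation is far more general.

```python
def mapper_1d(points, lens, n_intervals=4, overlap=0.25, gap=1.0):
    """Toy mapper on 1-D data with a 1-D lens.

    Returns (nodes, edges): nodes are clusters (lists of point indices),
    edges connect nodes that share at least one point.
    """
    lo, hi = min(lens), max(lens)
    step = (hi - lo) / n_intervals
    nodes = []
    for i in range(n_intervals):
        # overlapping cover interval
        a = lo + i * step - overlap * step
        b = lo + (i + 1) * step + overlap * step
        idx = sorted(j for j, f in enumerate(lens) if a <= f <= b)
        # cluster the preimage: split wherever consecutive points
        # are further apart than `gap`
        cluster = []
        for j in idx:
            if cluster and points[j] - points[cluster[-1]] > gap:
                nodes.append(cluster)
                cluster = []
            cluster.append(j)
        if cluster:
            nodes.append(cluster)
    edges = {(p, q) for p in range(len(nodes)) for q in range(p + 1, len(nodes))
             if set(nodes[p]) & set(nodes[q])}
    return nodes, sorted(edges)

# Two well-separated groups of points produce two disconnected mapper nodes.
pts = [0.0, 0.5, 1.0, 5.0, 5.5, 6.0]
nodes, edges = mapper_1d(pts, pts, n_intervals=2, overlap=0.2)
```

Real mapper implementations work on high-dimensional point clouds with arbitrary lens functions and clusterers (e.g., DBSCAN); the overlap is what lets the resulting graph capture connectivity.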
FiberStars: Visual Comparison of Diffusion Tractography Data between Multiple Subjects
Loraine Franke, D. Weidele, Fan Zhang, Suheyla Cetin Karayumak, Steve Pieper, L. O’Donnell, Y. Rathi, D. Haehn
Pub Date: 2020-05-16 | DOI: 10.1109/PacificVis52677.2021.00023
Tractography from high-dimensional diffusion magnetic resonance imaging (dMRI) data enables analysis of the brain's structural connectivity. Recent dMRI studies aim to compare connectivity patterns across subject groups and disease populations to understand subtle abnormalities in the brain's white matter connectivity and in the distributions of biologically sensitive dMRI-derived metrics. Existing software products focus solely on the anatomy, are not intuitive, or restrict the comparison of multiple subjects. In this paper, we present the design and implementation of FiberStars, a visual analysis tool for tractography data that allows the interactive visualization of brain fiber clusters, combining existing 3D anatomy with compact 2D visualizations. With FiberStars, researchers can analyze and compare multiple subjects in large collections of brain fibers using different views. To evaluate the usability of our software, we performed a quantitative user study: we asked domain experts and non-experts to find patterns in a tractography dataset with either FiberStars or an existing dMRI exploration tool. Our results show that participants using FiberStars can navigate extensive collections of tractography data faster and more accurately. All our research, software, and results are openly available.
Stable Visual Summaries for Trajectory Collections
J. Wulms, J. Buchmüller, Wouter Meulemans, Kevin Verbeek, B. Speckmann
Pub Date: 2019-12-02 | DOI: 10.1109/PacificVis52677.2021.00016
The availability of devices that track moving objects has led to an explosive growth in trajectory data. When exploring the resulting large trajectory collections, visual summaries are a useful tool for identifying time intervals of interest. A typical approach is to represent the spatial positions of the tracked objects at each time step via a one-dimensional ordering; visualizations of such orderings can then be placed in temporal order along a timeline. There are two main criteria for assessing the quality of the resulting visual summary: spatial quality (how well does the ordering capture the structure of the data at each time step?) and stability (how coherent are the orderings over consecutive time steps or temporal ranges?). In this paper, we introduce a new Stable Principal Component (SPC) method for computing such orderings, which is explicitly parameterized for stability, allowing a trade-off between spatial quality and stability. We conduct extensive computational experiments that quantitatively compare the orderings produced by our and other stable dimensionality-reduction methods to various state-of-the-art approaches, using a set of well-established quality metrics that capture spatial quality and stability. We conclude that stable dimensionality reduction outperforms existing methods on stability without sacrificing spatial quality or efficiency; in particular, our new SPC method does so at a fraction of the computational cost.
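The two criteria above can be made concrete with a plain (non-stabilized) baseline: at each time step, order objects by their projection onto the first principal axis of the 2-D positions, and measure (in)stability as the number of pairwise order swaps between consecutive time steps. This sketches the baseline the SPC method improves on, not SPC itself, which additionally parameterizes the axis for stability.

```python
import math

def pca_order(positions):
    """Order point indices by projection onto the first principal axis.

    Uses the closed-form angle of the leading eigenvector of the
    2x2 covariance matrix: theta = 0.5 * atan2(2*cxy, cxx - cyy).
    """
    n = len(positions)
    mx = sum(x for x, _ in positions) / n
    my = sum(y for _, y in positions) / n
    cxx = sum((x - mx) ** 2 for x, _ in positions)
    cyy = sum((y - my) ** 2 for _, y in positions)
    cxy = sum((x - mx) * (y - my) for x, y in positions)
    theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)
    ax, ay = math.cos(theta), math.sin(theta)
    proj = [(x * ax + y * ay, i) for i, (x, y) in enumerate(positions)]
    return [i for _, i in sorted(proj)]

def inversions(order_a, order_b):
    """Count pairwise order swaps between two orderings (Kendall-tau distance)."""
    rank = {obj: r for r, obj in enumerate(order_b)}
    seq = [rank[obj] for obj in order_a]
    return sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
               if seq[i] > seq[j])

# Three objects drifting slightly between two time steps: same ordering.
t0 = pca_order([(0.0, 0.0), (1.0, 0.1), (2.0, 0.0)])
t1 = pca_order([(0.1, 0.0), (1.1, 0.0), (2.1, 0.1)])
```

Zero inversions between consecutive orderings means a perfectly stable summary; a stability-aware method trades a little per-step spatial quality to keep this count low over the whole time line.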