The role of individual characteristics: How thinking style and domain expertise affect performances on visualization
S. Tomasi, Jeanny Liu, Feng Cheng, Chaodong Han
Pub Date: 2023-04-12. DOI: 10.1177/14738716231167180. Information Visualization, 22(1): 265-276.

Widely employed by innovative organizations, a well-designed simple data visualization has been shown to enhance user experience and aid decision making; a more embellished visualization, while it may cause overload, has the potential to create deeper processing and learning. Furthermore, individual characteristics may affect how users seek information from these different types of visualization. This study proposes that thinking styles (analytical vs holistic) and domain expertise moderate the effects of data visualization types on decision performance in terms of decision accuracy, decision confidence, memory recall, and cognitive load. To test our hypotheses, an experimental study involving visual manipulations in the context of personal finance was conducted on two types of visualizations (simple and cluttered). Results suggest that simple visualizations enhance decision accuracy and reduce cognitive load. We also find that cognitive load is further reduced when analytical thinkers are presented with simple visualizations. These findings can help designers understand how user characteristics may be considered when designing and evaluating visualizations for decision makers.
Visualizing the recovery of patients in Critical Care Units
L. Stuart, Christopher Haynes, K. Tantam, R. Gardner, Marco A. Palomino
Pub Date: 2023-03-21. DOI: 10.1177/14738716231158046. Information Visualization, 22(1): 209-222.

This paper presents a detailed case study of applying information visualization techniques to data collected in Critical Care Units (CCUs). This data is heterogeneous and sometimes incomplete due to the pressures on staff in the environment, so it can be difficult to visualize meaningfully by conventional means. The paper presents CCViews, a software tool developed to support visualization of CCU data. It enables clinicians to view the trajectory of patient recovery and track the effectiveness of different interventions, such as physiotherapy. This work is underpinned by the well-known visual information-seeking mantra, which emphasizes the need to provide users with views of their data at differing levels of granularity.
TimberSleuth: Visual anomaly detection with human feedback for mitigating the illegal timber trade
Debanjan Datta, Nathan Self, J. Simeone, A. Meadows, Willow Outhwaite, Linda Walker, N. Elmqvist, Naren Ramkrishnan
Pub Date: 2023-03-10. DOI: 10.1177/14738716231157081. Information Visualization, 22(1): 223-245.

Detecting illegal shipments in the global timber trade poses a massive challenge to enforcement agencies. The volume and complexity of timber shipments, and obfuscations within international trade data, intentional or not, necessitate an automated system to aid in detecting specific shipments that potentially contain illegally harvested wood. To address these requirements, we build a novel human-in-the-loop visual analytics system called TimberSleuth. TimberSleuth uses a novel scoring model, reinforced through human feedback, to improve the relevance of the system's results while using an off-the-shelf anomaly detection model. Detailed evaluation is performed using real data with synthetic anomalies to test the machine intelligence that drives the system. We design interactive visualizations to enable analysis of pertinent details of anomalous trade records so that analysts can determine whether a record is relevant and provide iterative feedback. This feedback is utilized by the machine learning model to improve the precision of the output.
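The abstract does not spell out how analyst feedback feeds back into the scoring model; as a generic illustration of the human-in-the-loop re-scoring pattern it describes (the multiplicative update rule, the record schema, and all names below are our assumptions, not TimberSleuth's actual model):

```python
def rank_records(records, base_score, weights):
    """Rank records by an off-the-shelf anomaly score scaled by a learned relevance weight."""
    scored = [(base_score(r) * weights.get(r["category"], 1.0), r) for r in records]
    return [r for _, r in sorted(scored, key=lambda t: -t[0])]

def apply_feedback(weights, record, relevant, lr=0.5):
    """Analyst feedback nudges the weight of the record's category up or down."""
    cat = record["category"]
    w = weights.get(cat, 1.0)
    weights[cat] = w * (1 + lr) if relevant else w * (1 - lr)
    return weights
```

In this sketch, records an analyst marks irrelevant drag down the ranking of similar records on the next pass, which is the general mechanism behind feedback-driven relevance tuning.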
Is embodied interaction beneficial? A study on navigating network visualizations
Helen H. Huang, Hanspeter Pfister, Yalong Yang
Pub Date: 2023-01-27. DOI: 10.1177/14738716231157082. Information Visualization, 22(1): 169-185.

Network visualizations are commonly used to analyze relationships in various contexts, such as social, biological, and geographical interactions. To efficiently explore a network visualization, the user needs to quickly navigate to different parts of the network and analyze local details. Recent advancements in display and interaction technologies inspire new visions for improved visualization and interaction design. Past research into network design has identified some key benefits of visualizing networks in 3D versus 2D. However, little work has been done to study the impact of varying levels of embodied interaction on network analysis. We present a controlled user study that compared four network visualization environments, featuring conditions and hardware that leveraged different amounts of embodiment and visual perception, ranging from a 2D desktop environment with a standard mouse to a 3D virtual reality environment. We measured the accuracy, speed, perceived workload, and preferences of 20 participants as they completed three network analytic tasks, each of which required unique navigation and substantial effort to complete. For the task that required participants to iterate over the entire visualization rather than focus on a specific area, we found that participants were more accurate using a VR HMD and a trackball mouse than conventional desktop settings. From a workload perspective, VR was generally considered the least mentally demanding and least frustrating to use in two of our three tasks. It was also preferred and ranked as the most effective and visually appealing condition overall. However, using VR to compare two side-by-side networks was difficult, and it was similar to or slower than other conditions in two of the three tasks. Overall, the accuracy and workload advantages of conditions with greater embodiment in specific tasks suggest promising opportunities to create more effective environments in which to analyze network visualizations.
Providing visual analytics guidance through decision support
Wenkai Han, Hans-Jörg Schulz
Pub Date: 2023-01-05. DOI: 10.1177/14738716221147289. Information Visualization, 22(1): 140-165.

Guidance in visual analytics aims to support users in accomplishing their analytical goals and generating insights. Different approaches for guidance are widely adopted in many tools and frameworks for various purposes – from helping to focus on relevant data subspaces to selecting suitable visualization techniques. With each of these different purposes come specific considerations on how to provide the needed guidance. In this paper, we propose a generic method for making these considerations by framing the guidance problem as a decision problem and applying decision-making theory and models toward its solution. This method passes through three stages: (1) identifying decision points; (2) deriving and evaluating alternatives; (3) visualizing the resulting alternatives to support users in comparing them and making their choice. Our method is realized as a set of practical worksheets and illustrated by applying it to a use case of providing guidance among different clustering methods. Finally, we compare our method with existing guidance frameworks to relate and delineate the respective goals and contributions of each.
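Stage (2), deriving and evaluating alternatives, is the core of any decision-theoretic framing; a minimal weighted-sum decision model makes the idea concrete (the criteria, weights, and function names here are illustrative assumptions, not the paper's actual worksheets):

```python
def score_alternatives(alternatives, criteria_weights):
    """Weighted-sum scoring: each alternative maps criterion -> rating in [0, 1]."""
    return {
        name: sum(criteria_weights[c] * ratings[c] for c in criteria_weights)
        for name, ratings in alternatives.items()
    }

def best_alternative(alternatives, criteria_weights):
    """Pick the alternative with the highest weighted score."""
    scores = score_alternatives(alternatives, criteria_weights)
    return max(scores, key=scores.get)
```

For the clustering-guidance use case, the alternatives could be clustering methods rated on hypothetical criteria such as speed and cluster quality, with weights reflecting the analyst's priorities.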
Visualization Resources: A Survey
Xiaoxiao Liu, Mohammad Alharbi, Jing Chen, A. Diehl, Dylan Rees, Elif E. Firat, Qiru Wang, R. Laramee
Pub Date: 2023-01-01. DOI: 10.1177/14738716221126992. Information Visualization, 22(1): 3-30.

Visualization, a vibrant field for researchers, practitioners, and higher-education institutions, is growing and evolving very rapidly. Tremendous progress has been made since 1987, the year often cited as the beginning of data visualization as a distinct field. As such, the number of visualization resources and the demand for those resources are increasing at a rapid pace. After a search process equivalent to decades in length, we present a survey of open visualization resources for all those with an interest in interactive data visualization and visual analytics. Because the number of resources is so large, we focus on collections of resources, of which there are already many, ranging from literature collections to collections of practitioner resources. Based on this, we develop a classification of visualization resource collections with a focus on resource type, e.g. literature-based, web-based, developer-focused, and special topics. The result is an overview and details-on-demand of many useful resources. The collection offers a valuable jump-start for those seeking out data visualization resources from all backgrounds, spanning from beginners such as students to teachers, practitioners, developers, and researchers wishing to create their own advanced or novel visual designs. This paper is a response to students and others who frequently ask for visualization resources available to them.
Large scale medical image online three-dimensional reconstruction based on WebGL using four tier client server architecture
Wei Li, Shanshan Wang, Weidong Xie, Kun Yu, Chaolu Feng
Pub Date: 2022-12-09. DOI: 10.1177/14738716221138090. Information Visualization, 22(1): 100-114.

The development of medical device technology has led to the rapid growth of medical imaging data. Reconstruction from two-dimensional images to three-dimensional volume visualization not only shows the location and shape of lesions from multiple views but also provides intuitive simulation for surgical treatment. However, the three-dimensional reconstruction process requires high-performance execution of image data acquisition and reconstruction algorithms, which limits its application on equipment with limited resources. It is therefore difficult to apply in many online scenarios, where mobile devices cannot meet the high-performance hardware and software requirements. This paper proposes an online medical image rendering and real-time three-dimensional (3D) visualization method based on the Web Graphics Library (WebGL). The method is built on a four-tier client-server architecture and uses medical image data synchronization to reconstruct on both the client and the server sides. The reconstruction method is designed to achieve the dual requirements of reconstruction speed and quality. Real-time 3D reconstruction visualization of large-scale medical images is tested in real environments. While interacting with the reconstruction model, users can obtain the reconstructed results in real time and observe and analyze them from all angles. The proposed four-tier client-server architecture will provide instant visual feedback and interactive information for many medical practitioners in collaborative therapy and tele-medicine applications. The experiments also show that the online 3D image reconstruction method can be applied in clinical practice to large-scale image data while maintaining high reconstruction speed and quality.
HyperCube4x: A viewport management system proposal
Alessandro Rego de Lima, Diana Carvalho, Tânia de Jesus Vilela da Rocha
Pub Date: 2022-11-30. DOI: 10.1177/14738716221137908. Information Visualization, 22(1): 87-99.

This article presents a novel management and information visualization system proposal based on the tesseract, the 4D hypercube. The concept comprises metaphors that mimic the tesseract's geometrical properties using interaction and information visualization techniques, made possible by modern computer systems and human capabilities such as spatial cognition. The discussion compares the Hypercube system with traditional desktop-metaphor systems. An operational prototype is also available for reader testing. Finally, a preliminary assessment with 31 participants revealed that 81.05% "agree" or "totally agree" that the proposed concepts offer real gains compared to the desktop metaphor.
Compressing and interpreting word embeddings with latent space regularization and interactive semantics probing
Haoyu Li, Junpeng Wang, Yan-luan Zheng, Liang Wang, Wei Zhang, Han-Wei Shen
Pub Date: 2022-10-27. DOI: 10.1177/14738716221130338. Information Visualization, 22(1): 52-68.

Word embedding, a high-dimensional (HD) numerical representation of words generated by machine learning models, has been used for different natural language processing tasks, for example, translation between two languages. Recently, there has been an increasing trend of transforming the HD embeddings into a latent space (e.g. via autoencoders) for further tasks, exploiting various merits the latent representations could bring. To preserve the embeddings' quality, these works often map the embeddings into an even higher-dimensional latent space, making the already complicated embeddings even less interpretable and consuming more storage space. In this work, we borrow the idea of the β-VAE to regularize the HD latent space. Our regularization implicitly condenses information from the HD latent space into a much lower-dimensional space, thus compressing the embeddings. We also show that each dimension of our regularized latent space is more semantically salient, and validate our assertion by interactively probing the encoding level of user-proposed semantics in the dimensions. To this end, we design a visual analytics system to monitor the regularization process, explore the HD latent space, and interpret latent dimensions' semantics. We validate the effectiveness of our embedding regularization and interpretation approach through both quantitative and qualitative evaluations.
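The β-VAE idea the authors borrow regularizes the latent space by up-weighting the KL term in the standard VAE objective; a minimal sketch of that loss for a diagonal-Gaussian encoder follows (function names are ours, and the paper's actual regularizer may differ in detail):

```python
import math

def kl_diag_gaussian(mu, logvar):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian, summed over latent dimensions."""
    return -0.5 * sum(1 + lv - m * m - math.exp(lv) for m, lv in zip(mu, logvar))

def beta_vae_loss(recon_err, mu, logvar, beta=4.0):
    """beta-VAE objective: reconstruction error plus a beta-weighted KL penalty.

    beta > 1 pressures each latent dimension toward the unit prior, so only the
    dimensions that pay for themselves in reconstruction stay informative --
    the mechanism that condenses information into fewer, more salient dimensions.
    """
    return recon_err + beta * kl_diag_gaussian(mu, logvar)
```

With beta = 1 this reduces to the ordinary VAE evidence lower bound (up to sign); larger beta trades reconstruction fidelity for a more disentangled, compressible latent code.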
Contextual in situ help for visual data interfaces
Pramod Chundury, M. A. Yalçın, Jon Crabtree, A. Mahurkar, Lisa M Shulman, N. Elmqvist
Pub Date: 2022-09-09. DOI: 10.1177/14738716221120064. Information Visualization, 22(1): 69-84.

As the complexity of data analysis increases, even well-designed data interfaces must guide experts in transforming their theoretical knowledge into actual features supported by the tool. This challenge is even greater for casual users, who are increasingly turning to data analysis to solve everyday problems. To address this challenge, we propose data-driven, contextual, in situ help features that can be implemented in visual data interfaces. We introduce five modes of help-seeking: (1) contextual help on selected interface elements, (2) topic listing, (3) overview, (4) guided tour, and (5) notifications. What sets our work apart from general user-interface help systems is that data visualizations provide a unique environment for embedding context-dependent data inside on-screen messaging. We demonstrate the usefulness of such contextual help through case studies of two visual data interfaces: Keshif and POD-Vis. We implemented and evaluated the help modes with two sets of participants, and found that directly selecting user interface elements was the most useful.