
Visual Informatics — Latest Publications

Leveraging personality as a proxy of perceived transparency in hierarchical visualizations
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-02-22 | DOI: 10.1016/j.visinf.2025.01.002
Tomás Alves, Carlota Dias, Daniel Gonçalves, Sandra Gama
Understanding which factors affect information visualization transparency remains one of the most relevant challenges in current research, especially since trust shapes how users build on and use the presented knowledge. This work extends the current body of research by studying users' subjective evaluation of the visualization transparency of hierarchical charts along the clarity, coverage, and look-and-feel dimensions. Additionally, we extend the user profile to better understand whether personality facets exert a biasing effect on the trust-building process. Our results show that data encodings do not affect how users perceive visualization transparency when controlling for personality factors. Regarding personality, the propensity to trust affects how users judge the clarity of a hierarchical chart. Our findings provide new insights into the research challenges of measuring trust and understanding the transparency of information visualization. Specifically, we explore how personality factors manifest in this trust-building relationship and user interaction within visualization systems.
Citations: 0
Visual comparative analytics of multimodal transportation
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-16 | DOI: 10.1016/j.visinf.2025.01.001
Zikun Deng, Haoming Chen, Qing-Long Lu, Zicheng Su, Tobias Schreck, Jie Bao, Yi Cai
Contemporary urban transportation systems frequently depend on a variety of modes to provide residents with travel services. Understanding a multimodal transportation system is pivotal for devising well-informed plans; however, it is also inherently challenging for traffic analysts and planners. This challenge stems from the need to evaluate and contrast the quality of transportation services across multiple modes. Existing methods are constrained in offering comprehensive insights into the system, primarily due to the inadequacy of multimodal traffic data necessary for fair comparisons and their inability to equip analysts and planners with the means for exploration and reasoned analysis within the urban spatial context. To this end, we first acquire sufficient multimodal trips by leveraging well-established navigation platforms that can estimate the routes with the least travel time given an origin and a destination (an OD pair). We then propose TraDyssey, a visual analytics system that enables analysts and planners to evaluate and compare multiple modes by exploring the acquired massive set of multimodal trips. TraDyssey follows a streamlined query-and-explore workflow supported by user-friendly, effective interactive visualizations. Specifically, a revisited difference-aware parallel coordinate plot (PCP) is designed for overall mode comparisons based on multimodal trips. Trip groups can be flexibly queried on the PCP based on differential features across modes. The queried trips are then organized and presented on a geographic map by OD pairs, forming a group-OD-trip hierarchy of visual exploration. Domain experts gained valuable insights into transportation planning through real-world case studies using TraDyssey.
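To make the "differential features across modes" concrete, here is an illustrative sketch (not the authors' code) of the kind of per-OD quantity a difference-aware comparison could be built on: for each OD pair served by at least two modes, the travel-time gap of each mode relative to the fastest mode. The record layout and field names are assumptions.

```python
from collections import defaultdict

def od_mode_differences(trips):
    """trips: iterable of (origin, destination, mode, travel_time_min)."""
    by_od = defaultdict(dict)
    for origin, dest, mode, minutes in trips:
        # Keep the fastest observed trip per (OD pair, mode).
        key = (origin, dest)
        by_od[key][mode] = min(minutes, by_od[key].get(mode, float("inf")))
    diffs = {}
    for od, modes in by_od.items():
        if len(modes) < 2:
            continue  # a fair comparison needs at least two modes
        best = min(modes.values())
        # Gap of each mode from the fastest mode for this OD pair.
        diffs[od] = {m: t - best for m, t in modes.items()}
    return diffs

trips = [
    ("A", "B", "bus", 40), ("A", "B", "metro", 25), ("A", "B", "bike", 35),
    ("A", "C", "bus", 50),  # single-mode OD pair: excluded from comparison
]
print(od_mode_differences(trips))
# {('A', 'B'): {'bus': 15, 'metro': 0, 'bike': 10}}
```

Each OD pair's gap vector would then become one polyline on a parallel coordinate plot, with one axis per mode.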
Citations: 0
Out-of-focus artifacts mitigation and autofocus methods for 3D displays
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-12-20 | DOI: 10.1016/j.visinf.2024.12.001
T. Chlubna, T. Milet, P. Zemčík
This paper proposes a novel content-aware method for automatically focusing a scene on a 3D display. The method addresses a common problem: visualized content is often out of focus, which adversely affects the perceived 3D content. The method outperforms the existing focusing method, with almost 30% lower error. Both the existing and the novel focusing methods are extended with depth-of-field enhancement of the scene to mitigate out-of-focus artifacts. The relation between the total depth range of the scene and the visual quality of the result is discussed and evaluated in human perception experiments. A space-warping method for synthetic scenes is proposed to reduce out-of-focus artifacts while maintaining the scene's appearance. A user study was conducted to evaluate the proposed methods and identify the crucial parameters in the scene-focusing process on the 3D stereoscopic display by Looking Glass Factory. The study confirmed the efficiency of the proposals and discovered that the depth-of-field artifact mitigation might not be suitable for all scenes despite the theoretical hypotheses. The overall contribution of this paper is a set of methods that can be used to produce the best user experience with an arbitrary scene displayed on a 3D display.
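A content-aware autofocus can be thought of as an optimization: pick the focal depth that minimizes some error between the focal plane and the depths of the content that matters. The weighted-error formulation below is a hedged sketch of that general idea, not the paper's actual metric; the saliency weights and candidate depths are assumptions.

```python
def pick_focus(depths, weights, candidates):
    """Choose the candidate focal depth minimizing a saliency-weighted
    absolute depth error over the scene's sampled depths."""
    def error(f):
        return sum(w * abs(d - f) for d, w in zip(depths, weights))
    return min(candidates, key=error)

# Salient foreground content (high weight) pulls focus toward it,
# even though a distant object would otherwise dominate the depth range.
depths  = [1.0, 1.2, 5.0]
weights = [3.0, 3.0, 0.5]
print(pick_focus(depths, weights, candidates=[1.0, 2.0, 3.0, 5.0]))  # 1.0
```

With uniform weights the same routine degenerates to a purely geometric focus, which is the kind of content-blind behavior a content-aware method improves on.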
Citations: 0
Transforming cinematography lighting education in the metaverse
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-12-05 | DOI: 10.1016/j.visinf.2024.11.003
Xian Xu, Wai Tong, Zheng Wei, Meng Xia, Lik-Hang Lee, Huamin Qu
Lighting education is a foundational component of cinematography education. However, many art schools do not have expensive soundstages for traditional cinematography lessons. Migrating physical setups to virtual experiences is a potential solution driven by metaverse initiatives. Yet there is still a lack of knowledge on the design of a VR system for teaching cinematography. We first analyzed the educational needs for cinematography lighting education by conducting interviews with six cinematography professionals from academia and industry. Accordingly, we presented Art Mirror, a VR soundstage for teachers and students to emulate cinematography lighting in virtual scenarios. We evaluated Art Mirror from the aspects of usability, realism, presence, sense of agency, and collaboration. Sixteen participants were invited to take a cinematography lighting course and assess the design elements of Art Mirror. Our results demonstrate that Art Mirror is usable and useful for cinematography lighting education, which sheds light on the design of VR cinematography education.
Citations: 0
ArtEyer: Enriching GPT-based agents with contextual data visualizations for fine art authentication
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-12-01 | DOI: 10.1016/j.visinf.2024.11.001
Tan Tang, Yanhong Wu, Junming Gao, Kejia Ruan, Yanjie Zhang, Shuainan Ye, Yingcai Wu, Xiaojiao Chen
Fine art authentication plays a significant role in protecting cultural heritage and ensuring the integrity of artworks. Traditional authentication methods require professionals to collect many reference materials and conduct detailed analyses. To ease this difficulty, we collaborated with domain experts to develop a GPT-based agent, ArtEyer, that offers accurate attributions, determines origin and authorship, and executes visual analytics. Despite the convenience of the conversational user interface, novice users may still face challenges due to the hallucination issue and the steep learning curve associated with prompting. To address these obstacles, we propose a novel solution that places interactive data visualizations into the conversations. We create contextual visualizations from an external domain-dependent database to ensure data trustworthiness and allow users to provide precise instructions to the agent by interacting directly with these visualizations, thus overcoming the vagueness inherent in natural-language prompting. We evaluate ArtEyer through an in-lab user study and demonstrate its usage with a real-world case.
Citations: 0
Computer Vision in Augmented, Virtual, Mixed and Extended Reality environments—A bibliometric review
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-12-01 | DOI: 10.1016/j.visinf.2024.11.002
Júlio Castro Lopes, Rui Pedro Lopes
This work describes a bibliometric analysis of the literature on the use of computer vision algorithms in Augmented Reality (AR), Virtual Reality (VR), Mixed Reality (MR), and Extended Reality (XR) environments. The analysis aims to highlight the evolution, trends, and effects of research in this field. This review provides an overview of immersive technologies and their applications, as well as the role of computer vision algorithms in enabling these technologies and the potential benefits of using such algorithms. This study identifies important authors, institutions, and research themes by using bibliometric indicators such as citation counts, co-citation analysis, and network analysis. The analysis also identifies gaps and opportunities for additional research in this area, as well as a critical assessment of the quality and relevance of the publications.
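One of the indicators the review names, co-citation analysis, reduces to a simple count: two references are co-cited whenever the same paper cites both, and those counts become edge weights in a co-citation network. The following is an illustrative sketch of that counting step under a toy reference-list layout, not the review's actual pipeline.

```python
from collections import Counter
from itertools import combinations

def co_citation_counts(reference_lists):
    """reference_lists: one list of cited reference ids per citing paper.
    Returns Counter mapping unordered reference pairs to co-citation counts."""
    counts = Counter()
    for refs in reference_lists:
        # sorted() makes the pair key orientation-independent.
        for a, b in combinations(sorted(set(refs)), 2):
            counts[(a, b)] += 1
    return counts

papers = [["R1", "R2", "R3"], ["R1", "R2"], ["R2", "R3"]]
counts = co_citation_counts(papers)
print(counts[("R1", "R2")])  # 2
```

Thresholding these counts and keeping the surviving pairs as edges yields the network on which clustering and centrality measures are then computed.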
Citations: 0
Generative model-assisted sample selection for interest-driven progressive visual analytics
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-12-01 | DOI: 10.1016/j.visinf.2024.10.004
Jie Liu, Jie Li, Jielong Kuang
We propose interest-driven progressive visual analytics. The core idea is to filter samples with features of interest to analysts from the given dataset for analysis. The approach relies on a generative model (GM) trained with the given dataset as its training set. The GM's characteristics make it convenient to find ideal generated samples in its latent space. We then filter the original samples similar to the ideal generated ones to explore patterns. Our research involves two methods for achieving and applying this idea. First, we present a method to explore ideal samples in a GM's latent space. Second, we integrate the method into a system to form an embedding-based analytical workflow. Patterns found on open datasets in case studies, results of quantitative experiments, and positive feedback from experts illustrate the general usability and effectiveness of the approach.
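The filtering step ("keep the original samples similar to the ideal generated ones") is, at its core, a nearest-neighbor query in the embedding space. Below is a minimal sketch of that step under assumed details — Euclidean distance, a plain dict of embeddings — not the paper's implementation.

```python
import math

def select_similar(embeddings, ideal, k=2):
    """embeddings: {sample_id: latent vector}; ideal: latent vector of an
    ideal generated sample. Returns the k sample ids nearest to `ideal`."""
    def dist(vec):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(vec, ideal)))
    return sorted(embeddings, key=lambda sid: dist(embeddings[sid]))[:k]

embeddings = {"s1": [0.0, 0.0], "s2": [1.0, 1.0], "s3": [5.0, 5.0]}
print(select_similar(embeddings, ideal=[0.9, 1.1]))  # ['s2', 's1']
```

In a progressive setting the same query would be re-issued as the analyst refines the ideal sample, so only the retrieved neighborhood (not the whole dataset) is analyzed at each step.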
Citations: 0
ChemNav: An interactive visual tool to navigate in the latent space for chemical molecules discovery
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-12-01 | DOI: 10.1016/j.visinf.2024.10.002
Yang Zhang, Jie Li, Xu Chao
In recent years, AI-driven drug development has emerged as a prominent research topic in computational chemistry. A key focus is the application of generative models for molecule synthesis, which create extensive virtual libraries of chemical molecules based on latent spaces. However, locating molecules with desirable properties within the vast latent spaces remains a significant challenge. Large regions of invalid samples in the latent space, called "dead zones", can impede exploration efficiency, and the process is often time-consuming and repetitive. Therefore, we propose a visualization system to help experts identify potential molecules with desirable properties as they navigate the latent space. Specifically, we conducted a literature survey on the application of generative networks in drug synthesis to summarize the tasks, followed by expert interviews to determine their requirements. Based on these requirements, we introduce ChemNav, an interactive visual tool for navigating the latent space in search of desirable molecules. ChemNav incorporates a heuristic latent-space interpolation path search algorithm to improve the efficiency of valid molecule generation, and a similar-sample search algorithm to accelerate the discovery of similar molecules. Evaluations of ChemNav through two case studies, a user study, and experiments demonstrated its effectiveness in inspiring researchers to explore the latent space for chemical molecule discovery.
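For intuition, here is a sketch of the naive baseline that a heuristic path search improves on: linearly interpolate between two latent points and drop the steps that land in a "dead zone" (decode to invalid molecules). The `is_valid` oracle is a stand-in assumption; in practice it would wrap the decoder plus a chemical validity check.

```python
def interpolation_path(z_start, z_end, steps, is_valid):
    """Linear interpolation in latent space, keeping only valid points."""
    path = []
    for i in range(steps + 1):
        t = i / steps
        z = [a + t * (b - a) for a, b in zip(z_start, z_end)]
        if is_valid(z):  # skip latent points that decode to invalid molecules
            path.append(z)
    return path

# Toy dead zone: any latent point with a negative coordinate is invalid.
valid = lambda z: all(c >= 0 for c in z)
print(len(interpolation_path([0.0, 0.0], [1.0, 1.0], steps=4, is_valid=valid)))  # 5
print(len(interpolation_path([-1.0, 0.0], [1.0, 0.0], steps=4, is_valid=valid)))  # 3
```

A heuristic search would instead bend the path around the dead zone rather than silently dropping points, which is what makes the generated sequence of molecules denser and more useful.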
Citations: 0
Glyph design for communication initiation in real-time human-automation collaboration
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-12-01 | DOI: 10.1016/j.visinf.2024.09.006
Magnus Nylin, Jonas Lundberg, Magnus Bång, Kostiantyn Kucher
Initiating communication and conveying critical information to the human operator is a key problem in human-automation collaboration. This problem is particularly pronounced in time-constrained, safety-critical domains such as Air Traffic Management. A visual representation should aid operators in understanding why the system initiates the communication, when the operator must act, and the consequences of not responding to the cue. Data glyphs can present multidimensional data, including temporal data, in a compact format to facilitate this type of communication. In this paper, we propose a glyph design for communication initiation in highly automated systems in Air Traffic Management, Vessel Traffic Service, and Train Traffic Management. The design was assessed by experts in these domains in three workshop sessions. The results showed that the number of glyphs to be presented simultaneously and the type of situation were domain-specific glyph design aspects that needed to be adjusted for each work domain. The results also showed that the core of the glyph design could be reused between domains, and that the operators could successfully interpret the temporal data representations. We discuss similarities and differences in the applicability of the glyph design between the different domains, and finally, we provide some suggestions for future work based on the results from this study.
引用次数: 0
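The why/when/consequence encoding described in the abstract can be sketched as a small mapping from cue attributes to glyph visual channels. This is an illustrative reconstruction, not the authors' design: the event categories, the countdown-ring encoding, and the severity-to-size rule below are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class GlyphSpec:
    """Visual parameters for one communication-initiation glyph."""
    hue: str          # encodes *why* (category of the triggering event)
    ring_fill: float  # encodes *when* (fraction of act-by deadline elapsed, 0..1)
    size: float       # encodes *consequence* severity (relative radius, 0.5..1.0)

# Hypothetical category-to-hue mapping; the paper does not prescribe one.
EVENT_HUES = {"conflict": "red", "handover": "blue", "weather": "orange"}

def make_glyph(event_type: str, elapsed_s: float, deadline_s: float,
               severity: float) -> GlyphSpec:
    """Map a pending automation cue to glyph visual parameters.

    elapsed_s / deadline_s drives a countdown ring; severity in [0, 1]
    scales the glyph so that graver consequences are more salient.
    """
    if deadline_s <= 0:
        raise ValueError("deadline must be positive")
    fill = min(max(elapsed_s / deadline_s, 0.0), 1.0)
    size = 0.5 + 0.5 * min(max(severity, 0.0), 1.0)
    return GlyphSpec(EVENT_HUES.get(event_type, "gray"), fill, size)
```

For example, a severe conflict cue 30 s into a 120 s response window yields a red glyph with a quarter-filled ring at near-maximum size.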
ATVis: Understanding and diagnosing adversarial training processes through visual analytics
IF 3.8 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-12-01 · DOI: 10.1016/j.visinf.2024.10.003
Fang Zhu , Xufei Zhu , Xumeng Wang , Yuxin Ma , Jieqiong Zhao
Adversarial training has emerged as a major defense against adversarial perturbations in deep neural networks, which exploit model vulnerabilities to induce incorrect predictions. Although it enhances robustness, adversarial training often trades off standard accuracy on clean data, a phenomenon that remains contentious. In addition, the opaque nature of deep neural networks makes it difficult to inspect and diagnose how adversarial training processes evolve. This paper introduces ATVis, a visual analytics framework for examining and diagnosing adversarial training processes. Through a multi-level visualization design, ATVis enables the examination of model robustness at multiple levels of granularity, facilitating a detailed understanding of the dynamics across training epochs. The framework reveals the complex relationship between adversarial robustness and standard accuracy, offering insights into the mechanisms that drive the trade-offs observed in adversarial training. The effectiveness of the framework is demonstrated through case studies.
Visual Informatics, Volume 8, Issue 4 (2024), Pages 71-84.
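The robustness-accuracy trade-off that ATVis is built to inspect can be reproduced in miniature. The sketch below is not the paper's system or its training setup; it is a minimal NumPy illustration of single-step (FGSM-style) adversarial training on a toy linear classifier, showing how accuracy on perturbed inputs relates to accuracy on clean inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: two Gaussian blobs in 2-D, nearly linearly separable.
X = np.vstack([rng.normal(-1.0, 0.5, (200, 2)), rng.normal(1.0, 0.5, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(w, b, X, y, eps):
    """Single-step FGSM: move each input eps along the sign of the
    input-gradient of the logistic loss, i.e. toward misclassification."""
    p = sigmoid(X @ w + b)
    return X + eps * np.sign(np.outer(p - y, w))

def train(X, y, adversarial=False, eps=0.4, lr=0.1, epochs=200):
    """Gradient-descent logistic regression; if `adversarial`, each step
    trains on FGSM-perturbed inputs (a one-step stand-in for the PGD-style
    adversarial training studied in the paper)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        Xt = fgsm(w, b, X, y, eps) if adversarial else X
        p = sigmoid(Xt @ w + b)
        w -= lr * Xt.T @ (p - y) / len(y)
        b -= lr * float(np.mean(p - y))
    return w, b

def accuracy(w, b, X, y):
    return float(np.mean((sigmoid(X @ w + b) > 0.5) == y))
```

Training once with `adversarial=False` and once with `adversarial=True`, then evaluating each model on clean and FGSM-perturbed data, typically shows the standard model losing accuracy under perturbation while the adversarially trained one holds up better, possibly at some cost in clean accuracy; tracking exactly this tension per epoch is what a tool like ATVis supports.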