Pub Date : 2025-01-06 DOI: 10.1109/TVCG.2024.3502911
Deng Luo;Zainab Alsuwaykit;Dawar Khan;Ondřej Strnad;Tobias Isenberg;Ivan Viola
The authors would like to make the following errata after correcting initialization-related bugs in the associated program.
{"title":"Errata to “DiffFit: Visually-Guided Differentiable Fitting of Molecule Structures to a Cryo-EM Map”","authors":"Deng Luo;Zainab Alsuwaykit;Dawar Khan;Ondřej Strnad;Tobias Isenberg;Ivan Viola","doi":"10.1109/TVCG.2024.3502911","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3502911","url":null,"abstract":"The authors would like to make the following errata after correcting the initialization related bugs in the associated program.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 2","pages":"1645-1646"},"PeriodicalIF":0.0,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10829749","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142938072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-12-09 DOI: 10.1109/TVCG.2024.3513275
Matt I. B. Oddo;Stephen Kobourov;Tamara Munzner
An ‘invariant descriptor’ captures meaningful structural features of networks, useful where traditional visualizations, like node-link views, face challenges like the ‘hairball phenomenon’ (inscrutable overlap of points and lines). Designing invariant descriptors involves balancing abstraction and information retention, as richer data summaries demand more storage and computational resources. Building on prior work, chiefly the BMatrix—a matrix descriptor visualized as the invariant ‘network portrait’ heatmap—we introduce BFS-Census, a new algorithm computing our Census data structures: Census-Node, Census-Edge, and Census-Stub. Our experiments show Census-Stub, which focuses on ‘stubs’ (half-edges), has orders of magnitude greater discerning power (ability to tell non-isomorphic graphs apart) than any other descriptor in this study, without a difficult trade-off: the substantial increase in resolution doesn't come at a commensurate cost in storage space or computation power. We also present new visualizations—our Hop-Census polylines and Census-Census trajectories—and evaluate them using real-world graphs, including a sensitivity analysis that shows graph topology change maps to visual Census change.
{"title":"The Census-Stub Graph Invariant Descriptor","authors":"Matt I. B. Oddo;Stephen Kobourov;Tamara Munzner","doi":"10.1109/TVCG.2024.3513275","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3513275","url":null,"abstract":"An ‘invariant descriptor’ captures meaningful structural features of networks, useful where traditional visualizations, like node-link views, face challenges like the ’hairball phenomenon’ (inscrutable overlap of points and lines). Designing invariant descriptors involves balancing abstraction and information retention, as richer data summaries demand more storage and computational resources. Building on prior work, chiefly the BMatrix—a matrix descriptor visualized as the invariant ’network portrait’ heatmap—we introduce BFS-Census, a new algorithm computing our Census data structures: Census-Node, Census-Edge, and Census-Stub. Our experiments show Census-Stub, which focuses on ’stubs’ (half-edges), has orders of magnitude greater discerning power (ability to tell non-isomorphic graphs apart) than any other descriptor in this study, without a difficult trade-off: the substantial increase in resolution doesn't come at a commensurate cost in storage space or computation power. We also present new visualizations—our Hop-Census polylines and Census-Census trajectories—and evaluate them using real-world graphs, including a sensitivity analysis that shows graph topology change maps to visual Census change.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 3","pages":"1945-1961"},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-12-09 DOI: 10.1109/TVCG.2024.3514858
Velitchko Filipov;Davide Ceneda;Daniel Archambault;Alessio Arleo
In temporal (event-based) networks, time is a continuous axis, with real-valued time coordinates for each node and edge. Computing a layout for such graphs means embedding the node trajectories and edge surfaces over time in a $2D + t$ space, known as the space-time cube. Currently, these space-time cube layouts are visualized through animation or by slicing the cube at regular intervals. However, both techniques present problems such as below-average performance on tasks as well as loss of precision and difficulties in selecting timeslice intervals. In this article, we present TimeLighting, a novel visual analytics approach to visualize and explore temporal graphs embedded in the space-time cube. Our interactive approach highlights node trajectories and their movement over time, visualizes node “aging”, and provides guidance to support users during exploration by indicating the interesting time intervals (“when”) and network elements (“where”) to focus on for a detail-oriented investigation. This combined focus helps to gain deeper insights into the temporal network's underlying behavior. We assess the utility and efficacy of our approach through two case studies and a qualitative expert evaluation. The results demonstrate how TimeLighting supports identifying temporal patterns, extracting insights from nodes with high activity, and guiding the exploration and analysis process.
TimeLighting: Guided Exploration of 2D Temporal Network Projections. IEEE Transactions on Visualization and Computer Graphics, vol. 31, no. 3, pp. 1932-1944.
Pub Date : 2024-11-25 DOI: 10.1109/TVCG.2024.3473129
Paul Rosen;Kristi Potter;Remco Chang
We are excited to welcome you to IEEE VIS 2024 in sunny St. Pete Beach, Florida! The conference program is shaping up to be one of the best we have seen, and the conference venue is undoubtedly one of the most fun locations at which we have ever held the VIS conference.
{"title":"Welcome: Message from the VIS 2024 General Chairs","authors":"Paul Rosen;Kristi Potter;Remco Chang","doi":"10.1109/TVCG.2024.3473129","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3473129","url":null,"abstract":"We are excited to welcome you to IEEE VIS 2024 in sunny St. Pete Beach, Florida! The conference program is shaping up to be one of the best we have seen, and the conference venue is undoubtedly one of the most fun locations we have ever held the VIS conference.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 1","pages":"xiv-xiv"},"PeriodicalIF":0.0,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10767346","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142736321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-25 DOI: 10.1109/TVCG.2024.3473388
{"title":"IEEE Visualization and Graphics Technical Committee (VGTC)","authors":"","doi":"10.1109/TVCG.2024.3473388","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3473388","url":null,"abstract":"","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 1","pages":"xxvi-xxvi"},"PeriodicalIF":0.0,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10767320","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142736690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-25 DOI: 10.1109/TVCG.2024.3473271
{"title":"VIS 2024 Best Papers Committee","authors":"","doi":"10.1109/TVCG.2024.3473271","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3473271","url":null,"abstract":"","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 1","pages":"xliii-xliii"},"PeriodicalIF":0.0,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10767326","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142736320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-25 DOI: 10.1109/TVCG.2024.3473335
Dominik Moritz;Mennatallah El-Assady
The 2024 VGTC Visualization Significant New Researcher Award goes to Dominik Moritz for impactful work in the development of software infrastructure that has influenced both research and practice.
{"title":"2024 VGTC Visualization Significant New Researcher Award","authors":"Dominik Moritz;Mennatallah El-Assady","doi":"10.1109/TVCG.2024.3473335","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3473335","url":null,"abstract":"The 2023 VGTC Visualization Significant New Researcher Award goes to Dominik Moritz for impactful work in the development of software infrastructure that has influenced both research and practice.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 1","pages":"xxx-xxxi"},"PeriodicalIF":0.0,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10767344","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142736404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}