Enhancing Obstacle Visibility with Augmented Reality Improves Mobility in People with Low Vision
Pub Date: 2025-03-19 | DOI: 10.1109/TVCG.2025.3549542
Lior Maman, Ilan Vol, Sarit F A Szpiro
Avoiding obstacles while navigating is a challenge for people with low vision, who have impaired yet functional vision, which impacts their mobility, safety, and independence. This study investigates the impact of using Augmented Reality (AR) to enhance the visibility of obstacles for people with low vision. Twenty-five participants (14 with low vision and 11 typically sighted) wore smart glasses and completed a real-world obstacle course under two conditions: with obstacles enhanced using 3D AR markings and without any enhancement (i.e., passthrough only - control condition). Our results reveal that AR enhancements significantly decreased walking time, with the low vision group demonstrating a notable reduction in time. Additionally, the path length was significantly shorter with AR enhancements. The decrease in time and path length did not lead to more collisions, suggesting improved obstacle avoidance. Participants also reported a positive user experience with the AR system, highlighting its potential to enhance mobility for low vision users. These results suggest that AR technology can play a critical role in supporting the independence and confidence of low vision individuals in mobility tasks within complex environments. We discuss design guidelines for future AR systems to assist low vision people.
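The study's central comparison (walking time with versus without AR enhancement, measured within subjects) can be illustrated with a minimal analysis sketch. The values and the paired-test choice below are illustrative assumptions, not the paper's data or statistical procedure.

```python
# Minimal sketch of a within-subjects comparison of walking time with vs.
# without AR obstacle enhancement. The values are hypothetical placeholders,
# and the paired t-test stands in for whatever analysis the authors ran.
from scipy import stats

# Walking time (seconds) per participant, one value per condition.
time_ar = [42.1, 38.5, 51.0, 47.3, 44.8]
time_control = [55.4, 49.2, 63.7, 58.1, 52.6]

# Paired test: each participant completed both conditions.
t_stat, p_value = stats.ttest_rel(time_ar, time_control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```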
{"title":"Enhancing Obstacle Visibility with Augmented Reality Improves Mobility in People with Low Vision.","authors":"Lior Maman, Ilan Vol, Sarit F A Szpiro","doi":"10.1109/TVCG.2025.3549542","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549542","url":null,"abstract":"<p><p>Avoiding obstacles while navigating is a challenge for people with low vision, who have impaired yet functional vision, which impacts their mobility, safety, and independence. This study investigates the impact of using Augmented Reality (AR) to enhance the visibility of obstacles for people with low vision. Twenty-five participants (14 with low vision and 11 typically sighted) wore smart glasses and completed a real-world obstacle course under two conditions: with obstacles enhanced using 3D AR markings and without any enhancement (i.e., passthrough only - control condition). Our results reveal that AR enhancements significantly decreased walking time, with the low vision group demonstrating a notable reduction in time. Additionally, the path length was significantly shorter with AR enhancements. The decrease in time and path length did not lead to more collisions, suggesting improved obstacle avoidance. Participants also reported a positive user experience with the AR system, highlighting its potential to enhance mobility for low vision users. These results suggest that AR technology can play a critical role in supporting the independence and confidence of low vision individuals in mobility tasks within complex environments. We discuss design guidelines for future AR systems to assist low vision people.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143665778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Comparison of the Effects of Older Age on Homing Performance in Real and Virtual Environments
Pub Date: 2025-03-18 | DOI: 10.1109/TVCG.2025.3549901
Maggie K McCracken, Corey S Shayman, Peter C Fino, Jeanine K Stefanucci, Sarah H Creem-Regehr
Virtual reality (VR) has become a popular tool for studying navigation, providing the experimental control of a laboratory setting but also the potential for immersive and natural experiences that resemble the real world. For VR to be an effective tool to study navigation and be used for training or rehabilitation, it is important to establish whether performance is similar across virtual and real environments. Much of the existing navigation research has focused on young adult performance either in a virtual or a real environment, resulting in an open question regarding the validity of VR for studying age-related effects on spatial navigation. In this paper, young (18-30 years old) and older adults (60 years and older) performed the same navigation task in similar real and virtual environments. They completed a homing task, requiring walking along two legs of a triangle and returning to a home location, under three sensory conditions: visual cues (environmental landmarks present), body-based self-motion cues, and the combination of both cues. Our findings reveal that homing performance in VR demonstrates the same age-related differences as those observed in the real-world task. That said, within-age group differences arise when comparing cue use across environment types. In particular, young adults are less accurate and more variable with self-motion cues than visual cues in VR, while older adults show similar deficits with both cues. However, when both age groups can access multiple sensory cues, navigation performance does not differ between environment types. These results demonstrate that VR effectively captures age-related differences, with navigation performance most closely resembling performance in the real world when navigators can rely on an array of sensory information. Such findings have implications for future research on the aging population, highlighting that VR can be a valuable tool, particularly when multisensory cues are available.
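The homing task described here is a triangle-completion task: participants walk two outbound legs and must return to the unmarked start. The sketch below shows how homing error can be quantified from the outbound path and a response location; the coordinates and error measures are illustrative assumptions, not the authors' analysis.

```python
import numpy as np

# Two outbound legs of the triangle as 2D displacements (meters); the home
# location is the origin. Values are illustrative.
leg1 = np.array([3.0, 0.0])        # walk 3 m forward
leg2 = np.array([0.0, 2.0])        # turn, then walk 2 m
turn_point = leg1 + leg2           # position after both legs
correct_home_vector = -turn_point  # direction back to the start

# Hypothetical response: where the participant actually stopped.
response_stop = np.array([-0.4, 0.3])

# Absolute homing error: distance from the stop point to the true home location.
absolute_error = np.linalg.norm(response_stop)

# Angular error between the correct homing direction and the walked direction.
walked_home_vector = response_stop - turn_point
cos_angle = np.dot(correct_home_vector, walked_home_vector) / (
    np.linalg.norm(correct_home_vector) * np.linalg.norm(walked_home_vector)
)
angular_error_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

print(f"absolute error: {absolute_error:.2f} m, angular error: {angular_error_deg:.1f} deg")
```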
{"title":"A Comparison of the Effects of Older Age on Homing Performance in Real and Virtual Environments.","authors":"Maggie K McCracken, Corey S Shayman, Peter C Fino, Jeanine K Stefanucci, Sarah H Creem-Regehr","doi":"10.1109/TVCG.2025.3549901","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549901","url":null,"abstract":"<p><p>Virtual reality (VR) has become a popular tool for studying navigation, providing the experimental control of a laboratory setting but also the potential for immersive and natural experiences that resemble the real world. For VR to be an effective tool to study navigation and be used for training or rehabilitation, it is important to establish whether performance is similar across virtual and real environments. Much of the existing navigation research has focused on young adult performance either in a virtual or a real environment, resulting in an open question regarding the validity of VR for studying age-related effects on spatial navigation. In this paper, young (18-30 years old) and older adults (60 years and older) performed the same navigation task in similar real and virtual environments. They completed a homing task, requiring walking along two legs of a triangle and returning to a home location, under three sensory conditions: visual cues (environmental landmarks present), body-based self-motion cues, and the combination of both cues. Our findings reveal that homing performance in VR demonstrates the same age-related differences as those observed in the real-world task. That said, within-age group differences arise when comparing cue use across environment types. In particular, young adults are less accurate and more variable with self-motion cues than visual cues in VR, while older adults show similar deficits with both cues. However, when both age groups can access multiple sensory cues, navigation performance does not differ between environment types. These results demonstrate that VR effectively captures age-related differences, with navigation performance most closely resembling performance in the real world when navigators can rely on an array of sensory information. Such findings have implications for future research on the aging population, highlighting that VR can be a valuable tool, particularly when multisensory cues are available.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143660159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HYPNOS: Interactive Data Lineage Tracing for Data Transformation Scripts
Pub Date: 2025-03-18 | DOI: 10.1109/TVCG.2025.3552091
Xiwen Cai, Xiaodong Ge, Kai Xiong, Shuainan Ye, Di Weng, Ke Xu, Datong Wei, Jiang Long, Yingcai Wu
In a formal data analysis workflow, data validation is a necessary step that helps data analysts verify the quality of the data and ensure the reliability of the results. Analysts typically need to validate a result when they encounter something unexpected, such as an abnormal record in a table. To understand how a specific record was derived, they backtrace it through the pipeline step by step: checking the code lines, exposing the intermediate tables, and finding the source records it was computed from. However, manually reviewing code and backtracing data requires expertise, and inspecting the traced records across multiple tables and interpreting their relationships is tedious. In this work, we propose HYPNOS, a visualization system that supports interactive data lineage tracing for data transformation scripts. HYPNOS uses a lineage module that parses and adapts code to capture both schema-level and instance-level data lineage from data transformation scripts. It then provides a lineage view for an overview of the data transformation process and a detail view for tracing instance-level data lineage and inspecting details. HYPNOS reveals data relationships at different levels and helps users trace data lineage. We demonstrate the usability and effectiveness of HYPNOS through a use case, interviews with four expert users, and a user study.
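Instance-level lineage of the kind HYPNOS captures can be approximated by tagging each source row with an identifier and propagating those identifiers through each transformation step. The pandas sketch below illustrates the general idea with made-up table and column names; it is not the system's implementation.

```python
import pandas as pd

# Source table: each row gets a stable lineage identifier (table name + row index).
orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "region":   ["east", "west", "east", "west"],
    "amount":   [120.0, 80.0, 200.0, 50.0],
})
orders["_lineage"] = [{f"orders:{i}"} for i in orders.index]

# Transformation step: filter, then aggregate by region while unioning the
# lineage sets of the contributing rows.
large = orders[orders["amount"] > 60]
summary = large.groupby("region").agg(
    total=("amount", "sum"),
    _lineage=("_lineage", lambda s: set().union(*s)),
).reset_index()

# Backtracing: which source rows does the "east" total derive from?
print(summary.loc[summary["region"] == "east", "_lineage"].iloc[0])
# e.g. {'orders:0', 'orders:2'}
```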
{"title":"HYPNOS: Interactive Data Lineage Tracing for Data Transformation Scripts.","authors":"Xiwen Cai, Xiaodong Ge, Kai Xiong, Shuainan Ye, Di Weng, Ke Xu, Datong Wei, Jiang Long, Yingcai Wu","doi":"10.1109/TVCG.2025.3552091","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3552091","url":null,"abstract":"<p><p>In a formal data analysis workflow, data validation is a necessary step that helps data analysts verify the quality of the data and ensure the reliability of the results. Data analysts usually need to validate the result when encountering an unexpected result, such as an abnormal record in a table. In order to understand how a specific record is derived, they would backtrace it in the pipeline step by step via checking the code lines, exposing the intermediate tables, and finding the data records from which it is derived. However, manually reviewing code and backtracing data requires certain expertise, while inspecting the traced records in multiple tables and interpreting their relationships is tedious. In this work, we propose HYPNOS, a visualization system that supports interactive data lineage tracing for data transformation scripts. HYPNOS uses a lineage module for parsing and adapting code to capture both schema-level and instance-level data lineage from data transformation scripts. Then, it provides users with a lineage view for obtaining an overview of the data transformation process and a detail view for tracing instance-level data lineage and inspecting details. HYPNOS reveals different levels of data relationships and helps users with data lineage tracing. We demonstrate the usability and effectiveness of HYPNOS through a use case, interviews of four expert users, and a user study.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143660172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Techniques for Multiple Room Connection in Virtual Reality: Walking Within Small Physical Spaces
Pub Date: 2025-03-18 | DOI: 10.1109/TVCG.2025.3549895
Ana Rita Rebelo, Pedro A Ferreira, Rui Nobrega
In Virtual Reality (VR), navigation in small physical spaces often relies on controller-based techniques, such as teleportation and joystick movement, because there is limited room for natural walking. However, walking-based techniques can enhance immersion by enabling more natural movement. This paper presents three room-connection techniques - portals, corridors, and central hubs - that can be used in virtual environments (VEs) to create "impossible spaces". These spaces let virtual rooms overlap in the same physical area, maximizing the use of the available physical space and keeping walking viable even in constrained settings. We conducted a user study with 33 participants to assess the effectiveness of these techniques within a small physical area (2.5 × 2.5 m). The results show that all three techniques are viable for connecting rooms in VR, each with distinct characteristics, and each yields positive outcomes for presence, cybersickness, spatial awareness, orientation, and overall user experience. Specifically, portals offer a flexible and straightforward solution, corridors provide a seamless and natural transition between spaces, and central hubs simplify navigation. The primary contribution of this work is demonstrating how these room-connection techniques can be applied to dynamically adapt VEs to fit small, uncluttered physical spaces, such as those commonly available to VR users at home. Applications such as virtual museum tours, training simulations, and emergency preparedness exercises can benefit greatly from these methods, giving users a more natural and engaging experience even within the limited space typical of home settings.
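The "impossible spaces" idea relies on virtual rooms that overlap in the same physical area: each room must individually fit inside the tracked space, even though together they cover more virtual floor area than physically exists. Below is a minimal sketch of that feasibility check, with made-up room dimensions rather than the layouts evaluated in the paper.

```python
# Minimal sketch: verify that each virtual room's walkable footprint fits the
# tracked physical area used in the study (2.5 m x 2.5 m). The room sizes are
# made up, not the layouts evaluated in the paper.
PHYSICAL_W, PHYSICAL_D = 2.5, 2.5

virtual_rooms = {
    "gallery":  (2.4, 2.2),
    "corridor": (1.0, 2.5),
    "hub":      (2.3, 2.3),
}

def fits_physical_space(width, depth):
    """A room is walkable in place if its footprint fits inside the tracked area."""
    return width <= PHYSICAL_W and depth <= PHYSICAL_D

for name, (w, d) in virtual_rooms.items():
    print(name, "fits" if fits_physical_space(w, d) else "too large")

# The rooms can jointly cover more virtual floor area than physically exists
# because they overlap in physical space and are never walkable simultaneously.
total_virtual_area = sum(w * d for w, d in virtual_rooms.values())
print(f"virtual area {total_virtual_area:.2f} m^2 vs physical area {PHYSICAL_W * PHYSICAL_D:.2f} m^2")
```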
{"title":"Techniques for Multiple Room Connection in Virtual Reality: Walking Within Small Physical Spaces.","authors":"Ana Rita Rebelo, Pedro A Ferreira, Rui Nobrega","doi":"10.1109/TVCG.2025.3549895","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549895","url":null,"abstract":"<p><p>In Virtual Reality (VR), navigating small physical spaces often relies on handheld controllers, such as teleportation and joystick movements, due to the limited space for natural walking. However, walking-based techniques can enhance immersion by enabling more natural movement. This paper presents three room-connection techniques - portals, corridors, and central hubs - that can be used in virtual environments (VEs) to create \"impossible spaces\". These spaces use overlapping areas to maximize available physical space, promising for walking even in constrained spaces. We conducted a user study with 33 participants to assess the effectiveness of these techniques within a small physical area (2.5 × 2.5 m). The results show that all three techniques are viable for connecting rooms in VR, each offering distinct characteristics. Each method positively impacts presence, cybersickness, spatial awareness, orientation, and overall user experience. Specifically, portals offer a flexible and straightforward solution, corridors provide a seamless and natural transition between spaces, and central hubs simplify navigation. The primary contribution of this work is demonstrating how these room-connection techniques can be applied to dynamically adapt VEs to fit small, uncluttered physical spaces, such as those commonly available to VR users at home. Applications such as virtual museum tours, training simulations, and emergency preparedness exercises can greatly benefit from these methods, providing users with a more natural and engaging experience, even within the limited space typical in home settings.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143660178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
360° 3D Photos from a Single 360° Input Image
Pub Date: 2025-03-18 | DOI: 10.1109/TVCG.2025.3549538
Manuel Rey-Area, Christian Richardt
360° images are a popular medium for bringing photography into virtual reality. While users can look in any direction by rotating their heads, 360° images ultimately look flat: they lack depth information and thus cannot produce motion parallax when the viewer's head translates. To achieve a fully immersive VR experience from a single 360° image, we introduce a novel method to upgrade 360° images to free-viewpoint renderings with 6 degrees of freedom. Alternative approaches reconstruct textured 3D geometry, which is fast to render but suffers from visible reconstruction artifacts, or use neural radiance fields, which produce high-quality novel views but render too slowly for VR applications. Our 360° 3D photos build on 3D Gaussian splatting as the underlying scene representation to simultaneously achieve high visual quality and real-time rendering speed. To fill previously unseen regions with plausible content, we introduce a novel combination of latent diffusion inpainting and monocular depth estimation with Poisson-based blending. Our results demonstrate state-of-the-art visual and depth quality at rendering rates of 105 FPS per megapixel on a commodity GPU.
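Lifting a 360° image into 3D starts from the standard equirectangular mapping: each pixel corresponds to a viewing direction on the sphere, and a monocular depth estimate scales that direction into a 3D point (for example, an initial position for a Gaussian). The sketch below shows that projection in isolation; it is a generic construction, not the paper's specific pipeline.

```python
import numpy as np

def equirect_pixel_to_3d(u, v, depth, width, height):
    """Map an equirectangular pixel (u, v) with an estimated depth to a 3D point.

    Longitude spans [-pi, pi] across the image width; latitude spans
    [pi/2, -pi/2] from top to bottom (y is up).
    """
    lon = (u / width - 0.5) * 2.0 * np.pi
    lat = (0.5 - v / height) * np.pi
    direction = np.array([
        np.cos(lat) * np.sin(lon),  # x
        np.sin(lat),                # y (up)
        np.cos(lat) * np.cos(lon),  # z (forward)
    ])
    return depth * direction

# Example: the pixel at the image center, estimated 2 m away, lies straight ahead.
print(equirect_pixel_to_3d(u=2048, v=1024, depth=2.0, width=4096, height=2048))
# -> approximately [0., 0., 2.]
```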
{"title":"360° 3D Photos from a Single 360° Input Image.","authors":"Manuel Rey-Area, Christian Richardt","doi":"10.1109/TVCG.2025.3549538","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549538","url":null,"abstract":"<p><p>360° images are a popular medium for bringing photography into virtual reality. While users can look in any direction by rotating their heads, 360° images ultimately look flat. That is because they lack depth information and thus cannot create motion parallax when translating the head. To achieve a fully immersive VR experience from a single 360° image, we introduce a novel method to upgrade 360° images to free-viewpoint renderings with 6 degrees of freedom. Alternative approaches reconstruct textured 3D geometry, which is fast to render but suffers from visible reconstruction artifacts, or use neural radiance fields that produce high-quality novel views but too slowly for VR applications. Our 360° 3D photos build on 3D Gaussian splatting as the underlying scene representation to simultaneously achieve high visual quality and real-time rendering speed. To fill plausible content in previously unseen regions, we introduce a novel combination of latent diffusion inpainting and monocular depth estimation with Poisson-based blending. Our results demonstrate state-of-the-art visual and depth quality at rendering rates of 105 FPS per megapixel on a commodity GPU.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143660156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fov-GS: Foveated 3D Gaussian Splatting for Dynamic Scenes
Pub Date: 2025-03-18 | DOI: 10.1109/TVCG.2025.3549576
Runze Fan, Jian Wu, Xuehuai Shi, Lizhi Zhao, Qixiang Ma, Lili Wang
Rendering quality and performance greatly affect the user's immersion in VR experiences. 3D Gaussian Splatting-based methods can achieve photo-realistic rendering at over 100 fps in static scenes, but the speed drops below 10 fps in monocular dynamic scenes. Foveated rendering offers a way to accelerate rendering without compromising visual perceptual quality; however, 3DGS and foveated rendering are not compatible. In this paper, we propose Fov-GS, a foveated 3D Gaussian splatting method for rendering dynamic scenes in real time. We introduce a 3D Gaussian forest representation that organizes the scene as a forest. To construct the 3D Gaussian forest, we propose an initialization method based on dynamic-static separation. We then propose a 3D Gaussian forest optimization method based on a deformation field and Gaussian decomposition to optimize the forest and the deformation field. To achieve real-time dynamic scene rendering, we present a 3D Gaussian forest rendering method based on human visual system (HVS) models. Experiments demonstrate that our method not only achieves higher rendering quality in the foveal and salient regions than state-of-the-art methods but also dramatically improves rendering performance, achieving up to 11.33× speedup. A user study further shows that the perceptual quality of our method is highly visually similar to the ground truth.
An embodied body morphology task for investigating self-avatar proportions perception in Virtual Reality
Pub Date: 2025-03-18 | DOI: 10.1109/TVCG.2025.3549123
Loen Boban, Ronan Boulic, Bruno Herbelin
The perception of one's own body is subject to systematic distortions and can be influenced by exposure to visual stimuli showing distorted bodies. In Virtual Reality (VR), echoing such body judgment inaccuracies, avatars whose appearance differs markedly from users' own bodies can still be successfully embodied. The present experimental work investigates, in a healthy population, the perception of one's own body in immersive, embodied VR, as well as how being co-present with virtual humans affects this self-perception. Participants were successively presented with different avatars, corresponding to various upper- and lower-body proportions, and were asked to compare them with their perceived own body morphology. To investigate the influence of co-present virtual humans on this judgment, the task was performed in co-presence with virtual agents of various body appearances. Results show an overall overestimation of one's own leg length and no influence of the co-present agent's appearance. Importantly, the embodiment scores reflect this body morphology judgment inaccuracy, with participants reporting lower levels of embodiment for avatars with very short legs than for avatars with very long legs. Our findings highlight characteristics specific to embodied body-judgment methods, likely resulting from the experience of embodying the avatar rather than from visual appreciation alone.
{"title":"An embodied body morphology task for investigating self-avatar proportions perception in Virtual Reality.","authors":"Loen Boban, Ronan Boulic, Bruno Herbelin","doi":"10.1109/TVCG.2025.3549123","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549123","url":null,"abstract":"<p><p>The perception of one's own body is subject to systematic distortions and can be influenced by exposure to visual stimuli showing distorted bodies. In Virtual Reality (VR), echoing such body judgment inaccuracies, avatars with strong appearance dissimilarities with respect to users' bodies can be successfully embodied. The present experimental work investigates, in the healthy population, the perception of the own body in immersive and embodied VR, as well as the impact of being co-present with virtual humans on such self-perception. Participants were successively presented with different avatars, corresponding to various upper- and lower-body proportions, and were asked to compare them with their perceived own body morphology. To investigate the influence of co-present virtual humans on this judgment, the task was performed in co-presence with virtual agents corresponding to various body appearances. Results show an overall overestimation of one's leg length and no influence of the co-present agent's appearance. Importantly, the embodiment scores reflect such body morphology judgment inaccuracy, with participants reporting lower levels of embodiment for avatars with very short legs than for avatars with very long legs. Our findings suggest specifics of embodied body judgment methods, likely resulting from the experience of embodying the avatar as compared to visual appreciation only.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143660163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SynthLens: Visual Analytics for Facilitating Multi-step Synthetic Route Design
Pub Date: 2025-03-18 | DOI: 10.1109/TVCG.2025.3552134
Qipeng Wang, Rui Sheng, Shaolun Ruan, Xiaofu Jin, Chuhan Shi, Min Zhu
Designing synthetic routes for novel molecules is pivotal in fields such as medicine and chemistry. In this process, researchers explore a set of synthetic reactions to transform starting molecules into intermediates, step by step, until the target molecule is obtained. However, designing synthetic routes presents challenges. First, researchers must choose among numerous possible synthetic reactions at each step, weighing multiple criteria (e.g., yield, experimental duration, and the number of experimental steps) to construct the route. Second, they must consider how a choice at any step may affect the overall synthetic route. To address these challenges, we propose SynthLens, a visual analytics system that facilitates the iterative construction of synthetic routes by exploring multiple possibilities for synthetic reactions at each step. Specifically, SynthLens introduces a tree-form visualization to compare and evaluate all explored routes at various exploration steps, considering both the exploration step and multiple criteria. Our system empowers researchers to consider their construction process comprehensively, guiding them toward promising exploration directions to complete the synthetic route. We validate the usability and effectiveness of SynthLens through a quantitative evaluation and expert interviews, highlighting its role in facilitating the design of synthetic routes. Finally, we discuss insights from SynthLens that can inspire other multi-criteria decision-making scenarios with visual analytics.
Sensitivity to Redirected Walking Considering Gaze, Posture, and Luminance
Pub Date: 2025-03-17 | DOI: 10.1109/TVCG.2025.3549908
Niall L Williams, Logan C Stevens, Aniket Bera, Dinesh Manocha
We study the correlations between redirected walking (RDW) rotation gains and patterns in users' posture and gaze data during locomotion in virtual reality (VR). To do this, we conducted a psychophysical experiment to measure users' sensitivity to RDW rotation gains and collect gaze and posture data during the experiment. Using multilevel modeling, we studied how different factors of the VR system and user affected their physiological signals. In particular, we studied the effects of redirection gain, trial duration, trial number (i.e., time spent in VR), and participant gender on postural sway, gaze velocity (a proxy for gaze stability), and saccade and blink rate. Our results showed that, in general, physiological signals were significantly positively correlated with the strength of redirection gain, the duration of trials, and the trial number. Gaze velocity was negatively correlated with trial duration. Additionally, we measured users' sensitivity to rotation gains in well-lit (photopic) and dimly-lit (mesopic) virtual lighting conditions. Results showed that there were no significant differences in RDW detection thresholds between the photopic and mesopic luminance conditions.
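A redirected-walking rotation gain scales the user's physical head rotation before it is applied to the virtual camera; detection thresholds then describe how far the gain can deviate from 1.0 before users reliably notice. Below is a minimal sketch of applying such a gain, with illustrative values.

```python
# Minimal sketch: applying an RDW rotation gain to head yaw. A gain above 1
# makes the virtual scene rotate faster than the physical head; below 1, slower.
def apply_rotation_gain(real_yaw_delta_deg, gain):
    """Virtual yaw change produced by a physical yaw change under a rotation gain."""
    return real_yaw_delta_deg * gain

# Hypothetical frame update: the user turned 10 degrees physically.
real_delta = 10.0
for gain in (0.8, 1.0, 1.2):
    print(f"gain {gain}: virtual rotation {apply_rotation_gain(real_delta, gain):.1f} deg")
```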
{"title":"Sensitivity to Redirected Walking Considering Gaze, Posture, and Luminance.","authors":"Niall L Williams, Logan C Stevens, Aniket Bera, Dinesh Manocha","doi":"10.1109/TVCG.2025.3549908","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549908","url":null,"abstract":"<p><p>We study the correlations between redirected walking (RDW) rotation gains and patterns in users' posture and gaze data during locomotion in virtual reality (VR). To do this, we conducted a psychophysical experiment to measure users' sensitivity to RDW rotation gains and collect gaze and posture data during the experiment. Using multilevel modeling, we studied how different factors of the VR system and user affected their physiological signals. In particular, we studied the effects of redirection gain, trial duration, trial number (i.e., time spent in VR), and participant gender on postural sway, gaze velocity (a proxy for gaze stability), and saccade and blink rate. Our results showed that, in general, physiological signals were significantly positively correlated with the strength of redirection gain, the duration of trials, and the trial number. Gaze velocity was negatively correlated with trial duration. Additionally, we measured users' sensitivity to rotation gains in well-lit (photopic) and dimly-lit (mesopic) virtual lighting conditions. Results showed that there were no significant differences in RDW detection thresholds between the photopic and mesopic luminance conditions.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143652913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Why is AI not a Panacea for Data Workers? An Interview Study on Human-AI Collaboration in Data Storytelling
Pub Date: 2025-03-17 | DOI: 10.1109/TVCG.2025.3552017
Haotian Li, Yun Wang, Q Vera Liao, Huamin Qu
This paper explores the potential for human-AI collaboration in the context of data storytelling for data workers. Data storytelling communicates insights and knowledge from data analysis. It plays a vital role in data workers' daily jobs since it boosts team collaboration and public communication. However, to make an appealing data story, data workers need to spend tremendous effort on various tasks, including outlining and styling the story. Recently, a growing research trend has been exploring how to assist data storytelling with advanced artificial intelligence (AI). However, existing studies focus mostly on individual tasks in the data storytelling workflow and do not reveal a complete picture of humans' preferences for collaborating with AI. To address this gap, we conducted an interview study with 18 data workers to explore their preferences for AI collaboration in the planning, implementation, and communication stages of their workflow. We propose a framework for the expected roles of AI collaborators, categorize people's expectations about the level of automation for different tasks, and examine the reasons behind them. Our research provides insights and suggestions for the design of future AI-powered data storytelling tools.
{"title":"Why is AI not a Panacea for Data Workers? An Interview Study on Human-AI Collaboration in Data Storytelling.","authors":"Haotian Li, Yun Wang, Q Vera Liao, Huamin Qu","doi":"10.1109/TVCG.2025.3552017","DOIUrl":"10.1109/TVCG.2025.3552017","url":null,"abstract":"<p><p>This paper explores the potential for human-AI collaboration in the context of data storytelling for data workers. Data storytelling communicates insights and knowledge from data analysis. It plays a vital role in data workers' daily jobs since it boosts team collaboration and public communication. However, to make an appealing data story, data workers need to spend tremendous effort on various tasks, including outlining and styling the story. Recently, a growing research trend has been exploring how to assist data storytelling with advanced artificial intelligence (AI). However, existing studies focus more on individual tasks in the workflow of data storytelling and do not reveal a complete picture of humans' preference for collaborating with AI. To address this gap, we conducted an interview study with 18 data workers to explore their preferences for AI collaboration in the planning, implementation, and communication stages of their workflow. We propose a framework for expected AI collaborators' roles, categorize people's expectations for the level of automation for different tasks, and delve into the reasons behind them. Our research provides insights and suggestions for the design of future AI-powered data storytelling tools.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143652861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}