
Latest publications in IEEE transactions on visualization and computer graphics

Enhancing Obstacle Visibility with Augmented Reality Improves Mobility in People with Low Vision.
Pub Date : 2025-03-19 DOI: 10.1109/TVCG.2025.3549542
Lior Maman, Ilan Vol, Sarit F A Szpiro

Avoiding obstacles while navigating is a challenge for people with low vision, who have impaired yet functional vision, which impacts their mobility, safety, and independence. This study investigates the impact of using Augmented Reality (AR) to enhance the visibility of obstacles for people with low vision. Twenty-five participants (14 with low vision and 11 typically sighted) wore smart glasses and completed a real-world obstacle course under two conditions: with obstacles enhanced using 3D AR markings and without any enhancement (i.e., passthrough only - control condition). Our results reveal that AR enhancements significantly decreased walking time, with the low vision group demonstrating a notable reduction in time. Additionally, the path length was significantly shorter with AR enhancements. The decrease in time and path length did not lead to more collisions, suggesting improved obstacle avoidance. Participants also reported a positive user experience with the AR system, highlighting its potential to enhance mobility for low vision users. These results suggest that AR technology can play a critical role in supporting the independence and confidence of low vision individuals in mobility tasks within complex environments. We discuss design guidelines for future AR systems to assist low vision people.

{"title":"Enhancing Obstacle Visibility with Augmented Reality Improves Mobility in People with Low Vision.","authors":"Lior Maman, Ilan Vol, Sarit F A Szpiro","doi":"10.1109/TVCG.2025.3549542","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549542","url":null,"abstract":"<p><p>Avoiding obstacles while navigating is a challenge for people with low vision, who have impaired yet functional vision, which impacts their mobility, safety, and independence. This study investigates the impact of using Augmented Reality (AR) to enhance the visibility of obstacles for people with low vision. Twenty-five participants (14 with low vision and 11 typically sighted) wore smart glasses and completed a real-world obstacle course under two conditions: with obstacles enhanced using 3D AR markings and without any enhancement (i.e., passthrough only - control condition). Our results reveal that AR enhancements significantly decreased walking time, with the low vision group demonstrating a notable reduction in time. Additionally, the path length was significantly shorter with AR enhancements. The decrease in time and path length did not lead to more collisions, suggesting improved obstacle avoidance. Participants also reported a positive user experience with the AR system, highlighting its potential to enhance mobility for low vision users. These results suggest that AR technology can play a critical role in supporting the independence and confidence of low vision individuals in mobility tasks within complex environments. We discuss design guidelines for future AR systems to assist low vision people.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143665778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Comparison of the Effects of Older Age on Homing Performance in Real and Virtual Environments.
Pub Date : 2025-03-18 DOI: 10.1109/TVCG.2025.3549901
Maggie K McCracken, Corey S Shayman, Peter C Fino, Jeanine K Stefanucci, Sarah H Creem-Regehr

Virtual reality (VR) has become a popular tool for studying navigation, providing the experimental control of a laboratory setting but also the potential for immersive and natural experiences that resemble the real world. For VR to be an effective tool to study navigation and be used for training or rehabilitation, it is important to establish whether performance is similar across virtual and real environments. Much of the existing navigation research has focused on young adult performance either in a virtual or a real environment, resulting in an open question regarding the validity of VR for studying age-related effects on spatial navigation. In this paper, young (18-30 years old) and older adults (60 years and older) performed the same navigation task in similar real and virtual environments. They completed a homing task, requiring walking along two legs of a triangle and returning to a home location, under three sensory conditions: visual cues (environmental landmarks present), body-based self-motion cues, and the combination of both cues. Our findings reveal that homing performance in VR demonstrates the same age-related differences as those observed in the real-world task. That said, within-age group differences arise when comparing cue use across environment types. In particular, young adults are less accurate and more variable with self-motion cues than visual cues in VR, while older adults show similar deficits with both cues. However, when both age groups can access multiple sensory cues, navigation performance does not differ between environment types. These results demonstrate that VR effectively captures age-related differences, with navigation performance most closely resembling performance in the real world when navigators can rely on an array of sensory information. Such findings have implications for future research on the aging population, highlighting that VR can be a valuable tool, particularly when multisensory cues are available.
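As a concrete reading of the homing (triangle-completion) task described above, the sketch below computes a trial's positional error: after walking two outbound legs, the correct return vector is the negative sum of the legs, and the error is the distance between that vector and the return the participant actually walked. The vectors and numbers are hypothetical, and this is only a minimal illustration of the task, not the authors' analysis code.

```python
import numpy as np

def homing_error(leg1, leg2, response_vec):
    """Positional error for one triangle-completion (homing) trial.

    leg1, leg2   : 2D displacement vectors for the two outbound legs (metres).
    response_vec : displacement the participant actually walked on the return leg.
    The correct return vector brings the walker back to the start, i.e. -(leg1 + leg2).
    """
    correct_vec = -(np.asarray(leg1, dtype=float) + np.asarray(leg2, dtype=float))
    return float(np.linalg.norm(np.asarray(response_vec, dtype=float) - correct_vec))

# Hypothetical trial: walk 3 m forward, then 2 m to the right; the correct
# return vector is (-2, -3). The participant walks (-1.8, -2.6), ending ~0.45 m off.
print(round(homing_error([0.0, 3.0], [2.0, 0.0], [-1.8, -2.6]), 2))  # 0.45
```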

{"title":"A Comparison of the Effects of Older Age on Homing Performance in Real and Virtual Environments.","authors":"Maggie K McCracken, Corey S Shayman, Peter C Fino, Jeanine K Stefanucci, Sarah H Creem-Regehr","doi":"10.1109/TVCG.2025.3549901","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549901","url":null,"abstract":"<p><p>Virtual reality (VR) has become a popular tool for studying navigation, providing the experimental control of a laboratory setting but also the potential for immersive and natural experiences that resemble the real world. For VR to be an effective tool to study navigation and be used for training or rehabilitation, it is important to establish whether performance is similar across virtual and real environments. Much of the existing navigation research has focused on young adult performance either in a virtual or a real environment, resulting in an open question regarding the validity of VR for studying age-related effects on spatial navigation. In this paper, young (18-30 years old) and older adults (60 years and older) performed the same navigation task in similar real and virtual environments. They completed a homing task, requiring walking along two legs of a triangle and returning to a home location, under three sensory conditions: visual cues (environmental landmarks present), body-based self-motion cues, and the combination of both cues. Our findings reveal that homing performance in VR demonstrates the same age-related differences as those observed in the real-world task. That said, within-age group differences arise when comparing cue use across environment types. In particular, young adults are less accurate and more variable with self-motion cues than visual cues in VR, while older adults show similar deficits with both cues. However, when both age groups can access multiple sensory cues, navigation performance does not differ between environment types. These results demonstrate that VR effectively captures age-related differences, with navigation performance most closely resembling performance in the real world when navigators can rely on an array of sensory information. Such findings have implications for future research on the aging population, highlighting that VR can be a valuable tool, particularly when multisensory cues are available.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143660159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
HYPNOS: Interactive Data Lineage Tracing for Data Transformation Scripts.
Pub Date : 2025-03-18 DOI: 10.1109/TVCG.2025.3552091
Xiwen Cai, Xiaodong Ge, Kai Xiong, Shuainan Ye, Di Weng, Ke Xu, Datong Wei, Jiang Long, Yingcai Wu

In a formal data analysis workflow, data validation is a necessary step that helps data analysts verify the quality of the data and ensure the reliability of the results. Data analysts usually need to validate an unexpected result, such as an abnormal record in a table. To understand how a specific record is derived, they backtrace it through the pipeline step by step: checking the code, exposing the intermediate tables, and finding the data records from which it is derived. However, manually reviewing code and backtracing data requires certain expertise, while inspecting the traced records in multiple tables and interpreting their relationships is tedious. In this work, we propose HYPNOS, a visualization system that supports interactive data lineage tracing for data transformation scripts. HYPNOS uses a lineage module that parses and adapts code to capture both schema-level and instance-level data lineage from data transformation scripts. Then, it provides users with a lineage view for obtaining an overview of the data transformation process and a detail view for tracing instance-level data lineage and inspecting details. HYPNOS reveals different levels of data relationships and helps users with data lineage tracing. We demonstrate the usability and effectiveness of HYPNOS through a use case, interviews with four expert users, and a user study.
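The abstract distinguishes schema-level from instance-level lineage. As a minimal, hypothetical illustration (a toy pandas script rather than HYPNOS's own parser-based lineage module), the sketch below tracks which source rows each aggregated output record derives from by carrying stable row identifiers through a filter-then-aggregate transformation.

```python
import pandas as pd

# A toy transformation script: filter rows, then aggregate by region.
sales = pd.DataFrame(
    {"region": ["N", "N", "S", "S"], "amount": [10, 15, 7, 30]},
    index=["r0", "r1", "r2", "r3"],          # stable row ids used for lineage
)

step1 = sales[sales["amount"] > 8]            # keeps r0, r1, r3
step2 = step1.groupby("region")["amount"].sum()

# Instance-level lineage: which source rows does each output record derive from?
lineage = {region: list(step1[step1["region"] == region].index)
           for region in step2.index}
print(step2.to_dict())   # {'N': 25, 'S': 30}
print(lineage)           # {'N': ['r0', 'r1'], 'S': ['r3']}
```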

{"title":"HYPNOS: Interactive Data Lineage Tracing for Data Transformation Scripts.","authors":"Xiwen Cai, Xiaodong Ge, Kai Xiong, Shuainan Ye, Di Weng, Ke Xu, Datong Wei, Jiang Long, Yingcai Wu","doi":"10.1109/TVCG.2025.3552091","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3552091","url":null,"abstract":"<p><p>In a formal data analysis workflow, data validation is a necessary step that helps data analysts verify the quality of the data and ensure the reliability of the results. Data analysts usually need to validate the result when encountering an unexpected result, such as an abnormal record in a table. In order to understand how a specific record is derived, they would backtrace it in the pipeline step by step via checking the code lines, exposing the intermediate tables, and finding the data records from which it is derived. However, manually reviewing code and backtracing data requires certain expertise, while inspecting the traced records in multiple tables and interpreting their relationships is tedious. In this work, we propose HYPNOS, a visualization system that supports interactive data lineage tracing for data transformation scripts. HYPNOS uses a lineage module for parsing and adapting code to capture both schema-level and instance-level data lineage from data transformation scripts. Then, it provides users with a lineage view for obtaining an overview of the data transformation process and a detail view for tracing instance-level data lineage and inspecting details. HYPNOS reveals different levels of data relationships and helps users with data lineage tracing. We demonstrate the usability and effectiveness of HYPNOS through a use case, interviews of four expert users, and a user study.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143660172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Techniques for Multiple Room Connection in Virtual Reality: Walking Within Small Physical Spaces.
Pub Date : 2025-03-18 DOI: 10.1109/TVCG.2025.3549895
Ana Rita Rebelo, Pedro A Ferreira, Rui Nobrega

In Virtual Reality (VR), navigation in small physical spaces often relies on controller-based techniques, such as teleportation and joystick movement, due to the limited space for natural walking. However, walking-based techniques can enhance immersion by enabling more natural movement. This paper presents three room-connection techniques - portals, corridors, and central hubs - that can be used in virtual environments (VEs) to create "impossible spaces". These spaces use overlapping areas to make the most of the available physical space, making natural walking feasible even in constrained spaces. We conducted a user study with 33 participants to assess the effectiveness of these techniques within a small physical area (2.5 × 2.5 m). The results show that all three techniques are viable for connecting rooms in VR, each offering distinct characteristics. Each method positively impacts presence, cybersickness, spatial awareness, orientation, and overall user experience. Specifically, portals offer a flexible and straightforward solution, corridors provide a seamless and natural transition between spaces, and central hubs simplify navigation. The primary contribution of this work is demonstrating how these room-connection techniques can be applied to dynamically adapt VEs to fit small, uncluttered physical spaces, such as those commonly available to VR users at home. Applications such as virtual museum tours, training simulations, and emergency preparedness exercises can greatly benefit from these methods, providing users with a more natural and engaging experience, even within the limited space typical in home settings.
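To make the idea of "impossible spaces" concrete: two virtual rooms can each fit inside the same 2.5 × 2.5 m physical play area used in the study while overlapping each other in physical coordinates, so their combined virtual floor area exceeds the physical one. The sketch below checks exactly that; the room dimensions and layout are hypothetical, and the geometry is simplified to axis-aligned rectangles.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    x: float
    y: float
    w: float
    h: float

    def area(self) -> float:
        return self.w * self.h

    def fits_in(self, other: "Rect") -> bool:
        return (self.x >= other.x and self.y >= other.y and
                self.x + self.w <= other.x + other.w and
                self.y + self.h <= other.y + other.h)

    def overlaps(self, other: "Rect") -> bool:
        return not (self.x + self.w <= other.x or other.x + other.w <= self.x or
                    self.y + self.h <= other.y or other.y + other.h <= self.y)

physical = Rect(0.0, 0.0, 2.5, 2.5)        # tracked play area from the study
room_a = Rect(0.0, 0.0, 2.5, 2.0)          # hypothetical virtual room A
room_b = Rect(0.0, 0.5, 2.5, 2.0)          # hypothetical virtual room B, reusing most of the same floor

assert room_a.fits_in(physical) and room_b.fits_in(physical)
print("rooms overlap in physical space:", room_a.overlaps(room_b))           # True
print("combined virtual floor area:", room_a.area() + room_b.area(), "m^2")  # 10.0 > 6.25
```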

{"title":"Techniques for Multiple Room Connection in Virtual Reality: Walking Within Small Physical Spaces.","authors":"Ana Rita Rebelo, Pedro A Ferreira, Rui Nobrega","doi":"10.1109/TVCG.2025.3549895","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549895","url":null,"abstract":"<p><p>In Virtual Reality (VR), navigating small physical spaces often relies on handheld controllers, such as teleportation and joystick movements, due to the limited space for natural walking. However, walking-based techniques can enhance immersion by enabling more natural movement. This paper presents three room-connection techniques - portals, corridors, and central hubs - that can be used in virtual environments (VEs) to create \"impossible spaces\". These spaces use overlapping areas to maximize available physical space, promising for walking even in constrained spaces. We conducted a user study with 33 participants to assess the effectiveness of these techniques within a small physical area (2.5 × 2.5 m). The results show that all three techniques are viable for connecting rooms in VR, each offering distinct characteristics. Each method positively impacts presence, cybersickness, spatial awareness, orientation, and overall user experience. Specifically, portals offer a flexible and straightforward solution, corridors provide a seamless and natural transition between spaces, and central hubs simplify navigation. The primary contribution of this work is demonstrating how these room-connection techniques can be applied to dynamically adapt VEs to fit small, uncluttered physical spaces, such as those commonly available to VR users at home. Applications such as virtual museum tours, training simulations, and emergency preparedness exercises can greatly benefit from these methods, providing users with a more natural and engaging experience, even within the limited space typical in home settings.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143660178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
360° 3D Photos from a Single 360° Input Image.
Pub Date : 2025-03-18 DOI: 10.1109/TVCG.2025.3549538
Manuel Rey-Area, Christian Richardt

360° images are a popular medium for bringing photography into virtual reality. While users can look in any direction by rotating their heads, 360° images ultimately look flat. That is because they lack depth information and thus cannot create motion parallax when the head translates. To achieve a fully immersive VR experience from a single 360° image, we introduce a novel method to upgrade 360° images to free-viewpoint renderings with 6 degrees of freedom. Alternative approaches reconstruct textured 3D geometry, which is fast to render but suffers from visible reconstruction artifacts, or use neural radiance fields that produce high-quality novel views but render too slowly for VR applications. Our 360° 3D photos build on 3D Gaussian splatting as the underlying scene representation to simultaneously achieve high visual quality and real-time rendering speed. To fill in plausible content in previously unseen regions, we introduce a novel combination of latent diffusion inpainting and monocular depth estimation with Poisson-based blending. Our results demonstrate state-of-the-art visual and depth quality at rendering rates of 105 FPS per megapixel on a commodity GPU.
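One building block the abstract mentions is monocular depth estimation on the input panorama. As a hedged sketch of how an equirectangular 360° image with per-pixel depth can be lifted to 3D points (which could then seed a point- or Gaussian-based scene representation), the code below converts each pixel's longitude/latitude viewing direction into a 3D position. The coordinate conventions are assumptions, and this is not the paper's actual pipeline.

```python
import numpy as np

def panorama_to_points(depth):
    """Lift an equirectangular depth map (H x W, metres) to one 3D point per pixel.

    Each pixel maps to a viewing direction given by its longitude/latitude;
    scaling that unit direction by the estimated depth yields a 3D point.
    """
    h, w = depth.shape
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi        # [-pi, pi)
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi        # (+pi/2 .. -pi/2)
    lon, lat = np.meshgrid(lon, lat)                            # both shaped (h, w)
    dirs = np.stack([np.cos(lat) * np.sin(lon),                 # x (right)
                     np.sin(lat),                               # y (up)
                     np.cos(lat) * np.cos(lon)], axis=-1)       # z (forward)
    return dirs * depth[..., None]

points = panorama_to_points(np.full((512, 1024), 2.0))          # constant 2 m depth
print(points.shape)                                             # (512, 1024, 3)
```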

{"title":"360° 3D Photos from a Single 360° Input Image.","authors":"Manuel Rey-Area, Christian Richardt","doi":"10.1109/TVCG.2025.3549538","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549538","url":null,"abstract":"<p><p>360° images are a popular medium for bringing photography into virtual reality. While users can look in any direction by rotating their heads, 360° images ultimately look flat. That is because they lack depth information and thus cannot create motion parallax when translating the head. To achieve a fully immersive VR experience from a single 360° image, we introduce a novel method to upgrade 360° images to free-viewpoint renderings with 6 degrees of freedom. Alternative approaches reconstruct textured 3D geometry, which is fast to render but suffers from visible reconstruction artifacts, or use neural radiance fields that produce high-quality novel views but too slowly for VR applications. Our 360° 3D photos build on 3D Gaussian splatting as the underlying scene representation to simultaneously achieve high visual quality and real-time rendering speed. To fill plausible content in previously unseen regions, we introduce a novel combination of latent diffusion inpainting and monocular depth estimation with Poisson-based blending. Our results demonstrate state-of-the-art visual and depth quality at rendering rates of 105 FPS per megapixel on a commodity GPU.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143660156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fov-GS: Foveated 3D Gaussian Splatting for Dynamic Scenes.
Pub Date : 2025-03-18 DOI: 10.1109/TVCG.2025.3549576
Runze Fan, Jian Wu, Xuehuai Shi, Lizhi Zhao, Qixiang Ma, Lili Wang

Rendering quality and performance greatly affect the user's immersion in VR experiences. 3D Gaussian Splatting-based methods can achieve photo-realistic rendering at over 100 fps in static scenes, but the speed drops below 10 fps in monocular dynamic scenes. Foveated rendering offers a possible way to accelerate rendering without compromising visual perceptual quality. However, 3DGS and foveated rendering are not compatible. In this paper, we propose Fov-GS, a foveated 3D Gaussian splatting method for rendering dynamic scenes in real time. We introduce a 3D Gaussian forest representation that organizes the scene as a forest of Gaussians. To construct the 3D Gaussian forest, we propose an initialization method based on dynamic-static separation. Subsequently, we propose a 3D Gaussian forest optimization method based on a deformation field and Gaussian decomposition to optimize the forest and the deformation field. To achieve real-time dynamic scene rendering, we present a 3D Gaussian forest rendering method based on human visual system (HVS) models. Experiments demonstrate that our method not only achieves higher rendering quality in the foveal and salient regions than the SOTA methods but also dramatically improves rendering performance, achieving up to 11.33× speedup. We also conducted a user study, and the results show that the perceptual quality of our method has high visual similarity with the ground truth.
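Foveated rendering generally allocates quality by visual eccentricity: content near the gaze direction is rendered at full detail, while the periphery can be rendered coarsely. The sketch below shows only that generic idea; the thresholds are hypothetical and it does not reproduce the paper's HVS-based model or its Gaussian forest.

```python
import numpy as np

def eccentricity_deg(gaze_dir, point_dir):
    """Angle in degrees between the gaze direction and the direction to a point."""
    g = np.asarray(gaze_dir, dtype=float)
    p = np.asarray(point_dir, dtype=float)
    cos_angle = np.clip(np.dot(g, p) / (np.linalg.norm(g) * np.linalg.norm(p)), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_angle)))

def quality_tier(ecc_deg, foveal=5.0, mid=20.0):
    """Map eccentricity to a rendering tier: 0 = full detail, 1 = reduced, 2 = coarse."""
    if ecc_deg <= foveal:
        return 0
    return 1 if ecc_deg <= mid else 2

ecc = eccentricity_deg([0.0, 0.0, 1.0], [0.3, 0.0, 1.0])    # object slightly off-gaze
print(round(ecc, 1), "degrees -> tier", quality_tier(ecc))  # ~16.7 degrees -> tier 1
```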

{"title":"Fov-GS: Foveated 3D Gaussian Splatting for Dynamic Scenes.","authors":"Runze Fan, Jian Wu, Xuehuai Shi, Lizhi Zhao, Qixiang Ma, Lili Wang","doi":"10.1109/TVCG.2025.3549576","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549576","url":null,"abstract":"<p><p>Rendering quality and performance greatly affect the user's immersion in VR experiences. 3D Gaussian Splatting-based methods can achieve photo-realistic rendering with speeds of over 100 fps in static scenes, but the speed drops below 10 fps in monocular dynamic scenes. Foveated rendering provides a possible solution to accelerate rendering without compromising visual perceptual quality. However, 3DGS and foveated rendering are not compatible. In this paper, we propose Fov-GS, a foveated 3D Gaussian splatting method for rendering dynamic scenes in real time. We introduce a 3D Gaussian forest representation that represents the scene as a forest. To construct the 3D Gaussian forest, we propose a 3D Gaussian forest initialization method based on dynamic-static separation. Subsequently, we propose a 3D Gaussian forest optimization method based on deformation field and Gaussian decomposition to optimize the forest and deformation field. To achieve real-time dynamic scene rendering, we present a 3D Gaussian forest rendering method based on HVS models. Experiments demonstrate that our method not only achieves higher rendering quality in the foveal and salient regions compared to the SOTA methods but also dramatically improves rendering performance, achieving up to 11.33× speedup. We also conducted a user study, and the results prove that the perceptual quality of our method has a high visual similarity with the ground truth.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143660168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An embodied body morphology task for investigating self-avatar proportions perception in Virtual Reality.
Pub Date : 2025-03-18 DOI: 10.1109/TVCG.2025.3549123
Loen Boban, Ronan Boulic, Bruno Herbelin

The perception of one's own body is subject to systematic distortions and can be influenced by exposure to visual stimuli showing distorted bodies. In Virtual Reality (VR), echoing such body judgment inaccuracies, avatars with strong appearance dissimilarities with respect to users' bodies can be successfully embodied. The present experimental work investigates, in the healthy population, the perception of one's own body in immersive and embodied VR, as well as the impact of being co-present with virtual humans on such self-perception. Participants were successively presented with different avatars, corresponding to various upper- and lower-body proportions, and were asked to compare them with their perceived own body morphology. To investigate the influence of co-present virtual humans on this judgment, the task was performed in co-presence with virtual agents of various body appearances. Results show an overall overestimation of one's leg length and no influence of the co-present agent's appearance. Importantly, the embodiment scores reflect this body morphology judgment inaccuracy, with participants reporting lower levels of embodiment for avatars with very short legs than for avatars with very long legs. Our findings point to specific properties of embodied body judgment methods, likely resulting from the experience of embodying the avatar rather than from visual appreciation alone.

{"title":"An embodied body morphology task for investigating self-avatar proportions perception in Virtual Reality.","authors":"Loen Boban, Ronan Boulic, Bruno Herbelin","doi":"10.1109/TVCG.2025.3549123","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549123","url":null,"abstract":"<p><p>The perception of one's own body is subject to systematic distortions and can be influenced by exposure to visual stimuli showing distorted bodies. In Virtual Reality (VR), echoing such body judgment inaccuracies, avatars with strong appearance dissimilarities with respect to users' bodies can be successfully embodied. The present experimental work investigates, in the healthy population, the perception of the own body in immersive and embodied VR, as well as the impact of being co-present with virtual humans on such self-perception. Participants were successively presented with different avatars, corresponding to various upper- and lower-body proportions, and were asked to compare them with their perceived own body morphology. To investigate the influence of co-present virtual humans on this judgment, the task was performed in co-presence with virtual agents corresponding to various body appearances. Results show an overall overestimation of one's leg length and no influence of the co-present agent's appearance. Importantly, the embodiment scores reflect such body morphology judgment inaccuracy, with participants reporting lower levels of embodiment for avatars with very short legs than for avatars with very long legs. Our findings suggest specifics of embodied body judgment methods, likely resulting from the experience of embodying the avatar as compared to visual appreciation only.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143660163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SynthLens: Visual Analytics for Facilitating Multi-step Synthetic Route Design.
Pub Date : 2025-03-18 DOI: 10.1109/TVCG.2025.3552134
Qipeng Wang, Rui Sheng, Shaolun Ruan, Xiaofu Jin, Chuhan Shi, Min Zhu

Designing synthetic routes for novel molecules is pivotal in fields such as medicine and chemistry. In this process, researchers need to explore a set of synthetic reactions to transform starting molecules into intermediates step by step until the target novel molecule is obtained. However, designing synthetic routes presents challenges for researchers. First, researchers need to choose among numerous possible synthetic reactions at each step, considering various criteria (e.g., yield, experimental duration, and the count of experimental steps) to construct the synthetic route. Second, they must consider the potential impact of each choice on the overall synthetic route. To address these challenges, we propose SynthLens, a visual analytics system to facilitate the iterative construction of synthetic routes by exploring multiple possibilities for synthetic reactions at each construction step. Specifically, we introduce a tree-form visualization in SynthLens to compare and evaluate all the explored routes at various exploration steps, considering both the exploration step and multiple criteria. Our system empowers researchers to consider their construction process comprehensively, guiding them toward promising exploration directions to complete the synthetic route. We validated the usability and effectiveness of SynthLens through a quantitative evaluation and expert interviews, highlighting its role in facilitating the design process of synthetic routes. Finally, we discuss how the insights from SynthLens can inspire other multi-criteria decision-making scenarios with visual analytics.
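At each synthesis step the abstract describes choosing among candidate reactions under several criteria (e.g., yield, experimental duration, step count). The sketch below ranks hypothetical candidates with an ad hoc weighted score, purely to illustrate that multi-criteria trade-off; the candidates, weights, and scoring formula are invented for illustration and are not SynthLens's method, which visualizes the criteria rather than collapsing them into a single score.

```python
# Hypothetical candidates for one synthesis step, scored on several criteria.
candidates = [
    {"name": "reaction A", "yield": 0.82, "duration_h": 6.0,  "steps": 2},
    {"name": "reaction B", "yield": 0.65, "duration_h": 1.5,  "steps": 1},
    {"name": "reaction C", "yield": 0.90, "duration_h": 12.0, "steps": 4},
]
# Made-up weights: higher yield is better; longer duration and more steps are worse.
weights = {"yield": 0.5, "duration_h": 0.3, "steps": 0.2}
max_duration, max_steps = 12.0, 4   # normalisation constants for this candidate set

def score(c):
    return (weights["yield"] * c["yield"]
            - weights["duration_h"] * c["duration_h"] / max_duration
            - weights["steps"] * c["steps"] / max_steps)

for c in sorted(candidates, key=score, reverse=True):
    print(f'{c["name"]}: {score(c):+.3f}')
# Ranks reaction B first (fast, few steps), then A, then C (high yield but slow).
```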

{"title":"SynthLens: Visual Analytics for Facilitating Multi-step Synthetic Route Design.","authors":"Qipeng Wang, Rui Sheng, Shaolun Ruan, Xiaofu Jin, Chuhan Shi, Min Zhu","doi":"10.1109/TVCG.2025.3552134","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3552134","url":null,"abstract":"<p><p>Designing synthetic routes for novel molecules is pivotal in various fields like medicine and chemistry. In this process, researchers need to explore a set of synthetic reactions to transform starting molecules into intermediates step by step until the target novel molecule is obtained. However, designing synthetic routes presents challenges for researchers. First, researchers need to make decisions among numerous possible synthetic reactions at each step, considering various criteria (e.g., yield, experimental duration, and the count of experimental steps) to construct the synthetic route. Second, they must consider the potential impact of one choice at each step on the overall synthetic route. To address these challenges, we proposed SynthLens, a visual analytics system to facilitate the iterative construction of synthetic routes by exploring multiple possibilities for synthetic reactions at each step of construction. Specifically, we have introduced a tree-form visualization in SynthLens to compare and evaluate all the explored routes at various exploration steps, considering both the exploration step and multiple criteria. Our system empowers researchers to consider their construction process comprehensively, guiding them toward promising exploration directions to complete the synthetic route. We validated the usability and effectiveness of SynthLens through a quantitative evaluation and expert interviews, highlighting its role in facilitating the design process of synthetic routes. Finally, we discussed the insights of SynthLens to inspire other multi-criteria decision-making scenarios with visual analytics.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143660175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Sensitivity to Redirected Walking Considering Gaze, Posture, and Luminance.
Pub Date : 2025-03-17 DOI: 10.1109/TVCG.2025.3549908
Niall L Williams, Logan C Stevens, Aniket Bera, Dinesh Manocha

We study the correlations between redirected walking (RDW) rotation gains and patterns in users' posture and gaze data during locomotion in virtual reality (VR). To do this, we conducted a psychophysical experiment to measure users' sensitivity to RDW rotation gains and collect gaze and posture data during the experiment. Using multilevel modeling, we studied how different factors of the VR system and user affected their physiological signals. In particular, we studied the effects of redirection gain, trial duration, trial number (i.e., time spent in VR), and participant gender on postural sway, gaze velocity (a proxy for gaze stability), and saccade and blink rate. Our results showed that, in general, physiological signals were significantly positively correlated with the strength of redirection gain, the duration of trials, and the trial number. Gaze velocity was negatively correlated with trial duration. Additionally, we measured users' sensitivity to rotation gains in well-lit (photopic) and dimly-lit (mesopic) virtual lighting conditions. Results showed that there were no significant differences in RDW detection thresholds between the photopic and mesopic luminance conditions.
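The detection thresholds mentioned above are typically estimated by fitting a psychometric function to responses collected at different rotation gains; the gains where the fitted curve crosses 25% and 75% are commonly taken as the lower and upper thresholds. The sketch below fits a logistic function to made-up response proportions; the data, parameterization, and threshold convention are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(gain, mu, sigma):
    """P(respond 'virtual rotation was larger than physical') at a given gain."""
    return 1.0 / (1.0 + np.exp(-(gain - mu) / sigma))

# Made-up response proportions for a range of tested rotation gains.
gains    = np.array([0.6, 0.8, 0.9, 1.0, 1.1, 1.2, 1.4])
p_larger = np.array([0.05, 0.15, 0.35, 0.50, 0.70, 0.85, 0.95])

(mu, sigma), _ = curve_fit(psychometric, gains, p_larger, p0=[1.0, 0.1])

# Detection thresholds are often read off where the fitted curve crosses 25% and 75%.
lower = mu + sigma * np.log(0.25 / 0.75)
upper = mu + sigma * np.log(0.75 / 0.25)
print(f"PSE ~ {mu:.2f}, detection thresholds ~ [{lower:.2f}, {upper:.2f}]")
```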

Citations: 0
Why is AI not a Panacea for Data Workers? An Interview Study on Human-AI Collaboration in Data Storytelling.
Pub Date : 2025-03-17 DOI: 10.1109/TVCG.2025.3552017
Haotian Li, Yun Wang, Q Vera Liao, Huamin Qu

This paper explores the potential for human-AI collaboration in the context of data storytelling for data workers. Data storytelling communicates insights and knowledge from data analysis. It plays a vital role in data workers' daily jobs since it boosts team collaboration and public communication. However, to make an appealing data story, data workers need to spend tremendous effort on various tasks, including outlining and styling the story. Recently, a growing research trend has been exploring how to assist data storytelling with advanced artificial intelligence (AI). However, existing studies focus more on individual tasks in the workflow of data storytelling and do not reveal a complete picture of humans' preference for collaborating with AI. To address this gap, we conducted an interview study with 18 data workers to explore their preferences for AI collaboration in the planning, implementation, and communication stages of their workflow. We propose a framework for expected AI collaborators' roles, categorize people's expectations for the level of automation for different tasks, and delve into the reasons behind them. Our research provides insights and suggestions for the design of future AI-powered data storytelling tools.

Citations: 0