Pub Date: 2025-12-04 | DOI: 10.1016/j.cag.2025.104511
Kirsten W.H. Maas , Thiam-Wai Chua , Danny Ruijters , Nicola Pezzotti , Anna Vilanova
Neural Radiance Field (NeRF) is a promising deep learning approach for three-dimensional (3D) scene reconstruction and view synthesis, with various applications in fields like robotics and medical imaging. However, similar to other deep learning models, understanding NeRF model inaccuracies and their causes is challenging. The 3D nature of NeRFs further adds challenges such as identifying complex geometrical features and analyzing 2D views that suffer from object occlusions. Existing methods for uncertainty quantification (UQ) in NeRFs address the lack of NeRF model understanding by expressing uncertainty in model predictions, exposing limitations in model design or training data. However, these UQ techniques typically rely on quantitative evaluation that does not facilitate human interpretation. We introduce NeRVis, a visual analytics system that supports model users in exploring and analyzing uncertainty in NeRF scenes. NeRVis combines spatial uncertainty analysis with per-view uncertainty summaries, fostering analysis of the uncertainty in Lambertian NeRF scenes. As a proof-of-concept, we illustrate our approach using two UQ methods. We demonstrate the effectiveness of NeRVis with two different use scenarios, tackling key challenges in the NeRF UQ literature.
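The per-view uncertainty summaries described above can be illustrated with a simple ensemble-variance baseline, a common NeRF UQ proxy. This is a minimal sketch under assumptions: the function names and the choice of variance-based UQ are illustrative, not the paper's actual methods.

```python
import numpy as np

def ensemble_uncertainty(renders):
    """Per-pixel predictive variance across renders of the same view
    produced by an ensemble of independently trained models."""
    return np.stack(renders).var(axis=0)          # shape (H, W)

def per_view_summary(unc_map):
    """Collapse a per-pixel uncertainty map into scalar summaries
    suitable for ranking and comparing views."""
    return {"mean": float(unc_map.mean()), "max": float(unc_map.max())}

# Toy 2x2 view: three ensemble members disagree only at pixel (0, 0).
renders = [np.zeros((2, 2)), np.zeros((2, 2)), np.zeros((2, 2))]
renders[2][0, 0] = 3.0
summary = per_view_summary(ensemble_uncertainty(renders))
```

Such per-view scalars are what allow a visual analytics tool to rank views by uncertainty before drilling down into the spatial map.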
Title: NeRVis: Neural Radiance Field Model-Uncertainty Visualization
Journal: Computers & Graphics, Volume 134, Article 104511
Pub Date: 2025-12-03 | DOI: 10.1016/j.cag.2025.104490
Julián E. Guzmán , David Mould , Eric Paquette
We propose an approach to synthesize textures for the animated free surfaces of fluids. Because fluids deform and experience topological changes, it is challenging to maintain fidelity to a reference texture exemplar while avoiding visual artifacts such as distortion and discontinuities. We introduce an adaptive multiresolution synthesis approach that balances fidelity to the exemplar and consistency with the fluid motion. Given a 2D exemplar texture, an orientation field from the first frame, an animated velocity field, and polygonal meshes corresponding to the animated liquid, our approach advects the texture and the orientation field across frames, yielding a coherent sequence of textures conforming to the per-frame geometry. Our adaptiveness relies on local 2D and 3D distortion measures, which guide multiresolution decisions to resynthesize or preserve the advected content. We prevent popping artifacts by enforcing gradual changes in color over time. Our approach works well both on slow-moving liquids and on turbulent ones with splashes. In addition, we demonstrate good performance on a variety of stationary texture exemplars.
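A local 2D distortion measure of the kind that drives the resynthesize-or-preserve decision can be sketched as follows. The edge-length criterion and the threshold value here are illustrative stand-ins, not the paper's actual measures.

```python
import numpy as np

def edge_distortion(rest, advected):
    """Maximum relative change in edge lengths of a surface triangle
    after advection; a crude local 2D distortion measure."""
    def lengths(tri):
        return np.linalg.norm(np.roll(tri, -1, axis=0) - tri, axis=1)
    r, a = lengths(rest), lengths(advected)
    return float(np.max(np.abs(a - r) / r))

def needs_resynthesis(rest, advected, threshold=0.5):
    """Resynthesize the advected texture patch when distortion exceeds
    a tolerance; otherwise preserve the advected content."""
    return edge_distortion(rest, advected) > threshold

rest = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
stretched = rest * np.array([2.0, 1.0])   # stretched 2x along x
```

A patch on a quiet part of the surface keeps its advected texels, while a patch stretched by a splash is flagged for resynthesis against the exemplar.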
Title: Adaptive multiresolution exemplar-based texture synthesis on animated fluids
Journal: Computers & Graphics, Volume 134, Article 104490
Pub Date: 2025-12-02 | DOI: 10.1016/j.cag.2025.104507
George Westergaard , Mark Ellis , Jacob Barker , Sofia Garces Palacios , Alexis Desir , Ganesh Sankaranarayanan , Suvranu De , Doga Demirel
In this work, we present a real-time virtual reality-based open surgery simulator that enables realistic soft-tissue suturing with bimanual haptic feedback. Our system uses eXtended Position-Based Dynamics (XPBD) for soft body and suture thread simulation, allowing stable real-time physics for complex interactions like continuous sutures and knot tying. In tests with all four common suturing techniques (purse-string, Connell, stay, and Lembert), the simulator maintained high frame rates (50–80 FPS) with up to 4155 simulated particles, demonstrating consistent real-time performance. As part of our work, we conducted a user study using our suturing simulator, where 24 surgical trainees and experts used the Virtual Colorectal Surgery Trainer – Rectal Prolapse simulator. The user study showed that 71% of participants (n=17) rated the anatomical realism as moderate to very high. Half (n=12) found the force feedback realistic, and 54% (n=13) of participants found the force feedback useful, indicating effective immersion while also highlighting the need for improved haptic fidelity. Overall, the simulation provides a low-cost, high-fidelity training platform for open surgical suturing, addressing a critical gap in current virtual reality educational tools.
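The XPBD core behind such simulations is a compliant constraint projection. A minimal sketch of one solver iteration for a single distance constraint, e.g. one suture-thread segment, is shown below; the function name and parameter layout are illustrative, not the authors' code.

```python
import numpy as np

def xpbd_distance_step(p1, p2, w1, w2, rest_len, lam, compliance, dt):
    """One XPBD iteration for a distance constraint between two
    particles with inverse masses w1, w2. Returns updated positions
    and the accumulated Lagrange multiplier."""
    d = p1 - p2
    dist = np.linalg.norm(d)
    n = d / dist
    C = dist - rest_len                  # constraint violation
    alpha = compliance / dt**2           # time-step-scaled compliance
    dlam = (-C - alpha * lam) / (w1 + w2 + alpha)
    return p1 + w1 * dlam * n, p2 - w2 * dlam * n, lam + dlam

# Rigid (compliance 0) constraint with rest length 1 between particles
# 2 units apart: both particles move symmetrically to satisfy it.
p1, p2 = np.array([0.0, 0.0]), np.array([2.0, 0.0])
q1, q2, lam = xpbd_distance_step(p1, p2, 1.0, 1.0, 1.0, 0.0, 0.0, 1 / 60)
```

A suture thread is then a chain of such constraints solved iteratively each frame, with nonzero compliance giving the thread its stretchiness.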
Title: Real-time haptic-based soft body suturing in virtual open surgery simulations
Journal: Computers & Graphics, Volume 134, Article 104507
Pub Date: 2025-11-29 | DOI: 10.1016/j.cag.2025.104506
Shiksha, Rohit Lohani, Krishnendra Shekhawat, Arsh Singh, Karan Agrawal
In architectural design, floor planning plays a crucial role in shaping the functionality and efficiency of a building, requiring designers to strike a balance between diverse and often conflicting objectives. It is a multi-constraint problem, and over the past few years, many tools have been proposed to generate floor plans automatically, most of which are based on AI/ML techniques.
In this paper, we propose software based on graph algorithms for the automated generation of housing layouts (floor plans) with rectangular boundaries while addressing adjacency and non-adjacency constraints, room positions (interior or exterior), and circulations. Once the user provides the input constraints (many of which are built-in, e.g., the dining room is on the exterior and adjacent to the kitchen, and the kitchen is not adjacent to the toilets), the software generates a range of graphs that represent these connections and uses them to generate all possible dimensioned housing layout options for users to choose from.
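The adjacency and non-adjacency constraints can be modeled as edge sets over a room graph. The following is a minimal sketch of the feasibility check only; the room names and layouts are illustrative, and the actual software also handles room positions, circulation, and dimensioning.

```python
def satisfies_constraints(adjacency, required, forbidden):
    """Check whether a candidate layout's room-adjacency relation
    meets the required and forbidden adjacency constraints.
    Edges are unordered pairs of room names."""
    adj = {frozenset(e) for e in adjacency}
    if any(frozenset(e) not in adj for e in required):
        return False                     # a required adjacency is missing
    return not any(frozenset(e) in adj for e in forbidden)

# Built-in style constraints from the text: dining adjacent to kitchen,
# kitchen not adjacent to toilet.
required = [("dining", "kitchen")]
forbidden = [("kitchen", "toilet")]

layout_a = [("dining", "kitchen"), ("dining", "living"), ("living", "toilet")]
layout_b = [("dining", "kitchen"), ("kitchen", "toilet")]
```

Candidate graphs that pass this filter are the ones worth carrying forward into the dimensioning stage.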
Title: Automated generation of housing layouts using graph-rules
Journal: Computers & Graphics, Volume 134, Article 104506
Pub Date: 2025-11-25 | DOI: 10.1016/j.cag.2025.104489
Germain Garcia-Zanabria , Daniel A. Gutierrez-Pachas , Jorge Poco , Erick Gomez-Nieto
Student dropout is a major concern for universities, leading them to invest heavily in strategies to lower attrition rates. Analytical tools are crucial for predicting dropout risks and informing policies on academic and social support. However, many of these tools depend solely on automated predictions, ignoring valuable insights from professors, mentors, and specialists. These experts can help identify the root causes of dropout and develop effective interventions. This paper introduces CSDA-Vis, a visualization system designed to analyze the influence of individual, institutional, and socioeconomic factors on student dropout rates. CSDA-Vis facilitates the identification of actionable strategies to mitigate dropout by integrating counterfactual and survival analysis methods. Unlike traditional approaches, our tool enables decision-makers to incorporate their expertise into the evaluation of different dropout scenarios. Developed in collaboration with domain experts, CSDA-Vis builds upon previous visualization tools and was validated through a case study using real datasets from a Latin American university. Additionally, we conducted an expert evaluation with professionals specializing in dropout analysis, further demonstrating the tool’s practical value and effectiveness.
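The survival-analysis side of such "when" questions rests on estimators like Kaplan-Meier. The sketch below is a minimal illustration of that building block only; the paper's actual models, features, and counterfactual machinery are richer.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survival function S(t) at each
    distinct event time. times: observation times (e.g., semesters
    enrolled); events: 1 = dropout observed, 0 = censored."""
    data = sorted(zip(times, events))
    n_risk, s, curve = len(data), 1.0, []
    i = 0
    while i < len(data):
        t, deaths, at_t = data[i][0], 0, 0
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            at_t += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / n_risk   # product-limit update
            curve.append((t, s))
        n_risk -= at_t
    return curve

# 4 students: dropout at t=1, censored at t=2, dropouts at t=2 and t=3.
curve = kaplan_meier([1, 2, 2, 3], [1, 0, 1, 1])
```

Plotting such curves for factual versus counterfactual feature settings is one way a what-if-and-when analysis can be surfaced visually.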
Title: CSDA-Vis: A (What-If-and-When) visual system for early dropout detection using counterfactual and survival analysis interactions
Journal: Computers & Graphics, Volume 134, Article 104489
Pub Date: 2025-11-25 | DOI: 10.1016/j.cag.2025.104501
Sérgio Oliveira , Bernardo Marques , Paula Amorim , Mariana Leite , Carlos Ferreira , Beatriz Sousa Santos
Stroke is one of the world’s leading causes of death and disability and can have profound consequences that require effective rehabilitation to support survivors’ recovery and improve their quality of life. Despite their value, traditional rehabilitation methods tend to be repetitive and lack variety, which challenges survivors’ motivation. Additionally, these rehabilitation sessions are usually solitary, leaving survivors to practice exercises alone, which can lead to physical setbacks and social isolation. Such isolation can further reduce enthusiasm for therapy, delay recovery, and affect their mental well-being. This work is an extended version of a paper presented at the International Workshop on eXtended Reality for Industrial and Occupational Supports (XRIOS), conducted at IEEE VR 2025. It proposes a collaborative Virtual Reality (VR) framework that aims to increase survivors’ motivation during rehabilitation. Through its collaborative nature, it can involve multiple users in the same virtual space, from stroke survivors to healthcare professionals, giving them a common goal that they must join forces to accomplish. Various serious games were designed through a series of activities focused on specific gestures related to the rehabilitation of the upper limbs, thus improving physical recovery and mental well-being. The design and development were guided by a human-centered methodology that included survivors and professionals, resulting in a user study with a total of 53 participants, 18 from a rehabilitation center. The results indicate that this collaborative VR tool effectively boosts motivation, social interaction, and engagement while maintaining an accessible and manageable level of physical and mental demand, underscoring its suitability for stroke recovery.
Title: Recovering through play: Studying the effects of collaborative Virtual Reality serious games for stroke rehabilitation through a human-centered design methodology
Journal: Computers & Graphics, Volume 134, Article 104501
Pub Date: 2025-11-24 | DOI: 10.1016/j.cag.2025.104500
Zeinab BagheriFard , Miruna Maria Vasiliu , Emma Jane Pretty , Luis Quintero , Benjamin Edvinsson , Mario Romero , Renan Guarese
Immersive technologies offer advantages for the visualization of and interaction with complex setups within manufacturing maintenance processes. The present work catalogs different applications of AR/VR in manufacturing maintenance practices as an extended version of a workshop paper presented at the International Workshop on eXtended Reality for Industrial and Occupational Supports (XRIOS). Through a scoping review in three computing and engineering digital libraries, we outline the key attributes of immersive solutions (N = 115) for industrial maintenance, categorizing functional prototypes with ten parameters related to interaction, visualization, and research methods. Moreover, we conducted a workshop with three manufacturing experts discussing the future of maintenance interfaces. By bringing forth their recommendations and insights, we targeted a key training challenge in maintenance. We designed and implemented a situated visualization prototype for a safety-critical procedure with real-time, in-depth, spatially relevant instructions. We compared the effects of 2D labels and 3D ghosts in a VR-simulated AR environment. In a preliminary between-subjects evaluation study (N = 24), we measured usability, workload, simulator sickness, completion time, and delayed recall. Although we did not find statistically significant differences between conditions, 3D ghosts showed slightly lower perceived workload and discomfort levels, along with shorter completion times. On the other hand, 2D labels produced higher usability. Overall, we contribute by mapping out the state of the art and identifying knowledge gaps within immersive maintenance, and by presenting a design and preliminary user study that adheres to our recommendations.
Title: Situated visualization towards manufacturing maintenance training: Scoping review, design and user study
Journal: Computers & Graphics, Volume 134, Article 104500
Pub Date: 2025-11-24 | DOI: 10.1016/j.cag.2025.104493
Mengyi Wang, Beiqi Chen, Shengfang Pan, Niansheng Liu, Jinhe Su
Efficient and high-quality reconstruction of large-scale 3D scenes remains a key challenge for novel view synthesis. Recent advances in 3D Gaussian Splatting (3DGS) have achieved photorealistic rendering and real-time performance, but scaling 3DGS to city-scale environments typically relies on block-based training. This divide-and-conquer approach suffers from two major limitations: (1) the Gaussian properties of overlapping regions of adjacent blocks are inconsistent, resulting in noticeable visual artifacts after merging; (2) the sparse Gaussian distribution near block boundaries causes cracks or holes. To address these challenges, we propose a novel framework that regularizes the Gaussian properties of overlapping regions and enhances the Gaussian density near block edges, thus ensuring smooth transitions and seamless rendering. In addition, we introduce appearance decoupling to further adapt to viewpoint-dependent appearance variations in urban scenes and adopt a multi-scale densification strategy to balance details and efficiency at different scene scales. Experimental results show that in large-scale urban scenes with densely partitioned blocks, our method achieves consistently better reconstruction quality, with an average PSNR improvement of 0.25 dB over strong baselines on both aerial and street datasets.
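The block-boundary consistency problem can be illustrated with a simple position-weighted blend of matched Gaussian properties across an overlap region. This is a hedged sketch: the linear blend, function name, and post-hoc application are assumptions for illustration; the paper regularizes properties during training rather than blending after merging.

```python
import numpy as np

def blend_overlap(props_a, props_b, x, border_a, border_b):
    """Linearly blend the properties (e.g., opacity, color coefficients)
    of a Gaussian predicted by two adjacent blocks, weighted by its
    position x inside the overlap interval [border_a, border_b]."""
    t = np.clip((x - border_a) / (border_b - border_a), 0.0, 1.0)
    return (1.0 - t) * props_a + t * props_b

# A Gaussian whose opacity block A predicts as 0.8 and block B as 0.4,
# located midway through the overlap region.
mid = blend_overlap(np.array([0.8]), np.array([0.4]), 0.5, 0.0, 1.0)
```

Without some such consistency mechanism, the hard switch from block A's properties to block B's at the seam is exactly what produces the visible merging artifacts the abstract describes.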
Title: Consistency-preserving Gaussian splatting for block-based large-scale scene reconstruction
Journal: Computers & Graphics, Volume 134, Article 104493
Pub Date: 2025-11-19 | DOI: 10.1016/j.cag.2025.104492
Jin-Feng Li , Sen-Zhe Xu , Qiang Tong , Peng-Hui Yuan , Ling-Long Zou , Er-Xia Luo , Qi Wen Gan , Song-Hai Zhang
Redirected walking (RDW) is a virtual reality locomotion technique that enables users to explore large virtual environments within a limited physical space. While state-of-the-art methods based on physical trajectory planning make effective use of physical space, some often compromise user comfort due to frequent directional reversals in curvature gain. To address this, we propose a novel RDW method that integrates strafing gain with pose score guidance. Our approach discretizes the physical space into a series of standard poses, each with a long-term safety score, and redirects the user toward the optimal pose. The main contribution is a path generation algorithm that decomposes redirection into two sequential stages to ensure stable gains for each planned path: it first uses the curvature gain to steer the user along an arc for orientation alignment, and then inserts a straight path segment with constant strafing gain to achieve positional alignment with the target pose. Simulation experiments demonstrate a reduction in resets, while the user study shows lower Simulator Sickness Questionnaire scores compared to previous methods. Our work explores the potential of combining novel gains with state-of-the-art methods to create a more effective and comfortable RDW controller algorithm.
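The two-stage decomposition (arc for orientation alignment, then a straight segment whose lateral offset a strafing gain can absorb) can be sketched in the plane as follows. This is a simplified geometric illustration under assumptions: a single fixed arc radius and a decomposition into forward/lateral components; the paper's planner additionally scores discretized poses and bounds the gains.

```python
import numpy as np

def two_stage_path(pos, heading, target_pos, target_heading, radius):
    """Stage 1: an arc of the given radius rotates the user's physical
    heading onto the target heading (curvature gain). Stage 2: the
    remaining displacement is split into a forward distance and a
    lateral offset; the lateral part is what a constant strafing gain
    absorbs along the straight segment. Angles in radians."""
    turn = (target_heading - heading + np.pi) % (2 * np.pi) - np.pi
    arc_len = abs(turn) * radius
    # Arc geometry: the circle center sits perpendicular to the heading,
    # on the side the user turns toward.
    side = np.sign(turn) if turn != 0 else 1.0
    left = lambda a: np.array([-np.sin(a), np.cos(a)])
    center = pos + radius * side * left(heading)
    end = center - radius * side * left(target_heading)
    # Decompose the leftover displacement in the target frame.
    delta = target_pos - end
    fwd = np.array([np.cos(target_heading), np.sin(target_heading)])
    return arc_len, float(delta @ fwd), float(delta @ left(target_heading))

# Quarter turn left on a unit circle, then 2 units straight ahead.
arc, f, l = two_stage_path(np.array([0.0, 0.0]), 0.0,
                           np.array([1.0, 3.0]), np.pi / 2, 1.0)
```

When the lateral component is nonzero, a constant strafing gain over the straight segment closes that gap without the back-and-forth curvature reversals the abstract identifies as a comfort problem.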
Title: Incorporating strafing gain into redirected walking with pose score guidance
Journal: Computers & Graphics, Volume 134, Article 104492