
Latest articles from Computers & Graphics-Uk

Real-time haptic-based soft body suturing in virtual open surgery simulations
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-12-02 DOI: 10.1016/j.cag.2025.104507
George Westergaard , Mark Ellis , Jacob Barker , Sofia Garces Palacios , Alexis Desir , Ganesh Sankaranarayanan , Suvranu De , Doga Demirel
In this work, we present a real-time virtual reality-based open surgery simulator that enables realistic soft-tissue suturing with bimanual haptic feedback. Our system uses eXtended Position-Based Dynamics (XPBD) for soft body and suture thread simulation, allowing stable real-time physics for complex interactions like continuous sutures and knot tying. In tests with all four common suturing techniques (purse-string, Connell, stay, and Lembert), the simulator maintained high frame rates (50–80 FPS) with up to 4155 simulated particles, demonstrating consistent real-time performance. As part of our work, we conducted a user study using our suturing simulator, where 24 surgical trainees and experts used the Virtual Colorectal Surgery Trainer – Rectal Prolapse simulator. The user study showed that 71% of participants (n=17) rated the anatomical realism as moderate to very high. Half (n=12) found the force feedback realistic, and 54% (n=13) of participants found the force feedback useful, indicating effective immersion while also highlighting the need for improved haptic fidelity. Overall, the simulation provides a low-cost, high-fidelity training platform for open surgical suturing, addressing a critical gap in current virtual reality educational tools.
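The XPBD solver the abstract names can be illustrated with a minimal distance-constraint step. This is a generic sketch of the published XPBD algorithm (compliance-scaled constraint projection), not the authors' simulator; the particle layout, function name, and parameter choices are illustrative assumptions.

```python
import numpy as np

def xpbd_step(x, v, inv_mass, edges, rest_len, compliance, dt, iters=10,
              gravity=(0.0, -9.81, 0.0)):
    """One XPBD substep for particles coupled by distance constraints.

    x: (N,3) positions, v: (N,3) velocities, inv_mass: (N,) inverse masses
    (0 pins a particle), edges: (M,2) index pairs, rest_len: (M,) rest lengths.
    """
    g = np.asarray(gravity)
    x_prev = x.copy()
    v = v + dt * g * (inv_mass[:, None] > 0)   # integrate external forces
    x = x + dt * v                             # predict positions
    lam = np.zeros(len(edges))                 # per-constraint Lagrange multipliers
    alpha = compliance / (dt * dt)             # XPBD: compliance scaled by dt^2
    for _ in range(iters):
        for k, (i, j) in enumerate(edges):
            d = x[i] - x[j]
            dist = np.linalg.norm(d)
            w = inv_mass[i] + inv_mass[j]
            if dist < 1e-9 or w == 0.0:
                continue
            n = d / dist
            C = dist - rest_len[k]             # constraint violation
            dlam = (-C - alpha * lam[k]) / (w + alpha)
            lam[k] += dlam
            x[i] += inv_mass[i] * dlam * n
            x[j] -= inv_mass[j] * dlam * n
    v = (x - x_prev) / dt                      # velocities from position change
    return x, v
```

With zero compliance the constraint behaves as a rigid distance constraint, so two free particles starting 2 m apart with a 1 m rest length converge to that rest length within a single substep.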
Computers & Graphics-Uk, Volume 134, Article 104507.
Citations: 0
Editorial Note for Issue 133 of Computers & Graphics
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-12-01 DOI: 10.1016/j.cag.2025.104508
Computers & Graphics-Uk, Volume 133, Article 104508.
Citations: 0
Automated generation of housing layouts using graph-rules
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-11-29 DOI: 10.1016/j.cag.2025.104506
Shiksha, Rohit Lohani, Krishnendra Shekhawat, Arsh Singh, Karan Agrawal
In architectural design, floor planning plays a crucial role in shaping the functionality and efficiency of a building, requiring designers to strike a balance between diverse and often conflicting objectives. It is a multi-constraint problem, and over the past few years, many tools have been proposed to generate floor plans automatically, most of which are based on AI/ML techniques.
In this paper, we propose software based on graph algorithms for the automated generation of housing layouts (floor plans) having rectangular boundaries while addressing adjacency and non-adjacency constraints, room positions (interior or exterior), and circulations. Once the user provides the input constraints (many of which are built-in, e.g., the dining room is on the exterior and adjacent to the kitchen, and the kitchen is not adjacent to the toilets), the software generates a range of graphs that represent these connections and uses them to generate all possible dimensioned housing layout options for users to choose from.
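The kind of adjacency, non-adjacency, and exterior-placement constraints described above can be sketched as a check on a candidate layout graph. The room names and rule set below are hypothetical examples, not the software's actual built-in constraints or algorithm.

```python
# Hypothetical constraint set for illustration only.
required = {("dining", "kitchen")}        # rooms that must share a wall
forbidden = {("kitchen", "toilet")}       # rooms that must not share a wall
exterior = {"dining"}                     # rooms that must lie on the boundary

def satisfies(layout_adjacency, boundary_rooms):
    """Return True if one candidate layout meets every constraint.

    layout_adjacency: set of (room, room) pairs that share a wall.
    boundary_rooms: set of rooms touching the outer rectangle.
    """
    norm = {tuple(sorted(pair)) for pair in layout_adjacency}
    if any(tuple(sorted(pair)) not in norm for pair in required):
        return False                      # a required adjacency is missing
    if any(tuple(sorted(pair)) in norm for pair in forbidden):
        return False                      # a forbidden adjacency is present
    return exterior <= set(boundary_rooms)
```

A generator in this spirit would enumerate candidate adjacency graphs and keep only those for which `satisfies` holds before dimensioning them.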
Computers & Graphics-Uk, Volume 134, Article 104506.
Citations: 0
CSDA-Vis: A (What-If-and-When) visual system for early dropout detection using counterfactual and survival analysis interactions
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-11-25 DOI: 10.1016/j.cag.2025.104489
Germain Garcia-Zanabria , Daniel A. Gutierrez-Pachas , Jorge Poco , Erick Gomez-Nieto
Student dropout is a major concern for universities, leading them to invest heavily in strategies to lower attrition rates. Analytical tools are crucial for predicting dropout risks and informing policies on academic and social support. However, many of these tools depend solely on automated predictions, ignoring valuable insights from professors, mentors, and specialists. These experts can help identify the root causes of dropout and develop effective interventions. This paper introduces CSDA-Vis, a visualization system designed to analyze the influence of individual, institutional, and socioeconomic factors on student dropout rates. CSDA-Vis facilitates the identification of actionable strategies to mitigate dropout by integrating counterfactual and survival analysis methods. Unlike traditional approaches, our tool enables decision-makers to incorporate their expertise into the evaluation of different dropout scenarios. Developed in collaboration with domain experts, CSDA-Vis builds upon previous visualization tools and was validated through a case study using real datasets from a Latin American university. Additionally, we conducted an expert evaluation with professionals specializing in dropout analysis, further demonstrating the tool’s practical value and effectiveness.
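The survival-analysis component of such a system rests on estimators like Kaplan-Meier over time-to-dropout data. The following is a textbook sketch of that estimator, not CSDA-Vis code; the semester durations in the usage note are made up.

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier survival estimate.

    durations: time (e.g. semesters enrolled) until dropout or censoring.
    observed: True if the student actually dropped out; False if censored
              (still enrolled or graduated when the data window ends).
    Returns a list of (time, survival probability) points.
    """
    event_times = sorted({t for t, e in zip(durations, observed) if e})
    surv, curve = 1.0, []
    for t in event_times:
        at_risk = sum(1 for d in durations if d >= t)               # still enrolled at t
        events = sum(1 for d, e in zip(durations, observed) if e and d == t)
        surv *= 1.0 - events / at_risk                              # product-limit update
        curve.append((t, surv))
    return curve
```

For example, four students with durations `[1, 2, 2, 3]` and observed dropouts `[True, True, False, False]` yield survival estimates of 0.75 after semester 1 and 0.5 after semester 2; the censored students only shrink the at-risk set.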
Computers & Graphics-Uk, Volume 134, Article 104489.
Citations: 0
Recovering through play: Studying the effects of collaborative Virtual Reality serious games for stroke rehabilitation through a human-centered design methodology
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-11-25 DOI: 10.1016/j.cag.2025.104501
Sérgio Oliveira , Bernardo Marques , Paula Amorim , Mariana Leite , Carlos Ferreira , Beatriz Sousa Santos
Stroke is one of the world’s leading causes of death and disability and can have profound consequences that require effective rehabilitation to support survivors’ recovery and improve their quality of life. Despite their value, traditional rehabilitation methods tend to be repetitive and lack variety, which challenges survivors’ motivation. Additionally, these rehabilitation sessions are usually solitary, leaving survivors to practice exercises alone, which can lead to physical setbacks and social isolation. Such isolation can further reduce enthusiasm for therapy, delay recovery, and affect their mental well-being. This work is an extended version of a paper presented at the International Workshop on eXtended Reality for Industrial and Occupational Supports (XRIOS), held at IEEE VR 2025. It proposes a collaborative Virtual Reality (VR) framework that aims to increase survivors’ motivation during rehabilitation. Through its collaborative nature, it can involve multiple users in the same virtual space, from stroke survivors to healthcare professionals, giving them a common goal that they must join forces to accomplish. Various serious games were designed through a series of activities focused on specific gestures related to the rehabilitation of the upper limbs, thus improving physical recovery and mental well-being. The design and development were guided by a human-centered methodology that included survivors and professionals, resulting in a user study with a total of 53 participants, 18 from a rehabilitation center. The results indicate that this collaborative VR tool effectively boosts motivation, social interaction, and engagement while maintaining an accessible and manageable level of physical and mental demand, underscoring its suitability for stroke recovery.
Computers & Graphics-Uk, Volume 134, Article 104501.
Citations: 0
Situated visualization towards manufacturing maintenance training: Scoping review, design and user study
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-11-24 DOI: 10.1016/j.cag.2025.104500
Zeinab BagheriFard , Miruna Maria Vasiliu , Emma Jane Pretty , Luis Quintero , Benjamin Edvinsson , Mario Romero , Renan Guarese
Immersive technologies offer advantages for the visualization of and interaction with complex setups within manufacturing maintenance processes. The present work catalogs different applications of AR/VR in manufacturing maintenance practices as an extended version of a workshop paper presented at the International Workshop on eXtended Reality for Industrial and Occupational Supports (XRIOS). Through a scoping review in three computing and engineering digital libraries, we outline the key attributes of immersive solutions (Np=115) for industrial maintenance, categorizing functional prototypes with ten parameters related to interaction, visualization, and research methods. Moreover, we conducted a workshop with three manufacturing experts discussing the future of maintenance interfaces. By bringing forth their recommendations and insights, we targeted a key training challenge in maintenance. We designed and implemented a situated visualization prototype for a safety-critical procedure with real-time, in-depth, spatially relevant instructions. We compared the effects of 2D labels and 3D ghosts in a VR-simulated AR environment. In a preliminary between-subjects evaluation study (Nu=24), we measured usability, workload, simulator sickness, completion time, and delayed recall. Although we did not find statistically significant differences between conditions, 3D ghosts showed slightly lower perceived workload and discomfort levels, along with shorter completion times. On the other hand, 2D labels produced higher usability. Overall, we contribute by mapping out the state of the art and discovering knowledge gaps within immersive maintenance, presenting a design and preliminary user study that adheres to our recommendations.
Computers & Graphics-Uk, Volume 134, Article 104500.
Citations: 0
Consistency-preserving Gaussian splatting for block-based large-scale scene reconstruction
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-11-24 DOI: 10.1016/j.cag.2025.104493
Mengyi Wang, Beiqi Chen, Shengfang Pan, Niansheng Liu, Jinhe Su
Efficient and high-quality reconstruction of large-scale 3D scenes remains a key challenge for novel view synthesis. Recent advances in 3D Gaussian Splatting (3DGS) have achieved photorealistic rendering and real-time performance, but scaling 3DGS to city-scale environments typically relies on block-based training. This divide-and-conquer approach suffers from two major limitations: (1) the Gaussian properties of overlapping regions of adjacent blocks are inconsistent, resulting in noticeable visual artifacts after merging; (2) the sparse Gaussian distribution near block boundaries causes cracks or holes. To address these challenges, we propose a novel framework that regularizes the Gaussian properties of overlapping regions and enhances the Gaussian density near block edges, thus ensuring smooth transitions and seamless rendering. In addition, we introduce appearance decouple to further adapt to viewpoint-dependent appearance variations in urban scenes and adopt a multi-scale densification strategy to balance details and efficiency at different scene scales. Experimental results show that in large-scale urban scenes with densely partitioned blocks, our method achieves consistently better reconstruction quality, with an average PSNR improvement of 0.25 dB over strong baselines on both aerial and street datasets.
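One plausible form of the overlap regularization sketched above is an L2 penalty tying together the attributes of Gaussians that two adjacent blocks both optimize. This is an illustrative guess at such a term, not the paper's actual loss; the function name and weighting scheme are assumptions.

```python
import numpy as np

def overlap_consistency_loss(props_a, props_b, weights=None):
    """Penalty on attribute disagreement (position, opacity, color, ...)
    for the K Gaussians shared by two adjacent blocks.

    props_a, props_b: (K, D) attribute arrays from block A and block B.
    weights: optional (K,) per-Gaussian weights, e.g. to down-weight
             Gaussians far from the block seam.
    """
    diff = props_a - props_b
    per_gaussian = np.sum(diff * diff, axis=1)    # squared L2 per Gaussian
    if weights is not None:
        per_gaussian = per_gaussian * weights
    return float(per_gaussian.mean())
```

Adding such a term to each block's training loss drives the duplicated Gaussians toward identical properties, so merging the blocks after training introduces no visible seam.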
Computers & Graphics-Uk, Volume 134, Article 104493.
Citations: 0
Incorporating strafing gain into redirected walking with pose score guidance
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-11-19 DOI: 10.1016/j.cag.2025.104492
Jin-Feng Li , Sen-Zhe Xu , Qiang Tong , Peng-Hui Yuan , Ling-Long Zou , Er-Xia Luo , Qi Wen Gan , Song-Hai Zhang
Redirected walking (RDW) is a virtual reality locomotion technique that enables users to explore large virtual environments within a limited physical space. While state-of-the-art methods based on physical trajectory planning make effective use of physical space, some of them often compromise user comfort due to frequent directional reversals in curvature gain. To address this, this paper proposes a novel RDW method that integrates strafing gain with pose score guidance. Our approach discretizes the physical space into a series of standard poses, each with a long-term safety score, and redirects the user toward the optimal pose. The main contribution is a path generation algorithm that decomposes redirection into two sequential stages to ensure stable gains for each planned path: it first uses the curvature gain to steer the user along an arc for orientation alignment, and then inserts a straight path segment with constant strafing gain to achieve positional alignment with the target pose. Simulation experiments demonstrate a reduction in resets, while the user study shows lower Simulator Sickness Questionnaire scores compared to previous methods. Our work explores the potential of combining novel gains with state-of-the-art methods to create a more effective and comfortable RDW controller algorithm.
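The two-stage path decomposition described in the abstract can be sketched geometrically: an arc of constant curvature rotates the user's heading onto the target, then a straight segment follows. This is a hedged illustration of that geometry only; the function name, step size, and the gain values themselves are assumptions, not the paper's algorithm.

```python
import math

def two_stage_path(heading, target_heading, radius, straight_len, step=0.1):
    """Plan waypoints for the two-stage redirection: stage 1 walks a
    constant-curvature arc (where curvature gain applies) until the
    heading matches target_heading; stage 2 walks a straight segment
    (where a constant strafing gain would translate the rendered view
    sideways into positional alignment with the target pose)."""
    # shortest signed turn, normalized into (-pi, pi]
    turn = (target_heading - heading + math.pi) % (2 * math.pi) - math.pi
    n_arc = max(1, round(abs(turn) * radius / step))
    dtheta = turn / n_arc
    x = y = 0.0
    pts = [(x, y)]
    for _ in range(n_arc):                           # stage 1: arc
        heading += dtheta
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        pts.append((x, y))
    for _ in range(max(1, round(straight_len / step))):   # stage 2: straight
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        pts.append((x, y))
    return heading, pts
```

Because the straight segment keeps both curvature and strafing gain constant, the user experiences no directional reversal during positional alignment, which is the comfort argument the paper makes.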
Computers & Graphics-Uk, Volume 134, Article 104492.
引用次数: 0
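The two-stage path decomposition described in the abstract (an arc under curvature gain for orientation alignment, then a straight segment whose lateral offset a constant strafing gain would absorb) can be sketched geometrically. The function below is an illustrative reconstruction, not the authors' implementation; the pose representation, the fixed arc radius, and all names are assumptions for the sketch.

```python
import math

def plan_two_stage_path(user_pos, user_heading, target_pos, target_heading,
                        arc_radius=7.5):
    """Sketch of a two-stage redirection path in the plane.

    Poses are (x, y) positions plus a heading angle in radians, with
    heading theta pointing along (cos theta, sin theta).  The arc radius
    is an assumed stand-in for whatever the curvature gain permits.
    """
    # Stage 1: signed heading change needed, wrapped to [-pi, pi).
    dtheta = (target_heading - user_heading + math.pi) % (2 * math.pi) - math.pi
    side = 1.0 if dtheta >= 0 else -1.0  # +1 = turn left, -1 = turn right
    arc_len = abs(dtheta) * arc_radius
    # Circle centre sits perpendicular to the current heading.
    cx = user_pos[0] - side * arc_radius * math.sin(user_heading)
    cy = user_pos[1] + side * arc_radius * math.cos(user_heading)
    # Arc endpoint, where the heading now matches the target heading.
    end_x = cx + side * arc_radius * math.sin(user_heading + dtheta)
    end_y = cy - side * arc_radius * math.cos(user_heading + dtheta)
    # Stage 2: decompose the remaining offset into a forward component
    # (walked straight) and a lateral one (absorbed by strafing gain).
    dx, dy = target_pos[0] - end_x, target_pos[1] - end_y
    fwd_x, fwd_y = math.cos(target_heading), math.sin(target_heading)
    along = dx * fwd_x + dy * fwd_y
    lateral = -dx * fwd_y + dy * fwd_x
    return {"arc_length": arc_len, "arc_end": (end_x, end_y),
            "straight_along": along, "strafe_offset": lateral}
```

For example, a user at the origin heading east who must end up heading north at (7.5, 12.5) walks a quarter arc of length arc_radius·π/2 to (7.5, 7.5), then a 5-unit straight segment with no residual strafe offset.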
Bioresponsive avatars: Perceiving emotions through virtual avatar representation in empathic social VR
IF 2.8 · Zone 4 (Computer Science) · Q2 Computer Science, Software Engineering · Pub Date: 2025-11-13 · DOI: 10.1016/j.cag.2025.104474 · Computers & Graphics-Uk, vol. 133, Article 104474
Danyang Peng , Zicheng Xia , Tinghui Li , Yixin Wang , Mark Armstrong , Kinga Skierś , Anish Kundu , Kouta Minamizawa , Yun Suen Pai
Social virtual reality (VR) is the experience of a shared virtual space populated by virtual representations of each individual, allowing them to communicate, collaborate, and interact with one another much as they would in the real world. Conventional avatars generally mirror factors such as an individual's appearance, speech, and movement, yet VR can represent a person in many more ways than reality allows. One such representation is that of emotions and empathy. To that end, we propose Bioresponsive Avatars, an avatar system that predicts user emotional states and represents them visually via avatar appearance. To achieve this, we first conducted an avatar design workshop to understand how users imagine emotional states appearing on an avatar. Then, we performed an in-the-wild demonstration of a social VR prototype in which dyadic users are presented with affective topics to discuss while their avatars adapt based on their predicted emotions.
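As a toy illustration of the system's final step, mapping a predicted emotional state onto avatar appearance, the sketch below tints an avatar from a circumplex-style (valence, arousal) estimate. The input ranges and the particular colour mapping are assumptions made for this sketch; the mappings in the paper were elicited from the authors' design workshop, not defined this way.

```python
import colorsys

def emotion_to_avatar_tint(valence, arousal):
    """Map a predicted (valence, arousal) state in [-1, 1] x [-1, 1] to
    an RGB tint: hue tracks valence (cool for negative, warm for
    positive) and saturation tracks arousal (calm states look paler)."""
    v = max(-1.0, min(1.0, valence))
    a = max(-1.0, min(1.0, arousal))
    hue = 0.66 - 0.51 * (v + 1.0) / 2.0   # 0.66 (blue) down to 0.15 (yellow)
    sat = 0.25 + 0.75 * (a + 1.0) / 2.0   # low arousal -> desaturated
    return colorsys.hsv_to_rgb(hue, sat, 1.0)
```

In an actual avatar system the returned RGB triple would drive a shader or material parameter, updated continuously as the emotion predictor emits new estimates.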
ArchComplete: Autoregressive 3D architectural design generation with hierarchical diffusion-based upsampling
IF 2.8 · Zone 4 (Computer Science) · Q2 Computer Science, Software Engineering · Pub Date: 2025-11-13 · DOI: 10.1016/j.cag.2025.104477 · Computers & Graphics-Uk, vol. 133, Article 104477
Shervin Rasoulzadeh , Mathias Bank Stigsen , Iva Kovacic , Kristina Schinegger , Stefan Rutzinger , Michael Wimmer
Recent advances in 3D generative models have shown promising results but often fall short in capturing the complexity of architectural geometries and topologies. To tackle this, we present ArchComplete, a two-stage voxel-based 3D generative pipeline consisting of a vector-quantized model, whose composition is modeled with an autoregressive transformer for generating coarse shapes, followed by a set of multiscale diffusion models for augmenting them with fine geometric details. Key to our pipeline is (i) learning a contextually rich codebook of local patch embeddings, optimized alongside a 2.5D perceptual loss that captures the global spatial correspondence of projections onto three axis-aligned orthogonal planes, and (ii) redefining upsampling as a set of multiscale conditional diffusion models learned over a hierarchy of coarse-to-fine local volumetric patches, with a guided denoising process using 3D Gaussian windows that smooths noise estimates across overlapping patches during inference. Trained on our introduced dataset of 3D house models, ArchComplete autoregressively generates models at a resolution of 64³ and progressively refines them up to 512³, with voxel sizes as small as ≈9 cm. ArchComplete solves a variety of tasks, including genetic interpolation and variation, unconditional synthesis, shape and plan-drawing completion, as well as geometric detailization, while achieving state-of-the-art performance.
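The guided denoising step described above, where 3D Gaussian windows smooth noise estimates across overlapping patches, can be sketched as weighted overlap-add fusion: each patch's estimate is weighted by a window peaked at the patch centre, accumulated into the volume, and normalised. The shapes, patch origins, and sigma below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def blend_patch_estimates(volume_shape, patches, patch_size, sigma=None):
    """Fuse per-patch estimates into one volume.

    Each patch is weighted by a separable 3D Gaussian window peaked at
    its centre; the weighted sums are then normalised, so overlapping
    estimates transition smoothly instead of leaving seams at borders.
    `patches` is a list of ((z, y, x) origin, patch_size^3 array) pairs.
    """
    if sigma is None:
        sigma = patch_size / 4.0
    ax = np.arange(patch_size) - (patch_size - 1) / 2.0
    g1 = np.exp(-0.5 * (ax / sigma) ** 2)
    win = g1[:, None, None] * g1[None, :, None] * g1[None, None, :]
    acc = np.zeros(volume_shape)
    wsum = np.zeros(volume_shape)
    for (z, y, x), est in patches:
        sl = (slice(z, z + patch_size),
              slice(y, y + patch_size),
              slice(x, x + patch_size))
        acc[sl] += win * est
        wsum[sl] += win
    return acc / np.maximum(wsum, 1e-12)
```

In a real diffusion pipeline the `est` arrays would be the denoiser's per-patch noise predictions at a single sampling step, refused this way at every step of inference.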