
Latest Articles from Computers & Graphics-UK

Correlations between instant and prolonged stimuli with physiological and subjective responses in VR horror
IF 2.8, CAS Q4 (Computer Science), JCR Q2 (COMPUTER SCIENCE, SOFTWARE ENGINEERING), Pub Date: 2025-11-05, DOI: 10.1016/j.cag.2025.104470
Zeren Tao, Qilei Sun, Xiaohan Wang, Zuoqing Yang, Shengqiao Wu, Yibang Zhao, Binwei Lei
Virtual reality (VR) horror games can evoke intense feelings of fear and anxiety, yet it remains unclear how different types of fear stimuli within VR environments contribute to these physiological and emotional responses. While prior studies often investigate multisensory tension scenarios as a whole, using full-featured horror games, few have directly compared the effects of distinct fear stimuli—specifically, instant threat-based cues (e.g., sudden jump scares or chasing events) and prolonged atmospheric cues (e.g., a persistent eerie ambiance)—on physiological indicators of fear. To address this gap, we developed a custom VR horror game that isolates these two categories of stimuli, enabling controlled experiments on their respective impacts on user physiology and self-reported fear. We compared experimental scenes featuring instant and prolonged stimuli against a baseline control scene to evaluate their influence. The results confirm that instant stimuli exert a more pronounced influence on heart rate (HR) data, particularly in the Maximum BPM and Average BPM metrics, while prolonged stimuli have a stronger effect on electrodermal activity (EDA), especially in the EDA Max and EDA Mean Absolute Difference (MAD) metrics. The findings also reveal significant gender differences in certain physiological indicators and suggest that VR-based interventions could be tailored to modulate specific physiological systems by manipulating the type of emotional stimuli presented to the patient, potentially enhancing therapeutic effectiveness.
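As a concrete illustration of the four reported measures, the sketch below computes them from raw signal arrays. This is a minimal example assuming evenly sampled NumPy traces; in particular, reading EDA Mean Absolute Difference (MAD) as the mean absolute deviation from the scene mean is an assumption, since the abstract does not define the formula.

```python
import numpy as np

def hr_metrics(bpm: np.ndarray) -> dict:
    # Maximum BPM and Average BPM over one scene's heart-rate trace.
    return {"max_bpm": float(bpm.max()), "avg_bpm": float(bpm.mean())}

def eda_metrics(eda: np.ndarray) -> dict:
    # EDA Max, plus MAD read here as mean absolute deviation from the
    # scene mean (an assumed interpretation of the paper's metric).
    mad = float(np.abs(eda - eda.mean()).mean())
    return {"eda_max": float(eda.max()), "eda_mad": mad}

# Compare, e.g., an instant-stimulus scene against the baseline scene.
print(hr_metrics(np.array([72.0, 95.0, 88.0])))
print(eda_metrics(np.array([0.4, 0.9, 0.7])))
```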
Citations: 0
ForwardTerrain: One-pass terrain modeling through mid-air sketching without backtracking
IF 2.8, CAS Q4 (Computer Science), JCR Q2 (COMPUTER SCIENCE, SOFTWARE ENGINEERING), Pub Date: 2025-11-04, DOI: 10.1016/j.cag.2025.104468
Yang Zhou , Wentao Chen , Xinyu Zhang , Mingyu Zhai , Huawei Tu , Guihuan Feng , Bin Luo
Terrains are key elements in 3D scene creation for applications such as VR/AR, simulation, and games. Current sketch-based terrain modeling tools rely on backtracking workflows in which users repeatedly undo and revise previous strokes to adjust the terrain model. This backtracking tends to interrupt creative flow and increase cognitive effort. We present ForwardTerrain, an interactive system that enables one-pass terrain modeling through mid-air sketching without backtracking. Instead of reverting to earlier sketches, users can directly edit and extend the existing terrain model, supporting a continuous and fluid authoring experience. Powered by a StyleGAN2-based generator, our system translates sketches into plausible terrain shapes in real time. In a controlled user study (N = 24), ForwardTerrain significantly improved both modeling efficiency and accuracy compared to backtracking workflows, reducing task time by 15% and 11% and increasing accuracy by 13% and 20% in generation and modification tasks, respectively. Participants also reported higher perceived creativity support and lower cognitive load, particularly among non-expert users. These results highlight the value of one-pass workflows in 3D modeling, fostering smoother creative experiences and greater accessibility.
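The one-pass workflow can be pictured as an editing loop that only moves forward: each new stroke updates the running sketch, and the terrain is regenerated immediately, so no undo stack exists. In the sketch below, a box blur stands in for the paper's StyleGAN2 generator; all names are illustrative.

```python
import numpy as np

def generate_terrain(sketch: np.ndarray) -> np.ndarray:
    # Placeholder for the StyleGAN2-based generator: smooth the stroke
    # raster into a heightmap with a 5x5 box blur.
    h, w = sketch.shape
    pad = np.pad(sketch, 2, mode="edge")
    return sum(pad[dy:dy + h, dx:dx + w]
               for dy in range(5) for dx in range(5)) / 25.0

canvas = np.zeros((64, 64))
for (y, x), height in [((10, 10), 3.0), ((40, 40), 5.0)]:
    canvas[y, x] = height               # rasterize the new mid-air stroke
    terrain = generate_terrain(canvas)  # refresh the model; no undo step
```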
Citations: 0
WebGS360: Towards web-based visualization of Gaussian Splatting from panoramic images
IF 2.8, CAS Q4 (Computer Science), JCR Q2 (COMPUTER SCIENCE, SOFTWARE ENGINEERING), Pub Date: 2025-11-04, DOI: 10.1016/j.cag.2025.104462
Chongli Zhang , Pengyu Wang , Hao Zhang , Jing Lv , Xiuquan Qiao , Yakun Huang
Panoramic images are widely used in Web-based Augmented Reality applications to reduce capture complexity and storage requirements. However, delivering real-time, fluid novel-view media services on the Web for a high-fidelity immersive experience remains a pressing challenge. To overcome these challenges, we propose WebGS360, a Web-based real-time rendering system that performs end-to-end Gaussian Splatting mapping from panoramic images. WebGS360 first introduces end-to-end modeling that integrates appearance and geometric constraints to address overfitting caused by the non-uniform distortion and sparse viewpoint information of panoramic images. Furthermore, through in-depth analysis of Gaussian neural rendering, WebGS360 proposes optimization techniques including level-of-detail control, parallel sorting of Gaussian points, and tile-based rendering. These innovations effectively resolve rendering artifacts in large-scale Gaussian scenes, significantly enhancing the fluidity and immersion of free-viewpoint observation. Experimental results demonstrate that WebGS360 significantly outperforms other advanced baselines in novel view synthesis quality and generalization. Specifically, it improves PSNR by approximately 0.859–3.774 dB in scene mapping and achieves a rendering frame rate of at least 100 FPS, surpassing traditional WebGL-based methods and fully demonstrating its superiority in real-time interaction.
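One ingredient of tile-based Gaussian rendering is binning projected splats into screen tiles and depth-sorting each tile's list before compositing. The CPU sketch below shows that step under assumed inputs (projected 2D centers and view-space depths); WebGS360's GPU pipeline, parallel sort, and level-of-detail control are not reproduced here.

```python
import numpy as np

def bin_and_sort(centers: np.ndarray, depths: np.ndarray,
                 tile: int, width: int, height: int) -> dict:
    # Assign each projected Gaussian center to a screen tile, then sort
    # each tile's list front-to-back for correct alpha compositing.
    tiles: dict = {}
    for idx, (x, y) in enumerate(centers):
        if 0 <= x < width and 0 <= y < height:
            tiles.setdefault((int(x) // tile, int(y) // tile), []).append(idx)
    return {key: sorted(ids, key=lambda i: depths[i])
            for key, ids in tiles.items()}

pts = np.array([[5.0, 5.0], [6.0, 7.0], [200.0, 40.0]])
print(bin_and_sort(pts, np.array([2.0, 1.0, 3.0]), 16, 256, 256))
```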
Citations: 0
SDF-Former: A cross-domain HDR deghosting network with Statistical Deviation Fuzzy Membership
IF 2.8, CAS Q4 (Computer Science), JCR Q2 (COMPUTER SCIENCE, SOFTWARE ENGINEERING), Pub Date: 2025-11-03, DOI: 10.1016/j.cag.2025.104465
Ying Qi , Zhaoyuan Huang , Qiushi Li , Jian Li , Teng Wan , Qiang Zhang
High Dynamic Range (HDR) imaging through multi-exposure fusion aims to reconstruct the full scene radiance by merging multiple Low Dynamic Range (LDR) images. A critical challenge is the ghosting artifact, induced by object motion or camera shake in dynamic scenes. Current deep learning methods often lack fine-grained, pixel-level control and tend to apply uniform processing across simple and challenging regions, hindering effective adaptive computational resource allocation. This paper presents SDF-Former, a novel HDR deghosting network. Its core innovation is the Statistical Deviation Fuzzy Membership (SDFM) mechanism, which uses fuzzy logic to quantify the statistical deviation of local pixel features. This enables the precise identification of challenging regions, such as motion edges and saturated areas, providing pixel-level difficulty awareness. To leverage this awareness, we design a cross-domain collaborative framework with a FEM and a FAT. This framework integrates the strengths of spatial-domain feature alignment with frequency-domain global modeling. The membership map from SDFM acts as an adaptive gating signal, selectively activating the computationally demanding FAT module. This approach directs global context modeling to focus more intensively on critical regions, thus ensuring inference efficiency. Extensive evaluations on public HDR datasets demonstrate that SDF-Former achieves state-of-the-art performance in both quantitative metrics and visual quality, showing clear advantages in complex scenarios involving large-scale motion and extreme exposures. By fusing fuzzy statistics-based, pixel-level adaptive control with efficient cross-domain processing, SDF-Former provides a computationally optimized solution for high-quality dynamic HDR reconstruction.
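The abstract does not spell out the SDFM formula, but the idea it describes, mapping the statistical deviation of local pixel features to a [0, 1] fuzzy "difficulty" membership that gates the expensive FAT branch, might look roughly like the sketch below. The window size, normalization, and sigmoid are assumptions, not the paper's definition.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sdfm_map(feat: np.ndarray, win: int = 7, k: float = 4.0) -> np.ndarray:
    # Local mean and variance over a win x win neighborhood.
    mu = uniform_filter(feat, size=win)
    var = np.maximum(uniform_filter(feat * feat, size=win) - mu * mu, 1e-8)
    # Normalized statistical deviation, squashed into a fuzzy membership.
    dev = np.abs(feat - mu) / np.sqrt(var)
    return 1.0 / (1.0 + np.exp(-k * (dev - 1.0)))

# Gating use: route only high-membership (hard) pixels to the costly branch.
# hard_mask = sdfm_map(luminance) > 0.5
```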
Citations: 0
Depth perception in virtual reality: The impact of spatial interaction and color-based cues
IF 2.8, CAS Q4 (Computer Science), JCR Q2 (COMPUTER SCIENCE, SOFTWARE ENGINEERING), Pub Date: 2025-11-01, DOI: 10.1016/j.cag.2025.104461
Yusi Sun, Haoyan Guan, Leith K.Y. Chan
Depth perception is crucial for accurate spatial interaction in both physical and virtual environments. However, in Virtual Reality (VR), users often experience perceptual distortions, such as foreshortening, which impair tasks requiring precise depth judgments, including 3D pointing and object manipulation. Traditional depth cues, such as occlusion and relative size, often fail in VR, highlighting the need for alternative visual cues. Among these, contrast and hue have shown potential to influence depth perception, yet their independent effects remain unclear. This paper presents a two-phase study that systematically examines axis-specific distortions and the effects of color contrast and hue on depth estimation in VR using the CIELAB color space. Phase 1 quantifies axis-specific perceptual biases using 3D sketching tasks, revealing that reconstruction errors in the depth dimension are twice as large as those along other axes. Phase 2 examines the impact of color attributes on depth perception using CIELAB-based perceptual matching, showing that color contrast within a specific range improves depth discrimination accuracy. Our findings provide empirical evidence of axis-specific depth distortions in VR and suggest design guidelines that prioritize contrast over hue variations to enhance spatial perception. These insights contribute to VR interface improvements, refining depth perception mechanisms for applications requiring precise spatial awareness.
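For reference, the simplest CIELAB contrast measure is the CIE76 color difference, a Euclidean distance in L*a*b* space; whether the study uses exactly this formula is an assumption.

```python
import math

def delta_e76(lab1, lab2):
    # CIE76 color difference: straight-line distance in CIELAB.
    return math.dist(lab1, lab2)

# Pure lightness contrast with no hue shift: two grays 30 L* units apart.
print(delta_e76((40.0, 0.0, 0.0), (70.0, 0.0, 0.0)))  # -> 30.0
```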
Citations: 0
Editorial Note for Issue 132 of Computers & Graphics
IF 2.8, CAS Q4 (Computer Science), JCR Q2 (COMPUTER SCIENCE, SOFTWARE ENGINEERING), Pub Date: 2025-11-01, DOI: 10.1016/j.cag.2025.104491
Joaquim Jorge
{"title":"Editorial Note for Issue 132 of Computers & Graphics","authors":"Joaquim Jorge","doi":"10.1016/j.cag.2025.104491","DOIUrl":"10.1016/j.cag.2025.104491","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104491"},"PeriodicalIF":2.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145578744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
InstantHuman: Single-image to high-fidelity 3D human in under one second
IF 2.8, CAS Q4 (Computer Science), JCR Q2 (COMPUTER SCIENCE, SOFTWARE ENGINEERING), Pub Date: 2025-10-31, DOI: 10.1016/j.cag.2025.104464
Tianze Gao , Bowei Yin , Hangtao Feng , Zhangjin Huang
We present InstantHuman, a novel method for high-fidelity 3D human reconstruction from a single RGB image with fast inference. Existing approaches either regress directly from 2D images to 3D models, which often struggle to capture fine details due to structural misalignment between modalities, or adopt per-pixel Gaussian representations that lack explicit human priors. To overcome these limitations, we propose a novel framework that integrates a projection-aware feature sampler, which effectively bridges the structural gap between 2D pixels and 3D vertices, with a dual-embedding strategy that enriches vertex-level features through learnable identifiers and pose-specific embeddings. Given the monocular setting, reasoning about occlusion is essential; we introduce a visibility-aware mechanism to distinguish and handle visible and occluded vertices. Furthermore, to enhance face reconstruction quality, we apply additional supervision losses in the face region by leveraging off-axis projection, significantly enhancing geometric fidelity there. Comprehensive experiments on public datasets demonstrate that InstantHuman outperforms state-of-the-art methods in reconstruction accuracy and face detail preservation, particularly under novel views. Notably, InstantHuman achieves fast inference, producing complete 3D human reconstructions in under one second.
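In general terms, a projection-aware feature sampler projects each 3D vertex through the camera intrinsics and bilinearly samples a 2D feature map at the projected pixel. The sketch below is a generic pinhole-camera version under assumed conventions (camera-space vertices, positive depth), not the paper's implementation.

```python
import numpy as np

def sample_vertex_features(feats: np.ndarray, verts: np.ndarray,
                           K: np.ndarray) -> np.ndarray:
    # Pinhole projection of (N, 3) camera-space vertices via intrinsics K.
    uvw = verts @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    # Bilinear sampling of an (H, W, C) feature map at the projections.
    h, w = feats.shape[:2]
    u = np.clip(uv[:, 0], 0.0, w - 1.001)
    v = np.clip(uv[:, 1], 0.0, h - 1.001)
    u0, v0 = u.astype(int), v.astype(int)
    du, dv = (u - u0)[:, None], (v - v0)[:, None]
    top = feats[v0, u0] * (1 - du) + feats[v0, u0 + 1] * du
    bot = feats[v0 + 1, u0] * (1 - du) + feats[v0 + 1, u0 + 1] * du
    return top * (1 - dv) + bot * dv

K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 32.0], [0.0, 0.0, 1.0]])
feat = np.random.rand(64, 64, 8)
verts = np.array([[0.0, 0.0, 2.0], [0.1, -0.1, 2.5]])
print(sample_vertex_features(feat, verts, K).shape)  # -> (2, 8)
```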
Citations: 0
Peridynamics-based simulation of viscoelastic solids and granular materials
IF 2.8, CAS Q4 (Computer Science), JCR Q2 (COMPUTER SCIENCE, SOFTWARE ENGINEERING), Pub Date: 2025-10-29, DOI: 10.1016/j.cag.2025.104463
Jiamin Wang , Haoping Wang , Xiaokun Wang , Yalan Zhang , Jiří Kosinka , Steffen Frey , Alexandru Telea , Xiaojuan Ban
Viscoelastic solids and granular materials have been extensively studied in Classical Continuum Mechanics (CCM). However, CCM faces inherent limitations when dealing with discontinuity problems. Peridynamics, as a non-local continuum theory, provides a novel approach for simulating complex material behavior. We propose a unified viscoelasto-plastic simulation framework based on State-Based Peridynamics (SBPD) that derives a time-dependent unified force-density expression by introducing the Prony model. Within SBPD, we integrate various yield criteria and mapping strategies to support granular flow simulation, and dynamically adjust material stiffness according to local density. Additionally, we construct a multi-material coupling system incorporating viscoelastic materials, granular flows, and rigid bodies, enhancing computational stability while expanding the diversity of simulation scenarios. Experiments show that our method can effectively simulate relaxation, creep, and hysteresis behaviors of viscoelastic solids, as well as flow and accumulation phenomena of granular materials, all of which are very challenging to simulate with earlier methods. Furthermore, our method allows flexible parameter adjustment to meet various simulation requirements.
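For reference, the Prony model mentioned here writes a relaxation modulus as a constant plus a sum of decaying exponentials, G(t) = G_inf + sum_i g_i * exp(-t / tau_i). The sketch below evaluates that standard series; it is not the paper's derived force-density expression.

```python
import numpy as np

def prony_relaxation(t: np.ndarray, g_inf: float,
                     g: np.ndarray, tau: np.ndarray) -> np.ndarray:
    # G(t) = G_inf + sum_i g_i * exp(-t / tau_i)
    return g_inf + np.exp(-t[:, None] / tau[None, :]) @ g

t = np.linspace(0.0, 10.0, 5)
print(prony_relaxation(t, 1.0, np.array([0.5, 0.2]), np.array([0.1, 2.0])))
```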
Citations: 0
Preface to the special issue: SIBGRAPI 2024 tutorials
IF 2.8, CAS Q4 (Computer Science), JCR Q2 (COMPUTER SCIENCE, SOFTWARE ENGINEERING), Pub Date: 2025-10-24, DOI: 10.1016/j.cag.2025.104460
Soraia Raupp Musse, Ricardo Marroquim, Zenilton K.G. Patrocínio
{"title":"Preface to the special issue: SIBGRAPI 2024 tutorials","authors":"Soraia Raupp Musse,&nbsp;Ricardo Marroquim,&nbsp;Zenilton K.G. Patrocínio","doi":"10.1016/j.cag.2025.104460","DOIUrl":"10.1016/j.cag.2025.104460","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104460"},"PeriodicalIF":2.8,"publicationDate":"2025-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145417171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Customization of 3D printed sensing devices in the layered fabrication space
IF 2.8, CAS Q4 (Computer Science), JCR Q2 (COMPUTER SCIENCE, SOFTWARE ENGINEERING), Pub Date: 2025-10-22, DOI: 10.1016/j.cag.2025.104459
José Eduardo Aguilar-Segovia , Salim Perchy , Pierre-Alexandre Hugron , Sylvain Guégan , Marie Babel , Sylvain Lefebvre
Additive manufacturing (AM) enables the fabrication of multi-material 3D structures with sensing capabilities by integrating conductive materials alongside flexible or rigid parts. Recent research focuses on embedding sensing elements into 3D structures using complex algorithms or intricate design approaches to create interactive or monitoring devices. A challenge in manufacturing bespoke devices is performing user-specific ergonomic and anthropomorphic adjustments to multi-material 3D models. This paper proposes a novel computational fabrication method that operates directly within the layered fabrication space. The proposed method transforms simple sensing structures into complex designs through layer transformations — rotation, translation, and scaling — while addressing constraints from design for additive manufacturing (DfAM). These constraints include ensuring electrical conductivity in conductive parts and preventing unintended electrical connections during fabrication. In particular, a staggered layer deposition strategy is introduced to avoid these unwanted electrical connections and to reduce material contamination. To validate our approach, a custom-fit handle with embedded capacitive sensors is manufactured as a one-shot multi-material print. The handle measures forces applied to the embedded sensors by the user's fingertips and palm. A user study validates that our method successfully adjusts the handle for a range of users, ensuring ergonomic comfort. Our results demonstrate the potential of our method for fabricating personalized sensing devices, enabling designers to explore diverse structures by transforming a single template design.
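The three layer transformations named above can be illustrated on a single sliced contour: a 2D rotation, uniform scale, and translation applied per layer. This toy sketch ignores the DfAM constraints (conductive continuity, staggered deposition) that the actual method enforces.

```python
import numpy as np

def transform_layer(contour: np.ndarray, angle: float,
                    scale: float, shift) -> np.ndarray:
    # Rotate, scale, and translate one layer's (N, 2) contour points.
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    return scale * (contour @ rot.T) + np.asarray(shift)

# Grade a square contour across 50 layers: each layer slightly rotated
# and enlarged, customizing the printed shape in fabrication space.
square = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
layers = [transform_layer(square, 0.02 * z, 1.0 + 0.01 * z, (0.0, 0.0))
          for z in range(50)]
```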
Citations: 0