
Latest Publications in Computers & Graphics-UK

InstantHuman: Single-image to high-fidelity 3D human in under one second
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-12-01 Epub Date: 2025-10-31 DOI: 10.1016/j.cag.2025.104464
Tianze Gao , Bowei Yin , Hangtao Feng , Zhangjin Huang
We present InstantHuman, a novel method for high-fidelity 3D human reconstruction from a single RGB image with fast inference. Existing approaches either regress directly from 2D images to 3D models, which often struggle to capture fine details due to structural misalignment between modalities, or adopt per-pixel Gaussian representations that lack explicit human priors. To overcome these limitations, we propose a novel framework that integrates a projection-aware feature sampler which effectively bridges the structural gap between 2D pixels and 3D vertices, with a dual-embedding strategy that enriches vertex-level features through learnable identifiers and pose-specific embeddings. Given the monocular setting, reasoning about occlusion is essential. We introduce a visibility-aware mechanism to distinguish and handle visible and occluded vertices. Furthermore, to enhance face reconstruction quality, we apply additional supervisory losses in the face region by leveraging off-axis projection, which significantly enhances geometric fidelity in face areas. Comprehensive experiments on public datasets demonstrate that InstantHuman outperforms state-of-the-art methods in reconstruction accuracy and face detail preservation, particularly under novel views. Notably, InstantHuman achieves fast inference, producing complete 3D human reconstructions in under one second.
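The projection-aware sampling idea, pairing each 3D vertex with image features at its 2D projection, can be illustrated with a minimal sketch. The function name, the pinhole camera model, and bilinear interpolation are illustrative assumptions of ours, not the paper's implementation:

```python
import numpy as np

def project_and_sample(vertices, K, feat_map):
    """Project 3D vertices into the image with intrinsics K and
    bilinearly sample per-vertex features from a 2D feature map.

    vertices: (N, 3) camera-space points (z > 0)
    K:        (3, 3) pinhole intrinsics
    feat_map: (H, W, C) dense image features
    returns:  (N, C) per-vertex features
    """
    # Pinhole projection: u = fx*x/z + cx, v = fy*y/z + cy
    uv = (K @ vertices.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    H, W, _ = feat_map.shape
    u = np.clip(uv[:, 0], 0, W - 1.001)
    v = np.clip(uv[:, 1], 0, H - 1.001)
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = (u - u0)[:, None], (v - v0)[:, None]

    # Bilinear interpolation of the four neighbouring feature vectors
    f00 = feat_map[v0, u0]
    f01 = feat_map[v0, u0 + 1]
    f10 = feat_map[v0 + 1, u0]
    f11 = feat_map[v0 + 1, u0 + 1]
    top = f00 * (1 - du) + f01 * du
    bot = f10 * (1 - du) + f11 * du
    return top * (1 - dv) + bot * dv
```

A sampler like this gives each mesh vertex a feature aligned with its image footprint, which is the structural bridge between 2D pixels and 3D vertices the abstract describes.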
Citations: 0
SDF-Former: A cross-domain HDR deghosting network with Statistical Deviation Fuzzy Membership
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-12-01 Epub Date: 2025-11-03 DOI: 10.1016/j.cag.2025.104465
Ying Qi , Zhaoyuan Huang , Qiushi Li , Jian Li , Teng Wan , Qiang Zhang
High Dynamic Range (HDR) imaging through multi-exposure fusion aims to reconstruct the full scene radiance by merging multiple Low Dynamic Range (LDR) images. A critical challenge is the ghosting artifact, induced by object motion or camera shake in dynamic scenes. Current deep learning methods often lack fine-grained, pixel-level control and tend to apply uniform processing across simple and challenging regions, hindering effective adaptive computational resource allocation. This paper presents SDF-Former, a novel HDR deghosting network. Its core innovation is the Statistical Deviation Fuzzy Membership (SDFM) mechanism, which uses fuzzy logic to quantify the statistical deviation of local pixel features. This enables the precise identification of challenging regions, such as motion edges and saturated areas, providing pixel-level difficulty awareness. To leverage this awareness, we design a cross-domain collaborative framework with a FEM and a FAT. This framework integrates the strengths of spatial-domain feature alignment with frequency-domain global modeling. The membership map from SDFM acts as an adaptive gating signal, selectively activating the computationally demanding FAT module. This approach directs global context modeling to focus more intensively on critical regions, thus ensuring inference efficiency. Extensive evaluations on public HDR datasets demonstrate that SDF-Former achieves state-of-the-art performance in both quantitative metrics and visual quality, showing clear advantages in complex scenarios involving large-scale motion and extreme exposures. By fusing fuzzy statistics-based, pixel-level adaptive control with efficient cross-domain processing, SDF-Former provides a computationally optimized solution for high-quality dynamic HDR reconstruction.
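As a rough illustration of the general idea behind deviation-based fuzzy difficulty masks (not the paper's SDFM mechanism), one can map each pixel's normalized deviation from its local neighborhood through a smooth membership function; the window statistics and sigmoid curve below are illustrative assumptions:

```python
import numpy as np

def deviation_membership(feat, win=3, k=4.0):
    """Map local statistical deviation to a [0, 1] membership mask.

    feat: (H, W) single-channel feature (e.g., luminance)
    win:  odd window size for local statistics
    k:    steepness of the fuzzy membership curve
    returns: (H, W) soft mask, near 1 where a pixel deviates strongly
    from its neighbourhood, near 0 in flat, easy regions.
    """
    pad = win // 2
    p = np.pad(feat, pad, mode="edge")
    H, W = feat.shape
    mean = np.empty((H, W))
    std = np.empty((H, W))
    # Local mean and std via a sliding window (plain loops for clarity)
    for i in range(H):
        for j in range(W):
            patch = p[i:i + win, j:j + win]
            mean[i, j] = patch.mean()
            std[i, j] = patch.std()
    # Normalised deviation, squashed by a sigmoid-style membership function
    dev = np.abs(feat - mean) / (std + 1e-6)
    return 1.0 / (1.0 + np.exp(-k * (dev - 1.0)))
```

A soft mask like this can act as the gating signal the abstract describes, routing only high-membership (difficult) pixels to the expensive module.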
Citations: 0
Design, development, and evaluation of an immersive augmented virtuality training system for transcatheter aortic valve replacement
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-12-01 Epub Date: 2025-09-12 DOI: 10.1016/j.cag.2025.104414
Jorik Jakober , Matthias Kunz , Robert Kreher , Matteo Pantano , Daniel Braß , Janine Weidling , Christian Hansen , Rüdiger Braun-Dullaeus , Bernhard Preim
Strong procedural skills are essential to perform safe and effective transcatheter aortic valve replacement (TAVR). Traditional training takes place in the operating room (OR) on real patients and requires learning new motor skills, resulting in longer procedure times, increased risk of complications, and greater radiation exposure for patients and medical personnel. Desktop-based simulators in interventional cardiology have shown some validity but lack true depth perception, whereas head-mounted display based Virtual Reality (VR) offers intuitive 3D interaction that enhances training effectiveness and spatial understanding. However, providing realistic and immersive training remains a challenging task as both lack tactile feedback. We have developed an augmented virtuality (AV) training system for transfemoral TAVR, combining a catheter tracking device (for translational input) with a simulated virtual OR. The system enables users to manually control a virtual angiography system via hand tracking and navigate a guidewire through a virtual patient up to the aortic valve using fluoroscopic-like imaging. In addition, we conducted a preliminary user study with 12 participants, assessing cybersickness, usability, workload, sense of presence, and qualitative factors. Preliminary results indicate that the system provides realistic interaction for key procedural steps, making it a suitable learning tool for novices. Limitations in angiography system operation include the lack of haptic resistance and usability limitations related to C-arm control, particularly due to hand tracking constraints and split attention between interaction and monitoring. Suggestions for improvement include catheter rotation tracking, expanded procedural coverage, and enhanced fluoroscopic image fidelity.
Citations: 0
Fusing multi-stage clicks with deep feedback aggregation for interactive image segmentation
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-12-01 Epub Date: 2025-09-24 DOI: 10.1016/j.cag.2025.104445
Jianwu Long, Yuanqin Liu, Shaoyi Wang, Shuang Chen, Qi Luo
The objective of interactive image segmentation is to generate a segmentation mask for the target object using minimal user interaction. During the interaction process, segmentation results from previous iterations are typically used as feedback to guide subsequent user input. However, existing approaches often concatenate user interactions, feedback, and low-level image features as direct inputs to the network, overlooking the high-level semantic information contained in the feedback and the issue of information dilution from click signals. To address these limitations, we propose a novel interactive image segmentation model called Multi-stage Click Fusion with deep Feedback Aggregation (MCFA). MCFA introduces a new information fusion strategy. Specifically, for feedback information, it refines previous-round feedback using deep features and integrates the optimized feedback into the feature representation. For user clicks, MCFA performs multi-stage fusion to enhance click propagation while constraining its direction through the refined feedback. Experimental results demonstrate that MCFA consistently outperforms existing methods across five benchmark datasets: GrabCut, Berkeley, SBD, DAVIS, and CVC-ClinicDB.
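Click-based models typically feed user clicks to the network as dense guidance maps. The sketch below shows one common encoding (Gaussian disks around positive and negative clicks); it is a generic illustration, not MCFA's specific multi-stage fusion:

```python
import numpy as np

def encode_clicks(shape, clicks, radius=5.0):
    """Encode user clicks as a pair of distance-based guidance maps,
    a common input encoding in click-based interactive segmentation.

    shape:  (H, W) of the image
    clicks: list of (row, col, is_positive) tuples
    returns: (H, W, 2) maps: channel 0 for positive clicks,
             channel 1 for negative clicks, each in [0, 1].
    """
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W]
    maps = np.zeros((H, W, 2))
    for r, c, pos in clicks:
        d2 = (ys - r) ** 2 + (xs - c) ** 2
        # Gaussian falloff centred on the click
        g = np.exp(-d2 / (2 * radius ** 2))
        ch = 0 if pos else 1
        maps[:, :, ch] = np.maximum(maps[:, :, ch], g)
    return maps
```

Maps like these are what get concatenated with image features as network input; the dilution problem the abstract mentions arises because such signals weaken as they propagate through deep layers.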
Citations: 0
Editorial Note for Issue 133 of Computers & Graphics
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-12-01 Epub Date: 2025-12-02 DOI: 10.1016/j.cag.2025.104508
Citations: 0
Designing and evaluating an immersive VR experience of a historic sailing ship in museum contexts
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-12-01 Epub Date: 2025-10-09 DOI: 10.1016/j.cag.2025.104439
Spyros Vosinakis , Panayiotis Koutsabasis , George Anastassakis , Andreas Papasalouros , Kostas Damianidis
Museums and exhibitions can benefit from immersive technologies by embodying visitors in rich interactive environments, where they can experience digitally reconstructed scenes and stories of the past. Nevertheless, public-space Virtual Reality (VR) interactions need to be short in duration, carefully designed to communicate the intended message, and optimized for the user experience, especially for first-time users. This paper contributes to the ongoing research on user experience in VR for cultural heritage through the presentation of the design and user evaluation of an installation that immerses users on board a historic sailing ship and has been part of a museum exhibition. We present the process of reconstructing the ship and developing the application with emphasis on design choices about the user experience (scene presentation, content delivery, navigation and interaction modes, assistance, etc.). We have performed a thorough user experience evaluation and present its results and our reflections on design issues regarding public VR installations for museums.
Citations: 0
Including reflections in real-time voxel-based global illumination
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-12-01 Epub Date: 2025-10-06 DOI: 10.1016/j.cag.2025.104449
Alejandro Cosin-Ayerbe, Gustavo Patow
Despite advances in rendering techniques, achieving high-quality real-time global illumination remains a significant challenge in Computer Graphics. While offline methods produce photorealistic lighting effects by accurately simulating light transport, real-time approaches struggle with the computational complexity of global illumination, particularly when handling dynamic scenes and moving light sources. Existing solutions often rely on precomputed data structures or approximate techniques, which either lack flexibility or introduce artifacts that degrade visual fidelity. In this work, we build upon previous research on a voxel-based real-time global illumination method to efficiently incorporate reflections and interreflections for both static and dynamic objects. Our approach leverages a voxelized scene representation, combined with a strategy for ray tracing camera-visible reflections, to ensure accurate materials while maintaining high performance. Key contributions include: (i) a high-quality material system capable of diffuse, glossy, and specular interreflections for both static and dynamic scene objects; (ii) a highly performant screen-space material model with low memory consumption; and (iii) an open-source full implementation for further research and development. Our method outperforms state-of-the-art academic and industrial techniques, achieving higher quality and better temporal stability without requiring excessive computational resources. By enabling real-time global illumination with reflections, our work lays the foundation for more advanced rendering systems, ultimately moving closer to the visual fidelity of offline rendering while maintaining interactivity.
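Voxel-based GI methods trace rays or cones against a coarse voxelized proxy of the scene rather than the full triangle geometry. A minimal sketch of building such a proxy from surface samples (illustrative names and a point-sample input, not the paper's voxelization pipeline):

```python
import numpy as np

def voxelize_points(points, albedo, grid_res, bounds):
    """Bin surface samples into a voxel grid of averaged albedo,
    the kind of coarse scene proxy that voxel-based GI traces
    against instead of the full triangle soup.

    points:   (N, 3) surface sample positions
    albedo:   (N, 3) per-sample RGB
    grid_res: voxels per axis
    bounds:   (lo, hi) scene AABB corners, each (3,)
    returns:  (R, R, R, 3) albedo grid and (R, R, R) occupancy mask
    """
    lo = np.asarray(bounds[0], dtype=float)
    hi = np.asarray(bounds[1], dtype=float)
    # Map each sample to its voxel index inside the AABB
    idx = ((points - lo) / (hi - lo) * grid_res).astype(int)
    idx = np.clip(idx, 0, grid_res - 1)

    accum = np.zeros((grid_res,) * 3 + (3,))
    count = np.zeros((grid_res,) * 3)
    for (i, j, k), a in zip(idx, albedo):
        accum[i, j, k] += a
        count[i, j, k] += 1
    occ = count > 0
    accum[occ] /= count[occ][:, None]   # average albedo per occupied voxel
    return accum, occ
```

Dynamic objects are typically re-voxelized each frame into such a grid, which is what makes moving geometry participate in the indirect lighting.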
Citations: 0
Correlations between instant and prolonged stimuli with physiological and subjective responses in VR horror
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-12-01 Epub Date: 2025-11-05 DOI: 10.1016/j.cag.2025.104470
Zeren Tao, Qilei Sun, Xiaohan Wang, Zuoqing Yang, Shengqiao Wu, Yibang Zhao, Binwei Lei
Virtual reality (VR) horror games can evoke intense feelings of fear and anxiety, yet it remains unclear how different types of fear stimuli within VR environments contribute to these physiological and emotional responses. While prior studies often investigate multisensory tension scenarios as a whole based on full-featured horror games, few have directly compared the effects of distinct fear stimuli—specifically, instant threat-based (e.g., sudden jump scares or chasing events) and prolonged atmospheric (e.g., persistent eerie ambiance) cues—on physiological indicators of fear. To address this gap, we developed a custom VR horror game that isolates these two categories of stimuli, enabling controlled experiments to examine their respective impacts on user physiology and self-reported fear. We compared experimental scenes featuring instant and prolonged stimuli against a baseline control scene to evaluate their influence. The results validate that instant stimuli exert a more pronounced influence on heart rate (HR) data, particularly in Maximum BPM and Average BPM metrics, while prolonged stimuli have a stronger effect on electrodermal activity (EDA), especially in EDA Max and EDA Mean Absolute Difference (MAD) metrics. The findings also reveal significant gender differences in certain physiological indicators and suggest that VR-based interventions could be tailored to modulate specific physiological systems by manipulating the type of emotional stimuli presented to the patient, potentially enhancing the effectiveness of therapeutic outcomes.
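For the reported EDA metrics, one plausible reading of Mean Absolute Difference (MAD) is the mean absolute change between successive samples; the helper below computes that alongside the signal maximum under this assumption (the paper's exact definition may differ):

```python
import numpy as np

def eda_summary(eda):
    """Summary statistics of the kind reported for electrodermal
    activity: the signal maximum (EDA Max) and the mean absolute
    difference of successive samples, a simple measure of
    moment-to-moment change in skin conductance.
    NOTE: the MAD definition here is an assumption for illustration.
    """
    eda = np.asarray(eda, dtype=float)
    return {
        "max": float(eda.max()),
        "mad": float(np.mean(np.abs(np.diff(eda)))),
    }
```

Under this reading, a slowly drifting tonic signal yields a small MAD while a signal with frequent phasic responses yields a large one, which is why it can separate prolonged atmospheric stress from a quiet baseline.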
Citations: 0
Flattening-based visualization of supine breast MRI
IF 2.8 | CAS Zone 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-01 | Epub Date: 2025-09-16 | DOI: 10.1016/j.cag.2025.104395
Julia Kummer , Elmar Laistler , Lena Nohava , Renata G. Raidou , Katja Bühler
We propose two novel visualization methods optimized for supine breast images that “flatten” breast tissue, facilitating examination of larger tissue areas within each coronal slice. Breast cancer is the most frequently diagnosed cancer in women, and early lesion detection is crucial for reducing mortality. Supine breast magnetic resonance imaging (MRI) enables better lesion localization for image-guided interventions; however, traditional axial visualization is suboptimal because the tissue spreads over the chest wall, resulting in numerous fragmented slices that radiologists must scroll through during standard interpretation. Using a human-centered design approach, we incorporated user and expert feedback throughout the co-design and evaluation stages of our flattening methods. Our first proposed method, a surface-cutting approach, generates offset surfaces and flattens them independently using As-Rigid-As-Possible (ARAP) surface mesh parameterization. The second method uses a landmark-based warp to flatten the entire breast volume at once. Expert evaluations revealed that the surface-cutting method provides intuitive overviews and clear vascular detail, with low metric (2–2.5%) and area (3.7–4.4%) distortions. However, independent slice flattening can introduce depth distortions across layers. The landmark warp offers consistent slice alignment and supports direct annotations and measurements, with radiologists favoring it for its anatomical accuracy. Both methods significantly reduced the number of slices needed to review, highlighting their potential for time savings and clinical impact — an essential factor for adopting supine MRI.
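The metric and area distortion percentages quoted above can be made concrete with a small sketch: given the same triangle mesh before and after flattening, per-triangle area distortion is the relative change in triangle area, averaged over the mesh. This is an illustrative computation only, not the authors' implementation (which flattens offset surfaces with ARAP parameterization):

```python
import math

def tri_area_3d(a, b, c):
    """Triangle area as half the magnitude of the edge cross product."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cross = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
    return 0.5 * math.sqrt(sum(x * x for x in cross))

def tri_area_2d(a, b, c):
    """Absolute value of the 2D signed-area formula."""
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                     - (c[0] - a[0]) * (b[1] - a[1]))

def mean_area_distortion(verts3d, verts2d, tris):
    """Mean relative area change (%) between a mesh and its flattening."""
    rel = []
    for i, j, k in tris:
        a3 = tri_area_3d(verts3d[i], verts3d[j], verts3d[k])
        a2 = tri_area_2d(verts2d[i], verts2d[j], verts2d[k])
        rel.append(abs(a2 - a3) / a3)
    return 100.0 * sum(rel) / len(rel)

# A unit right triangle "flattened" without change has 0% distortion.
print(mean_area_distortion([(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                           [(0, 0), (1, 0), (0, 1)],
                           [(0, 1, 2)]))  # → 0.0
```

The reported 3.7–4.4% area distortion would correspond to this average staying below roughly 4.5% across all triangles of the flattened breast surface.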
Julia Kummer, Elmar Laistler, Lena Nohava, Renata G. Raidou, Katja Bühler, "Flattening-based visualization of supine breast MRI," Computers & Graphics, vol. 133, Article 104395, December 2025. DOI: 10.1016/j.cag.2025.104395
Citations: 0
The vividness of mental imagery in virtual reality: A study on multisensory experiences in virtual tourism
IF 2.8 | CAS Zone 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-01 | Epub Date: 2025-09-25 | DOI: 10.1016/j.cag.2025.104443
Mariana Magalhães , Miguel Melo , António Coelho , Maximino Bessa
This paper aims to evaluate how different combinations of multisensory stimuli affect the vividness of users’ mental imagery in the context of virtual tourism. To this end, a between-subjects experimental study was conducted with 94 participants, who were allocated to either a positive or a negative immersive virtual environment. The positive environment contained only pleasant multisensory stimuli, whereas the negative contained only unpleasant stimuli. For each of the virtual experiences, a multisensory treasure hunt was developed, where each object found corresponded to a planned combination of stimuli (positive or negative, accordingly). The results showed that positive stimuli involving a higher number of sensory modalities resulted in higher reported vividness. In contrast, when the same multisensory modalities were delivered with negative stimuli, vividness levels decreased — an effect we attribute to potential cognitive overload. Nevertheless, some reduced negative combinations (audiovisual with smell and audiovisual with haptics) remained effective, indicating that olfactory and haptic cues play an important role in shaping users’ vividness of mental imagery, even in negative contexts.
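For a between-subjects design like the one described, the standard analysis compares per-participant scores across the two independent groups, e.g., with Welch's t statistic. The sketch below is illustrative only: the abstract does not state which test the authors used, and the vividness ratings are invented:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal
    variances allowed), suitable for a between-subjects design."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.fmean(a) - statistics.fmean(b)) / math.sqrt(
        va / len(a) + vb / len(b))

# Invented vividness ratings (1-7 scale) for two independent groups.
positive_env = [6, 5, 7, 6, 5, 6, 7]
negative_env = [4, 3, 5, 4, 4, 3, 5]
print(welch_t(positive_env, negative_env) > 0)  # prints True
```

A positive statistic indicates the first group reported higher vividness; significance would additionally require the Welch–Satterthwaite degrees of freedom and a p-value.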
Mariana Magalhães, Miguel Melo, António Coelho, Maximino Bessa, "The vividness of mental imagery in virtual reality: A study on multisensory experiences in virtual tourism," Computers & Graphics, vol. 133, Article 104443, December 2025. DOI: 10.1016/j.cag.2025.104443
Citations: 0