
Computers & Graphics-Uk: Latest Publications

Voice of artifacts: Evaluating user preferences for artifact voice in VR museums
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-11-08 DOI: 10.1016/j.cag.2025.104473
Bingqing Chen, Wenqi Chu, Xubo Yang, Yue Li
Voice is a powerful medium for conveying personality, emotion, and social presence, yet its role in cultural contexts such as virtual museums remains underexplored. While prior research in virtual reality (VR) has focused on ambient soundscapes or system-driven narration, little is known about what kinds of artifact voices users actually prefer, or if customized voices influence their experience. In this study, we designed a virtual museum and examined user perceptions of three types of voices for artifact chatbots, including a neutral synthetic voice (default), a socially relatable voice (familiar), and a user-customized voice with adjustable elements (customized). Through a within-subjects experiment, we measured user experience with established scales and a semi-structured interview. Results showed a strong user preference for the customized voice, which significantly outperformed the other two conditions. These findings suggest that users not only expect artifacts to speak, but also prefer to have control over the voices, which can enhance their experience and engagement. Our findings provide empirical evidence for the importance of voice customization in virtual museums and lay the groundwork for future design of interactive, user-centered sound and vocal experiences in VR environments.
{"title":"Voice of artifacts: Evaluating user preferences for artifact voice in VR museums","authors":"Bingqing Chen ,&nbsp;Wenqi Chu ,&nbsp;Xubo Yang ,&nbsp;Yue Li","doi":"10.1016/j.cag.2025.104473","DOIUrl":"10.1016/j.cag.2025.104473","url":null,"abstract":"<div><div>Voice is a powerful medium for conveying personality, emotion, and social presence, yet its role in cultural contexts such as virtual museums remains underexplored. While prior research in virtual reality (VR) has focused on ambient soundscapes or system-driven narration, little is known about what kinds of artifact voices users actually prefer, or if customized voices influence their experience. In this study, we designed a virtual museum and examined user perceptions of three types of voices for artifact chatbots, including a neutral synthetic voice (<em>default</em>), a socially relatable voice (<em>familiar</em>), and a user-customized voice with adjustable elements (<em>customized</em>). Through a within-subjects experiment, we measured user experience with established scales and a semi-structured interview. Results showed a strong user preference for the <em>customized</em> voice, which significantly outperformed the other two conditions. These findings suggest that users not only expect artifacts to speak, but also prefer to have control over the voices, which can enhance their experience and engagement. Our findings provide empirical evidence for the importance of voice customization in virtual museums and lay the groundwork for future design of interactive, user-centered sound and vocal experiences in VR environments.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104473"},"PeriodicalIF":2.8,"publicationDate":"2025-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145519907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Detail Enhancement Gaussian Avatar: High-quality head avatars modeling
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-11-08 DOI: 10.1016/j.cag.2025.104482
Zhangjin Huang, Bowei Yin
Modeling animatable head avatars from monocular video is a long-standing and challenging problem. Although recent approaches based on 3D Gaussian Splatting (3DGS) have achieved notable progress, the rendered avatars still exhibit several limitations. First, conventional 3DMM priors lack explicit geometric modeling for the eyes and teeth, leading to missing or suboptimal Gaussian initialization in these regions. Second, the heterogeneous characteristics of different facial subregions cause uniform joint training to under-optimize fine-scale details. Third, typical 3DGS issues such as boundary floaters and rendering artifacts remain unresolved in facial Gaussian representations. To address these challenges, we propose Detail Enhancement Gaussian Avatar (DEGA). (1) We augment Gaussian initialization with explicit eye and teeth regions, filling structural gaps left by standard 3DMM-based setups. (2) We introduce a hierarchical Gaussian representation that refines and decomposes the face into semantically aware subregions, enabling more thorough supervision and balanced optimization across all facial areas. (3) We incorporate a learned confidence attribute to suppress unreliable Gaussians, effectively mitigating boundary artifacts and floater phenomena. Overall, DEGA produces lifelike, dynamically expressive head avatars with high-fidelity geometry and appearance. Experiments on public benchmarks demonstrate that our method consistently outperforms state-of-the-art baselines.
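The abstract's third contribution, a learned confidence attribute that suppresses unreliable Gaussians, can be pictured with a small sketch. The code below is not the DEGA implementation; it only illustrates, under assumed names (ConfidentGaussians, base_opacity, confidence_logit), how a per-Gaussian confidence logit might gate splatting opacity while a mild regularizer keeps confidence high by default.

```python
import torch

class ConfidentGaussians(torch.nn.Module):
    """Illustrative only: per-Gaussian opacity gated by a learned confidence."""

    def __init__(self, num_gaussians: int):
        super().__init__()
        self.base_opacity = torch.nn.Parameter(torch.zeros(num_gaussians))
        self.confidence_logit = torch.nn.Parameter(torch.zeros(num_gaussians))

    def effective_opacity(self) -> torch.Tensor:
        # Opacity used by the splatting renderer, attenuated by a confidence in [0, 1];
        # Gaussians the photometric loss cannot support fade toward transparency.
        return torch.sigmoid(self.base_opacity) * torch.sigmoid(self.confidence_logit)

    def confidence_regularizer(self, weight: float = 1e-3) -> torch.Tensor:
        # Keeps confidence high by default, so only genuinely unreliable Gaussians
        # (e.g. boundary floaters) are suppressed during optimization.
        return weight * (1.0 - torch.sigmoid(self.confidence_logit)).mean()


model = ConfidentGaussians(num_gaussians=10_000)
alpha = model.effective_opacity()          # fed to the rasterizer
loss_reg = model.confidence_regularizer()  # added to the training loss
```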
{"title":"Detail Enhancement Gaussian Avatar: High-quality head avatars modeling","authors":"Zhangjin Huang,&nbsp;Bowei Yin","doi":"10.1016/j.cag.2025.104482","DOIUrl":"10.1016/j.cag.2025.104482","url":null,"abstract":"<div><div>Modeling animatable head avatars from monocular video is a long-standing and challenging problem. Although recent approaches based on 3D Gaussian Splatting (3DGS) have achieved notable progress, the rendered avatars still exhibit several limitations. First, conventional 3DMM priors lack explicit geometric modeling for the eyes and teeth, leading to missing or suboptimal Gaussian initialization in these regions. Second, the heterogeneous characteristics of different facial subregions cause uniform joint training to under-optimize fine-scale details. Third, typical 3DGS issues such as boundary floaters and rendering artifacts remain unresolved in facial Gaussian representations. To address these challenges, we propose <strong>Detail Enhancement Gaussian Avatar (DEGA)</strong>. (1) We augment Gaussian initialization with explicit eye and teeth regions, filling structural gaps left by standard 3DMM-based setups. (2) We introduce a hierarchical Gaussian representation that refines and decomposes the face into semantically aware subregions, enabling more thorough supervision and balanced optimization across all facial areas. (3) We incorporate a learned confidence attribute to suppress unreliable Gaussians, effectively mitigating boundary artifacts and floater phenomena. Overall, DEGA produces lifelike, dynamically expressive head avatars with high-fidelity geometry and appearance. Experiments on public benchmarks demonstrate that our method consistently outperforms state-of-the-art baselines.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104482"},"PeriodicalIF":2.8,"publicationDate":"2025-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145519910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Design and development of asymmetric VR environment supporting collaborative interaction of physicians and patients with MRI data
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-11-07 DOI: 10.1016/j.cag.2025.104479
Magdalena Igras-Cybulska, Artur Cybulski, John Liu, Maryla Kuczyńska, Agnieszka Dopierała, Radosław Niewiadomski, Daria Hemmerling, Isam Leebe, Gabriela Zapolska, Sławomir K. Tadeja
Emerging Virtual Reality (VR) technology holds the promise of revolutionizing how patients and medical professionals interact with medical imaging data, such as Magnetic Resonance Imaging (MRI) scans. However, to reach its full potential, the design of such systems must be thoroughly investigated and properly designed to cater to a diverse group of users. Consequently, this paper presents the design and development of a system aimed at leveraging VR to enhance patient understanding and facilitate shared decision-making in healthcare contexts. Our VR system enables real-time, immersive exploration of 2D and 3D MRI scans in Digital Imaging and Communications in Medicine (DICOM) format, letting clinicians intuitively load, view, and manipulate complex diagnostic data beyond the constraints of flat-screen tools. A key design feature is the system’s allowance for “asymmetric” collaboration–one user on a screen and another in VR viewing/manipulating 2D and 3D information synchronized in real-time. This asymmetric approach has potential to optimize user experience for clinical applications, ensuring flexibility and seamless coordination among physicians, patients, and caregivers, ultimately fostering more informed decision-making.
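As a rough illustration of the data-handling side only (the paper describes a full VR system, not this code), the sketch below uses the third-party pydicom library, which the abstract does not mention, to stack a folder of DICOM slices into a 3D volume of the kind a renderer could consume; the function and variable names are hypothetical.

```python
import numpy as np
import pydicom            # assumed dependency, not named in the paper
from pathlib import Path

def load_dicom_volume(folder: str) -> np.ndarray:
    """Stack a directory of DICOM slices into a 3D volume (illustrative sketch)."""
    slices = [pydicom.dcmread(p) for p in sorted(Path(folder).glob("*.dcm"))]
    # Order slices along the scan axis using the standard DICOM position tag.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.float32) for s in slices])
    # Apply the rescale tags when present (identity for most MRI series).
    slope = float(getattr(slices[0], "RescaleSlope", 1.0))
    intercept = float(getattr(slices[0], "RescaleIntercept", 0.0))
    return volume * slope + intercept
```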
{"title":"Design and development of asymmetric VR environment supporting collaborative interaction of physicians and patients with MRI data","authors":"Magdalena Igras-Cybulska ,&nbsp;Artur Cybulski ,&nbsp;John Liu ,&nbsp;Maryla Kuczyńska ,&nbsp;Agnieszka Dopierała ,&nbsp;Radosław Niewiadomski ,&nbsp;Daria Hemmerling ,&nbsp;Isam Leebe ,&nbsp;Gabriela Zapolska ,&nbsp;Sławomir K. Tadeja","doi":"10.1016/j.cag.2025.104479","DOIUrl":"10.1016/j.cag.2025.104479","url":null,"abstract":"<div><div>Emerging Virtual Reality (VR) technology holds the promise of revolutionizing how patients and medical professionals interact with medical imaging data, such as Magnetic Resonance Imaging (MRI) scans. However, to reach its full potential, the design of such systems must be thoroughly investigated and properly designed to cater to a diverse group of users. Consequently, this paper presents the design and development of a system aimed at leveraging VR to enhance patient understanding and facilitate shared decision-making in healthcare contexts. Our VR system enables real-time, immersive exploration of 2D and 3D MRI scans in Digital Imaging and Communications in Medicine (DICOM) format, letting clinicians intuitively load, view, and manipulate complex diagnostic data beyond the constraints of flat-screen tools. A key design feature is the system’s allowance for “asymmetric” collaboration–one user on a screen and another in VR viewing/manipulating 2D and 3D information synchronized in real-time. This asymmetric approach has potential to optimize user experience for clinical applications, ensuring flexibility and seamless coordination among physicians, patients, and caregivers, ultimately fostering more informed decision-making.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104479"},"PeriodicalIF":2.8,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145519914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Study of impacts of the large-screen and multi-screen trends of HMI in intelligent cabins on drivers' cognitive load
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-11-07 DOI: 10.1016/j.cag.2025.104476
Wang Yuxiao, Zhang Yanxiang
Human-machine Interface (HMI) serves as the core medium within the vehicle. Its trend towards large-screen and multi-screen design is a significant manifestation of today’s intelligent cabins. However, this trend also brings risks of distracting drivers' attention and affecting driving safety. Therefore, this study used virtual reality (VR) technology to provide a simulated driving environment for participants, and explored the impact of HMI-related attributes in the intelligent cabin, namely "central control screen size," "central control screen orientation," "secondary screen size," "types of central control screen information recognition tasks," on drivers’ cognitive load through simulated driving experiments. Additionally, we investigated the impact of "driving experience" on drivers' cognitive load. According to the results, during the simulated driving process, "types of central control screen information recognition tasks" had a significant effect on drivers' cognitive load. Compared to the text format, presenting information in a grid format on the central control screen resulted in a lower cognitive load for drivers. When performing text tasks, four variables—"central control screen size," "central control screen orientation," "secondary screen size," and "driving experience"—significantly influenced drivers' cognitive load. In contrast, during grid tasks, only two variables—"central control screen orientation" and "secondary screen size"—had a significant impact on drivers' cognitive load.
{"title":"Study of impacts of the large-screen and multi-screen trends of HMI in intelligent cabins on drivers' cognitive load","authors":"Wang Yuxiao,&nbsp;Zhang Yanxiang","doi":"10.1016/j.cag.2025.104476","DOIUrl":"10.1016/j.cag.2025.104476","url":null,"abstract":"<div><div>Human-machine Interface (HMI) serves as the core medium within the vehicle. Its trend towards large-screen and multi-screen design is a significant manifestation of today’s intelligent cabins. However, this trend also brings risks of distracting drivers' attention and affecting driving safety. Therefore, this study used virtual reality (VR) technology to provide a simulated driving environment for participants, and explored the impact of HMI-related attributes in the intelligent cabin, namely \"central control screen size,\" \"central control screen orientation,\" \"secondary screen size,\" \"types of central control screen information recognition tasks,\" on drivers’ cognitive load through simulated driving experiments. Additionally, we investigated the impact of \"driving experience\" on drivers' cognitive load. According to the results, during the simulated driving process, \"types of central control screen information recognition tasks\" had a significant effect on drivers' cognitive load. Compared to the text format, presenting information in a grid format on the central control screen resulted in a lower cognitive load for drivers. When performing text tasks, four variables—\"central control screen size,\" \"central control screen orientation,\" \"secondary screen size,\" and \"driving experience\"—significantly influenced drivers' cognitive load. In contrast, during grid tasks, only two variables—\"central control screen orientation\" and \"secondary screen size\"—had a significant impact on drivers' cognitive load.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104476"},"PeriodicalIF":2.8,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145519987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Implicit curve reconstruction with sharp features
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-11-07 DOI: 10.1016/j.cag.2025.104480
Yuhao Han, Xuhui Wang, Qian Ni, Yuan Liu, Jinna Zhang
Implicit representations have garnered substantial interest due to their prowess in representing intricate geometric and topological shapes. Nevertheless, encoding sharp features such as corners remains a challenge for these methods. This paper introduces an implicit B-spline curve reconstruction method utilizing a multiple-knot insertion strategy specifically designed to address this limitation. To enhance the boundary reconstruction, the proposed method integrates boundary error considerations into the fitting process and refines discrete signed distance field reconstructions, prioritizing the accurate reproduction of original boundaries. Numerical experiments demonstrate that our method achieves high-quality reconstruction results while preserving the sharp features.
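To make the role of multiple-knot insertion concrete, here is a small sketch that is not the authors' fitting pipeline: it uses SciPy to fit a cubic parametric B-spline and then inserts a knot with multiplicity equal to the degree, which drops the continuity at that parameter to C0, the property that allows a sharp corner to be represented exactly. The sample points and the corner parameter are invented.

```python
import numpy as np
from scipy import interpolate

# Made-up sample points on a curve that should have a corner near its middle.
points = np.array([[0.0, 1.0, 2.0, 3.0, 4.0, 5.0],
                   [0.0, 0.2, 0.8, 0.8, 0.2, 0.0]])
tck, u = interpolate.splprep(points, s=0, k=3)      # smooth cubic fit

u_corner = 0.5
tck_sharp = interpolate.insert(u_corner, tck, m=3)  # multiplicity 3 == degree -> C0

uu = np.linspace(0.0, 1.0, 200)
x_smooth, y_smooth = interpolate.splev(uu, tck)
x_sharp, y_sharp = interpolate.splev(uu, tck_sharp)
# Insertion alone leaves the curve unchanged; a subsequent fitting step can now
# move the control points around u_corner to reproduce a genuine sharp corner.
```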
{"title":"Implicit curve reconstruction with sharp features","authors":"Yuhao Han ,&nbsp;Xuhui Wang ,&nbsp;Qian Ni ,&nbsp;Yuan Liu ,&nbsp;Jinna Zhang","doi":"10.1016/j.cag.2025.104480","DOIUrl":"10.1016/j.cag.2025.104480","url":null,"abstract":"<div><div>Implicit representations have garnered substantial interest due to their prowess in representing intricate geometric and topological shapes. Nevertheless, encoding sharp features such as corners remains a challenge for these methods. This paper introduces an implicit B-spline curve reconstruction method utilizing a multiple-knot insertion strategy specifically designed to address this limitation. To enhance the boundary reconstruction, the proposed method integrates boundary error considerations into the fitting process and refines discrete signed distance field reconstructions, prioritizing the accurate reproduction of original boundaries. Numerical experiments demonstrate that our method achieves high-quality reconstruction results while preserving the sharp features.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104480"},"PeriodicalIF":2.8,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145519915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Look at that distractor: Dynamic translation gain under low perceptual load in virtual reality
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-11-05 DOI: 10.1016/j.cag.2025.104466
Ling-Long Zou, Qiang Tong, Er-Xia Luo, Sen-Zhe Xu, Song-Hai Zhang, Fang-Lue Zhang
Redirected walking (RDW) utilizes gain adjustments within perceptual thresholds to allow natural navigation in large-scale virtual environments (VEs) within confined physical environments (PEs). Previous research has found that when users are distracted by some scene elements, they are less sensitive to gain values. However, the effects on detection thresholds have not been quantitatively measured. In this paper, we present a novel method that dynamically adjusts translation gain by leveraging visual distractors. We place distractors within the user’s field of view and apply a larger translation gain when their attention is drawn to them. Because the magnitude of gain adjustment depends on the user’s level of engagement with the distractors, the redirection process remains smooth and unobtrusive. To evaluate our method, we developed a task-oriented virtual environment for a user study (n = 26). Results show that introducing distractors in the virtual environment significantly raises users’ translation gain thresholds. Furthermore, assessments using the Simulator Sickness Questionnaire (SSQ) and Igroup Presence Questionnaire (IPQ) indicate that the method maintains user comfort and acceptance, supporting its effectiveness for RDW systems.
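The core mechanism, applying a larger translation gain only while the user's gaze rests on a distractor, can be sketched in a few lines. This is a simplified illustration rather than the study's implementation; the gain values, the 10° attention cone, and the smoothing factor are placeholders.

```python
import numpy as np

BASE_GAIN = 1.0             # gain applied while attention is on the task
BOOSTED_GAIN = 1.4          # larger gain applied while a distractor holds attention
ATTENTION_ANGLE_DEG = 10.0  # gaze-to-distractor angle counted as "looking at it"
SMOOTHING = 0.1             # per-frame blend factor so gain changes stay gradual

def update_gain(current_gain, gaze_dir, head_pos, distractor_pos):
    """Return the translation gain to apply this frame (illustrative sketch)."""
    to_distractor = np.asarray(distractor_pos, dtype=float) - np.asarray(head_pos, dtype=float)
    to_distractor = to_distractor / np.linalg.norm(to_distractor)
    cos_angle = np.clip(np.dot(gaze_dir, to_distractor), -1.0, 1.0)
    attending = np.degrees(np.arccos(cos_angle)) < ATTENTION_ANGLE_DEG
    target = BOOSTED_GAIN if attending else BASE_GAIN
    return current_gain + SMOOTHING * (target - current_gain)

def apply_translation(real_delta, gain):
    # The virtual displacement is the tracked physical displacement scaled by gain.
    return gain * np.asarray(real_delta, dtype=float)
```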
{"title":"Look at that distractor: Dynamic translation gain under low perceptual load in virtual reality","authors":"Ling-Long Zou ,&nbsp;Qiang Tong ,&nbsp;Er-Xia Luo ,&nbsp;Sen-Zhe Xu ,&nbsp;Song-Hai Zhang ,&nbsp;Fang-Lue Zhang","doi":"10.1016/j.cag.2025.104466","DOIUrl":"10.1016/j.cag.2025.104466","url":null,"abstract":"<div><div>Redirected walking (RDW) utilizes gain adjustments within perceptual thresholds to allow natural navigation in large-scale virtual environments (VEs) within confined physical environments (PEs). Previous research has found that when users are distracted by some scene elements, they are less sensitive to gain values. However, the effects on detection thresholds have not been quantitatively measured. In this paper, we present a novel method that dynamically adjusts translation gain by leveraging visual distractors. We place distractors within the user’s field of view and apply a larger translation gain when their attention is drawn to them. Because the magnitude of gain adjustment depends on the user’s level of engagement with the distractors, the redirection process remains smooth and unobtrusive. To evaluate our method, we developed a task-oriented virtual environment for a user study (n = 26). Results show that introducing distractors in the virtual environment significantly raises users’ translation gain thresholds. Furthermore, assessments using the Simulator Sickness Questionnaire (SSQ) and Igroup Presence Questionnaire (IPQ) indicate that the method maintains user comfort and acceptance, supporting its effectiveness for RDW systems.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104466"},"PeriodicalIF":2.8,"publicationDate":"2025-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145467261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fast and accurate neural reflectance transformation imaging through knowledge distillation
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-11-05 DOI: 10.1016/j.cag.2025.104475
Tinsae G. Dulecha, Leonardo Righetto, Ruggero Pintus, Enrico Gobbetti, Andrea Giachetti
Reflectance Transformation Imaging (RTI) is very popular for its ability to visually analyze surfaces by enhancing surface details through interactive relighting, starting from only a few tens of photographs taken with a fixed camera and variable illumination. Traditional methods like Polynomial Texture Maps (PTM) and Hemispherical Harmonics (HSH) are compact and fast, but struggle to accurately capture complex reflectance fields using few per-pixel coefficients and fixed bases, leading to artifacts, especially in highly reflective or shadowed areas. The NeuralRTI approach, which exploits a neural autoencoder to learn a compact function that better approximates the local reflectance as a function of light directions, has been shown to produce superior quality at comparable storage cost. However, as it performs interactive relighting with custom decoder networks with many parameters, the rendering step is computationally expensive and not feasible at full resolution for large images on limited hardware. Earlier attempts to reduce costs by directly training smaller networks have failed to produce valid results. For this reason, we propose to reduce its computational cost through a novel solution based on Knowledge Distillation (DISK-NeuralRTI). Starting from a teacher network that can be one of the original Neural RTI methods or a more complex solution, DISK-NeuralRTI can create a student architecture with a simplified decoder network that preserves image quality and has computational cost compatible with real-time web-based visualization of large surfaces. Experimental results show that we can obtain a student prediction that is on par or more accurate than the existing NeuralRTI solutions with up to 80% parameter reduction. Using a novel benchmark of high-resolution Multi-Light image collections (RealRTIHR), we also tested the usability of a web-based visualization tool based on our simplified decoder for realistic surface inspection tasks. The results show that the solution reaches interactive frame rates without the necessity of using progressive rendering with image quality loss.
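A distillation training step of the kind described can be sketched as follows. This is a generic sketch under assumed interfaces, not the DISK-NeuralRTI code: teacher and student stand for relighting decoders mapping a per-pixel latent code and a light direction to RGB, and alpha balances matching the frozen teacher against matching the captured photographs.

```python
import torch
import torch.nn.functional as F

def distill_step(teacher, student, optimizer, latent, light_dir, target_rgb,
                 alpha: float = 0.5) -> float:
    """One knowledge-distillation step for a relighting decoder (illustrative)."""
    with torch.no_grad():
        teacher_rgb = teacher(latent, light_dir)              # soft targets from the large decoder
    student_rgb = student(latent, light_dir)
    loss = (alpha * F.l1_loss(student_rgb, teacher_rgb)            # follow the teacher
            + (1.0 - alpha) * F.l1_loss(student_rgb, target_rgb))  # stay faithful to the photos
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```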
{"title":"Fast and accurate neural reflectance transformation imaging through knowledge distillation","authors":"Tinsae G. Dulecha ,&nbsp;Leonardo Righetto ,&nbsp;Ruggero Pintus ,&nbsp;Enrico Gobbetti ,&nbsp;Andrea Giachetti","doi":"10.1016/j.cag.2025.104475","DOIUrl":"10.1016/j.cag.2025.104475","url":null,"abstract":"<div><div>Reflectance Transformation Imaging (RTI) is very popular for its ability to visually analyze surfaces by enhancing surface details through interactive relighting, starting from only a few tens of photographs taken with a fixed camera and variable illumination. Traditional methods like Polynomial Texture Maps (PTM) and Hemispherical Harmonics (HSH) are compact and fast, but struggle to accurately capture complex reflectance fields using few per-pixel coefficients and fixed bases, leading to artifacts, especially in highly reflective or shadowed areas. The NeuralRTI approach, which exploits a neural autoencoder to learn a compact function that better approximates the local reflectance as a function of light directions, has been shown to produce superior quality at comparable storage cost. However, as it performs interactive relighting with custom decoder networks with many parameters, the rendering step is computationally expensive and not feasible at full resolution for large images on limited hardware. Earlier attempts to reduce costs by directly training smaller networks have failed to produce valid results. For this reason, we propose to reduce its computational cost through a novel solution based on Knowledge Distillation (DISK-NeuralRTI). Starting from a teacher network that can be one of the original Neural RTI methods or a more complex solution, DISK-NeuralRTI can create a student architecture with a simplified decoder network that preserves image quality and has computational cost compatible with real-time web-based visualization of large surfaces. Experimental results show that we can obtain a student prediction that is on par or more accurate than the existing NeuralRTI solutions with up to 80% parameter reduction. Using a novel benchmark of high-resolution Multi-Light image collections (RealRTIHR), we also tested the usability of a web-based visualization tool based on our simplified decoder for realistic surface inspection tasks. The results show that the solution reaches interactive frame rates without the necessity of using progressive rendering with image quality loss.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104475"},"PeriodicalIF":2.8,"publicationDate":"2025-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145467267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Importance Sampling Guided Neural Radiosity
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-11-05 DOI: 10.1016/j.cag.2025.104472
Huangsheng Du, Youcheng Cai, Yutian Zhu, Peifeng Li, Ligang Liu
The rendering equation, as a high-dimensional recursive integral equation, requires recursive expansion and Monte Carlo sampling for its solution, which results in significant computational complexity. Recently, the Neural Radiosity method has been proposed to solve the rendering equation using neural networks, effectively circumventing the need for explicit recursion. However, Neural Radiosity relies on Monte Carlo integration for training, which necessitates a large number of samples for optimization, leading to unstable optimization and a relatively low convergence rate. In this paper, we propose an Importance Sampling Guided Neural Radiosity framework, which systematically integrates Neural Radiosity with importance sampling to improve optimization performance. Firstly, we propose a joint optimization strategy that simultaneously trains the importance sampling module and the Neural Radiosity module. Specifically, our importance sampling module predicts the distribution to effectively enhance the optimization of Neural Radiosity, while the importance sampling module can also achieve rapid convergence through the radiance estimated by Neural Radiosity. Subsequently, we propose an Improved Kullback–Leibler divergence to mitigate the gradient conflict problem in standard KL divergence, thereby further improving convergence performance. Extensive experiments demonstrate that our framework achieves rapid convergence and stable optimization while maintaining high-quality rendering performance. The project code will be released upon acceptance.
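The variance argument behind learning a sampling distribution can be illustrated with a toy importance-sampling estimator; this is a generic example, not the paper's joint training scheme or its improved KL divergence.

```python
import torch

def importance_estimate(integrand, sample_fn, pdf_fn, n: int = 4096) -> torch.Tensor:
    """Unbiased Monte Carlo estimate of an integral via importance sampling."""
    x = sample_fn(n)                           # samples drawn from the proposal q
    return (integrand(x) / pdf_fn(x)).mean()   # average of f(x) / q(x)

# Toy check: integrate f(x) = x^2 over [0, 1] (true value 1/3) with the
# proposal q(x) = 2x, which roughly follows the integrand's shape.
f = lambda x: x ** 2
sample = lambda n: torch.sqrt(torch.rand(n))   # inverse-CDF sampling of q(x) = 2x
pdf = lambda x: 2.0 * x
print(importance_estimate(f, sample, pdf))     # ~= 0.333, with low variance
```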
{"title":"Importance Sampling Guided Neural Radiosity","authors":"Huangsheng Du,&nbsp;Youcheng Cai,&nbsp;Yutian Zhu,&nbsp;Peifeng Li,&nbsp;Ligang Liu","doi":"10.1016/j.cag.2025.104472","DOIUrl":"10.1016/j.cag.2025.104472","url":null,"abstract":"<div><div>The rendering equation, as a high-dimensional recursive integral equation, requires recursive expansion and Monte Carlo sampling for its solution, which results in significant computational complexity. Recently, the Neural Radiosity method has been proposed to solve the rendering equation using neural networks, effectively circumventing the need for explicit recursion. However, Neural Radiosity relies on Monte Carlo integration for training, which necessitates a large number of samples for optimization, leading to unstable optimization and a relatively low convergence rate. In this paper, we propose an Importance Sampling Guided Neural Radiosity framework, which systematically integrates Neural Radiosity with importance sampling to improve optimization performance. Firstly, we propose a joint optimization strategy that simultaneously trains the importance sampling module and the Neural Radiosity module. Specifically, our importance sampling module predicts the distribution to effectively enhance the optimization of Neural Radiosity, while the importance sampling module can also achieve rapid convergence through the radiance estimated by Neural Radiosity. Subsequently, we propose an Improved Kullback–Leibler divergence to mitigate the gradient conflict problem in standard KL divergence, thereby further improving convergence performance. Extensive experiments demonstrate that our framework achieves rapid convergence and stable optimization while maintaining high-quality rendering performance. The project code will be released upon acceptance.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104472"},"PeriodicalIF":2.8,"publicationDate":"2025-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145519908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Foregrounding collaboration in CAVE systems: A survey across domains, interaction, and system design
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-11-05 DOI: 10.1016/j.cag.2025.104469
Fiona Xiao Yu Chen, Sen-Zhe Xu, Song-Hai Zhang
Cave Automatic Virtual Environments (CAVEs) remain an important class of projection-based, room-scale virtual reality (VR) systems, offering co-located, face-to-face immersion for a versatile range of fields that head-mounted displays cannot match. Although existing reviews typically emphasize hardware configurations, display technologies, and application areas, collaboration, a defining advantage of co-located and networked projection-based virtual reality, has received comparatively limited and fragmented attention. This survey addresses this gap through a broad, structured review of over 100 publications (1992–2025) spanning early prototypes to contemporary domain-specific installations. We analyze CAVE research from four perspectives: application domains, interaction methods, system configurations, and collaboration support. In contrast to earlier work, we treat collaboration not as a supporting feature but as a critical design variable. Across domains, multi-user support is inconsistently realized, commonly reported in the reviewed literature for education and visualization, but less integrated in cultural and medical applications. Where present, collaborative features are often reported to prioritize co-presence over shaping system architecture or interaction design. By foregrounding collaboration alongside interaction, domain context, and system design, this survey documents patterns, highlights observed mismatches between domain needs and system advantages, and outlines opportunities for more intentional collaboration support of shared immersive experiences.
{"title":"Foregrounding collaboration in CAVE systems: A survey across domains, interaction, and system design","authors":"Fiona Xiao Yu Chen ,&nbsp;Sen-Zhe Xu ,&nbsp;Song-Hai Zhang","doi":"10.1016/j.cag.2025.104469","DOIUrl":"10.1016/j.cag.2025.104469","url":null,"abstract":"<div><div>Cave Automatic Virtual Environments (CAVEs) remain an important class of projection-based, room-scale virtual reality (VR) systems, offering co-located, face-to-face immersion for a versatile range of fields that head-mounted displays cannot match. Although existing reviews typically emphasize hardware configurations, display technologies, and application areas, collaboration, a defining advantage of co-located and networked projection-based virtual reality, has received comparatively limited and fragmented attention. This survey addresses this gap through a broad, structured review of over 100 publications (1992–2025) spanning early prototypes to contemporary domain-specific installations. We analyze CAVE research from four perspectives: application domains, interaction methods, system configurations, and collaboration support. In contrast to earlier work, we treat collaboration not as a supporting feature but as a critical design variable. Across domains, multi-user support is inconsistently realized, commonly reported in the reviewed literature for education and visualization, but less integrated in cultural and medical applications. Where present, collaborative features often reported to prioritize co-presence over shaping system architecture or interaction design. By foregrounding collaboration alongside interaction, domain context, and system design, this survey documents patterns, highlights observed mismatches between domain needs and system advantages, and outlines opportunities for more intentional collaboration support of shared immersive experiences.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104469"},"PeriodicalIF":2.8,"publicationDate":"2025-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145519911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A method for fast prediction of flood evolution process based on graph neural network
IF 2.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-11-05 DOI: 10.1016/j.cag.2025.104471
Xuqiang Shao, Xuchen Zhang, Yuan Gao, Baiqiang Li
Flood disasters pose severe threats to socioeconomic stability, necessitating rapid and accurate prediction for disaster mitigation. However, physics-based numerical models suffer from critical limitations: low computational efficiency, high memory consumption, and unstable movement of fluid particles. To address these challenges, this paper proposes a hybrid framework that integrates a stabilized SPH-SWE solver with a tailored graph neural network: (1) An enhanced SPH-SWE solver incorporating area density definitions and virtual boundary particles to stabilize fluid motion. (2) A graph neural network (GNN) for rapid forecasting, built upon the Graph Network-based Simulator (GNS) framework. It is specifically enhanced with a Signed Distance Field (SDF) for boundary-aware learning and a novel subgraph partitioning method to reduce GPU memory usage by approximately 60% in large-scale scenarios. Experiments on a self-built dataset (spanning 10k–100k particles across diverse terrains) validate the framework’s efficiency and accuracy. The results demonstrate that these advancements collectively enable high-efficiency and scalable flood evolution modeling, providing a viable solution for rapid flood forecasting and emergency response planning.
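The particle-graph machinery such a learned simulator builds on can be sketched briefly; the code below is a generic illustration, not the authors' GNS model or their subgraph partitioning, and the connectivity radius, feature sizes, and mixing step are arbitrary.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_edges(positions: np.ndarray, radius: float) -> np.ndarray:
    """Connect particles within a radius; returns a symmetric (E, 2) edge list."""
    pairs = cKDTree(positions).query_pairs(r=radius, output_type="ndarray")
    return np.concatenate([pairs, pairs[:, ::-1]], axis=0)

def message_passing_step(features: np.ndarray, edges: np.ndarray) -> np.ndarray:
    """One averaging message-passing step (learned MLPs would replace the mixing)."""
    senders, receivers = edges[:, 0], edges[:, 1]
    aggregated = np.zeros_like(features)
    counts = np.zeros(len(features))
    np.add.at(aggregated, receivers, features[senders])
    np.add.at(counts, receivers, 1)
    neighbor_mean = aggregated / np.maximum(counts, 1)[:, None]
    return 0.5 * features + 0.5 * neighbor_mean

positions = np.random.rand(1000, 2)   # toy particle positions
features = np.random.rand(1000, 8)    # toy per-particle features
edges = build_edges(positions, radius=0.05)
features = message_passing_step(features, edges)
```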
{"title":"A method for fast prediction of flood evolution process based on graph neural network","authors":"Xuqiang Shao ,&nbsp;Xuchen Zhang ,&nbsp;Yuan Gao ,&nbsp;Baiqiang Li","doi":"10.1016/j.cag.2025.104471","DOIUrl":"10.1016/j.cag.2025.104471","url":null,"abstract":"<div><div>Flood disasters pose severe threats to socioeconomic stability, necessitating rapid and accurate prediction for disaster mitigation. However, physics-based numerical models suffer from critical limitations: low computational efficiency, high memory consumption, and unstable movement of fluid particles. To address these challenges, this paper proposes a hybrid framework that integrates a stabilized SPH-SWE solver with a tailored graph neural network: (1) An enhanced SPH-SWE solver incorporating area density definitions and virtual boundary particles to stabilize fluid motion. (2) A graph neural network (GNN) for rapid forecasting, built upon the Graph Network-based Simulator (GNS) framework. It is specifically enhanced with a Signed Distance Field (SDF) for boundary-aware learning and a novel subgraph partitioning method to reduce GPU memory usage by approximately 60% in large-scale scenarios. Experiments on a self-built dataset (spanning 10k–100k particles across diverse terrains) validate the framework’s efficiency and accuracy. The results demonstrate that these advancements collectively enable high-efficiency and scalable flood evolution modeling, providing a viable solution for rapid flood forecasting and emergency response planning.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104471"},"PeriodicalIF":2.8,"publicationDate":"2025-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145519905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0