
Latest Publications in Virtual Reality

Quasi-3D: reducing convergence effort improves visual comfort of head-mounted stereoscopic displays
IF 4.2 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2024-02-26 DOI: 10.1007/s10055-023-00923-8
Vittorio Dalmasso, Michela Moretti, Claudio de’Sperati

The diffusion of virtual reality makes it urgent to solve the problem of the vergence-accommodation conflict that arises when viewing stereoscopic displays and causes visual stress. We addressed this issue with an approach based on reducing ocular convergence effort. In virtual environments, vergence can be controlled by manipulating the binocular separation of the virtual cameras. Using this technique, we implemented two quasi-3D conditions characterized by binocular image separations intermediate between 3D (stereoscopic) and 2D (monoscopic). In a first experiment, focused on perceptual aspects, ten participants performed a visuo-manual pursuit task while wearing a head-mounted display (HMD) in a head-constrained (non-immersive) condition for an overall exposure time of ~7 min. Passing from 3D to quasi-3D and 2D conditions progressively resulted in a decrease of vergence eye movements, in both mean convergence angle (static vergence) and vergence excursion (dynamic vergence), and an increase of hand pursuit spatial error, with the target perceived as further from the observer and larger. Decreased static and dynamic vergence predicted decreases in asthenopia trial-wise. In a second experiment, focused on tolerance aspects, fourteen participants performed a near-vision detection task while wearing an HMD in a head-free (immersive) condition for an overall exposure time of ~20 min. Passing from 3D to quasi-3D and 2D conditions produced a general decrease of both subjective and objective visual stress indicators (ocular convergence discomfort ratings, cyber-sickness symptoms and skin conductance level). Decreased static and dynamic vergence predicted the decrease in these indicators. Remarkably, skin conductance level predicted all subjective symptoms, both trial-wise and session-wise, suggesting that it could become an objective replacement for visual stress self-reports. We conclude that relieving convergence effort by reducing binocular image separation in virtual environments can be a simple and effective way to decrease the visual stress caused by stereoscopic HMDs. The negative side effect, a worsening of spatial vision, would arguably go unnoticed or be compensated over time. This initial proof-of-concept study should be extended by future large-scale studies testing additional environments, tasks, displays, users, and exposure times.
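
The core manipulation, rendering the two eyes' views from virtual cameras whose separation is scaled between full stereo and zero, can be illustrated with a short sketch. The paper does not publish code; the separation_factor parameter, the example IPD value and the convergence-angle proxy below are assumptions for illustration only.

```python
import math

def camera_offsets(ipd_m: float, separation_factor: float):
    """Lateral offsets (metres) of the left/right virtual cameras.

    separation_factor = 1.0 reproduces full stereo (3D), 0.0 collapses both
    cameras onto the cyclopean eye (2D); intermediate values give quasi-3D.
    """
    half_sep = 0.5 * ipd_m * separation_factor
    return -half_sep, +half_sep

def convergence_angle_deg(ipd_m: float, separation_factor: float, target_dist_m: float) -> float:
    """Rough proxy for the convergence angle the rendered disparity demands
    for a target straight ahead at target_dist_m."""
    effective_ipd = ipd_m * separation_factor
    return math.degrees(2.0 * math.atan2(effective_ipd / 2.0, target_dist_m))

if __name__ == "__main__":
    ipd = 0.063  # assumed average interpupillary distance in metres
    for factor, label in [(1.0, "3D"), (0.66, "quasi-3D strong"), (0.33, "quasi-3D weak"), (0.0, "2D")]:
        angle = convergence_angle_deg(ipd, factor, target_dist_m=0.5)
        print(f"{label:16s} offsets = {camera_offsets(ipd, factor)}  convergence = {angle:.2f} deg")
```

With factor 1.0 the demanded convergence for a 0.5 m target is about 7 degrees, and it falls toward zero as the factor shrinks, which is the effort reduction the quasi-3D conditions exploit.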

Citations: 0
Social VR design features and experiential outcomes: narrative review and relationship map for dyadic agent conversations
IF 4.2 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2024-02-22 DOI: 10.1007/s10055-024-00941-0
Pat Mulvaney, Brendan Rooney, Maximilian A. Friehs, John Francis Leader

The application of virtual reality to the study of conversation and social interaction is a relatively new field of study. While the affordances of VR in the domain compared to traditional methods are promising, the current state of the field is plagued by a lack of methodological standards and shared understanding of how design features of the immersive experience impact participants. In order to address this, this paper develops a relationship map between design features and experiential outcomes, along with expectations for how those features interact with each other. Based on the results of a narrative review drawing from diverse fields, this relationship map focuses on dyadic conversations with agents. The experiential outcomes chosen include presence & engagement, psychological discomfort, and simulator sickness. The relevant design features contained in the framework include scenario agency, visual fidelity, agent automation, environmental context, and audio features. We conclude by discussing the findings of the review and framework, such as the multimodal nature of social VR being highlighted, and the importance of environmental context, and lastly provide recommendations for future research in social VR.
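
As a purely illustrative sketch, the relationship map can be thought of as a small data structure linking the design features and experiential outcomes listed above; the two example links and their directions below are placeholders, not findings of the review.

```python
from dataclasses import dataclass
from typing import List

DESIGN_FEATURES = ["scenario agency", "visual fidelity", "agent automation",
                   "environmental context", "audio features"]
EXPERIENTIAL_OUTCOMES = ["presence & engagement", "psychological discomfort",
                         "simulator sickness"]

@dataclass
class Link:
    feature: str
    outcome: str
    expected_direction: str  # "+", "-" or "?"; placeholder annotations only

# Hypothetical example links; the actual map is the one derived in the paper.
relationship_map: List[Link] = [
    Link("visual fidelity", "presence & engagement", "+"),
    Link("environmental context", "psychological discomfort", "?"),
]

def outcomes_affected_by(feature: str) -> List[str]:
    """Query the map for the outcomes a given design feature is linked to."""
    return [link.outcome for link in relationship_map if link.feature == feature]

if __name__ == "__main__":
    print(outcomes_affected_by("visual fidelity"))
```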

Citations: 0
The value of collision feedback in robotic surgical skills training
IF 4.2 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2024-02-22 DOI: 10.1007/s10055-023-00891-z
Roelf Postema, Hidde Hardon, A. Masie Rahimi, Roel Horeman, Felix Nickel, Jenny Dankelman, Alexander L. A. Bloemendaal, Maarten van der Elst, Donald L. van der Peet, Freek Daams, Sem F. Hardon, Tim Horeman

Collision feedback about instrument and environment interaction is often lacking in robotic surgery training devices. The PoLaRS virtual reality simulator is a newly developed desk trainer that overcomes drawbacks of existing robot trainers for advanced laparoscopy. This study aimed to assess the effect of haptic and visual feedback during training on the performance of a robotic surgical task. Robotic surgery-naïve participants were randomized and equally divided into two training groups: Haptic and Visual Feedback (HVF) and No Haptic and Visual Feedback. Participants performed two basic virtual reality training tasks on the PoLaRS system as a pre- and post-test. The measurement parameters Time, Tip-to-tip distance, Path length Left/Right and Collisions Left/Right were used to analyze the learning curves and statistically compare the pre- and post-test performances. In total, 198 trials performed by 22 participants were included. The visual and haptic feedback did not negatively influence the time to complete the tasks. Although no improvement in skill was observed between pre- and post-tests, the mean rank of the number of collisions of the right grasper (dominant hand) was significantly lower in the HVF group during the second post-test (Mean Rank = 8.73 versus Mean Rank = 14.27, U = 30.00, p = 0.045). Haptic and visual feedback during training on the PoLaRS system resulted in fewer instrument collisions. These results warrant the introduction of haptic feedback for subjects with no experience in robotic surgery. The PoLaRS system can be utilized to remotely optimize instrument handling before commencing robotic surgery in the operating room.
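
The reported comparison of mean ranks with a U statistic corresponds to a Mann-Whitney U test on the post-test collision counts. A minimal sketch of that analysis is given below; the collision counts are invented placeholders, not the study's data.

```python
from scipy.stats import mannwhitneyu

# Placeholder right-grasper collision counts for the second post-test;
# 11 participants per group as in the study design, values are invented.
collisions_hvf    = [0, 1, 0, 2, 1, 0, 1, 0, 2, 1, 0]
collisions_no_hvf = [2, 3, 1, 4, 2, 3, 1, 2, 4, 3, 2]

# Rank-based two-sided test, suitable for small, non-normal count data.
u_stat, p_value = mannwhitneyu(collisions_hvf, collisions_no_hvf, alternative="two-sided")
print(f"U = {u_stat:.2f}, p = {p_value:.3f}")
```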

Citations: 0
Sensorimotor adaptation in virtual reality: Do instructions and body representation influence aftereffects?
IF 4.2 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2024-02-22 DOI: 10.1007/s10055-024-00957-6
Svetlana Wähnert, Ulrike Schäfer

Perturbations in virtual reality (VR) lead to sensorimotor adaptation during exposure, but also to aftereffects once the perturbation is no longer present. An experiment was conducted to investigate the impact of different task instructions and body representation on the magnitude and the persistence of these aftereffects. Participants completed the paradigm of sensorimotor adaptation in VR. They were assigned to one of three groups: control group, misinformation group or arrow group. The misinformation group and the arrow group were each compared to the control group to examine the effects of instruction and body representation. The misinformation group was given the incorrect instruction that in addition to the perturbation, a random error component was also built into the movement. The arrow group was presented a virtual arrow instead of a virtual hand. It was hypothesised that both would lead to a lower magnitude and persistence of the aftereffect because the object identity between hand and virtual representation would be reduced, and errors would be more strongly attributed to external causes. Misinformation led to lower persistence, while the arrow group showed no significant differences compared to the control group. The results suggest that information about the accuracy of the VR system can influence the aftereffects, which should be considered when developing VR instructions. No effects of body representation were found. One possible explanation is that the manipulated difference between abstract and realistic body representation was too small in terms of object identity.
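
The abstract does not state which perturbation was applied, so the sketch below assumes a common instantiation, a visuomotor rotation of the rendered hand about the start position; the rotate_feedback function and the 15 degree angle are illustrative assumptions only.

```python
import numpy as np

def rotate_feedback(hand_xy: np.ndarray, origin_xy: np.ndarray, angle_deg: float) -> np.ndarray:
    """Rotate the tracked hand position about the start position before it is
    rendered, producing the kind of visuomotor perturbation that drives
    sensorimotor adaptation (and aftereffects once the rotation is removed)."""
    theta = np.radians(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return origin_xy + rot @ (hand_xy - origin_xy)

if __name__ == "__main__":
    origin = np.array([0.0, 0.0])
    tracked = np.array([0.10, 0.20])                              # metres
    displayed = rotate_feedback(tracked, origin, angle_deg=15.0)  # assumed angle
    print("tracked:", tracked, "-> displayed:", displayed)
```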

Citations: 0
A real-time wearable AR system for egocentric vision on the edge
IF 4.2 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2024-02-19 DOI: 10.1007/s10055-023-00937-2
Iason Karakostas, Aikaterini Valakou, Despoina Gavgiotaki, Zinovia Stefanidi, Ioannis Pastaltzidis, Grigorios Tsipouridis, Nikolaos Kilis, Konstantinos C. Apostolakis, Stavroula Ntoa, Nikolaos Dimitriou, George Margetis, Dimitrios Tzovaras

Real-time performance is critical for Augmented Reality (AR) systems as it directly affects responsiveness and enables the timely rendering of virtual content superimposed on real scenes. In this context, we present the DARLENE wearable AR system, analysing its specifications, overall architecture and core algorithmic components. DARLENE comprises AR glasses and a wearable computing node responsible for several time-critical computation tasks. These include computer vision modules developed for the real-time analysis of dynamic scenes, supporting instance segmentation, tracking and pose estimation functionalities. To meet real-time requirements with limited resources, concrete algorithmic adaptations and design choices are introduced. The proposed system further supports real-time video streaming and interconnection with external IoT nodes. To improve user experience, a novel approach is proposed for the adaptive rendering of AR content, which considers the user’s stress level, the context of use and the environmental conditions to adjust the level of presented information and enhance situational awareness. Through extensive experiments, we evaluate the performance of individual components and end-to-end pipelines. As the proposed system targets time-critical security applications where it can be used to enhance police officers’ situational awareness, further experimental results involving end users are reported with respect to overall user experience, workload and evaluation of situational awareness.
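
The adaptive-rendering idea, adjusting how much AR content is overlaid based on the user's stress level and context, can be sketched as a simple policy function; the thresholds, input names and DetailLevel categories below are assumptions, not DARLENE's actual policy.

```python
from enum import Enum

class DetailLevel(Enum):
    MINIMAL = 1   # only safety-critical cues
    STANDARD = 2  # cues plus object labels
    FULL = 3      # full overlays, tracks and pose skeletons

def choose_detail_level(stress_level: float, low_light: bool, high_risk_context: bool) -> DetailLevel:
    """Pick how much AR content to render.

    stress_level is assumed normalised to [0, 1] (e.g. from physiological
    sensing); under high stress or in high-risk contexts the overlay is
    reduced so the most relevant information stays salient.
    """
    if stress_level > 0.7 or high_risk_context:
        return DetailLevel.MINIMAL
    if stress_level > 0.4 or low_light:
        return DetailLevel.STANDARD
    return DetailLevel.FULL

if __name__ == "__main__":
    print(choose_detail_level(stress_level=0.8, low_light=False, high_risk_context=False))
    print(choose_detail_level(stress_level=0.2, low_light=True, high_risk_context=False))
```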

Citations: 0
Reaching interactions in virtual reality: the effect of movement direction, hand dominance, and hemispace on the kinematic properties of inward and outward reaches
IF 4.2 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2024-02-14 DOI: 10.1007/s10055-023-00930-9
Logan Clark, Mohamad El Iskandarani, Sara Riggs

Recent literature has revealed that when users reach to select objects in VR, they can adapt how they move (i.e., the kinematic properties of their reaches) depending on the: (1) direction they move, (2) hand they use, and (3) side of the body where the movement occurs. In the present work, we took a more detailed look at how kinematic properties of reaching movements performed in VR change as a function of movement direction for reaches performed on each side of the body using each hand. We focused on reaches in 12 different directions that either involved moving inward (toward the body midline) or outward (away from the body midline). Twenty users reached in each direction on both left and right sides of their body, using both their dominant and non-dominant hands. The results provided a fine-grained account of how kinematic properties of virtual hand reaches change as a function of movement direction when users reach on either side of their body using either hand. The findings provide practitioners insights on how to interpret the kinematic properties of reaching behaviors in VR, which has applicability in emerging contexts that include detecting VR usability issues and using VR for stroke rehabilitation.
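
The kind of per-reach summary described above, direction classification plus kinematic properties, can be sketched as follows; the sampling rate, coordinate convention and chosen metrics are assumptions, not the authors' analysis code.

```python
import numpy as np

def reach_kinematics(positions: np.ndarray, dt: float, start_x: float, end_x: float) -> dict:
    """positions: (n, 3) array of hand samples in metres, x positive to the
    user's right with the body midline at x = 0; dt: sampling interval (s)."""
    deltas = np.diff(positions, axis=0)
    step_lengths = np.linalg.norm(deltas, axis=1)
    path_length = float(step_lengths.sum())
    peak_speed = float((step_lengths / dt).max())
    # Inward reaches move toward the midline (|x| decreases), outward away from it.
    direction = "inward" if abs(end_x) < abs(start_x) else "outward"
    return {"path_length_m": path_length, "peak_speed_mps": peak_speed, "direction": direction}

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 91)                  # assumed 90 Hz tracking for 1 s
    xs = np.linspace(0.30, 0.05, t.size)           # right hand moving toward the midline
    traj = np.column_stack([xs, 0.1 * np.sin(np.pi * t), np.zeros_like(t)])
    print(reach_kinematics(traj, dt=t[1] - t[0], start_x=xs[0], end_x=xs[-1]))
```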

Citations: 0
A collaborative AR application for education: from architecture design to user evaluation
IF 4.2 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2024-02-13 DOI: 10.1007/s10055-024-00952-x
Stefano Masneri, Ana Domínguez, Guillermo Pacho, Mikel Zorrilla, Mikel Larrañaga, Ana Arruarte

Augmented reality applications can be used in an educational context to facilitate learning. In particular, augmented reality has been successfully used as a tool to boost students’ engagement and to improve their understanding of complex topics. Despite this, augmented reality usage is still not common in schools and it still offers mostly individual experiences, lacking collaboration capabilities which are of paramount importance in a learning environment. This work presents an application called ARoundTheWorld, a multiplatform augmented reality application for education. It is based on a software architecture, designed with the help of secondary school teachers, that provides interoperability, multi-user support, integration with learning management systems and data analytics capabilities, thus simplifying the development of collaborative augmented reality learning experiences. The application has been tested by 44 students and 3 teachers from 3 different educational institutions to evaluate the usability as well as the impact of collaboration functionalities in the students’ engagement. Qualitative and quantitative results show that the application fulfils all the design objectives identified by teachers as key elements for augmented reality educational applications. Furthermore, the application was positively evaluated by the students and it succeeded in promoting collaborative behaviour. These results show that ARoundTheWorld, and other applications built using the same architecture, could be easily developed and successfully integrated into existing schools curricula.
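
As a loose illustration of the glue such an architecture implies, shared session state between users plus reporting of results to a learning management system, the sketch below defines a minimal in-memory model; every class, field and payload name is hypothetical and does not come from ARoundTheWorld.

```python
import json
import time
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SharedObjectState:
    object_id: str
    position: List[float]   # x, y, z in the shared AR anchor frame
    owner: str              # user currently manipulating the object

@dataclass
class Session:
    session_id: str
    objects: Dict[str, SharedObjectState] = field(default_factory=dict)

    def apply_update(self, update: SharedObjectState) -> str:
        """Apply one user's update and return the JSON message that would be
        broadcast to the other participants of the session."""
        self.objects[update.object_id] = update
        return json.dumps({"session": self.session_id, "object": update.object_id,
                           "position": update.position, "owner": update.owner,
                           "ts": time.time()})

def lms_score_payload(user_id: str, activity_id: str, score: float) -> str:
    """Hypothetical payload a backend could forward to an LMS endpoint."""
    return json.dumps({"user": user_id, "activity": activity_id, "score": score})

if __name__ == "__main__":
    session = Session("geo-class-01")
    print(session.apply_update(SharedObjectState("globe", [0.0, 1.2, 0.5], owner="student-07")))
    print(lms_score_payload("student-07", "continents-quiz", 0.85))
```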

Citations: 0
A virtual reality data visualization tool for dimensionality reduction methods
IF 4.2 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2024-02-12 DOI: 10.1007/s10055-024-00939-8
Juan C. Morales-Vega, Laura Raya, Manuel Rubio-Sánchez, Alberto Sanchez

In this paper, we present a virtual reality interactive tool for generating and manipulating visualizations for high-dimensional data in a natural and intuitive stereoscopic way. Our tool offers support for a diverse range of dimensionality reduction (DR) algorithms, enabling the transformation of complex data into insightful 2D or 3D representations within an immersive VR environment. The tool also allows users to include annotations with a virtual pen using hand tracking, to assign class labels to the data observations, and to perform simultaneous visualization with other users within the 3D environment to facilitate collaboration.
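
The data path the tool wraps, a DR algorithm projecting high-dimensional observations to 3D coordinates that can then be placed around the user, can be sketched with standard libraries; PCA and the random input below are stand-ins, since the tool supports several DR algorithms and no reference code is given here.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
high_dim = rng.normal(size=(500, 64))      # placeholder: 500 samples, 64 features

# Reduce to 3D so each sample becomes a point a VR scene can render.
embedding = PCA(n_components=3).fit_transform(high_dim)

# Rescale the embedding into a 1 m cube around the user for comfortable viewing.
span = embedding.max(axis=0) - embedding.min(axis=0)
points_in_metres = (embedding - embedding.min(axis=0)) / span - 0.5
print(points_in_metres.shape, points_in_metres.min(), points_in_metres.max())
```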

Citations: 0
HoloGCS: mixed reality-based ground control station for unmanned aerial vehicle
IF 4.2 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2024-02-09 DOI: 10.1007/s10055-023-00914-9


Human–robot interaction (HRI), which studies the interaction between robots and humans, appears as a promising research idea for the future of smart factories. In this study, HoloLens is implemented as a ground control station (HoloGCS), and its performance is discussed. HoloGCS is a mixed reality-based system for controlling and monitoring unmanned aerial vehicles (UAV). The system incorporates HRI through speech commands and video streaming, enabling UAV teleoperation. HoloGCS provides a user interface that allows operators to monitor and control the UAV easily. To demonstrate the feasibility of the proposed system, a user case study (user testing and a SUS-based questionnaire) was performed to gather qualitative results. In addition, throughput, RTT, latency, and speech accuracy were also gathered and analyzed to evaluate quantitative results.
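
A minimal sketch of the speech-command path, a recognised utterance mapped to a UAV control message, is given below; the vocabulary, message format and helper name are assumptions for illustration and do not reflect HoloGCS internals.

```python
import json
from typing import Optional

# Hypothetical vocabulary of recognised utterances mapped to UAV commands.
COMMANDS = {
    "take off": {"cmd": "takeoff"},
    "land":     {"cmd": "land"},
    "hover":    {"cmd": "hover"},
    "go left":  {"cmd": "move", "axis": "y", "value": -1.0},
    "go right": {"cmd": "move", "axis": "y", "value": 1.0},
}

def speech_to_uav_message(utterance: str) -> Optional[str]:
    """Turn a recognised utterance into a JSON control message, or None if
    the utterance is not part of the vocabulary."""
    entry = COMMANDS.get(utterance.strip().lower())
    return json.dumps(entry) if entry else None

if __name__ == "__main__":
    for text in ["Take off", "go left", "do a barrel roll"]:
        print(text, "->", speech_to_uav_message(text))
```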

Citations: 0
A mixed reality application for total hip arthroplasty
IF 4.2 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2024-02-09 DOI: 10.1007/s10055-024-00938-9
M.-Carmen Juan, Cora Hidaldo, Damian Mifsut

Total hip arthroplasty (or total hip replacement) is the current surgical solution for the treatment of advanced coxarthrosis, with the objective of providing mobility and pain relief to patients. For this purpose, surgery can be planned using preoperative images acquired from the patient and navigation systems can also be used during the intervention. Robots have also been used to assist in interventions. In this work, we propose a new mixed reality application for total hip arthroplasty. The surgeon only has to wear HoloLens 2. The application does not require acquiring preoperative or intraoperative images of the patient and uses hand interaction. Interaction is natural and intuitive. The application helps the surgeon place a virtual acetabular cup onto the patient's acetabulum as well as define its diameter. Similarly, a guide for drilling and implant placement is defined, establishing the abduction and anteversion angles. The surgeon has a direct view of the operating field at all times. For validation, the values of the abduction and anteversion angles offered by the application in 20 acetabular cup placements have been compared with real values (ground-truth). From the results, the mean (standard deviation) is 0.375 (0.483) degrees for the error in the anteversion angle and 0.1 (0.308) degrees for the abduction angle, with maximum discrepancies of 1 degree. A study was also carried out on a cadaver, in which a surgeon verified that the application is suitable to be transferred to routine clinical practice, helping in the guidance process for the implantation of a total hip prosthesis.
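
The reported accuracy figures are the mean and standard deviation of angular errors over 20 placements against ground truth; the sketch below shows that summary computation with placeholder error values, not the study's measurements.

```python
import numpy as np

# Placeholder per-placement errors (degrees) between the application's suggested
# cup orientation and the ground truth, for 20 acetabular cup placements.
anteversion_err = np.array([0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0.5])
abduction_err   = np.array([0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])

for name, err in [("anteversion", anteversion_err), ("abduction", abduction_err)]:
    print(f"{name}: mean = {err.mean():.3f} deg, "
          f"sd = {err.std(ddof=1):.3f} deg, max = {err.max():.1f} deg")
```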

Citations: 0