
Latest articles in Virtual Reality Intelligent Hardware

Effects of virtual agents on interaction efficiency and environmental immersion in MR environments
Q1 Computer Science Pub Date : 2024-04-01 DOI: 10.1016/j.vrih.2023.11.001
Yihua Bao , Jie Guo , Dongdong Weng , Yue Liu , Zeyu Tian

Background

Physical entity interactions in mixed reality (MR) environments aim to harness human capabilities in manipulating physical objects, thereby enhancing virtual environment (VE) functionality. In MR, a common strategy is to use virtual agents as substitutes for physical entities, balancing interaction efficiency with environmental immersion. However, the impact of virtual agent size and form on interaction performance remains unclear.

Methods

Two experiments were conducted to explore how virtual agent size and form affect interaction performance, immersion, and preference in MR environments. The first experiment assessed five virtual agent sizes (25%, 50%, 75%, 100%, and 125% of physical size). The second experiment tested four types of frames (no frame, consistent frame, half frame, and surrounding frame) across all agent sizes. Participants, utilizing a head-mounted display, performed tasks involving moving cups, typing words, and using a mouse. They completed questionnaires assessing aspects such as the virtual environment effects, interaction effects, collision concerns, and preferences.

Results

Results from the first experiment revealed that agents matching physical object size produced the best overall performance. The second experiment demonstrated that consistent framing notably enhances interaction accuracy and speed but reduces immersion. To balance efficiency and immersion, frameless agents matching physical object sizes were deemed optimal.

Conclusions

Virtual agents matching physical entity sizes enhance user experience and interaction performance. Conversely, familiar frames from 2D interfaces detrimentally affect interaction and immersion in virtual spaces. This study provides valuable insights for the future development of MR systems.

Virtual Reality Intelligent Hardware, Vol. 6, Issue 2, pp. 169-179.
Citations: 0
VR-based digital twin for remote monitoring of mining equipment: Architecture and a case study
Q1 Computer Science Pub Date : 2024-04-01 DOI: 10.1016/j.vrih.2023.12.002
Jovana Plavšić, Ilija Mišković

Background

Traditional methods for monitoring mining equipment rely primarily on visual inspections, which are time-consuming, inefficient, and hazardous. This article introduces a novel approach to monitoring mission-critical systems and services in the mining industry by integrating virtual reality (VR) and digital twin (DT) technologies. VR-based DTs enable remote equipment monitoring, advanced analysis of machine health, enhanced visualization, and improved decision making.

Methods

This article presents an architecture for VR-based DT development, including the developmental stages, activities, and stakeholders involved. A case study on the condition monitoring of a conveyor belt using real-time synthetic vibration sensor data was conducted using the proposed methodology. The study demonstrated the application of the methodology in remote monitoring and identified the need for further development for implementation in active mining operations. The article also discusses interdisciplinarity, choice of tools, computational resources, time and cost, human involvement, user acceptance, frequency of inspection, multiuser environment, potential risks, and applications beyond the mining industry.

Results

The findings of this study provide a foundation for future research in the domain of VR-based DTs for remote equipment monitoring and a novel application area for VR in mining.

Virtual Reality Intelligent Hardware, Vol. 6, Issue 2, pp. 100-112.
Citations: 0
Exploring the effect of fingertip aero-haptic feedforward cues in directing eyes-free target acquisition in VR
Q1 Computer Science Pub Date : 2024-04-01 DOI: 10.1016/j.vrih.2023.12.001
Xiaofei Ren , Jian He , Teng Han , Songxian Liu , Mengfei Lv , Rui Zhou

Background

The sense of touch plays a crucial role in interactive behavior within virtual spaces, particularly when visual attention is absent. Although haptic feedback has been widely used to compensate for the lack of visual cues, the use of tactile information as a predictive feedforward cue to guide hand movements remains unexplored and lacks theoretical understanding.

Methods

This study introduces a fingertip aero-haptic rendering method to investigate its effectiveness in directing hand movements during eyes-free spatial interactions. The wearable device incorporates a multichannel micro-airflow chamber to deliver adjustable tactile effects on the fingertips.
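The abstract does not specify the chamber's channel layout or control law, so as a purely hypothetical sketch of how a multichannel airflow device might encode a directional feedforward cue, the snippet below maps a 2D hand-to-target direction onto four assumed outlet directions with cosine weighting (channel count, geometry, and weighting are all illustrative assumptions, not the authors' design):

```python
import numpy as np

# Hypothetical layout: four airflow outlets around the fingertip,
# pointing left, right, forward, and backward in the hand plane.
CHANNEL_DIRS = np.array([
    [-1.0, 0.0],
    [1.0, 0.0],
    [0.0, 1.0],
    [0.0, -1.0],
])

def airflow_intensities(target_dir, max_duty=1.0):
    """Map a 2D hand-to-target direction onto per-channel airflow duty
    cycles via cosine weighting; channels facing away stay off."""
    v = np.asarray(target_dir, dtype=float)
    v = v / np.linalg.norm(v)
    w = CHANNEL_DIRS @ v                 # cosine similarity per channel
    return max_duty * np.clip(w, 0.0, None)
```

Cosine weighting simply turns on the channels whose outlets face the target direction, in proportion to alignment; a real device would additionally calibrate each channel's duty cycle to perceived airflow strength.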

Results

The first study verified that tactile directional feedforward cues significantly improve user capabilities in eyes-free target acquisition and that users rely heavily on haptic indications rather than spatial memory to control their hands. A subsequent study examined the impact of enriched tactile feedforward cues on assisting users in determining precise target positions during eyes-free interactions, and assessed the required learning efforts.

Conclusions

The haptic feedforward effect holds great practical promise in eyeless design for virtual reality. We aim to integrate cognitive models and tactile feedforward cues in the future, and apply richer tactile feedforward information to alleviate users' perceptual deficiencies.

Virtual Reality Intelligent Hardware, Vol. 6, Issue 2, pp. 113-131.
Citations: 0
Chemical simulation teaching system based on virtual reality and gesture interaction
Q1 Computer Science Pub Date : 2024-04-01 DOI: 10.1016/j.vrih.2023.09.001
Dengzhen Lu , Hengyi Li , Boyu Qiu , Siyuan Liu , Shuhan Qi

Background

Most existing chemical experiment teaching systems lack solid immersive experiences, making it difficult to engage students. To address these challenges, we propose a chemical simulation teaching system based on virtual reality and gesture interaction.

Methods

The parameters of the models were obtained through actual investigation, whereby Blender and 3DS MAX were used to model and import these parameters into a physics engine. By establishing an interface for the physics engine, gesture interaction hardware, and virtual reality (VR) helmet, a highly realistic chemical experiment environment was created. Using code script logic, particle systems, as well as other systems, chemical phenomena were simulated. Furthermore, we created an online teaching platform using streaming media and databases to address the problems of distance teaching.

Results

The proposed system was evaluated against two mainstream products in the market. In the experiments, the proposed system outperformed the other products in terms of fidelity and practicality.

Conclusions

The proposed system which offers realistic simulations and practicability, can help improve the high school chemistry experimental education.

Virtual Reality Intelligent Hardware, Vol. 6, Issue 2, pp. 148-168.
Citations: 0
Large-scale spatial data visualization method based on augmented reality
Q1 Computer Science Pub Date : 2024-04-01 DOI: 10.1016/j.vrih.2024.02.002
Xiaoning Qiao , Wenming Xie , Xiaodong Peng , Guangyun Li , Dalin Li , Yingyi Guo , Jingyi Ren

Background

A task assigned to space exploration satellites involves detecting the physical environment within a certain region of space. However, space detection data are complex and abstract. These data are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment.

Methods

A time-series dynamic data sampling method for large-scale space was proposed for sample detection data in space and time, and the corresponding relationships between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The visualization process is optimized for rendering by merging materials, reducing the number of patches, and performing other operations.
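A tone mapping built on statistical histogram equalization, as described, can be sketched in a few lines of NumPy; this is a generic illustration of the technique, not the authors' implementation:

```python
import numpy as np

def equalize_tone_map(values, bins=256):
    """Map raw attribute values to [0, 1] so that the mapped tones are
    approximately uniformly distributed (histogram equalization)."""
    values = np.asarray(values, dtype=float)
    hist, edges = np.histogram(values, bins=bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                          # empirical CDF per bin
    centers = 0.5 * (edges[:-1] + edges[1:])
    # each value is mapped to its position on the empirical CDF
    return np.interp(values, centers, cdf)

# Example: heavily skewed detection values get spread over the tone range
data = np.random.default_rng(0).exponential(scale=1.0, size=10_000)
tones = equalize_tone_map(data)
```

Because extreme values no longer dominate the color scale, detection data with uneven distributions use the full tone range, which is the point of applying equalization to the final attribute feature data.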

Results

The results of sampling, feature extraction, and uniform visualization of the detection data of complex types, long duration spans, and uneven spatial distributions were obtained. The real-time visualization of large-scale spatial structures using augmented reality devices, particularly low-performance devices, was also investigated.

Conclusions

The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure and changes in the spatial environment using augmented reality, and assist in intuitively discovering spatial environmental events and evolutionary rules.

Virtual Reality Intelligent Hardware, Vol. 6, Issue 2, pp. 132-147.
Citations: 0
Audio2AB: Audio-driven collaborative generation of virtual character animation
Q1 Computer Science Pub Date : 2024-02-01 DOI: 10.1016/j.vrih.2023.08.006
Lichao Niu , Wenjun Xie , Dong Wang , Zhongrui Cao , Xiaoping Liu

Background

Considerable research has been conducted in the areas of audio-driven virtual character gestures and facial animation with some degree of success. However, few methods exist for generating full-body animations, and the portability of virtual character gestures and facial animations has not received sufficient attention.

Methods

Therefore, we propose a deep-learning-based audio-to-animation-and-blendshape (Audio2AB) network that generates gesture animations and ARKit's 52 facial-expression blendshape parameter weights based on audio, audio-corresponding text, emotion labels, and semantic relevance labels to generate parametric data for full-body animations. This parameterization method can be used to drive full-body animations of virtual characters and improve their portability. In the experiment, we first downsampled the gesture and facial data to achieve the same temporal resolution for the input, output, and facial data. The Audio2AB network then encoded the audio, audio-corresponding text, emotion labels, and semantic relevance labels, and then fused the text, emotion labels, and semantic relevance labels into the audio to obtain better audio features. Finally, we established links between the body, gesture, and facial decoders and generated the corresponding animation sequences through our proposed GAN-GF loss function.
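The abstract does not state which resampling scheme was used to bring the streams to one temporal resolution; a simple linear-interpolation resampler (an illustrative stand-in, not the paper's code) could look like:

```python
import numpy as np

def resample_to(frames, target_len):
    """Linearly resample a (T, D) feature sequence (e.g., gesture or
    blendshape frames) to target_len frames on a common timeline."""
    frames = np.asarray(frames, dtype=float)
    t_src = np.linspace(0.0, 1.0, num=len(frames))
    t_dst = np.linspace(0.0, 1.0, num=target_len)
    # interpolate each feature dimension independently
    return np.stack(
        [np.interp(t_dst, t_src, frames[:, d]) for d in range(frames.shape[1])],
        axis=1,
    )
```

Running every modality through the same resampler guarantees that gesture and facial frames line up one-to-one before encoding.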

Results

By using audio, audio-corresponding text, and emotional and semantic relevance labels as input, the trained Audio2AB network could generate gesture animation data containing blendshape weights. Therefore, different 3D virtual character animations could be created through parameterization.

Conclusions

The experimental results showed that the proposed method could generate significant gestures and facial animations.

Virtual Reality Intelligent Hardware, Vol. 6, Issue 1, pp. 56-70.
Citations: 0
Selective sampling with Gromov–Hausdorff metric: Efficient dense-shape correspondence via Confidence-based sample consensus
Q1 Computer Science Pub Date : 2024-02-01 DOI: 10.1016/j.vrih.2023.08.007
Dvir Ginzburg, Dan Raviv

Background

Functional mapping, despite its proven efficiency, suffers from a “chicken or egg” scenario in that poor spatial features lead to inadequate spectral alignment and vice versa during training, often resulting in slow convergence, high computational costs, and learning failures, particularly when small datasets are used.

Methods

A novel method is presented for dense-shape correspondence, whereby the spatial information transformed by neural networks is combined with the projections onto spectral maps to overcome the “chicken or egg” challenge by selectively sampling only points with high confidence in their alignment. These points then contribute to the alignment and spectral loss terms, boosting training, and accelerating convergence by a factor of five. To ensure full unsupervised learning, the Gromov–Hausdorff distance metric was used to select the points with the maximal alignment score displaying most confidence.
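As a toy illustration of confidence-based point selection under metric distortion (Euclidean distances stand in for geodesic ones, and the scoring is a simplification of the Gromov–Hausdorff criterion, not the authors' training pipeline):

```python
import numpy as np

def pairwise_dist(X):
    """Euclidean distance matrix; a stand-in for geodesic shape distances."""
    diff = X[:, None, :] - X[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def select_confident(X, Y, pi, keep_ratio=0.2):
    """Score each point of X under the candidate map pi: X -> Y by its
    average pairwise-distance disagreement (a Gromov-Hausdorff-style
    distortion) and keep the least-distorted, most confident fraction."""
    dX = pairwise_dist(np.asarray(X, dtype=float))
    dY = pairwise_dist(np.asarray(Y, dtype=float))
    distortion = np.abs(dX - dY[np.ix_(pi, pi)]).mean(axis=1)
    k = max(1, int(keep_ratio * len(X)))
    return np.argsort(distortion)[:k]    # indices of confident samples
```

Only the selected indices would then contribute to the alignment and spectral loss terms, so badly matched points cannot dominate the gradients early in training.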

Results

The effectiveness of the proposed approach was demonstrated on several benchmark datasets, whereby results were reported as superior to those of spectral and spatial-based methods.

Conclusions

The proposed method provides a promising new approach to dense-shape correspondence, addressing the key challenges in the field and offering significant advantages over the current methods, including faster convergence, improved accuracy, and reduced computational costs.

Virtual Reality Intelligent Hardware, Vol. 6, Issue 1, pp. 30-42.
Citation count: 0
Importance-aware 3D volume visualization for medical content-based image retrieval-a preliminary study 基于医学内容的图像检索中的重要性感知三维体积可视化--初步研究
Q1 Computer Science Pub Date : 2024-02-01 DOI: 10.1016/j.vrih.2023.08.005
Mingjian Li , Younhyun Jung , Michael Fulham , Jinman Kim

Background

A medical content-based image retrieval (CBIR) system is designed to retrieve images from large imaging repositories that are visually similar to a user's query image. CBIR is widely used in evidence-based diagnosis, teaching, and research. Although retrieval accuracy has largely improved, there has been limited development toward visualizing the important image features that indicate the similarity of retrieved images. Despite the prevalence of 3D volumetric data in medical imaging such as computed tomography (CT), current CBIR systems still rely on 2D cross-sectional views for the visualization of retrieved images. Such 2D visualization requires users to browse through the image stacks to confirm the similarity of the retrieved images and often involves mental reconstruction of 3D information, including the size, shape, and spatial relations of multiple structures. This process is time-consuming and reliant on users' experience.

Methods

In this study, we proposed an importance-aware 3D volume visualization method. The rendering parameters were automatically optimized to maximize the visibility of important structures that were detected and prioritized in the retrieval process. We then integrated the proposed visualization into a CBIR system, thereby complementing the 2D cross-sectional views for relevance feedback and further analyses.

Results

Our preliminary results demonstrate that 3D visualization can provide additional information using multimodal positron emission tomography and computed tomography (PET-CT) images of a non-small cell lung cancer dataset.
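The abstract does not detail how the rendering parameters are optimized, but visibility-driven volume rendering methods typically maximize an importance-weighted visibility term accumulated front-to-back along each viewing ray. A minimal sketch of that objective follows; the function and variable names are hypothetical, not the authors' implementation:

```python
def importance_weighted_visibility(opacities, importances):
    """Importance-weighted visibility of samples along one viewing ray.

    opacities:   per-sample opacity alpha_i in [0, 1], ordered front to back.
    importances: per-sample importance w_i (e.g., from the retrieval stage).
    Each sample's visibility is its opacity times the transmittance of
    everything in front of it; an optimizer would tune the transfer
    function so that important structures retain high visibility.
    """
    transmittance = 1.0
    score = 0.0
    for alpha, weight in zip(opacities, importances):
        score += weight * transmittance * alpha  # visible contribution
        transmittance *= (1.0 - alpha)           # light remaining behind it
    return score
```

An opaque unimportant structure in front of an important one drives this score toward zero, which is exactly the situation an importance-aware transfer function is meant to avoid.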

{"title":"Importance-aware 3D volume visualization for medical content-based image retrieval-a preliminary study","authors":"Mingjian Li ,&nbsp;Younhyun Jung ,&nbsp;Michael Fulham ,&nbsp;Jinman Kim","doi":"10.1016/j.vrih.2023.08.005","DOIUrl":"https://doi.org/10.1016/j.vrih.2023.08.005","url":null,"abstract":"<div><h3>Background</h3><p>A medical content-based image retrieval (CBIR) system is designed to retrieve images from large imaging repositories that are visually similar to a user′s query image. CBIR is widely used in evidence- based diagnosis, teaching, and research. Although the retrieval accuracy has largely improved, there has been limited development toward visualizing important image features that indicate the similarity of retrieved images. Despite the prevalence of3D volumetric data in medical imaging such as computed tomography (CT), current CBIR systems still rely on 2D cross-sectional views for the visualization of retrieved images. Such 2D visualization requires users to browse through the image stacks to confirm the similarity of the retrieved images and often involves mental reconstruction of 3D information, including the size, shape, and spatial relations of multiple structures. This process is time-consuming and reliant on users’ experience.</p></div><div><h3>Methods</h3><p>In this study, we proposed an importance-aware 3D volume visualization method. The rendering parameters were automatically optimized to maximize the visibility of important structures that were detected and prioritized in the retrieval process. 
We then integrated the proposed visualization into a CBIR system, thereby complementing the 2D cross-sectional views for relevance feedback and further analyses.</p></div><div><h3>Results</h3><p>Our preliminary results demonstrate that 3D visualization can provide additional information using multimodal positron emission tomography and computed tomography (PET- CT) images of a non-small cell lung cancer dataset.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 1","pages":"Pages 71-81"},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579623000566/pdf?md5=771df0097b94f27ef3ca76e8f800722b&pid=1-s2.0-S2096579623000566-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139986000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citation count: 0
Effective data transmission through energy-efficient clustering and Fuzzy-Based IDS routing approach in WSNs 通过 WSN 中的高能效集群和基于模糊的 IDS 路由方法实现有效的数据传输
Q1 Computer Science Pub Date : 2024-02-01 DOI: 10.1016/j.vrih.2022.10.002
Saziya Tabbassum (Research Scholar) , Rajesh Kumar Pathak (Vice Chancellor)

Wireless sensor networks (WSNs) gather information and sense information samples in a certain region and communicate these readings to a base station (BS). Energy efficiency is considered a major design issue in WSNs and can be addressed using clustering and routing techniques. Information is sent from the source to the BS via routing procedures. However, these routing protocols must ensure that packets are delivered securely, guaranteeing that neither adversaries nor unauthenticated individuals have access to the sent information. Secure data transfer is intended to protect the data from illegal access, damage, or disruption. Thus, in the proposed model, secure data transmission is developed in an energy-efficient manner. A low-energy adaptive clustering hierarchy (LEACH) is developed to efficiently transfer the data. For the intrusion detection system (IDS), Fuzzy logic and artificial neural networks (ANNs) are proposed. Initially, the nodes were randomly placed in the network and initialized to gather information. To ensure fair energy dissipation between the nodes, LEACH randomly chooses cluster heads (CHs) and allocates this role to the various nodes based on a round-robin management mechanism. The intrusion-detection procedure was then utilized to determine whether intruders were present in the network. Within the WSN, a Fuzzy inference rule was utilized to distinguish malicious nodes from legal nodes. Subsequently, an ANN was employed to distinguish harmful nodes from suspicious nodes. The effectiveness of the proposed approach was validated using metrics that attained 97% accuracy, 97% specificity, and 95% sensitivity. Thus, the LEACH and Fuzzy-based IDS approaches proved the best choices for securing data transmission in an energy-efficient manner.
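The round-robin cluster-head rotation the abstract mentions matches the standard LEACH election rule: a node that has not yet served as CH in the current epoch volunteers with probability T(n) = p / (1 − p·(r mod 1/p)), so every node gets the role once per 1/p rounds and energy drain stays even. A minimal sketch under that assumption (the paper's exact election details are not given, and all names are illustrative):

```python
import random

def leach_elect_cluster_heads(nodes, p, rnd, seed=None):
    """Elect cluster heads for one LEACH round.

    nodes: dict mapping node id -> last round it served as CH (-1 if never).
    p:     desired fraction of cluster heads per round.
    rnd:   current round number.
    Returns the set of node ids elected as CH this round.
    """
    rng = random.Random(seed)
    epoch = int(1 / p)                       # each node may be CH once per epoch
    threshold = p / (1 - p * (rnd % epoch))  # standard LEACH threshold T(n)
    epoch_start = rnd - (rnd % epoch)
    heads = set()
    for node_id, last_ch_round in nodes.items():
        # Nodes that already served as CH in this epoch sit out, which
        # rotates the role round-robin and evens out energy dissipation.
        if last_ch_round >= epoch_start and last_ch_round != -1:
            continue
        if rng.random() < threshold:
            heads.add(node_id)
    return heads
```

Note how the threshold grows as the epoch progresses: in the last round of an epoch T(n) reaches 1, so every node that has not yet served is forced to take a turn.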

{"title":"Effective data transmission through energy-efficient clus- tering and Fuzzy-Based IDS routing approach in WSNs","authors":"Saziya Tabbassum (Research Scholar) ,&nbsp;Rajesh Kumar Pathak (Vice Chancellor)","doi":"10.1016/j.vrih.2022.10.002","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.10.002","url":null,"abstract":"<div><p>Wireless sensor networks (WSN) gather information and sense information samples in a certain region and communicate these readings to a base station (BS). Energy efficiency is considered a major design issue in the WSNs, and can be addressed using clustering and routing techniques. Information is sent from the source to the BS via routing procedures. However, these routing protocols must ensure that packets are delivered securely, guar- anteeing that neither adversaries nor unauthentic individuals have access to the sent information. Secure data transfer is intended to protect the data from illegal access, damage, or disruption. Thus, in the proposed model, secure data transmission is developed in an energy-effective manner. A low-energy adaptive clustering hierarchy (LEACH) is developed to efficiently transfer the data. For the intrusion detection systems (IDS), Fuzzy logic and artificial neural networks (ANNs) are proposed. Initially, the nodes were randomly placed in the network and initialized to gather information. To ensure fair energy dissipation between the nodes, LEACH randomly chooses cluster heads (CHs) and allocates this role to the various nodes based on a round-robin management mechanism. The intrusion-detection procedure was then utilized to determine whether intruders were present in the network. Within the WSN, a Fuzzy interference rule was utilized to distinguish the malicious nodes from legal nodes. Subsequently, an ANN was employed to distinguish the harmful nodes from suspicious nodes. 
The effectiveness of the proposed approach was validated using metrics that attained 97% accuracy, 97% specificity, and 97% sensitivity of 95%. Thus, it was proved that the LEACH and Fuzzy-based IDS approaches are the best choices for securing data transmission in an energy-efficient manner.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 1","pages":"Pages 1-16"},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622001139/pdf?md5=33169ccdb2fe0c8e8a08f569df224af6&pid=1-s2.0-S2096579622001139-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139986001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citation count: 0
Personalized assessment and training of neurosurgical skills in virtual reality: An interpretable machine learning approach 虚拟现实中神经外科技能的个性化评估和培训:可解释的机器学习方法
Q1 Computer Science Pub Date : 2024-02-01 DOI: 10.1016/j.vrih.2023.08.001
Fei Li , Zhibao Qin , Kai Qian , Shaojun Liang , Chengli Li , Yonghang Tai

Background

Virtual reality technology has been widely used in surgical simulators, providing new opportunities for assessing and training surgical skills. Machine learning algorithms are commonly used to analyze and evaluate the performance of participants. However, their limited interpretability restricts the personalization of training for individual participants.

Methods

Seventy-nine participants were recruited and divided into three groups based on their skill level in intracranial tumor resection. Data on the use of surgical tools were collected using a surgical simulator. Feature selection was performed using the Minimum Redundancy Maximum Relevance and SVM-RFE algorithms to obtain the final metrics for training the machine learning model. Five machine learning algorithms were trained to predict the skill level; the support vector machine performed best, with an accuracy of 92.41% and an Area Under Curve value of 0.98253. The machine learning model was interpreted using Shapley values to identify the important factors contributing to the skill level of each participant.

Results

This study demonstrates the effectiveness of machine learning in differentiating the evaluation and training of virtual reality neurosurgical performances. The use of Shapley values enables targeted training by identifying deficiencies in individual skills.

Conclusions

This study provides insights into the use of machine learning for personalized training in virtual reality neurosurgery. The interpretability of the machine learning models enables the development of individualized training programs. In addition, this study highlighted the potential of explanatory models in training external skills.
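The Shapley attribution that makes the model above interpretable can be illustrated with an exact, model-agnostic computation. The sketch below treats the value function as a black box (e.g., subset-restricted model accuracy); it is not the paper's implementation — practical pipelines use SHAP-style approximations because the exact sum is exponential in the number of features:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley value of each feature for a set function value_fn.

    value_fn maps a set of feature indices to a score (e.g., accuracy of
    a model trained on that subset). Cost is exponential in n_features,
    so this is only viable for a handful of features.
    """
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for size in range(len(others) + 1):
            for subset in combinations(others, size):
                # Weight of this coalition in the Shapley average.
                weight = (factorial(size) * factorial(n_features - size - 1)
                          / factorial(n_features))
                # Marginal contribution of feature i to this coalition.
                phi[i] += weight * (value_fn(set(subset) | {i})
                                    - value_fn(set(subset)))
    return phi
```

A useful sanity check is the efficiency property: the values sum to value_fn(all features) minus value_fn(empty set), so the attribution fully accounts for the model's score — which is what lets per-participant deficiencies be read off feature by feature.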

{"title":"Personalized assessment and training of neurosurgical skills in virtual reality: An interpretable machine learning approach","authors":"Fei Li ,&nbsp;Zhibao Qin ,&nbsp;Kai Qian ,&nbsp;Shaojun Liang ,&nbsp;Chengli Li ,&nbsp;Yonghang Tai","doi":"10.1016/j.vrih.2023.08.001","DOIUrl":"https://doi.org/10.1016/j.vrih.2023.08.001","url":null,"abstract":"<div><h3>Background</h3><p>Virtual reality technology has been widely used in surgical simulators, providing new opportunities for assessing and training surgical skills. Machine learning algorithms are commonly used to analyze and evaluate the performance of participants. However, their interpretability limits the personalization of the training for individual participants.</p></div><div><h3>Methods</h3><p>Seventy-nine participants were recruited and divided into three groups based on their skill level in intracranial tumor resection. Data on the use of surgical tools were collected using a surgical simulator. Feature selection was performed using the Minimum Redundancy Maximum Relevance and SVM-RFE algorithms to obtain the final metrics for training the machine learning model. Five machine learning algorithms were trained to predict the skill level, and the support vector machine performed the best, with an accuracy of 92.41% and Area Under Curve value of0.98253. The machine learning model was interpreted using Shapley values to identify the important factors contributing to the skill level of each participant.</p></div><div><h3>Results</h3><p>This study demonstrates the effectiveness of machine learning in differentiating the evaluation and training of virtual reality neurosurgical per- formances. The use of Shapley values enables targeted training by identifying deficiencies in individual skills.</p></div><div><h3>Conclusions</h3><p>This study provides insights into the use of machine learning for personalized training in virtual reality neurosurgery. 
The interpretability of the machine learning models enables the development of individualized training programs. In addition, this study highlighted the potential of explanatory models in training external skills.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 1","pages":"Pages 17-29"},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579623000451/pdf?md5=4a05396e17452331858ce0f3bf7464a8&pid=1-s2.0-S2096579623000451-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139986002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citation count: 0