
Virtual Reality Intelligent Hardware: Latest Publications

Chemical simulation teaching system based on virtual reality and gesture interaction
Q1 Computer Science | Pub Date: 2024-04-01 | DOI: 10.1016/j.vrih.2023.09.001
Dengzhen Lu, Hengyi Li, Boyu Qiu, Siyuan Liu, Shuhan Qi

Background

Most existing chemical experiment teaching systems lack solid immersive experiences, making it difficult to engage students. To address these challenges, we propose a chemical simulation teaching system based on virtual reality and gesture interaction.

Methods

The parameters of the models were obtained through field investigation; Blender and 3DS MAX were then used to build the models, and these parameters were imported into a physics engine. By establishing interfaces between the physics engine, the gesture-interaction hardware, and a virtual reality (VR) headset, a highly realistic chemical experiment environment was created. Chemical phenomena were simulated using code scripting logic, particle systems, and other engine systems. Furthermore, we created an online teaching platform using streaming media and databases to address the problems of distance teaching.
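As a concrete illustration of the script-logic approach described above, the following minimal Python sketch shows how a physics-engine collision callback might trigger a simulated chemical phenomenon. The reagent names, contact radius, and particle-effect parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a script-logic rule that triggers a
# simulated chemical phenomenon when two reagent objects come into contact.
from dataclasses import dataclass

@dataclass
class Reagent:
    name: str
    position: tuple  # (x, y, z) in engine world units

# Hypothetical reaction table: reagent pair -> particle effect to spawn
REACTIONS = {
    frozenset({"HCl", "NaOH"}): {"effect": "heat_shimmer", "color": (1.0, 1.0, 1.0)},
    frozenset({"Na", "H2O"}): {"effect": "flame_burst", "color": (1.0, 0.6, 0.1)},
}

def distance(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def on_contact(r1: Reagent, r2: Reagent, contact_radius: float = 0.05):
    """Called from the engine's collision callback; returns the effect to spawn."""
    if distance(r1.position, r2.position) > contact_radius:
        return None
    return REACTIONS.get(frozenset({r1.name, r2.name}))

if __name__ == "__main__":
    acid = Reagent("HCl", (0.0, 1.0, 0.0))
    base = Reagent("NaOH", (0.0, 1.02, 0.0))
    print(on_contact(acid, base))  # -> neutralization effect parameters
```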

Results

The proposed system was evaluated against two mainstream products in the market. In the experiments, the proposed system outperformed the other products in terms of fidelity and practicality.

Conclusions

The proposed system, which offers realistic simulations and practicality, can help improve high school chemistry experimental education.

Large-scale spatial data visualization method based on augmented reality
Q1 Computer Science | Pub Date: 2024-04-01 | DOI: 10.1016/j.vrih.2024.02.002
Xiaoning Qiao, Wenming Xie, Xiaodong Peng, Guangyun Li, Dalin Li, Yingyi Guo, Jingyi Ren

Background

A task assigned to space exploration satellites involves detecting the physical environment within a certain region of space. However, space detection data are complex and abstract, which hinders researchers' visual perception of the evolution and interaction of events in the space environment.

Methods

A time-series dynamic data-sampling method for large-scale space was proposed to sample the detection data in space and time, and correspondences between the data's location features and its other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The rendering stage of the visualization process was optimized by merging materials, reducing the number of patches, and performing other operations.
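Histogram-equalization tone mapping is a standard technique; a minimal NumPy sketch of the idea (not the paper's exact implementation) maps a skewed attribute field to [0, 1] via its empirical CDF so the full color range is used.

```python
# Sketch of histogram-equalization tone mapping for a scalar attribute field.
import numpy as np

def equalize(values: np.ndarray, n_bins: int = 256) -> np.ndarray:
    """Map raw attribute values to [0, 1] via their empirical CDF."""
    hist, edges = np.histogram(values, bins=n_bins)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]  # normalize CDF to [0, 1]
    bin_idx = np.clip(np.digitize(values, edges[:-1]) - 1, 0, n_bins - 1)
    return cdf[bin_idx]

if __name__ == "__main__":
    raw = np.random.lognormal(mean=0.0, sigma=1.5, size=10_000)  # skewed data
    mapped = equalize(raw)
    print(mapped.min(), mapped.max())  # spread across [0, 1]
```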

Results

Sampling, feature extraction, and uniform visualization were achieved for detection data with complex types, long time spans, and uneven spatial distributions. The real-time visualization of large-scale spatial structures on augmented reality devices, particularly low-performance devices, was also investigated.

Conclusions

The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure and changes in the spatial environment using augmented reality, and assist in intuitively discovering spatial environmental events and evolutionary rules.

Audio2AB: Audio-driven collaborative generation of virtual character animation
Q1 Computer Science | Pub Date: 2024-02-01 | DOI: 10.1016/j.vrih.2023.08.006
Lichao Niu, Wenjun Xie, Dong Wang, Zhongrui Cao, Xiaoping Liu

Background

Considerable research has been conducted in the areas of audio-driven virtual character gestures and facial animation with some degree of success. However, few methods exist for generating full-body animations, and the portability of virtual character gestures and facial animations has not received sufficient attention.

Methods

Therefore, we propose a deep-learning-based audio-to-animation-and-blendshape (Audio2AB) network that generates gesture animations and ARKit's 52 facial-expression blendshape parameter weights from audio, audio-corresponding text, emotion labels, and semantic relevance labels, producing parametric data for full-body animations. This parameterization method can be used to drive full-body animations of virtual characters and improve their portability. In the experiment, we first downsampled the gesture and facial data to achieve the same temporal resolution for the input, output, and facial data. The Audio2AB network then encoded the audio, audio-corresponding text, emotion labels, and semantic relevance labels, and fused the text, emotion labels, and semantic relevance labels into the audio to obtain better audio features. Finally, we established links between the body, gesture, and facial decoders and generated the corresponding animation sequences through our proposed GAN-GF loss function.
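For illustration, the following PyTorch skeleton sketches a fuse-then-decode structure of the kind described above. The layer types, dimensions, and gesture-head size are assumptions (only the 52 blendshape outputs come from the text); this is not the released Audio2AB network.

```python
# Illustrative skeleton: fuse audio features with text/emotion/semantic
# embeddings, then decode gesture parameters and 52 blendshape weights per frame.
import torch
import torch.nn as nn

class AudioFusionDecoder(nn.Module):
    def __init__(self, d_audio=128, d_cond=32, d_hidden=256,
                 n_gesture=45, n_blendshape=52):
        super().__init__()
        # conditions = text + emotion + semantic-relevance embeddings, concatenated
        self.fuse = nn.Linear(d_audio + 3 * d_cond, d_hidden)
        self.temporal = nn.GRU(d_hidden, d_hidden, batch_first=True)
        self.gesture_head = nn.Linear(d_hidden, n_gesture)
        self.blendshape_head = nn.Sequential(
            nn.Linear(d_hidden, n_blendshape), nn.Sigmoid())  # weights in [0, 1]

    def forward(self, audio, text, emotion, semantic):
        # all inputs: (batch, frames, dim)
        x = torch.relu(self.fuse(torch.cat([audio, text, emotion, semantic], -1)))
        x, _ = self.temporal(x)
        return self.gesture_head(x), self.blendshape_head(x)

if __name__ == "__main__":
    B, T = 2, 100
    model = AudioFusionDecoder()
    g, b = model(torch.randn(B, T, 128), torch.randn(B, T, 32),
                 torch.randn(B, T, 32), torch.randn(B, T, 32))
    print(g.shape, b.shape)  # (2, 100, 45) (2, 100, 52)
```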

Results

By using audio, audio-corresponding text, and emotional and semantic relevance labels as input, the trained Audio2AB network could generate gesture animation data containing blendshape weights. Therefore, different 3D virtual character animations could be created through parameterization.

Conclusions

The experimental results showed that the proposed method could generate significant gestures and facial animations.

Selective sampling with Gromov–Hausdorff metric: Efficient dense-shape correspondence via confidence-based sample consensus
Q1 Computer Science | Pub Date: 2024-02-01 | DOI: 10.1016/j.vrih.2023.08.007
Dvir Ginzburg, Dan Raviv

Background

Functional mapping, despite its proven efficiency, suffers from a "chicken or egg" scenario, in that poor spatial features lead to inadequate spectral alignment and vice versa during training, often resulting in slow convergence, high computational costs, and learning failures, particularly when small datasets are used.

Methods

A novel method is presented for dense-shape correspondence, whereby the spatial information transformed by neural networks is combined with projections onto spectral maps to overcome the "chicken or egg" challenge by selectively sampling only points with high confidence in their alignment. These points then contribute to the alignment and spectral loss terms, boosting training and accelerating convergence by a factor of five. To ensure fully unsupervised learning, the Gromov–Hausdorff distance metric was used to select the points with the maximal alignment scores, i.e., those displaying the most confidence.
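As a rough illustration of confidence-based point selection, the sketch below scores candidate correspondences by pairwise-distance distortion and keeps only the most isometry-consistent pairs. It uses Euclidean rather than geodesic distances and is a simplification of the paper's Gromov–Hausdorff-based criterion, not the authors' code.

```python
# Confidence scoring: a correspondence pair is trusted if it preserves
# pairwise distances between the two point sets.
import numpy as np

def alignment_confidence(X, Y, corr):
    """X: (n,3), Y: (m,3), corr: (n,) index into Y for each point of X."""
    dX = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)    # (n, n)
    Yc = Y[corr]
    dY = np.linalg.norm(Yc[:, None, :] - Yc[None, :, :], axis=-1)  # (n, n)
    distortion = np.abs(dX - dY).mean(axis=1)  # mean metric distortion per point
    return -distortion                          # higher = more confident

def select_confident(X, Y, corr, keep_ratio=0.3):
    conf = alignment_confidence(X, Y, corr)
    k = max(1, int(keep_ratio * len(conf)))
    return np.argsort(conf)[-k:]                # indices of most reliable pairs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))
    Y = X + rng.normal(scale=0.01, size=(50, 3))  # near-isometric copy
    corr = np.arange(50)
    corr[:5] = rng.integers(0, 50, 5)             # corrupt a few matches
    print(select_confident(X, Y, corr))
```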

Results

The effectiveness of the proposed approach was demonstrated on several benchmark datasets, where the reported results were superior to those of both spectral- and spatial-based methods.

Conclusions

The proposed method provides a promising new approach to dense-shape correspondence, addressing the key challenges in the field and offering significant advantages over the current methods, including faster convergence, improved accuracy, and reduced computational costs.

Importance-aware 3D volume visualization for medical content-based image retrieval: a preliminary study
Q1 Computer Science | Pub Date: 2024-02-01 | DOI: 10.1016/j.vrih.2023.08.005
Mingjian Li, Younhyun Jung, Michael Fulham, Jinman Kim

Background

A medical content-based image retrieval (CBIR) system is designed to retrieve images from large imaging repositories that are visually similar to a user's query image. CBIR is widely used in evidence-based diagnosis, teaching, and research. Although retrieval accuracy has largely improved, there has been limited development toward visualizing the important image features that indicate the similarity of retrieved images. Despite the prevalence of 3D volumetric data in medical imaging such as computed tomography (CT), current CBIR systems still rely on 2D cross-sectional views for the visualization of retrieved images. Such 2D visualization requires users to browse through the image stacks to confirm the similarity of the retrieved images and often involves mental reconstruction of 3D information, including the size, shape, and spatial relations of multiple structures. This process is time-consuming and reliant on the user's experience.

Methods

In this study, we proposed an importance-aware 3D volume visualization method. The rendering parameters were automatically optimized to maximize the visibility of important structures that were detected and prioritized in the retrieval process. We then integrated the proposed visualization into a CBIR system, thereby complementing the 2D cross-sectional views for relevance feedback and further analyses.
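A minimal sketch of the underlying idea, assuming a simple linear opacity ramp boosted by a per-voxel importance map; the paper's actual rendering-parameter optimization is more involved than this.

```python
# Importance-aware opacity: boost the opacity transfer function for voxels
# flagged as important so retrieval-relevant structures stay visible.
import numpy as np

def importance_aware_opacity(intensity, importance, base_scale=0.2, boost=4.0):
    """
    intensity:  (D, H, W) normalized voxel intensities in [0, 1]
    importance: (D, H, W) per-voxel importance in [0, 1] from the retrieval stage
    Returns per-voxel opacity in [0, 1].
    """
    base = base_scale * intensity              # plain linear opacity ramp
    alpha = base * (1.0 + boost * importance)  # emphasize important voxels
    return np.clip(alpha, 0.0, 1.0)

if __name__ == "__main__":
    vol = np.random.rand(32, 32, 32)
    imp = np.zeros_like(vol)
    imp[12:20, 12:20, 12:20] = 1.0             # a hypothetical tumor ROI
    print(importance_aware_opacity(vol, imp).mean())
```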

Results

Our preliminary results demonstrate that 3D visualization can provide additional information using multimodal positron emission tomography and computed tomography (PET-CT) images of a non-small cell lung cancer dataset.

Effective data transmission through energy-efficient clustering and Fuzzy-Based IDS routing approach in WSNs
Q1 Computer Science | Pub Date: 2024-02-01 | DOI: 10.1016/j.vrih.2022.10.002
Saziya Tabbassum (Research Scholar), Rajesh Kumar Pathak (Vice Chancellor)

Wireless sensor networks (WSN) gather information and sense information samples in a certain region and communicate these readings to a base station (BS). Energy efficiency is considered a major design issue in WSNs and can be addressed using clustering and routing techniques. Information is sent from the source to the BS via routing procedures. However, these routing protocols must ensure that packets are delivered securely, guaranteeing that neither adversaries nor unauthenticated individuals have access to the sent information. Secure data transfer is intended to protect the data from illegal access, damage, or disruption. Thus, in the proposed model, secure data transmission is developed in an energy-effective manner. A low-energy adaptive clustering hierarchy (LEACH) is developed to transfer the data efficiently. For the intrusion detection system (IDS), fuzzy logic and artificial neural networks (ANNs) are proposed. Initially, the nodes were randomly placed in the network and initialized to gather information. To ensure fair energy dissipation between the nodes, LEACH randomly chooses cluster heads (CHs) and allocates this role to the various nodes based on a round-robin management mechanism. The intrusion-detection procedure was then utilized to determine whether intruders were present in the network. Within the WSN, a fuzzy inference rule was utilized to distinguish malicious nodes from legal nodes. Subsequently, an ANN was employed to distinguish harmful nodes from suspicious nodes. The effectiveness of the proposed approach was validated using metrics that attained 97% accuracy, 97% specificity, and 95% sensitivity. Thus, it was shown that the LEACH and fuzzy-based IDS approaches are the best choices for securing data transmission in an energy-efficient manner.
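For reference, standard LEACH elects cluster heads with the threshold T(n) = p / (1 - p (r mod 1/p)) for nodes that have not served recently; the sketch below implements that rule with illustrative parameters and is not the authors' code.

```python
# Standard LEACH cluster-head election with round-robin exclusion of recent CHs.
import random

def elect_cluster_heads(node_ids, served_recently, round_no, p=0.05):
    """Return the set of node IDs elected cluster head for this round."""
    epoch = round(1.0 / p)                       # rounds per rotation epoch
    threshold = p / (1.0 - p * (round_no % epoch))
    heads = set()
    for nid in node_ids:
        if nid in served_recently:               # round-robin: skip recent CHs
            continue
        if random.random() < threshold:
            heads.add(nid)
    return heads

if __name__ == "__main__":
    random.seed(1)
    nodes = list(range(100))
    print(elect_cluster_heads(nodes, served_recently=set(), round_no=0))
```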

Personalized assessment and training of neurosurgical skills in virtual reality: An interpretable machine learning approach
Q1 Computer Science | Pub Date: 2024-02-01 | DOI: 10.1016/j.vrih.2023.08.001
Fei Li, Zhibao Qin, Kai Qian, Shaojun Liang, Chengli Li, Yonghang Tai

Background

Virtual reality technology has been widely used in surgical simulators, providing new opportunities for assessing and training surgical skills. Machine learning algorithms are commonly used to analyze and evaluate the performance of participants. However, their limited interpretability restricts the personalization of training for individual participants.

Methods

Seventy-nine participants were recruited and divided into three groups based on their skill level in intracranial tumor resection. Data on the use of surgical tools were collected using a surgical simulator. Feature selection was performed using the Minimum Redundancy Maximum Relevance and SVM-RFE algorithms to obtain the final metrics for training the machine learning model. Five machine learning algorithms were trained to predict the skill level; the support vector machine performed best, with an accuracy of 92.41% and an Area Under Curve value of 0.98253. The machine learning model was interpreted using Shapley values to identify the important factors contributing to the skill level of each participant.
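A hedged sketch of such a pipeline in scikit-learn (SVM-RFE feature selection followed by an SVM classifier), with synthetic data standing in for the simulator metrics; feature counts and kernels are assumptions, and the Shapley-value interpretation step is omitted.

```python
# Sketch: SVM-RFE feature selection + SVM skill-level classification.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(79, 20))     # 79 participants, 20 tool-use metrics (synthetic)
y = rng.integers(0, 3, size=79)   # three skill levels (synthetic labels)

model = make_pipeline(
    StandardScaler(),
    RFE(SVC(kernel="linear"), n_features_to_select=8),  # SVM-RFE step
    SVC(kernel="rbf"),
)
print(cross_val_score(model, X, y, cv=5).mean())
```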

Results

This study demonstrates the effectiveness of machine learning in differentiating skill levels for the evaluation and training of virtual reality neurosurgical performance. The use of Shapley values enables targeted training by identifying deficiencies in individual skills.

Conclusions

This study provides insights into the use of machine learning for personalized training in virtual reality neurosurgery. The interpretability of the machine learning models enables the development of individualized training programs. In addition, this study highlighted the potential of explanatory models in training external skills.

Intelligent 3D garment system of the human body based on deep spiking neural network
Q1 Computer Science | Pub Date: 2024-02-01 | DOI: 10.1016/j.vrih.2023.07.002
Minghua Jiang, Zhangyuan Tian, Chenyu Yu, Yankang Shi, Li Liu, Tao Peng, Xinrong Hu, Feng Yu

Background

Intelligent garments, a burgeoning class of wearable devices, have extensive applications in domains such as sports training and medical rehabilitation. Nonetheless, existing research in the smart-wearables domain predominantly emphasizes sensor functionality and quantity, often overlooking crucial aspects of user experience and interaction.

Methods

To address this gap, this study introduces a novel real-time 3D interactive system based on intelligent garments. The system uses lightweight sensor modules to collect human motion data and introduces a dual-stream fusion network based on pulsed neural units to classify and recognize human movements, thereby achieving real-time interaction between users and sensors. Additionally, the system incorporates 3D human visualization functionality, which visualizes sensor data and recognized human actions as 3D models in real time, providing accurate and comprehensive visual feedback to help users better understand and analyze the details and features of human motion. This system has significant potential for applications in motion detection, medical monitoring, virtual reality, and other fields. The accurate classification of human actions contributes to the development of personalized training plans and injury-prevention strategies.
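To illustrate the "pulsed neural unit" building block, the sketch below implements a standard leaky integrate-and-fire neuron layer in NumPy; it is a generic spiking-neuron formulation, not the paper's dual-stream network.

```python
# Leaky integrate-and-fire layer: membrane potential integrates weighted input
# each time step and emits a binary spike when it crosses a threshold.
import numpy as np

def lif_layer(inputs, decay=0.9, threshold=1.0):
    """inputs: (T, n) input currents; returns (T, n) binary spike trains."""
    T, n = inputs.shape
    v = np.zeros(n)                  # membrane potentials
    spikes = np.zeros((T, n))
    for t in range(T):
        v = decay * v + inputs[t]    # leaky integration
        fired = v >= threshold
        spikes[t] = fired
        v = np.where(fired, 0.0, v)  # reset after a spike
    return spikes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sensor_stream = rng.random((100, 8)) * 0.3   # e.g., 8 garment sensor channels
    print(lif_layer(sensor_stream).sum(axis=0))  # spike counts per channel
```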

Conclusions

This study has substantial implications in the domains of intelligent garments, human motion monitoring, and digital twin visualization. The advancement of this system is expected to propel the progress of wearable technology and foster a deeper comprehension of human motion.

HYDRO: Optimizing Interactive Hybrid Images for Digital Signage Content
Q1 Computer Science | Pub Date: 2023-12-01 | DOI: 10.1016/j.vrih.2022.08.009
Masanori Nakayama, Karin Uchino, Ken Nagao, Issei Fujishiro

In modern society, digital signage installed in many large-scale facilities supports our daily life. However, with a limited screen size, it is difficult to simultaneously provide different types of information to many viewers at varying distances from the screen. Therefore, in this study, we extend existing research on the use of hybrid images for tiled displays. To facilitate smoother information selection, a new interactive display method is proposed that incorporates a touch-activated widget as a high-frequency part of the hybrid image; these widgets are novel in that they are more visible to viewers near the display. We develop an authoring tool that we call the Hybrid Image Display Resolutions Optimizer (HYDRO); it features two kinds of control functions by which to optimize the visibility of the touch-activated widgets in terms of placement and resolution. The effectiveness of the present method is shown empirically via a quantitative user study and an eye-tracking-based qualitative evaluation.
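For context, the classic hybrid-image construction that HYDRO builds on combines the low spatial frequencies of the far-viewer content with the high frequencies of the near-viewer content (the touch widget). A minimal sketch follows; the sigma values are illustrative assumptions.

```python
# Hybrid image: low-pass of the far image + high-pass of the near image.
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_image(far_img, near_img, sigma_low=8.0, sigma_high=3.0):
    """far_img, near_img: float arrays in [0, 1] of identical shape."""
    low = gaussian_filter(far_img, sigma_low)                 # visible from afar
    high = near_img - gaussian_filter(near_img, sigma_high)   # visible up close
    return np.clip(low + high, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    far = rng.random((256, 256))    # stand-in for far-viewer signage content
    near = rng.random((256, 256))   # stand-in for the near-viewer widget layer
    print(hybrid_image(far, near).shape)
```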

Learning Adequate Alignment and Interaction for Cross-Modal Retrieval
Q1 Computer Science | Pub Date: 2023-12-01 | DOI: 10.1016/j.vrih.2023.06.003
MingKang Wang, Min Meng, Jigang Liu, Jigang Wu

Cross-modal retrieval has attracted widespread attention in many cross-media similarity search applications, especially image-text retrieval in the fields of computer vision and natural language processing. Recently, visual and semantic embedding (VSE) learning has shown promising improvements in image-text retrieval tasks. Most existing VSE models employ two unrelated encoders to extract features and then use complex methods to contextualize and aggregate those features into holistic embeddings. Despite recent advances, existing approaches still suffer from two limitations: 1) without considering intermediate interaction and adequate alignment between different modalities, these models cannot guarantee the discriminative ability of the representations; and 2) existing feature aggregators are susceptible to certain noisy regions, which may lead to unreasonable pooling coefficients and affect the quality of the final aggregated features. To address these challenges, we propose a novel cross-modal retrieval model containing a well-designed alignment module and a novel multimodal fusion encoder, which aims to learn adequate alignment and interaction on aggregated features to effectively bridge the modality gap. Experiments on the Microsoft COCO and Flickr30k datasets demonstrate the superiority of our model over state-of-the-art methods.
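As background for the alignment objective in such VSE models, the sketch below shows the standard VSE++-style hinge loss with hardest in-batch negatives, a common baseline; it is not necessarily this paper's exact loss.

```python
# Hinge-based triplet alignment loss with hardest in-batch negatives.
import torch

def triplet_alignment_loss(img, txt, margin=0.2):
    """img, txt: (B, d) L2-normalized embeddings of matched image-text pairs."""
    sim = img @ txt.t()                   # (B, B) cosine similarities
    pos = sim.diag().view(-1, 1)          # positive-pair similarities
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    cost_txt = (margin + sim - pos).clamp(min=0).masked_fill(mask, 0)
    cost_img = (margin + sim - pos.t()).clamp(min=0).masked_fill(mask, 0)
    # hardest negative in the batch for each retrieval direction
    return cost_txt.max(1)[0].mean() + cost_img.max(0)[0].mean()

if __name__ == "__main__":
    B, d = 8, 64
    img = torch.nn.functional.normalize(torch.randn(B, d), dim=-1)
    txt = torch.nn.functional.normalize(torch.randn(B, d), dim=-1)
    print(triplet_alignment_loss(img, txt))
```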
