
Latest publications in Virtual Reality Intelligent Hardware

A survey of real-time rendering on Web3D application
Q1 Computer Science Pub Date: 2023-10-01 DOI: 10.1016/j.vrih.2022.04.002
Geng Yu , Chang Liu , Ting Fang , Jinyuan Jia , Enming Lin , Yiqiang He , Siyuan Fu , Long Wang , Lei Wei , Qingyu Huang

Background

In recent years, with the rapid development of the mobile Internet and Web3D technologies, a large number of web-based online 3D visualization applications have emerged. Web3D online tourism, online architecture, online education, online medical care, and online shopping are examples of applications that leverage 3D rendering on the web. These applications have pushed the boundaries of traditional web applications, which use text, sound, image, video, and 2D animation as their main communication media, by adopting 3D virtual scenes as the main interaction object, enabling a user experience with a strong sense of immersion. This paper approaches the emerging Web3D applications that are reshaping people's lives through "real-time rendering technology", the core technology of Web3D. It discusses the major 3D graphics APIs of Web3D and the well-known Web3D engines in China and abroad, and classifies the real-time rendering frameworks of Web3D applications into different categories.

Results

Finally, this study analyzes the specific demands that different fields place on Web3D applications by examining representative Web3D applications in each field.

Conclusions

Our survey shows that Web3D applications based on real-time rendering have penetrated deep into many sectors of society, and even the home, a trend that is influencing every industry.

Virtual Reality Intelligent Hardware, Volume 5, Issue 5, Pages 379-394.
Citations: 0
Human-pose estimation based on weak supervision
Q1 Computer Science Pub Date: 2023-08-01 DOI: 10.1016/j.vrih.2022.08.010
Xiaoyan Hu, Xizhao Bao, Guoli Wei, Zhaoyu Li

Background

In computer vision, simultaneously estimating human pose, shape, and clothing is a practical issue in real life, but remains a challenging task owing to the variety of clothing, complexity of deformation, shortage of large-scale datasets, and difficulty in estimating clothing style.

Methods

We propose a multistage weakly supervised method that makes full use of data with limited label information to learn to estimate human body shape, pose, and clothing deformation. In the first stage, the SMPL human-body model parameters are regressed from multi-view 2D key points of the human body. Using multi-view information as weak supervision avoids the depth ambiguity of a single view, yields a more accurate human pose, and makes supervisory information easy to obtain. In the second stage, clothing is represented by a PCA-based model that uses 2D key points of the clothing as supervision to regress the model parameters. In the third stage, we predefine an embedding graph for each type of clothing to describe its deformation, and the mask information of the clothing is used to further refine that deformation. To facilitate training, we constructed a multi-view synthetic dataset that includes BCNet and SURREAL.
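The first stage's multi-view weak supervision can be illustrated with a toy reprojection loss: project candidate 3D joints into each calibrated view and penalize their distance to the detected 2D key points. This is only a sketch; the camera setup, joint count, and function names below are illustrative, not the paper's implementation.

```python
import numpy as np

def reprojection_loss(joints3d, cams, keypoints2d):
    """Multi-view weak supervision: project 3D joints into each view and
    penalize squared distance to the detected 2D key points."""
    loss = 0.0
    for P, kp in zip(cams, keypoints2d):
        homog = np.hstack([joints3d, np.ones((len(joints3d), 1))])  # N x 4
        proj = homog @ P.T                                          # N x 3
        uv = proj[:, :2] / proj[:, 2:3]         # perspective division
        loss += np.mean(np.sum((uv - kp) ** 2, axis=1))
    return loss / len(cams)

# A toy pinhole camera P = K @ [R | t] observing three joints.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
P = K @ np.hstack([R, t[:, None]])
joints = np.array([[0.0, 0.0, 0.0], [0.2, -0.1, 0.1], [-0.3, 0.4, 0.2]])
```

In practice the loss is summed over all available views, so a pose that is ambiguous in one view is still pinned down by the others.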

Results

Experiments show that the accuracy of our method reaches the level of SOTA methods that use strong supervision, while using only weakly supervised information. Because weakly supervised information is much easier to obtain, our approach has the advantage of being able to use existing data for training. Experiments on the DeepFashion2 dataset show that our method can make full use of the available weak supervision to fine-tune on a dataset with little annotation, whereas strongly supervised methods cannot be trained or adjusted there owing to the lack of exact annotation information.

Conclusions

Our weakly supervised method can accurately estimate human body shape, pose, and several common types of clothing, and it mitigates the current shortage of clothing data.

Virtual Reality Intelligent Hardware, Volume 5, Issue 4, Pages 366-377.
Citations: 0
The validity analysis of the non-local mean filter and a derived novel denoising method
Q1 Computer Science Pub Date: 2023-08-01 DOI: 10.1016/j.vrih.2022.08.017
Xiangyuan Liu, Zhongke Wu, Xingce Wang

Image denoising is an important topic in digital image processing. This paper theoretically studies the validity of the classical non-local mean (NLM) filter for removing Gaussian noise from a novel statistical perspective. Regarding the restored image as an estimator of the clean image, we analyse the unbiasedness and effectiveness of the restored values produced by the NLM filter. We then propose an improved NLM algorithm, the clustering-based NLM filter (CNLM), derived from the conditions obtained through the theoretical analysis. The proposed filter restores an ideal value using the approximately constant intensities obtained by an image clustering process. Here, we adopt a mixture probability model on a prefiltered image to generate an estimator of the ideal clustered components. Experimental results show that our algorithm achieves considerable improvement in peak signal-to-noise ratio (PSNR) and in visual quality when removing Gaussian noise. Moreover, the strong practical performance of our filter indicates that the method is theoretically sound, as it effectively estimates ideal images.
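The classical NLM filter that this analysis targets restores each pixel as a weighted average of pixels whose surrounding patches look similar. A minimal, unoptimized sketch (the window sizes and filtering parameter h below are illustrative defaults, not the paper's settings):

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.3):
    """Minimal non-local means: each pixel is replaced by a weighted
    average of search-window pixels, weighted by patch similarity."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    H, W = img.shape
    out = np.zeros_like(img)
    half = search // 2
    for i in range(H):
        for j in range(W):
            p = padded[i:i + patch, j:j + patch]          # reference patch
            i0, i1 = max(0, i - half), min(H, i + half + 1)
            j0, j1 = max(0, j - half), min(W, j + half + 1)
            weights, vals = [], []
            for u in range(i0, i1):
                for v in range(j0, j1):
                    q = padded[u:u + patch, v:v + patch]
                    d2 = np.mean((p - q) ** 2)            # patch distance
                    weights.append(np.exp(-d2 / h ** 2))  # similarity weight
                    vals.append(img[u, v])
            out[i, j] = np.average(vals, weights=weights)
    return out
```

The paper's statistical argument concerns exactly this weighted average: whether it is an unbiased, effective estimator of the clean intensity.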

Virtual Reality Intelligent Hardware, Volume 5, Issue 4, Pages 338-350.
Citations: 0
An intelligent experimental container suite: using a chemical experiment with virtual-real fusion as an example
Q1 Computer Science Pub Date: 2023-08-01 DOI: 10.1016/j.vrih.2022.07.008
Lurong Yang , Zhiquan Feng , Junhong Meng

Background

At present, the teaching of experiments in primary and secondary schools is constrained by cost and safety factors. Existing research on virtual-experiment platforms alleviates this problem; however, the lack of real experimental equipment and the use of a single channel to understand users' intentions weaken these platforms operationally and degrade the naturalness of interaction. To solve these problems, we propose an intelligent experimental container structure and a situational-awareness algorithm, both of which are verified and then applied to a chemical experiment involving virtual-real fusion. First, acquired images are denoised in the visual channel, using maximum diffuse-reflection chroma to remove overexposure. Second, container situational awareness is realized by segmenting the liquid level in the image and establishing a relation-fitting model. Then, strategies for constructing complete behaviors and for comparing priorities among behaviors are adopted for information complementarity and information independence, respectively. A multichannel intention-understanding model and an interactive paradigm fusing vision, hearing, and touch are proposed. The results show that the designed experimental container and algorithm, used in a virtual chemical experiment platform, achieve natural human-computer interaction, enhance the user's sense of operation, and achieve high user satisfaction.
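The "relation-fitting model" between the segmented liquid-level height and the liquid quantity can be illustrated with a simple least-squares fit. The calibration numbers below are hypothetical, and the paper's actual model may be more elaborate; this only shows the level-to-quantity fitting idea.

```python
import numpy as np

# Hypothetical calibration for one container: pixel height of the
# segmented liquid level versus the measured liquid volume (ml).
level_px = np.array([10.0, 25.0, 40.0, 55.0, 70.0])
volume_ml = np.array([20.0, 50.0, 80.0, 110.0, 140.0])

# Fit a low-order polynomial relating level to volume.
coeffs = np.polyfit(level_px, volume_ml, deg=1)

# Volume implied by a new segmented level reading.
estimate = np.polyval(coeffs, 32.5)
```

Once fitted, the model turns every new segmentation result into a volume estimate, which is what lets the system reason about the state of the experiment.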

Virtual Reality Intelligent Hardware, Volume 5, Issue 4, Pages 317-337.
Citations: 0
Modeling heterogeneous behaviors with different strategies in a terrorist attack
Q1 Computer Science Pub Date: 2023-08-01 DOI: 10.1016/j.vrih.2022.08.015
Le Bi , Tingting Liu , Zhen Liu , Jason Teo , Yumeng Zhao , Yanjie Chai

In terrorist-attack simulations, existing methods do not describe individual differences, so different individuals cannot exhibit different behaviors. To address this problem, we propose a framework for modeling people's heterogeneous behaviors in a terrorist attack. For pedestrians, we construct an emotional model that takes personality and visual perception into account. The emotional model is then combined with the pedestrians' relationship networks to form a decision-making model, under which pedestrians may exhibit altruistic behaviors. For the terrorist, a mapping model maps an antisocial personality to an attacking strategy. Experiments show that the proposed algorithm generates realistic heterogeneous behaviors that are consistent with existing psychological research findings.

Virtual Reality Intelligent Hardware, Volume 5, Issue 4, Pages 351-365.
Citations: 0
A review of intelligent diagnosis methods of imaging gland cancer based on machine learning
Q1 Computer Science Pub Date: 2023-08-01 DOI: 10.1016/j.vrih.2022.09.002
Han Jiang, Wen-Jia Sun, Han-Fei Guo, Jia-Yuan Zeng, Xin Xue, Shuai Li

Background

Gland cancer is a high-incidence disease that endangers human health, and its early detection and treatment require efficient, accurate, and objective intelligent diagnosis methods. In recent years, machine learning techniques have yielded satisfactory results in intelligent gland-cancer diagnosis based on clinical images, greatly improving the accuracy and efficiency of medical image interpretation while reducing the workload of doctors. The focus of this paper is to review, classify, and analyze intelligent diagnosis methods for imaging gland cancer based on machine learning and deep learning. The paper first briefly introduces the basic imaging principles of multi-modal medical images, such as the commonly used CT, MRI, US, PET, and pathology images. The intelligent diagnosis methods are then divided into supervised learning and weakly supervised learning. Supervised learning comprises traditional machine learning methods such as KNN, SVM, and the multilayer perceptron, as well as deep learning methods evolving from CNNs, while weakly supervised learning is further categorized into active learning, semi-supervised learning, and transfer learning. State-of-the-art methods are illustrated with implementation details, including image segmentation, feature extraction, and classifier optimization, and their performance is evaluated through indicators such as accuracy, precision, and sensitivity. Finally, the challenges and development trends of intelligent diagnosis methods for imaging gland cancer are discussed.
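The evaluation indicators mentioned above (accuracy, precision, sensitivity) reduce to simple ratios over confusion-matrix counts. A minimal sketch (the helper name is ours, not from the paper):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision and sensitivity (recall) from the
    confusion-matrix counts of a binary classifier."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # fraction correct overall
    precision = tp / (tp + fp)                   # of predicted positives, how many are real
    sensitivity = tp / (tp + fn)                 # of real positives, how many are found
    return accuracy, precision, sensitivity
```

In diagnosis settings sensitivity is usually the critical figure, since a false negative (a missed cancer) is far costlier than a false positive.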

Virtual Reality Intelligent Hardware, Volume 5, Issue 4, Pages 293-316.
Citations: 0
Training on LSA lifeboat operation using Mixed Reality
Q1 Computer Science Pub Date: 2023-06-01 DOI: 10.1016/j.vrih.2023.02.005
Spyridon Nektarios Bolierakis, Margarita Kostovasili, Lazaros Karagiannidis, Dr. Angelos Amditis

Background

This work provides an overview of the use of Mixed Reality (MR) technology for training purposes in the maritime industry. Current training procedures cover a broad range of procedural operations for Life-Saving Appliance (LSA) lifeboats; however, several gaps and limitations related to practical training have been identified that can be addressed through MR. Augmented, Virtual, and Mixed Reality applications are already used in various fields of the maritime industry, but their full potential has not yet been exploited. The SafePASS project aims to exploit the advantages of MR in maritime training by introducing an application focused on the use and maintenance of LSA lifeboats.

Methods

An MR training application is proposed that supports the training of crew members in equipment usage and operation, as well as in maintenance activities and procedures. The application consists of a training tool that trains crew members in handling lifeboats, a training-evaluation tool that allows trainers to assess trainee performance, and a maintenance tool that supports crew members in performing maintenance activities and procedures on lifeboats. For each tool, an indicative session and scenario workflow are implemented, along with the main supported interactions between the trainee and the equipment.

Results

The application has been tested and validated both in a lab environment and on a real LSA lifeboat, resulting in an improved experience for the users, who provided feedback and recommendations for further development. The application has also been demonstrated onboard a cruise ship, showcasing the supported functionalities to relevant stakeholders, who recognized its added value and suggested potential future exploitation areas.

Conclusions

The MR training application has been evaluated as very promising in providing a user-friendly training environment that supports crew members in LSA lifeboat operation and maintenance, although it is still subject to improvement and further expansion.

Virtual Reality Intelligent Hardware, Volume 5, Issue 3, Pages 201-212.
Citations: 0
A Spatiotemporal Intelligent Framework and Experimental Platform for Urban Digital Twins
Q1 Computer Science Pub Date : 2023-06-01 DOI: 10.1016/j.vrih.2022.08.018
Jinxing Hu , Zhihan Lv , Diping Yuan , Bing He , Wenjiang Chen , Xiongfei Ye , Donghao Li , Ge Yang

This work reviews the current research status of urban Digital Twins in order to establish an intelligent spatiotemporal framework. A Geospatial Artificial Intelligence (GeoAI) system is developed on the basis of Geographic Information Systems and Artificial Intelligence, integrating multi-video technology and a Virtual City within the urban Digital Twins. In addition, an improved small-object detection model, YOLOv5-Pyramid, is proposed, and two Siamese-network video tracking models, MPSiam and FSSiamese, are established. Finally, an experimental platform is built to verify the georeferencing correction scheme for video images. The experimental results show that the Multiply-Accumulate count of MPSiam is 0.5B, versus 4.5B for ResNet50-Siam; the model is thus compressed by 4.8 times, and inference speed increases by 3.3 times, reaching 83 frames per second, while only 3% of the Average Expectation Overlap is lost. The urban Digital Twins-oriented GeoAI framework established here therefore performs well on video georeferencing and target detection problems.
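At the core of Siamese trackers such as MPSiam is a correlation step: embed a target template and a search region, then take the peak of their cross-correlation map as the new target location. The following is a minimal NumPy sketch of that step only; real trackers correlate learned CNN feature maps, and `xcorr_locate` plus the synthetic data here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the cross-correlation localization step in Siamese trackers:
# slide a target template over a search region and take the peak response.
# Raw synthetic values stand in for learned CNN embeddings (an assumption).
import numpy as np

def xcorr_locate(template, search):
    """Return the top-left offset in `search` where `template` correlates best."""
    th, tw = template.shape
    sh, sw = search.shape
    scores = np.empty((sh - th + 1, sw - tw + 1))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            scores[i, j] = np.sum(template * search[i:i + th, j:j + tw])
    return np.unravel_index(np.argmax(scores), scores.shape)

rng = np.random.default_rng(1)
search = rng.standard_normal((32, 32))   # stand-in for a search-region feature map
template = search[10:18, 5:13].copy()    # target embedded at offset (10, 5)
print(xcorr_locate(template, search))    # peak recovers the true offset
```

In a deployed tracker this correlation runs on compact embeddings, which is where the efficiency gap reported above comes from: 0.5B multiply-accumulate operations for MPSiam versus 4.5B for ResNet50-Siam buys the 3.3× speedup to 83 FPS.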

{"title":"A Spatiotemporal Intelligent Framework and Experimental Platform for Urban Digital Twins","authors":"Jinxing Hu ,&nbsp;Zhihan Lv ,&nbsp;Diping Yuan ,&nbsp;Bing He ,&nbsp;Wenjiang Chen ,&nbsp;Xiongfei Ye ,&nbsp;Donghao Li ,&nbsp;Ge Yang","doi":"10.1016/j.vrih.2022.08.018","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.08.018","url":null,"abstract":"<div><p>This work emphasizes the current research status of the urban Digital Twins to establish an intelligent spatiotemporal framework. A Geospatial Artificial Intelligent (GeoAI) system is developed based on the Geographic Information System and Artificial Intelligence. It integrates multi-video technology and Virtual City in urban Digital Twins. Besides, an improved small object detection model is proposed: YOLOv5-Pyramid, and Siamese network video tracking models, namely MPSiam and FSSiamese, are established. Finally, an experimental platform is built to verify the georeferencing correction scheme of video images. The experimental results show that the Multiply-Accumulate value of MPSiam is 0.5B, and that of ResNet50-Siam is 4.5B. Besides, the model is compressed by 4.8 times. The inference speed has increased by 3.3 times, reaching 83 Frames Per Second. 3% of the Average Expectation Overlap is lost. Therefore, the urban Digital Twins-oriented GeoAI framework established here has excellent performance for video georeferencing and target detection problems.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 3","pages":"Pages 213-231"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49833142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Adaptive navigation assistance based on eye movement features in virtual reality
Q1 Computer Science Pub Date : 2023-06-01 DOI: 10.1016/j.vrih.2022.07.003
Song Zhao, Shiwei Cheng

Background

Navigation assistance is very important for users roaming in virtual reality scenes. However, the traditional navigation method requires users to manually request a map for viewing, which leads to low immersion and a poor user experience.

Methods

To address this issue, we first collected data on when users need navigation assistance in a virtual reality environment, covering eye-movement features such as gaze fixation, pupil size, and gaze angle. We then used the Boosting-based XGBoost algorithm to train a prediction model, and finally applied it to predict whether users need navigation assistance during a roaming task.

Results

Evaluation showed that the accuracy, precision, recall, and F1-score of our model all reached about 95%. In addition, by applying the model to a virtual reality scene, we implemented an adaptive navigation assistance system driven by the user's real-time eye movement data.

Conclusions

Compared with traditional navigation assistance methods, our adaptive navigation assistance enables users to roam a VR environment more immersively and effectively.
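The pipeline described above — collect labeled eye-movement features, train a boosted-tree classifier, report accuracy/precision/recall/F1 — can be sketched as follows. The feature names, labeling rule, and data are synthetic illustrations, and scikit-learn's gradient boosting stands in for XGBoost (whose `xgboost.XGBClassifier` exposes an analogous fit/predict API).

```python
# Sketch of the paper's pipeline: predict from eye-movement features whether a
# user needs navigation assistance. Features, labels, and data are synthetic;
# sklearn's gradient boosting is a stand-in for the paper's XGBoost model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Columns: fixation duration, pupil size, gaze angle (standardized, synthetic)
X = rng.normal(size=(n, 3))
# Toy ground truth: long fixations plus wide gaze angles signal "needs help"
y = ((X[:, 0] + 0.8 * X[:, 2]) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print(f"accuracy={accuracy_score(y_te, pred):.2f}",
      f"precision={precision_score(y_te, pred):.2f}",
      f"recall={recall_score(y_te, pred):.2f}",
      f"f1={f1_score(y_te, pred):.2f}")
```

In the paper's setting the labels come from observed moments when users actually requested assistance, and the trained model is queried online so the map appears only when needed.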

{"title":"Adaptive navigation assistance based on eye movement features in virtual reality","authors":"Song Zhao,&nbsp;Shiwei Cheng","doi":"10.1016/j.vrih.2022.07.003","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.07.003","url":null,"abstract":"<div><h3>Background</h3><p>Navigation assistance is very important for users when roaming in virtual reality scenes, however, the traditional navigation method requires users to manually request a map for viewing, which leads to low immersion and poor user experience.</p></div><div><h3>Methods</h3><p>To address this issue, first, we collected data when users need navigation assistance in a virtual reality environment, including various eye movement features such as gaze fixation, pupil size, and gaze angle, etc. After that, we used the Boostingbased XGBoost algorithm to train a prediction model, and finally used it to predict whether users need navigation assistance in a roaming task.</p></div><div><h3>Results</h3><p>After evaluating the performance of the model, the accuracy, precision, recall, and F1-score of our model reached about 95%. In addition, by applying the model to a virtual reality scene, an adaptive navigation assistance system based on the user’s real-time eye movement data was implemented.</p></div><div><h3>Conclusions</h3><p>Compared with traditional navigation assistance methods, our new adaptive navigation assistance could enable the user to be more immersive and effective during roaming in VR environment.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 3","pages":"Pages 232-248"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49833145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Research on AGV task path planning based on improved A* algorithm
Q1 Computer Science Pub Date : 2023-06-01 DOI: 10.1016/j.vrih.2022.11.002
Wang Xianwei , Ke Fuyang , Lu Jiajia

Background

In recent years, automatic guided vehicles (AGVs) have developed rapidly and been widely applied in intelligent transportation, cargo assembly, military testing, and other fields. One of the key issues in these applications is path planning. Global path-planning results based on known environmental information serve as the ideal path for AGVs and are combined with local path planning to achieve safe and fast arrival at the destination. The planning result of the global method, taken as the ideal path, should meet three requirements: as few turns as possible, a short planning time, and continuous path curvature.

Methods

We propose a global path-planning method based on an improved A* algorithm, and verify its robustness through simulation experiments in typical multi-obstacle and indoor scenarios. To shorten pathfinding time, we increase the heuristic weight of the target location and avoid invalid cost calculations for obstacle areas during the dynamic programming process. The optimality of the number of turns in the path is then ensured by a turning-node backtracking optimization method. Since the final global path must satisfy the AGV kinematic constraints and the curvature-continuity condition, we adopt a curve-smoothing scheme and select the optimal result that meets the constraints.

Conclusions

Simulation results show that the improved algorithm proposed in this paper outperforms the traditional method and helps AGVs execute tasks more efficiently by quickly planning smooth, low-complexity paths. Additionally, this scheme provides a new solution for the global path planning of unmanned vehicles.
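The turn-minimization goal above can be sketched by charging a small extra cost in A* whenever the direction of motion changes, so that among equally short paths the one with the fewest turns wins. This is a minimal illustration of that idea only, not the authors' heuristic-weighting or turning-node backtracking implementation; `astar_few_turns`, the grid, and `turn_cost` are assumptions.

```python
# Illustrative grid A* where each change of moving direction adds a turn cost,
# biasing the planner toward shortest paths with as few turns as possible.
import heapq
import itertools

def astar_few_turns(grid, start, goal, turn_cost=0.5):
    """4-connected A* on a 0/1 occupancy grid. Manhattan distance is the
    heuristic; it stays admissible because turn costs only increase true cost."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()            # tiebreaker so the heap never compares paths
    frontier = [(h(start), next(tie), 0.0, start, None, [start])]
    best = {}                          # (cell, incoming direction) -> best g seen
    while frontier:
        _, _, g, cell, d, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if best.get((cell, d), float("inf")) <= g:
            continue
        best[(cell, d)] = g
        for nd in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = cell[0] + nd[0], cell[1] + nd[1]
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1 + (turn_cost if d not in (None, nd) else 0.0)
                heapq.heappush(frontier, (ng + h((nr, nc)), next(tie), ng,
                                          (nr, nc), nd, path + [(nr, nc)]))
    return None  # goal unreachable

# 0 = free cell, 1 = obstacle
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
path = astar_few_turns(grid, (0, 0), (3, 3))
print(path)
```

Curvature continuity would then be restored in a separate pass over the returned waypoints, as the paper does with its curve-smoothing scheme.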

{"title":"Research on AGV task path planning based on improved A* algorithm","authors":"Wang Xianwei ,&nbsp;Ke Fuyang ,&nbsp;Lu Jiajia","doi":"10.1016/j.vrih.2022.11.002","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.11.002","url":null,"abstract":"<div><h3>Background</h3><p>In recent years, automatic guided vehicles (AGVs) have developed rapidly and been widely applied in intelligent transportation, cargo assembly, military testing, and other fields. One of the key issues in these applications is path planning. Global path planning results based on known environmental information are used as the ideal path for AGVs combined with local path planning to achieve safe and fast arrival at the destination. The global planning method planning results as the ideal path should meet the requirements of as few turns as possible, short planning time, and continuous path curvature.</p></div><div><h3>Methods</h3><p>We propose a global path-planning method based on an improved A * algorithm. And the robustness of the algorithm is verified by simulation experiments in typical multi obstacles and indoor scenarios. To improve the efficiency of pathfinding time, we increase the heuristic information weight of the target location and avoided the invalid cost calculation of the obstacle areas in the dynamic programming process. Then, the optimality of the number of turns in the path is ensured based on the turning node backtracking optimization method. Since the final global path needs to satisfy the AGV kinematic constraints and the curvature continuity condition, we adopt a curve smoothing scheme and select the optimal result that meets the constraints.</p></div><div><h3>Conclusions</h3><p>Simulation results show that the improved algorithm proposed in this paper outperforms the traditional method and can help AGVs improve the efficiency of task execution by efficiently planning a path with low complexity and smoothness. Additionally, this scheme provides a new solution for global path planning of unmanned vehicles.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 3","pages":"Pages 249-265"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49833144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0