
Virtual Reality Intelligent Hardware: Latest Publications

Implementation of natural hand gestures in holograms for 3D object manipulation
Q1 Computer Science | Pub Date: 2023-10-01 | DOI: 10.1016/j.vrih.2023.02.001
Ajune Wanis Ismail , Muhammad Akma Iman

Holograms provide a distinctive way to display and convey information, and have been improved to support better user interaction. Holographic interactions are important because they improve how users interact with virtual objects. Gesture interaction is a recent research topic, as it allows users to use their bare hands to interact directly with the hologram. However, it remains unclear whether real hand gestures are well suited to hologram applications. Therefore, we discuss the development process and implementation of three-dimensional object manipulation using natural hand gestures in a hologram. We describe the design and development process for hologram applications and their integration with real hand gesture interactions as initial findings. Experimental results from the NASA TLX questionnaire are discussed. Based on the findings, we actualize the user interactions in the hologram.
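The NASA TLX workload scores mentioned above are typically computed as a weighted average of six subscale ratings, with weights derived from 15 pairwise comparisons. A minimal sketch of that standard scoring procedure (the ratings and weights below are made-up illustrative values, not the study's data):

```python
def tlx_weighted_score(ratings, weights):
    """ratings: subscale -> 0..100 rating; weights: subscale -> number of
    wins in the 15 pairwise comparisons (so the weights sum to 15)."""
    assert set(ratings) == set(weights) and sum(weights.values()) == 15
    return sum(ratings[k] * weights[k] for k in ratings) / 15.0

# Made-up illustrative ratings/weights (not the study's data)
ratings = {"mental": 70, "physical": 30, "temporal": 50,
           "performance": 40, "effort": 60, "frustration": 20}
weights = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 3, "effort": 3, "frustration": 1}
score = tlx_weighted_score(ratings, weights)  # 52.0
```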

Cited by: 0
Survey of lightweighting methods of huge 3D models for online Web3D visualization
Q1 Computer Science | Pub Date: 2023-10-01 | DOI: 10.1016/j.vrih.2020.02.002
Xiaojun Liu , Jinyuan Jia , Chang Liu

Background

With the rapid development of Web3D technologies, online Web3D visualization, particularly of complex models or scenes, has been in great demand. Owing to the major conflict between Web3D system load and resource consumption when processing these huge models, this paper reviews lightweighting methods for huge 3D models for online Web3D visualization.

Methods

By examining the geometric redundancy introduced by man-made operations during the modeling procedure, we elaborate on several categories of lightweighting-related work that aim to reduce the amount of data and resource consumption for Web3D visualization.

Results

By comparing the methods from several perspectives, the characteristics of each are summarized. Among the reviewed methods, geometric redundancy removal, which achieves the lightweighting goal by detecting and removing repeated components, is an appropriate method for current online Web3D visualization. Meanwhile, learning-based algorithms, which are still maturing, are our expected future research topic.
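The geometric redundancy removal highlighted here can be illustrated with a toy sketch: hash each component's geometry after translating it to its centroid, and treat components with equal signatures as repeated instances. This is a deliberately simplified assumption (real systems must also handle rotation, scale, and numerical tolerance), not a reproduction of the reviewed methods:

```python
import numpy as np

def component_signature(vertices, decimals=4):
    """Translation-invariant signature of one mesh component."""
    v = np.asarray(vertices, dtype=float)
    v = v - v.mean(axis=0)            # move centroid to the origin
    v = v[np.lexsort(v.T[::-1])]      # canonical vertex order
    return np.round(v, decimals).tobytes()

def deduplicate(components):
    """Map each component to an instance id; repeated geometry shares an id,
    so only one copy per id needs to be stored and transmitted."""
    seen, index = {}, []
    for comp in components:
        sig = component_signature(comp)
        index.append(seen.setdefault(sig, len(seen)))
    return index
```

Two translated copies of the same triangle collapse to one instance id, while a differently shaped triangle gets its own.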

Conclusions

Various aspects should be considered in an efficient lightweighting method for online Web3D visualization, such as the characteristics of the original data, the combination or extension of existing methods, the scheduling strategy, cache management, and the rendering mechanism. Meanwhile, innovative methods, particularly learning-based algorithms, are worth exploring.

Cited by: 0
Deep-reinforcement-learning-based robot motion strategies for grabbing objects from human hands
Q1 Computer Science | Pub Date: 2023-10-01 | DOI: 10.1016/j.vrih.2022.12.001
Zeyuan Cai , Zhiquan Feng , Liran Zhou , Xiaohui Yang , Tao Xu

Background

Robot grasping encompasses a wide range of research areas; however, most studies have focused on grasping only stationary objects in a scene, and only a few have addressed how to grasp objects from a user's hand. In this paper, a robot grasping algorithm based on deep reinforcement learning (RGRL) is proposed.

Methods

The RGRL takes the relative positions of the robot and the object in a user's hand as input and outputs the best action for the robot in the current state. Thus, the proposed algorithm realizes autonomous path planning and safely grasping objects from the hands of users, and a new method for improving the safety of human-robot cooperation is explored. To address the low sample-utilization rate and slow convergence of reinforcement learning algorithms, the RGRL is first trained in a simulated scene, and then the model parameters are applied to a real scene. To reduce the gap between the simulated and real scenes, domain randomization is applied to randomly change the positions and angles of objects in the simulated scene at regular intervals, thereby improving the diversity of the training samples and the robustness of the algorithm.
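The domain-randomization step described here (re-sampling object poses in the simulated scene at regular intervals) can be sketched as follows; the scene representation, value ranges, and training-loop skeleton are illustrative assumptions, not the authors' simulator API:

```python
import random

def randomize_scene(objects, pos_range=(-0.3, 0.3), angle_range=(0.0, 360.0)):
    """Re-sample each object's position and yaw angle uniformly."""
    for obj in objects:
        obj["position"] = [random.uniform(*pos_range) for _ in range(3)]
        obj["angle"] = random.uniform(*angle_range)

def train(episodes, randomize_every=50):
    """Training-loop skeleton: randomize the scene every N episodes so the
    policy sees diverse object poses and transfers better to the real scene."""
    objects = [{"name": "target", "position": [0.0, 0.0, 0.0], "angle": 0.0}]
    for ep in range(episodes):
        if ep % randomize_every == 0:   # regular-interval randomization
            randomize_scene(objects)
        # ... roll out one RL episode in the randomized simulated scene ...
    return objects
```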

Results

The RGRL's effectiveness and accuracy are verified by evaluating it on both simulated and real scenes, and the results show that the RGRL can achieve an accuracy of more than 80% in both cases.

Conclusions

RGRL is a robot grasping algorithm that employs domain randomization and deep reinforcement learning for effective grasping in simulated and real scenes. However, it lacks flexibility in adapting to different grasping poses, prompting future research on achieving safe grasping for diverse user postures.

Cited by: 0
Eye-shaped keyboard for dual-hand text entry in virtual reality
Q1 Computer Science | Pub Date: 2023-10-01 | DOI: 10.1016/j.vrih.2023.07.001
Kangyu Wang , Yangqiu Yan , Hao Zhang , Xiaolong Liu , Lili Wang

We propose an eye-shaped keyboard for high-speed text entry in virtual reality (VR). It has the shape of dual eyes with characters arranged along the curved eyelids, which ensures a low density and short spacing of the keys. The eye-shaped keyboard references the QWERTY key sequence, allowing users to benefit from their experience with the QWERTY keyboard. The user interacts with the eye-shaped keyboard using rays controlled with both hands. A character can be entered in one step by moving the rays from the inner eye regions to the regions of the characters. A high-speed auto-complete system was designed for the eye-shaped keyboard. We conducted a pilot study to determine the optimal parameters, and a user study to compare our eye-shaped keyboard with the QWERTY and circular keyboards. For beginners, the eye-shaped keyboard performed significantly more efficiently and accurately, with less task load and hand movement, than the circular keyboard. Compared with the QWERTY keyboard, the eye-shaped keyboard is more accurate and significantly reduces hand translation while maintaining similar efficiency. Finally, to evaluate the potential of eye-shaped keyboards, we conducted another user study in which participants were asked to type continuously for three days using the proposed eye-shaped keyboard, with two sessions per day. In each session, participants typed for 20 min, after which their typing performance was tested. The eye-shaped keyboard proved efficient and promising, with an average speed of 19.89 words per minute (WPM) and a mean uncorrected error rate of 1.939%. The maximum speed reached 24.97 WPM after six sessions and continued to increase.
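The reported metrics follow standard text-entry conventions: words per minute counts five characters as one word, and the uncorrected error rate compares the transcribed string against the presented one. A hedged sketch of these standard formulas (not the authors' exact evaluation code):

```python
def wpm(transcribed, seconds):
    """Words per minute: (|T| - 1) / 5 words, scaled to one minute."""
    return (len(transcribed) - 1) / 5.0 * (60.0 / seconds)

def levenshtein(a, b):
    """Minimum edit distance between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def uncorrected_error_rate(presented, transcribed):
    """Per-character error rate of the final transcribed string."""
    return levenshtein(presented, transcribed) / max(len(presented), len(transcribed))
```

For example, typing a 19-character phrase in 10 seconds gives (19 - 1) / 5 * 6 = 21.6 WPM.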

Cited by: 0
Novel learning framework for optimal multi-object video trajectory tracking
Q1 Computer Science | Pub Date: 2023-10-01 | DOI: 10.1016/j.vrih.2023.04.001
Siyuan Chen, Xiaowu Hu, Wenying Jiang, Wen Zhou, Xintao Ding

Background

With the rapid development of Web3D, virtual reality, and digital twins, virtual trajectories and decision data rely considerably on the analysis and understanding of real video data, particularly in emergency evacuation scenarios. Correctly and effectively evacuating crowds in virtual emergency scenarios is becoming increasingly urgent. One good solution is to extract pedestrian trajectories from videos of emergency situations using a multi-target tracking algorithm and use them to define evacuation procedures.

Methods

To implement this solution, a trajectory extraction and optimization framework based on multi-target tracking is developed in this study. First, a multi-target tracking algorithm is used to extract and preprocess the trajectory data of the crowd in a video. Then, the trajectory is optimized by combining a trajectory-point extraction algorithm with a Savitzky–Golay smoothing filter. Finally, related experiments are conducted, and the results show that the proposed approach can effectively and accurately extract the trajectories of multiple target objects in real time.
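The Savitzky–Golay step can be sketched as a local least-squares polynomial fit over a sliding window, applied to each coordinate of a trajectory independently; in practice `scipy.signal.savgol_filter` does this directly. The window length and polynomial order below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def savgol_smooth(y, window=7, polyorder=2):
    """Savitzky-Golay-style smoothing of a 1-D signal: fit a polynomial
    over a sliding window and evaluate it at the current sample.
    (In production, scipy.signal.savgol_filter does this directly.)"""
    y = np.asarray(y, dtype=float)
    half = window // 2
    out = np.empty_like(y)
    for i in range(len(y)):
        lo, hi = max(0, i - half), min(len(y), i + half + 1)
        t = np.arange(lo, hi)
        coeffs = np.polyfit(t, y[lo:hi], polyorder)
        out[i] = np.polyval(coeffs, i)
    return out

def smooth_trajectory(points, window=7, polyorder=2):
    """Smooth each coordinate of an (N, d) trajectory independently."""
    pts = np.asarray(points, dtype=float)
    return np.stack([savgol_smooth(pts[:, k], window, polyorder)
                     for k in range(pts.shape[1])], axis=1)
```

A useful property for trajectory data: the filter preserves any signal that is locally polynomial up to the chosen order, so straight-line motion passes through unchanged while high-frequency tracking jitter is attenuated.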

Results

In addition, the proposed approach retains the real characteristics of the trajectories as much as possible while improving the trajectory smoothing index, which can provide data support for the analysis of pedestrian trajectory data and formulation of personnel evacuation schemes in emergency scenarios.

Conclusions

Further comparisons with methods used in related studies confirm the feasibility and superiority of the proposed framework.

Cited by: 0
A survey of real-time rendering on Web3D application
Q1 Computer Science | Pub Date: 2023-10-01 | DOI: 10.1016/j.vrih.2022.04.002
Geng Yu , Chang Liu , Ting Fang , Jinyuan Jia , Enming Lin , Yiqiang He , Siyuan Fu , Long Wang , Lei Wei , Qingyu Huang

Background

In recent years, with the rapid development of mobile Internet and Web3D technologies, a large number of web-based online 3D visualization applications have emerged. Web3D applications, including Web3D online tourism, Web3D online architecture, Web3D online education environments, Web3D online medical care, and Web3D online shopping, are examples of applications that leverage 3D rendering on the web. These applications have pushed the boundaries of traditional web applications, which use text, sound, image, video, and 2D animation as their main communication media, by resorting to 3D virtual scenes as the main interaction object, enabling a user experience that delivers a strong sense of immersion. This paper approaches the emerging Web3D applications that have a growing impact on people's lives through “real-time rendering technology”, the core technology of Web3D. It discusses the major 3D graphics APIs for Web3D and well-known Web3D engines at home and abroad, and classifies the real-time rendering frameworks of Web3D applications into different categories.

Results

Finally, this study analyzed the specific demands posed by different fields on Web3D applications by referring to representative Web3D applications in each particular field.

Conclusions

Our survey results show that Web3D applications based on real-time rendering have penetrated deep into many sectors of society and even the family, a trend that influences every line of industry.

Cited by: 0
Human-pose estimation based on weak supervision
Q1 Computer Science | Pub Date: 2023-08-01 | DOI: 10.1016/j.vrih.2022.08.010
Xiaoyan Hu, Xizhao Bao, Guoli Wei, Zhaoyu Li

Background

In computer vision, simultaneously estimating human pose, shape, and clothing is a practical issue in real life, but remains a challenging task owing to the variety of clothing, complexity of deformation, shortage of large-scale datasets, and difficulty in estimating clothing style.

Methods

We propose a multistage weakly supervised method that makes full use of data with less labeled information to learn to estimate human body shape, pose, and clothing deformation. In the first stage, the SMPL human-body model parameters are regressed using the multi-view 2D key points of the human body. Using multi-view information as weak supervision can avoid the depth ambiguity of a single view, obtain a more accurate human posture, and access supervisory information easily. In the second stage, clothing is represented by a PCA-based model that uses two-dimensional key points of clothing as supervised information to regress the parameters. In the third stage, we predefine an embedding graph for each type of clothing to describe the deformation. Then, the mask information of the clothing is used to further adjust the deformation of the clothing. To facilitate training, we constructed a multi-view synthetic dataset that includes BCNet and SURREAL.
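The multi-view weak supervision in the first stage rests on reprojection consistency: a candidate set of 3D joints should project close to the detected 2D key points in every calibrated view. A minimal sketch under a pinhole camera model (the function names and loss form are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def project(points3d, K, R, t):
    """Pinhole projection of (N, 3) world points into one view."""
    cam = points3d @ R.T + t          # world -> camera coordinates
    uv = cam @ K.T                    # camera -> homogeneous image coords
    return uv[:, :2] / uv[:, 2:3]     # perspective divide

def multiview_loss(points3d, views):
    """views: list of (K, R, t, keypoints2d) per camera;
    returns the mean squared reprojection error across all views."""
    errors = [np.mean(np.sum((project(points3d, K, R, t) - kp) ** 2, axis=1))
              for K, R, t, kp in views]
    return float(np.mean(errors))
```

Minimizing this loss over the pose parameters drives the regressed 3D body toward agreement with all views simultaneously, which is what resolves the single-view depth ambiguity.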

Results

The experiments show that the accuracy of our method reaches the level of SOTA methods that use strong supervision, while using only weakly supervised information. Because this study uses only weakly supervised information, which is much easier to obtain, it has the advantage of utilizing existing data as training data. Experiments on the DeepFashion2 dataset show that our method can make full use of the existing weak supervision for fine-tuning on a dataset with little supervision information, in contrast to strongly supervised methods that cannot be trained or adjusted owing to the lack of exact annotation information.

Conclusions

Our weak supervision method can accurately estimate human body size, pose, and several common types of clothing, overcoming the issue of the current shortage of clothing data.

{"title":"Human-pose estimation based on weak supervision","authors":"Xiaoyan Hu,&nbsp;Xizhao Bao,&nbsp;Guoli Wei,&nbsp;Zhaoyu Li","doi":"10.1016/j.vrih.2022.08.010","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.08.010","url":null,"abstract":"<div><h3>Background</h3><p>In computer vision, simultaneously estimating human pose, shape, and clothing is a practical issue in real life, but remains a challenging task owing to the variety of clothing, complexity of deformation, shortage of large-scale datasets, and difficulty in estimating clothing style.</p></div><div><h3>Methods</h3><p>We propose a multistage weakly supervised method that makes full use of data with less labeled information for learning to estimate human body shape, pose, and clothing deformation. In the first stage, the SMPL human-body model parameters were regressed using the multi-view 2D key points of the human body. Using multi-view information as weakly supervised information can avoid the deep ambiguity problem of a single view, obtain a more accurate human posture, and access supervisory information easily. In the second stage, clothing is represented by a PCAbased model that uses two-dimensional key points of clothing as supervised information to regress the parameters. In the third stage, we predefine an embedding graph for each type of clothing to describe the deformation. Then, the mask information of the clothing is used to further adjust the deformation of the clothing. To facilitate training, we constructed a multi-view synthetic dataset that included BCNet and SURREAL.</p></div><div><h3>Results</h3><p>The Experiments show that the accuracy of our method reaches the same level as that of SOTA methods using strong supervision information while only using weakly supervised information. Because this study uses only weakly supervised information, which is much easier to obtain, it has the advantage of utilizing existing data as training data. 
Experiments on the DeepFashion2 dataset show that our method can make full use of the existing weak supervision information for fine-tuning on a dataset with little supervision information, compared with the strong supervision information that cannot be trained or adjusted owing to the lack of exact annotation information.</p></div><div><h3>Conclusions</h3><p>Our weak supervision method can accurately estimate human body size, pose, and several common types of clothing and overcome the issues of the current shortage of clothing data.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49848597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
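The multi-view weak supervision in the first stage can be sketched as a reprojection loss: the predicted 3D joints are projected into each calibrated view and compared against the detected 2D key points. The following minimal numpy sketch is illustrative only — the function names and the pinhole-camera parameterization are assumptions, not the paper's implementation:

```python
import numpy as np

def project(joints3d, K, R, t):
    """Pinhole projection of Nx3 world-space joints into one camera view."""
    cam = joints3d @ R.T + t          # world -> camera coordinates
    uv = cam @ K.T                    # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]     # perspective divide

def multiview_keypoint_loss(joints3d, views, keypoints2d):
    """Weak-supervision loss: mean reprojection error of the predicted 3D
    joints against detected 2D key points, averaged over all camera views."""
    errors = []
    for (K, R, t), kp in zip(views, keypoints2d):
        proj = project(joints3d, K, R, t)
        errors.append(np.mean(np.linalg.norm(proj - kp, axis=1)))
    return float(np.mean(errors))
```

Minimizing such a loss over the SMPL pose and shape parameters (which determine `joints3d`) is what the multi-view setting enables: with several views, the depth ambiguity of a single view is resolved without any 3D annotations.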
Citations: 0
The validity analysis of the non-local mean filter and a derived novel denoising method
Q1 Computer Science Pub Date : 2023-08-01 DOI: 10.1016/j.vrih.2022.08.017
Xiangyuan Liu, Zhongke Wu, Xingce Wang

Image denoising is an important topic in the digital image processing field. This paper theoretically studies the validity of the classical non-local mean filter (NLM) for removing Gaussian noise from a novel statistical perspective. By regarding the restored image as an estimator of the clean image, we analyse the unbiasedness and effectiveness of the restored values produced by the NLM filter. We then propose an improved NLM algorithm, the clustering-based NLM filter (CNLM), derived from the conditions obtained through this theoretical analysis. The proposed filter attempts to restore an ideal value using the approximately constant intensities obtained by an image-clustering process; a mixed probability model on a prefiltered image generates an estimator of the ideal clustered components. Experimental results show that our algorithm achieves considerable improvement in peak signal-to-noise ratio (PSNR) and visual quality when removing Gaussian noise. The filter's strong practical performance also supports the theory, showing that the method effectively estimates ideal images.
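For reference, the classical NLM filter the paper analyses restores each pixel as a weighted average over a search window, with weights given by patch similarity. A minimal numpy sketch of that baseline (parameter names and defaults are illustrative, not taken from the paper):

```python
import numpy as np

def nlm_denoise(img, patch=3, window=7, h=0.1):
    """Basic non-local means: each pixel is restored as a weighted average
    of pixels in a search window, with weights given by the similarity of
    the surrounding patches (Gaussian kernel on mean squared difference)."""
    pr, wr = patch // 2, window // 2
    padded = np.pad(img, pr + wr, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pr + wr, j + pr + wr      # centre in padded coords
            ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            weights, values = [], []
            for di in range(-wr, wr + 1):
                for dj in range(-wr, wr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)  # patch distance
                    weights.append(np.exp(-d2 / h ** 2))
                    values.append(padded[ni, nj])
            w = np.array(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out
```

The CNLM variant proposed in the paper additionally clusters intensities on a prefiltered image so that only neighbours from the same approximately constant region contribute; the sketch above is only the classical baseline whose statistical validity is being analysed.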

Citations: 0
An intelligent experimental container suite: using a chemical experiment with virtual-real fusion as an example
Q1 Computer Science Pub Date : 2023-08-01 DOI: 10.1016/j.vrih.2022.07.008
Lurong Yang, Zhiquan Feng, Junhong Meng

Background

At present, the teaching of experiments in primary and secondary schools is constrained by cost and safety factors. Existing research on virtual-experiment platforms alleviates this problem; however, the lack of real experimental equipment and the use of a single channel to understand users' intentions weaken these platforms operationally and degrade the naturalness of interactions. To solve these problems, we propose an intelligent experimental container structure and a situational-awareness algorithm, both of which are verified and then applied to a chemical experiment involving virtual-real fusion. First, acquired images are denoised in the visual channel, using maximum diffuse-reflection chroma to remove overexposure. Second, container situational awareness is realized by segmenting the liquid level in the image and establishing a relation-fitting model. Then, strategies for constructing complete behaviors and for making priority comparisons among behaviors are adopted for information complementarity and information independence, respectively. A multichannel intention-understanding model and an interactive paradigm fusing vision, hearing, and touch are proposed. The results show that, on a virtual chemical-experiment platform, the designed experimental container and algorithm achieve natural human-computer interaction, enhance the user's sense of operation, and achieve high user satisfaction.
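The liquid-level situational awareness in the second step can be illustrated with a small sketch: a binary segmentation of the liquid region yields a filled-height fraction, and a relation-fitting model maps that fraction to a volume via a few calibration measurements. The function names and the polynomial-fit choice below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def liquid_level_fraction(mask):
    """Given a binary segmentation of the liquid region inside a container's
    bounding box, return the filled height as a fraction of the box height."""
    rows = mask.any(axis=1)            # rows that contain any liquid pixels
    return rows.sum() / mask.shape[0]

def fit_level_to_volume(fractions, volumes, deg=1):
    """Fit a polynomial mapping level fraction -> volume from a few
    calibration measurements; returns a callable predictor."""
    coeffs = np.polyfit(fractions, volumes, deg)
    return lambda f: float(np.polyval(coeffs, f))
```

With a handful of known fill levels, the fitted relation lets the container report an estimated volume for any newly segmented frame, which is the kind of state the situational-awareness module feeds into the multichannel intention model.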

Citations: 0
Modeling heterogeneous behaviors with different strategies in a terrorist attack
Q1 Computer Science Pub Date : 2023-08-01 DOI: 10.1016/j.vrih.2022.08.015
Le Bi, Tingting Liu, Zhen Liu, Jason Teo, Yumeng Zhao, Yanjie Chai

In terrorist-attack simulations, existing methods do not describe individual differences, which means different individuals cannot exhibit different behaviors. To address this problem, we propose a framework for modeling people's heterogeneous behaviors in a terrorist attack. For pedestrians, we construct an emotional model that takes into account personality and visual perception; this model is then combined with pedestrians' relationship networks to form the decision-making model, under which pedestrians may exhibit altruistic behaviors. For terrorists, a mapping model is developed that maps antisocial personality to attacking strategy. Experiments show that the proposed algorithm generates realistic heterogeneous behaviors consistent with existing psychological research findings.
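The emotional and decision-making models described above can be sketched as a trait-driven emotion update followed by a threshold rule over the relationship network. The specific update equation, the choice of neuroticism as the driving trait, and the thresholds below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def update_fear(fear, threat, neuroticism, decay=0.1, dt=1.0):
    """One step of a simple emotion update: fear rises with the perceived
    threat (scaled by a neuroticism trait) and decays over time."""
    fear += dt * (neuroticism * threat - decay * fear)
    return float(np.clip(fear, 0.0, 1.0))

def decide(fear, tie_strength, flee_threshold=0.7, help_threshold=0.5):
    """Decision rule: flee when fear is high; help a nearby relation when
    fear is moderate and the social tie is strong; otherwise stay."""
    if fear >= flee_threshold:
        return "flee"
    if tie_strength >= help_threshold and fear >= 0.2:
        return "help"
    return "stay"
```

Because the trait values and tie strengths differ per agent, the same perceived threat produces different fear trajectories and different decisions, which is exactly the heterogeneity the framework aims to capture: a high-neuroticism stranger flees while a calm close relation stays to help.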

Citations: 0