
Latest publications in Computer Animation and Virtual Worlds

A multi-species material point method with a mixture model
IF 1.1 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-05-17 | DOI: 10.1002/cav.2239
Bo Li, Shiguang Liu

The material point method (MPM) has attracted increasing attention in computer graphics. It is very successful at simulating both fluid flow and solid deformation, but may fail when simulating the coupling of multiple fluids and solids. We propose a unified MPM solver for multi-species simulations. Compared with traditional MPM, we extend the degrees of freedom on the background grid to store information for multiple materials, so that our framework can handle multiple materials well. The proposed method leverages the advantages of MPM as a hybrid method. We introduce the mixture model, the most widely used model for grid-based multi-fluid flows, into the framework. This enables MPM to capture interaction and relative motion, and to animate complex, coupled fluids and solids in a unified manner. A series of experiments demonstrates the effectiveness of our method.
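
As an illustrative sketch (not the authors' solver), the minimal NumPy example below shows the core idea of giving the background grid per-species degrees of freedom during the particle-to-grid transfer and forming a mass-weighted mixture velocity and per-species drift at each node. The grid resolution, particle data, nearest-node kernel, and species count are all assumptions.

```python
import numpy as np

# Minimal sketch: per-species particle-to-grid (P2G) transfer on a 2D background grid.
# Each grid node stores mass and momentum separately for every species, so a
# mixture (mass-weighted) velocity and per-species drift velocities can be formed.
n_species, nx, ny, dx = 2, 32, 32, 1.0 / 32

# Hypothetical particles: position, velocity, mass, and species id.
rng = np.random.default_rng(0)
p_pos = rng.random((200, 2))
p_vel = rng.normal(0.0, 0.1, (200, 2))
p_mass = np.full(200, 1e-3)
p_spec = rng.integers(0, n_species, 200)

grid_mass = np.zeros((n_species, nx, ny))        # extended DOF: one channel per species
grid_mom = np.zeros((n_species, nx, ny, 2))

# Scatter each particle to its nearest grid node (a simple stand-in for a B-spline kernel).
for x, v, m, s in zip(p_pos, p_vel, p_mass, p_spec):
    i, j = np.clip((x / dx).astype(int), 0, [nx - 1, ny - 1])
    grid_mass[s, i, j] += m
    grid_mom[s, i, j] += m * v

total_mass = grid_mass.sum(axis=0)                              # (nx, ny)
mixture_vel = grid_mom.sum(axis=0) / np.maximum(total_mass, 1e-12)[..., None]
fractions = grid_mass / np.maximum(total_mass, 1e-12)           # per-species mass fraction

# Per-species velocity and drift relative to the mixture (the quantity a mixture model tracks).
species_vel = grid_mom / np.maximum(grid_mass, 1e-12)[..., None]
drift = species_vel - mixture_vel[None, ...]
```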

Citations: 0
Diversified realistic face image generation GAN for human subjects in multimedia content creation
IF 1.1 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-04-03 | DOI: 10.1002/cav.2232
Lalit Kumar, Dushyant Kumar Singh

Face image generation plays an important role in producing innovative and unique multimedia content with GAN models. Despite these strengths, GAN models face numerous challenges in human face image generation, such as blurry images, incomplete details in the generated faces, and high computational requirements. In this manuscript, we propose a GAN model that combines the strengths of the VGG-16 and ResNet-50 models to overcome these difficulties. It uses VGG-16 to build a discriminator that distinguishes real images from fake ones. The generator combines components from the ResNet-50 and VGG-16 models to enhance image generation at each iteration, resulting in realistic face images. The generator of the proposed DRFI GAN (Diversified and Realistic Face Image Generation GAN) achieves an impressively low FID score of 20.50, lower than existing state-of-the-art approaches. Furthermore, our findings indicate that the images generated by the DRFI GAN exhibit 10%–15% greater efficiency and realism, with reduced training time and lower FID scores, compared to existing state-of-the-art methods.
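
The abstract states that the discriminator is built on VGG-16 for real/fake classification; a minimal PyTorch sketch of such a discriminator head is given below. The input resolution, pooling, and classifier sizes are assumptions, and the generator (which mixes ResNet-50 and VGG-16 components) is not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class VGGDiscriminator(nn.Module):
    """Illustrative real/fake discriminator built on VGG-16 features.

    A sketch of the kind of discriminator described in the abstract,
    not the authors' exact architecture; the head sizes are assumptions.
    """

    def __init__(self):
        super().__init__()
        self.features = vgg16(weights=None).features    # VGG-16 convolutional backbone
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 1024),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(1024, 1),                          # real/fake logit
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.pool(self.features(x)))

# Usage: score a batch of 224x224 RGB face images (random tensors here for illustration).
disc = VGGDiscriminator()
logits = disc(torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 1])
```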

Citations: 0
Feasibility study of virtual reality in audiovisual environment: Assessment of university cafeteria acoustic environment
IF 1.1 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-04-03 | DOI: 10.1002/cav.2231
Wen Zehua, Guo Xiaoyang

This study primarily investigates whether virtual reality scenarios can authentically replicate real-life audio-visual environments. The authenticity of audio-visual environments plays a crucial role in both the design and VR fields today; only when the authenticity of audio-visual interactive experiences is validated as feasible can virtual reality technology demonstrate positive impacts. We subjectively assessed annoyance levels under different audio-visual conditions: a real cafeteria environment and a simulated cafeteria environment. Participants performed the same activities in both environments and, after each experiment, reported their level of annoyance by completing a questionnaire. The results indicated a significant positive correlation between the overall subjective annoyance levels in the two experiments, as well as between the subjective annoyance levels associated with different behaviors. This suggests that, under identical audio conditions, virtual reality scenarios can effectively replicate the real noise environment. Furthermore, we found that certain objective factors influence the expression of authenticity; optimizing these factors may further enhance the feasibility of virtual reality technology in audio-visual environments.
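
As a minimal illustration of the kind of correlation analysis reported (not the authors' actual data or statistical pipeline), the sketch below computes a Pearson correlation between per-participant annoyance ratings collected in a real and a simulated environment; the rating scale and values are hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-participant annoyance ratings (e.g., on an 11-point scale)
# from the real cafeteria session and the matched virtual-reality session.
real_env = np.array([3, 5, 7, 4, 6, 8, 2, 5, 7, 6], dtype=float)
virtual_env = np.array([4, 5, 6, 4, 7, 8, 3, 5, 6, 7], dtype=float)

r, p_value = pearsonr(real_env, virtual_env)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
# A large positive r with a small p-value would indicate that annoyance in the
# virtual environment tracks annoyance in the real one, as the study reports.
```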

Citations: 0
Facial emotion recognition with a reduced feature set for video game and metaverse avatars
IF 1.1 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-04-02 | DOI: 10.1002/cav.2230
Darren Bellenger, Minsi Chen, Zhijie Xu

This paper presents a novel real-time facial feature extraction algorithm that produces a small feature set suitable for implementing emotion recognition with online game and metaverse avatars. The algorithm aims to reduce data transmission and storage requirements, which are hurdles to the adoption of emotion recognition in these media. The early results show a facial emotion recognition accuracy of up to 92% on one benchmark dataset, with an overall accuracy of 77.2% across a wide range of datasets, demonstrating the early promise of the research.
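
The abstract does not list the specific features used, so the sketch below is a hypothetical illustration of the general idea: deriving a handful of normalized geometric measurements from facial landmarks (a deliberately small feature vector that is cheap to transmit and store) and feeding them to a lightweight classifier. The landmark indices, distances, and classifier choice are all assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def reduced_face_features(landmarks: np.ndarray) -> np.ndarray:
    """Compute a tiny feature vector from 2D facial landmarks.

    `landmarks` is an (N, 2) array; the indices below (eyes, brows, mouth
    corners, lips) are hypothetical and depend on the landmark model used.
    Distances are normalized by inter-ocular distance for scale invariance.
    """
    left_eye, right_eye = landmarks[36], landmarks[45]
    scale = np.linalg.norm(right_eye - left_eye) + 1e-8
    feats = [
        np.linalg.norm(landmarks[19] - landmarks[37]) / scale,  # brow-to-eye gap
        np.linalg.norm(landmarks[48] - landmarks[54]) / scale,  # mouth width
        np.linalg.norm(landmarks[51] - landmarks[57]) / scale,  # mouth opening
        np.linalg.norm(landmarks[21] - landmarks[22]) / scale,  # inner-brow gap
    ]
    return np.asarray(feats)

# Build a toy dataset of such 4-D feature vectors (synthetic landmarks here)
# and train a lightweight classifier on them.
rng = np.random.default_rng(0)
X = np.stack([reduced_face_features(rng.random((68, 2))) for _ in range(100)])
y = rng.integers(0, 7, 100)          # e.g., 7 basic emotion classes
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))
```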

Citations: 0
Animation line art colorization based on the optical flow method
IF 1.1 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-02-22 | DOI: 10.1002/cav.2229
Yifeng Yu, Jiangbo Qian, Chong Wang, Yihong Dong, Baisong Liu

Coloring an animation sketch sequence is a challenging task in computer vision, since the information contained in line sketches is very sparse and the colors need to be consistent between continuous frames. Many existing colorization algorithms can only be applied to a single image and are best regarded as color-filling algorithms: they only provide a color result that falls within a reasonable range and cannot be applied to the coloring of frame sequences. This paper proposes an end-to-end, two-stage optical flow colorization network to solve the animation frame sequence colorization problem. The first stage of the network finds the direction of the color pixel flow from the detail change between a given reference frame and the next frame of line artwork, and then completes the initial coloring. The second stage performs color correction and refines the output of the first stage. Since our algorithm does not colorize the image directly but finds the path of the color change to colorize it, it ensures a consistent color space for the sequence frames after colorization. We conduct experiments on an animation dataset, and the results show that our algorithm is effective. The code is available at https://github.com/silenye/Colorization.
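
The network itself is learned end-to-end, but the stage-one idea of propagating colors from a reference frame along an estimated flow field can be illustrated with a classical optical-flow warp, as in the hedged OpenCV sketch below; dense Farneback flow and the synthetic frames stand in for the learned flow and real line art.

```python
import cv2
import numpy as np

# Illustrative stage-one color propagation: estimate optical flow between two
# line-art frames and warp the colored reference frame along it to obtain an
# initial colorization of the next frame. The circles below are toy line art.
h, w = 128, 128
ref_line = np.full((h, w), 255, np.uint8)
next_line = np.full((h, w), 255, np.uint8)
cv2.circle(ref_line, (50, 64), 20, 0, 2)       # outline in the reference frame
cv2.circle(next_line, (58, 64), 20, 0, 2)      # same outline, shifted right

ref_color = np.full((h, w, 3), 255, np.uint8)
cv2.circle(ref_color, (50, 64), 20, (0, 0, 255), -1)   # reference frame already colored

# Flow from the next frame back to the reference frame, so each pixel of the
# next frame knows where to fetch its color from (backward warp).
flow = cv2.calcOpticalFlowFarneback(next_line, ref_line, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x + flow[..., 0]).astype(np.float32)
map_y = (grid_y + flow[..., 1]).astype(np.float32)
initial_color = cv2.remap(ref_color, map_x, map_y, cv2.INTER_LINEAR)
# `initial_color` is the rough stage-one result; the paper's second stage would
# then correct and sharpen it.
```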

Citations: 0
MarkerNet: A divide-and-conquer solution to motion capture solving from raw markers
IF 1.1 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-01-15 | DOI: 10.1002/cav.2228
Zhipeng Hu, Jilin Tang, Lincheng Li, Jie Hou, Haoran Xin, Xin Yu, Jiajun Bu

Marker-based optical motion capture (MoCap) aims to localize 3D human motions from a sequence of input raw markers. It is widely used to produce physical movements for virtual characters in various games, such as role-playing, fighting, and action-adventure games. However, the conventional MoCap cleaning and solving process is extremely labor-intensive and time-consuming, and is usually the most costly part of game animation production. Thus, there is high demand in the game industry for automated algorithms that replace costly manual operations and achieve accurate MoCap cleaning and solving. In this article, we design a divide-and-conquer-based MoCap solving network, dubbed MarkerNet, to estimate human skeleton motions from sequential raw markers effectively. In a nutshell, our key idea is to decompose the task of directly solving the global motion from all markers into first modeling the sub-motions of local parts from the corresponding marker subsets and then aggregating the sub-motions into a global one. In this manner, our model can effectively capture local motion patterns with respect to different marker subsets, thus producing more accurate results than existing methods. Extensive experiments on both real and synthetic data verify the effectiveness of the proposed method.
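
A minimal PyTorch sketch of the divide-and-conquer idea is given below: separate encoders model sub-motions from per-part marker subsets, and an aggregator fuses them into a global pose estimate. The part grouping, layer sizes, and output parameterization are assumptions, not the published MarkerNet architecture.

```python
import torch
import torch.nn as nn

class DivideAndConquerSolver(nn.Module):
    """Sketch of a part-wise MoCap solver: encode each marker subset separately,
    then aggregate the part features into a global skeleton pose.

    `part_sizes` lists how many markers belong to each body part and `pose_dim`
    is the size of the output pose vector; both are illustrative assumptions.
    """

    def __init__(self, part_sizes=(14, 10, 10, 10, 10), pose_dim=72, feat_dim=64):
        super().__init__()
        self.part_sizes = list(part_sizes)
        self.part_encoders = nn.ModuleList([
            nn.Sequential(nn.Linear(n * 3, 128), nn.ReLU(), nn.Linear(128, feat_dim))
            for n in self.part_sizes
        ])
        self.aggregator = nn.Sequential(
            nn.Linear(feat_dim * len(self.part_sizes), 256), nn.ReLU(),
            nn.Linear(256, pose_dim),
        )

    def forward(self, markers: torch.Tensor) -> torch.Tensor:
        # markers: (batch, total_markers, 3), grouped by body part in a fixed order.
        parts = torch.split(markers, self.part_sizes, dim=1)
        feats = [enc(p.flatten(1)) for enc, p in zip(self.part_encoders, parts)]
        return self.aggregator(torch.cat(feats, dim=1))

solver = DivideAndConquerSolver()
raw_markers = torch.randn(8, 54, 3)     # batch of 8 frames, 54 markers each
pose = solver(raw_markers)
print(pose.shape)                        # torch.Size([8, 72])
```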

Citations: 0
Editorial issue 34.6
IF 1.1 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-12-28 | DOI: 10.1002/cav.2227
Nadia Magnenat Thalmann, Daniel Thalmann
This issue contains 12 regular papers. In the first paper, Hong Li et al. present an animation translation method based on edge enhancement and coordinate attention, called FAEC-GAN. They design a novel edge discrimination network to identify the edge features of images, so that the generated anime images present clear and coherent lines. A coordinate attention module is introduced in the encoder to adapt the model to the geometric changes in translation and produce more realistic animation images. In addition, the method combines the focal frequency loss and the pixel loss, attending to both the frequency-domain and pixel information of the generated image to improve its visual quality.

In the second paper, Rahul Jain et al. propose an algorithm to convert a depth video into a single dynamic image known as a linked motion image (LMI). The LMI is given to a classifier consisting of an ensemble of three modified pre-trained convolutional neural networks (CNNs). The experiments were conducted on two datasets: the multimodal large-scale EgoGesture dataset and the MSR Gesture 3D dataset. On the EgoGesture dataset, the proposed method achieved an accuracy of 92.91%, which is better than the state-of-the-art methods; on the MSR Gesture 3D dataset, its accuracy is 100%, which outperforms the state-of-the-art methods. The recognition accuracy and precision of each gesture are also highlighted in this work.

In the third paper, Rustam Akhunov et al. propose a set of experiments to aid the evaluation of the main categories of fluid-boundary interactions that are important in computer animation, that is, a resting (non-moving) fluid, tangential and normal motion of a fluid with respect to the boundary, and a fluid impacting a corner. They propose 10 experiments, comprising experimental setup and quantitative evaluation with optional visual inspections, arranged in four groups that each focus on one main category of fluid-boundary interaction. The authors use these experiments to evaluate three particle-based boundary handling methods, that is, Pressure Mirroring (PM), Pressure Boundaries (PB), and Moving Least Squares Pressure Extrapolation (MLS), in combination with two incompressible SPH fluid simulation methods, namely IISPH and DFSPH.

In the fourth paper, Shenghuan Zhao et al. present three Extended Reality (XR) apps (AR, MR, and VR) to interactively visualize façade fenestration geometries and indoor illuminance simulations. The XR technologies are then assessed by 120 students and young architects from two aspects, task performance and engagement level. Task performance is measured by two indicators, correct rate and time consumption, while engagement level is measured by two indicators, usability and interest. Evaluation results show that, compared to AR and VR, MR is the best XR technology for this aim, and VR outperforms AR on three of the indicators, with usability being the exception. By revealing how the three XR technologies perform in assisting fenestration design, this study increases the practical value of applying XR in the architectural design field.

In the fifth paper, 赵静 et al. focus on a multi-fluid coupling simulation algorithm based on MPM and PFM. They first build a multiphase flow model on an Eulerian grid based on MPM and combine it with PFM to capture sharp interfaces between immiscible fluids; during gas-liquid interaction, the gas phase is further treated as a fluid. Second, to reproduce the natural evolution of fluid motion from high-energy to low-energy states, the paper proposes a local minimum bulk-energy function to control the low-energy state. Finally, several groups of comparative multi-fluid coupling experiments are designed and implemented. The results show that the proposed method can simulate various rapid-diffusion effects in multi-fluid coupling, such as complete dissolution, miscibility, and extraction.

In the sixth paper, 张继伟 et al. propose a new method that fuses multiple heterogeneous features through a multi-feature subspace representation network (MFSRN), maximizing classification performance while keeping the differences between features as small as possible (a shared-subspace constraint). The authors run comparative experiments against state-of-the-art models on a bird's-eye-view person dataset, and extensive results show that the proposed MFSRN achieves better recognition performance.

In the seventh paper, Sahadeb Shit et al. propose a convolutional neural network (CNN) based image dehazing and detection method, called the end-to-end dehazing and detection network (EDD-N), for proper image visualization and detection.
{"title":"Editorial issue 34.6","authors":"Nadia Magnenat Thalmann, Daniel Thalmann","doi":"10.1002/cav.2227","DOIUrl":"https://doi.org/10.1002/cav.2227","url":null,"abstract":"&lt;p&gt;This issue contains 12 regular papers. In the first paper, Hong Li et al. present an animation translation method based on edge enhancement and coordinate attention, which is called FAEC-GAN. They design a novel edge discrimination network to identify the edge features of images, so that the generated anime images can present clear and coherent lines. And the coordinate attention module is introduced in the encoder to adapt the model to the geometric changes in translation, to produce more realistic animation images. In addition, the method combines the focal frequency loss and pixel loss, which can pay attention to both the frequency domain information and pixel information of the generated image to improve the visual effect of the image.&lt;/p&gt;\u0000&lt;p&gt;In the second paper, Rahul Jain et al. propose an algorithm to convert a depth video into a single dynamic image known as a linked motion image (LMI). The LMI has been given to a classifier consisting of an ensemble of three modified pre-trained convolutional neural networks (CNNs). The experiments were conducted using two datasets: a multimodal large-scale EgoGesture dataset and The MSR Gesture 3D dataset. For the EgoGesture dataset, the proposed method achieved an accuracy of 92.91%, which is better than the state-of-the-art methods. For the MSR Gesture 3D dataset, the proposed method accuracy is 100%, which outperforms the state-of-the-art methods. The recognition accuracy and precision of each gesture are also highlighted in this work.&lt;/p&gt;\u0000&lt;p&gt;In the third paper, Rustam Akhunov et al. propose a set of experiments to aid the evaluation of the main categories of fluid-boundary interactions that are important in computer animation, i.e. no motion (resting) fluid, tangential and normal motion of a fluid with respect to the boundary, and a fluid impacting a corner. They propose 10 experiments, comprising experimental setup and quantitative evaluation with optional visual inspections, that are arranged in four groups which focus on one of the main category of fluid-boundary interactions. The authors use these experiments to evaluate three particle-based boundary handling methods, that is, Pressure Mirroring (PM), Pressure Boundaries (PB) and Moving Least Squares Pressure Extrapolation (MLS), in combination with two incompressible SPH fluid simulation methods, namely IISPH and DFSPH.&lt;/p&gt;\u0000&lt;p&gt;In the fourth paper, Shenghuan Zhao et al. present three Extended Reality (XR) apps (AR, MR, and VR) to interactively visualize façade fenestration geometries and indoor illuminance simulations. Then XR technologies are assessed by 120 students and young architects, from task performance and engagement level two aspects. The task performance is measured by correct rate and time consumption two indicators, while the engagement level is measured by usability and interest two indicators. Evaluation results show that compared to AR and VR, MR is the best XR technology for this aim. 
VR outperforms AR on three indicators exc","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"18 1","pages":""},"PeriodicalIF":1.1,"publicationDate":"2023-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139065759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Exploring the impact of non-verbal cues on user experience in immersive virtual reality
IF 1.1 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-12-19 | DOI: 10.1002/cav.2224
Elena Dzardanova, Vasiliki Nikolakopoulou, Vlasios Kasapakis, Spyros Vosinakis, Ioannis Xenakis, Damianos Gavalas

Face-to-face communication relies extensively on non-verbal cues (NVCs), which complement, or at times dominate, the communicative process: they convey emotions with intense salience and thus decisively affect interpersonal communication. The capture, transfer, and subsequent interpretation of NVCs becomes complicated in computer-mediated communicative processes, particularly in shared virtual worlds, for which there is growing interest both in the technological integration of NVCs and in their affective impact. This paper presents a between-groups experimental setup, facilitated in immersive virtual reality (IVR), that examines the effects of NVCs on user experience, with special emphasis on the degree of attention toward each NVC as an isolated, controlled variable of a scripted performance by a virtual character (VC). The study evaluates NVC fidelity based on the capabilities of the motion-capture technologies used to address cue-integration development challenges, and examines the impact of NVCs on users' perceived realism of the VC, their empathy toward him, and the degree of social presence experienced. To meet these objectives, the affective impact of low-fidelity automated NVCs and high-fidelity real-time captured NVCs was compared. The findings suggest that although NVCs do affect user experience to an extent, their effects are notably more subtle than in previous studies.
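
The abstract does not specify the statistical tests used; purely as an illustration of a between-groups comparison like the one described, the sketch below runs an independent-samples t-test on hypothetical social-presence questionnaire scores from a low-fidelity and a high-fidelity NVC group.

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical social-presence scores (e.g., 1-7 Likert means) for two groups:
# participants who watched the VC driven by low-fidelity automated NVCs versus
# high-fidelity real-time captured NVCs. Values are invented for illustration.
low_fidelity = np.array([4.1, 3.8, 4.5, 4.0, 3.9, 4.3, 4.2, 3.7])
high_fidelity = np.array([4.4, 4.0, 4.6, 4.3, 4.1, 4.5, 4.2, 4.0])

t_stat, p_value = ttest_ind(high_fidelity, low_fidelity, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small, non-significant difference would be consistent with the reported
# finding that NVC fidelity effects on user experience are subtle.
```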

Citations: 0
Wav2Lip-HR: Synthesising clear high-resolution talking head in the wild
IF 1.1 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-12-15 | DOI: 10.1002/cav.2226
Chao Liang, Qinghua Wang, Yunlin Chen, Minjie Tang

Talking head generation aims to synthesize a photo-realistic speaking video with accurate lip motion. While this field has attracted increasing attention in recent audio-visual research, most existing methods do not improve lip synchronization and visual quality at the same time. In this paper, we propose Wav2Lip-HR, a neural, audio-driven, high-resolution talking head generation method. With our technique, all that is required to generate a clear, high-resolution lip-synced talking video is an image or video of the target face and an audio clip of any speech. The primary benefit of our method is that it generates clear high-resolution videos with sufficient facial detail, rather than videos that are merely large but lack clarity. We first analyze the key factors that limit the clarity of generated videos and then put forward several important solutions, including data augmentation, model structure improvement, and a more effective loss function. Finally, we employ several efficient metrics to evaluate the clarity of the images generated by our approach, as well as several widely used metrics to evaluate lip-sync performance. Numerous experiments demonstrate that our method outperforms other existing schemes in visual quality and lip synchronization.
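
The abstract mentions a more effective loss function without giving its form; the PyTorch sketch below illustrates the kind of multi-term objective commonly used for lip-synced talking-head generators (a pixel reconstruction term plus a sync term that pulls matching audio and mouth-region embeddings together). The encoders, weights, and the cosine-based sync term are assumptions, not the paper's actual loss.

```python
import torch
import torch.nn.functional as F

def talking_head_loss(generated: torch.Tensor,
                      target: torch.Tensor,
                      audio_emb: torch.Tensor,
                      video_emb: torch.Tensor,
                      sync_weight: float = 0.3) -> torch.Tensor:
    """Illustrative combined objective for a talking-head generator.

    generated/target: (B, 3, H, W) frames; audio_emb/video_emb: (B, D)
    embeddings of the speech chunk and the generated mouth region, produced by
    some pretrained sync encoders (assumed to exist, not defined here).
    """
    recon = F.l1_loss(generated, target)                            # pixel reconstruction
    sync = 1.0 - F.cosine_similarity(audio_emb, video_emb).mean()   # pull matching pairs together
    return recon + sync_weight * sync

# Toy usage with random tensors in place of real frames and embeddings.
gen = torch.rand(2, 3, 256, 256)
tgt = torch.rand(2, 3, 256, 256)
a_emb = torch.randn(2, 128)
v_emb = torch.randn(2, 128)
print(talking_head_loss(gen, tgt, a_emb, v_emb))
```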

Citations: 0
Botanical-based simulation of color change in fruit ripening: Taking tomato as an example
IF 1.1 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-11-23 | DOI: 10.1002/cav.2225
Yixin Xu, Shiguang Liu

The color change of plant fruit during ripening is a typical time-varying phenomenon involving many factors. Due to its complexity and biodiversity, modeling this phenomenon is challenging. To address this issue, we take the tomato as an example and propose a botany-based framework that considers variety, environment, phytohormones, and genes to simulate fruit color change during ripening. Specifically, we propose a first-order kinetic model that integrates varietal, environmental, and phytohormonal factors to represent the variation of pigment concentrations in the pericarp. Moreover, we introduce a logistic model to describe the change in pigment concentration in the epidermis. Based on the gene expression pathway of tomato color in botany, we propose a genotype-to-phenotype simulation method to represent its biodiversity. An improved method is proposed to convert pigment concentrations into color accurately. Furthermore, we propose a gradient-descent-based method to assist the user in quickly setting pigment concentration parameters. Experiments verified that the proposed framework can simulate a wide range of tomato colors, and both qualitative and quantitative experiments validated the proposed method. Our framework can also be applied to other fruits.
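
As a hedged illustration of the two pigment-dynamics models named in the abstract (not the authors' calibrated model), the sketch below integrates a first-order kinetic equation for a pericarp pigment, dC/dt = k (C_max - C), and a logistic equation for an epidermis pigment with simple Euler steps; all rate constants, capacities, and the final concentration-to-color mapping are hypothetical.

```python
import numpy as np

# Hypothetical parameters: in the paper's framework, the rate constants and
# saturation concentrations would depend on variety, environment, and phytohormones.
k_pericarp, c_max_pericarp = 0.25, 1.0      # first-order kinetics (pericarp pigment)
r_epidermis, c_max_epidermis = 0.6, 0.8     # logistic growth (epidermis pigment)

days, dt = 20, 0.1
steps = int(days / dt)
c_pericarp = np.zeros(steps)
c_epidermis = np.zeros(steps)
c_epidermis[0] = 0.01                        # small nonzero seed for logistic growth

for t in range(steps - 1):
    # First-order kinetics: dC/dt = k * (C_max - C)
    c_pericarp[t + 1] = c_pericarp[t] + dt * k_pericarp * (c_max_pericarp - c_pericarp[t])
    # Logistic model: dC/dt = r * C * (1 - C / C_max)
    c_epidermis[t + 1] = (c_epidermis[t]
                          + dt * r_epidermis * c_epidermis[t]
                          * (1 - c_epidermis[t] / c_max_epidermis))

# Stand-in concentration-to-color mapping: blend green -> red with pigment level.
ripeness = c_pericarp[-1] / c_max_pericarp
rgb = (1 - ripeness) * np.array([0.2, 0.6, 0.2]) + ripeness * np.array([0.8, 0.1, 0.1])
print(f"pericarp: {c_pericarp[-1]:.2f}, epidermis: {c_epidermis[-1]:.2f}, rgb: {rgb.round(2)}")
```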

Citations: 0