
Computer Animation and Virtual Worlds: Latest Publications

WDANet: Exploring Stylized Animation via Diffusion Model for Woodcut-Style Design
IF 0.9 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-01-08 | DOI: 10.1002/cav.70007
Yangchunxue Ou, Jingjun Xu

Stylized animation strives for innovation and bold visual creativity. Integrating the inherent strong visual impact and color contrast of woodcut style into such animations is both appealing and challenging, especially during the design phase. Traditional woodcut methods, hand-drawing, and previous computer-aided techniques face challenges such as dwindling design inspiration, lengthy production times, and complex adjustment procedures. To address these issues, we propose a novel network framework, the Woodcut-style Design Assistant Network (WDANet). Our research is the first to use diffusion models to streamline the woodcut-style design process. We curate the Woodcut-62 dataset, which features works from 62 renowned historical artists, to train WDANet in capturing and learning the aesthetic nuances of woodcut prints. WDANet, based on the denoising U-Net, effectively decouples content and style features. It allows users to input or slightly modify a text description to quickly generate accurate, high-quality woodcut-style designs, saving time and offering flexibility. Quantitative and qualitative analyses, along with user studies, confirm that WDANet outperforms current state-of-the-art methods in generating woodcut-style images, demonstrating its value as a design aid.
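
The abstract describes a denoising U-Net that decouples content and style features. As a rough sketch only (WDANet's actual architecture and noise schedule are not given here, and every name below, such as ToyDenoiser and style_scale, is a stand-in), the example shows how a diffusion sampler can weight a style condition independently of the content condition:

```python
# Minimal sketch, NOT the authors' code: a toy diffusion sampler where the
# style condition is guided separately from the content (text) condition.
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Stand-in for a denoising U-Net conditioned on content and style."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim * 3 + 1, 128), nn.SiLU(), nn.Linear(128, dim))

    def forward(self, x, t, content, style):
        t_feat = torch.full((x.shape[0], 1), float(t))
        return self.net(torch.cat([x, content, style, t_feat], dim=-1))

@torch.no_grad()
def sample(denoiser, content, style, steps=50, dim=64, style_scale=1.5):
    """DDPM-like ancestral sampling with a separately weighted style term."""
    x = torch.randn(1, dim)
    for t in reversed(range(steps)):
        # Predict noise with and without the style condition, then blend:
        # this is the "decoupled" knob that lets style strength vary freely.
        eps_cond = denoiser(x, t, content, style)
        eps_uncond = denoiser(x, t, content, torch.zeros_like(style))
        eps = eps_uncond + style_scale * (eps_cond - eps_uncond)
        alpha = 1.0 - 0.02 * (t + 1) / steps   # toy noise schedule, illustrative
        x = (x - (1 - alpha) * eps) / alpha ** 0.5
        if t > 0:
            x = x + (1 - alpha) ** 0.5 * torch.randn_like(x)
    return x

content = torch.randn(1, 64)   # e.g., a text-prompt embedding
style = torch.randn(1, 64)     # e.g., a woodcut style embedding
print(sample(ToyDenoiser(), content, style).shape)  # torch.Size([1, 64])
```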

Citations: 0
Novel View Synthesis Based on Similar Perspective
IF 0.9 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-01-07 | DOI: 10.1002/cav.70006
Wenkang Huang

Neural radiance fields (NeRF) technology has garnered significant attention due to its exceptional performance in generating high-quality novel view images. In this study, we propose an innovative method that leverages the similarity between views to enhance the quality of novel view image generation. Initially, a pre-trained NeRF model generates an initial novel view image; the reference view most similar to this initial view is then selected from the training dataset for comparison and feature transfer. We designed a texture transfer module that employs a coarse-to-fine strategy, effectively integrating salient features from the reference view into the initial image and thus producing more realistic novel view images. By using similar views, this approach not only improves the quality of novel-perspective images but also incorporates the training dataset as a dynamic information pool in the novel view synthesis process, allowing useful information to be continuously acquired and utilized from the training data throughout synthesis. Extensive experimental validation shows that using similar views to provide scene information significantly outperforms existing neural rendering techniques in enhancing the realism and accuracy of novel view images.
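
The abstract does not state how view similarity is measured; the sketch below assumes a simple global-descriptor cosine similarity, with image_features as a toy stand-in for a learned encoder, to pick the training view closest to the initial NeRF rendering:

```python
# Hedged sketch: select the most similar reference view by cosine similarity
# of a toy global descriptor. A real system would use learned features.
import numpy as np

def image_features(img):
    """Toy global descriptor: per-channel mean and std of the image."""
    return np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])

def most_similar_view(novel_img, train_imgs):
    """Return the index (and score) of the closest training view."""
    q = image_features(novel_img)
    best_idx, best_sim = -1, -np.inf
    for i, ref in enumerate(train_imgs):
        r = image_features(ref)
        sim = q @ r / (np.linalg.norm(q) * np.linalg.norm(r) + 1e-8)
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx, best_sim

novel = np.random.rand(64, 64, 3)                  # initial NeRF rendering
train = [np.random.rand(64, 64, 3) for _ in range(5)]
print(most_similar_view(novel, train))
```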

Citations: 0
Body Part Segmentation of Anime Characters
IF 0.9 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-12-17 | DOI: 10.1002/cav.2295
Zhenhua Ou, Xueting Liu, Chengze Li, Zhenkun Wen, Ping Li, Zhijian Gao, Huisi Wu

Semantic segmentation is an important approach to presenting the perceptual, semantic understanding of an image and is widely used in practice. Body part segmentation, in particular, segments the body parts of human characters to assist editing tasks such as style editing, pose transfer, and animation production. Since segmentation requires pixel-level precision in semantic labeling, classic heuristics-based methods generally perform unstably. With the deployment of deep learning, great strides have been made in segmenting the body parts of human characters in natural photographs. However, existing models are trained purely on natural photographs and generally produce incorrect segmentation results when applied to anime character images, owing to the large visual gap between training and testing data. In this article, we present a novel approach to body part segmentation of cartoon characters via a pose-based graph-cut formulation. We demonstrate the use of the acquired body part segmentation map in various image editing tasks, including conditional generation, style manipulation, pose transfer, and video-to-anime.
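
As a hedged illustration of a pose-based graph-cut formulation (the paper's exact energy terms are not given in this listing), the sketch below builds a tiny source/sink flow network whose unary capacities come from a hypothetical pose prior and whose pairwise capacities enforce smoothness, then labels pixels with a minimum cut:

```python
# Illustrative graph cut on a tiny 2x2 "image". Unary terms come from a
# hypothetical pose prior (e.g., distance to a detected skeleton bone);
# pairwise terms encourage neighboring pixels to share a label.
import networkx as nx

def segment(pose_prob, smoothness=1.0):
    """pose_prob[i][j]: prior probability that pixel (i,j) is the body part."""
    h, w = len(pose_prob), len(pose_prob[0])
    g = nx.DiGraph()
    for i in range(h):
        for j in range(w):
            p = pose_prob[i][j]
            g.add_edge("src", (i, j), capacity=p)          # unary: part
            g.add_edge((i, j), "sink", capacity=1.0 - p)   # unary: not part
            for di, dj in [(0, 1), (1, 0)]:                # 4-neighbor pairwise
                ni, nj = i + di, j + dj
                if ni < h and nj < w:
                    g.add_edge((i, j), (ni, nj), capacity=smoothness)
                    g.add_edge((ni, nj), (i, j), capacity=smoothness)
    _, (part, _) = nx.minimum_cut(g, "src", "sink")
    return {p for p in part if p != "src"}   # pixels labeled as the part

print(segment([[0.9, 0.8], [0.2, 0.1]], smoothness=0.3))
```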

Citations: 0
Fast and Incremental 3D Model Renewal for Urban Scenes With Appearance Changes
IF 0.9 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-12-11 | DOI: 10.1002/cav.70004
Yuan Xiong, Zhong Zhou

Urban 3D models with high-resolution details are the basis of various mixed reality and geographic information systems, so fast and accurate urban reconstruction from aerial photographs has attracted intense attention. Existing methods exploit multi-view geometry information from landscape patterns with similar illumination conditions and terrain appearance. In practice, however, urban models become obsolete over time due to human activities, and mainstream reconstruction pipelines rebuild the whole scene even when most of it remains unchanged. This paper proposes a novel wrapping-based incremental modeling framework that reuses existing models and renews them efficiently with new meshes. The paper presents a pose optimization method with illumination-based augmentation and virtual bundle adjustment, together with a high-performance wrapping-based meshing method for fast reconstruction. Experimental results show that the proposed method achieves higher performance and quality than state-of-the-art methods.
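
The paper's virtual bundle adjustment is not spelled out in this listing; as a loose sketch of the underlying idea, the example below refines a camera translation by minimizing reprojection error over known 3D points. The intrinsics K, the identity rotation, and all data are illustrative assumptions:

```python
# Hedged sketch of pose refinement via reprojection-error minimization,
# the kind of step a bundle-adjustment stage performs. Hypothetical data.
import numpy as np
from scipy.optimize import least_squares

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # toy intrinsics

def project(points3d, t):
    """Pinhole projection with identity rotation and translation t."""
    cam = points3d + t                  # world -> camera
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

def residuals(t, points3d, observed2d):
    return (project(points3d, t) - observed2d).ravel()

pts = np.random.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))
true_t = np.array([0.10, -0.05, 0.20])
obs = project(pts, true_t) + np.random.normal(0, 0.3, (20, 2))  # noisy pixels

fit = least_squares(residuals, x0=np.zeros(3), args=(pts, obs))
print("recovered translation:", fit.x)  # should land close to true_t
```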

Citations: 0
Diverse Motions and Responses in Crowd Simulation
IF 0.9 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-26 | DOI: 10.1002/cav.70002
Yiwen Ma, Tingting Liu, Zhen Liu

A challenge in crowd simulation is to generate diverse pedestrian motions in virtual environments. Nowadays, there is a greater emphasis on the diversity and authenticity of pedestrian movements in crowd simulation, while most traditional models primarily focus on collision avoidance and motion continuity. Recent studies have enhanced realism through data-driven approaches that exploit the movement patterns of pedestrians from real data for trajectory prediction. However, they have not taken into account the body-part motions of pedestrians. Differing from these approaches, we innovatively utilize learning-based character motion and physics animation to enhance the diversity of pedestrian motions in crowd simulation. The proposed method can provide a promising avenue for more diverse crowds and is realized by a novel framework that deeply integrates motion synthesis and physics animation with crowd simulation. The framework consists of three main components: the learning-based motion generator, which is responsible for generating diverse character motions; the hybrid simulation, which ensures the physical realism of pedestrian motions; and the velocity-based interface, which assists in integrating navigation algorithms with the motion generator. Experiments have been conducted to verify the effectiveness of the proposed method in different aspects. The visual results demonstrate the feasibility of our approach.
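
As a minimal sketch of the velocity-based interface idea, with the learning-based motion generator replaced by a trivial gait-label stub and all names (navigation_velocity, motion_generator) hypothetical, the navigation layer emits per-agent desired velocities that the motion layer consumes:

```python
# Hedged sketch: a navigation layer hands desired velocities to a motion
# layer through a velocity-based interface. The learned generator from the
# paper is replaced by a simple speed-to-gait lookup stub.
from dataclasses import dataclass

@dataclass
class Agent:
    pos: tuple
    goal: tuple

def navigation_velocity(agent, speed=1.4):
    """Toy steering: unit vector toward the goal, scaled to walking speed."""
    dx, dy = agent.goal[0] - agent.pos[0], agent.goal[1] - agent.pos[1]
    n = (dx * dx + dy * dy) ** 0.5 or 1.0
    return (speed * dx / n, speed * dy / n)

def motion_generator(velocity):
    """Stub for a learned motion model: map speed to a gait label."""
    speed = (velocity[0] ** 2 + velocity[1] ** 2) ** 0.5
    if speed < 0.1:
        return "idle"
    return "walk" if speed < 2.0 else "run"

agents = [Agent((0, 0), (5, 3)), Agent((1, 1), (4, -2))]
for a in agents:
    v = navigation_velocity(a)
    print(v, "->", motion_generator(v))
```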

Citations: 0
A Facial Motion Retargeting Pipeline for Appearance Agnostic 3D Characters
IF 0.9 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-19 | DOI: 10.1002/cav.70001
ChangAn Zhu, Chris Joslin

3D facial motion retargeting has the advantage of capturing and recreating the nuances of human facial motions and speeding up the time-consuming 3D facial animation process. However, the facial motion retargeting pipeline is limited in reflecting the facial motion's semantic information (i.e., meaning and intensity), especially when applied to nonhuman characters. The retargeting quality relies heavily on the target face rig, which requires time-consuming preparation such as 3D scanning of human faces and modeling of blendshapes. In this paper, we propose a facial motion retargeting pipeline that aims to provide fast and semantically accurate retargeting results for diverse characters. The new framework comprises a target face parameterization module based on face anatomy and a compatible source motion interpretation module. From quantitative and qualitative evaluations, we found that the proposed pipeline can naturally recreate the expressions performed by a motion-capture subject with equivalent meanings and intensities; this semantic accuracy extends to the faces of nonhuman characters without labor-intensive preparation.
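
Very loosely, the anatomy-based target parameterization can be pictured as a map from source expression parameters to target rig controls; the sketch below uses a hand-authored linear map, and the control names and matrix values are purely illustrative, not the paper's:

```python
# Hedged sketch: retarget source expression parameters (e.g., action-unit
# activations) to a nonhuman rig via an illustrative linear map M.
import numpy as np

SOURCE_PARAMS = ["jaw_open", "smile", "brow_raise"]
TARGET_CONTROLS = ["beak_open", "cheek_puff", "crest_lift"]

# M[i, j]: how strongly source parameter j drives target control i.
M = np.array([
    [1.0, 0.0, 0.0],   # beak opens with the jaw
    [0.0, 0.8, 0.0],   # cheeks puff with the smile, damped
    [0.0, 0.2, 1.0],   # crest lifts with the brows, plus a hint of smile
])

def retarget(source_weights):
    """Map source weights in [0, 1] to clamped target control values."""
    w = np.clip(np.asarray(source_weights, dtype=float), 0.0, 1.0)
    return dict(zip(TARGET_CONTROLS, np.clip(M @ w, 0.0, 1.0)))

print(retarget([0.7, 0.5, 0.3]))  # e.g., one captured frame of a performance
```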

Citations: 0
Enhancing Front-End Security: Protecting User Data and Privacy in Web Applications
IF 0.9 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-13 | DOI: 10.1002/cav.70003
Oleksandr Tkachenko, Vadim Goncharov, Przemysław Jatkiewicz

Research on this subject remains relevant in light of the rapid development of technology and the emergence of new cybersecurity threats, which require constant updating of knowledge and protection methods. The purpose of the study is to identify effective front-end security methods and technologies that help protect user data and privacy in web applications and sites. A methodology defining the steps and processes for effective front-end security and user data protection is developed. The research identifies the primary security threats, including cross-site scripting (XSS), cross-site request forgery (CSRF), and SQL injection, and evaluates existing front-end security methods such as Content Security Policy (CSP), HTTPS, and authentication and authorization mechanisms. The findings highlight the effectiveness of these measures in mitigating security risks, providing a clear assessment of their advantages and limitations. Key recommendations for developers include the integration of modern security protocols, regular updates, and comprehensive security training. This study offers practical insights for improving front-end security and user data protection in an evolving digital landscape.
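
To make the evaluated protections concrete, here is a small example that attaches CSP, HSTS, and content-type-sniffing headers to every response via a plain WSGI middleware; the policy values are illustrative, not recommendations drawn from the study:

```python
# Illustrative security headers applied by a tiny WSGI middleware.
# Policy values are examples only; tune them to the application.
SECURITY_HEADERS = [
    # CSP: only same-origin scripts, blocking most reflected/stored XSS payloads.
    ("Content-Security-Policy", "default-src 'self'; script-src 'self'"),
    # HSTS: force HTTPS on subsequent visits.
    ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
    # Stop MIME-type sniffing of responses.
    ("X-Content-Type-Options", "nosniff"),
]

def security_middleware(app):
    """Wrap a WSGI app so every response carries the headers above."""
    def wrapped(environ, start_response):
        def sr(status, headers, exc_info=None):
            return start_response(status, headers + SECURITY_HEADERS, exc_info)
        return app(environ, sr)
    return wrapped

def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    return [b"<h1>hello</h1>"]

app = security_middleware(demo_app)
# Serve locally, e.g.:
# from wsgiref.simple_server import make_server
# make_server("", 8000, app).serve_forever()
```

CSRF protection is complementary to these headers and is typically handled with same-site cookies or per-session tokens rather than a response header alone.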

Citations: 0
Virtual Roaming of Cultural Heritage Based on Image Processing
IF 0.9 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-10 | DOI: 10.1002/cav.70000
Junzhe Chen, Xing She, Yuanxin Fan, Wenwen Shao

Focusing on the digital protection and development of cultural heritage, an analysis of trends in cultural heritage digitization reveals the importance of digital technology in this field, as demonstrated by the application of virtual reality (VR) to the protection and development of the Lingjiatan site. The implementation of the Lingjiatan roaming system involves sequential steps: image acquisition, image splicing, and roaming system production. A user test was conducted to evaluate the usability and user experience of the system. The results show that the system operates normally, with smooth interactive functions that allow users to tour the Lingjiatan site virtually and learn about Lingjiatan's culture in this virtual environment. The study further explores the system's potential for site preservation and development and its role in the integration of cultural heritage and tourism.
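
For the image-splicing step, a minimal sketch using OpenCV's stitching API is shown below; the file names are placeholders, and the actual Lingjiatan pipeline may differ:

```python
# Hedged sketch: stitch overlapping site photographs into a panorama that a
# roaming viewer could map onto a sphere or cube for look-around navigation.
import cv2

def build_panorama(paths, out_path="lingjiatan_pano.jpg"):
    images = [cv2.imread(p) for p in paths]
    if any(img is None for img in images):
        raise FileNotFoundError("one or more input photos failed to load")
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, pano = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    cv2.imwrite(out_path, pano)
    return pano

# Placeholder file names for overlapping photographs of the site:
# build_panorama(["site_01.jpg", "site_02.jpg", "site_03.jpg"])
```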

Citations: 0
PainterAR: A Self-Painting AR Interface for Mobile Devices
IF 0.9 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-07 | DOI: 10.1002/cav.2296
Yuan Ma, Yinghan Shi, Lizhi Zhao, Xuequan Lu, Been-Lirn Duh, Meili Wang

Painting is a complex and creative process that involves the use of various drawing skills to create artworks. Training artificial intelligence models to imitate this process is referred to as neural painting. To enable ordinary people to engage in the process of painting, we propose PainterAR, a novel interface that renders any painting stroke by stroke in an immersive and realistic augmented reality (AR) environment. PainterAR is composed of two components: the neural painting model and the AR interface. For the neural painting model, unlike previous models, we introduce the Kullback–Leibler divergence to replace the Wasserstein distance used in the baseline paint transformer model, which addresses the important problem of handling strokes of different scales (large or small) during painting. We then design an interactive AR interface that allows users to upload an image and displays the creation process of the neural painting model on a virtual drawing board. Experiments demonstrate that the paintings generated by our improved neural painting model are more realistic and vivid than those of previous neural painting models. The user study shows that users prefer to control the painting process interactively in our AR environment.
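
As a hedged illustration of the loss swap described above (the real paint-transformer objective is more involved), the snippet below computes a KL divergence between predicted and target stroke-parameter distributions, where the ten stroke-size bins are an invented toy discretization:

```python
# Hedged sketch: KL divergence between predicted and target stroke-parameter
# distributions, standing in for the Wasserstein term of the baseline.
import torch
import torch.nn.functional as F

def kl_stroke_loss(pred_logits, target_probs):
    """KL(target || pred); F.kl_div expects log-probabilities as input."""
    log_pred = F.log_softmax(pred_logits, dim=-1)
    return F.kl_div(log_pred, target_probs, reduction="batchmean")

pred = torch.randn(8, 10, requires_grad=True)    # 10 toy stroke-size bins
target = F.softmax(torch.randn(8, 10), dim=-1)   # ground-truth histogram
loss = kl_stroke_loss(pred, target)
loss.backward()                                  # gradients flow to pred
print(float(loss))
```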

Citations: 0
Decoupled Edge Physics Algorithms for Collaborative XR Simulations
IF 0.9 | CAS Tier 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-11-03 | DOI: 10.1002/cav.2294
George Kokiadis, Antonis Protopsaltis, Michalis Morfiadakis, Nick Lydatakis, George Papagiannakis

This work proposes a novel approach to transforming any modern game engine pipeline for optimized performance and enhanced user experiences in extended reality (XR) environments. Decoupling the physics engine from the game engine pipeline and using a client-server N−1 architecture creates a scalable solution, efficiently serving multiple graphics clients on head-mounted displays (HMDs) with a single physics engine on edge-cloud infrastructure. This approach ensures better synchronization in multiplayer scenarios without introducing overhead in single-player experiences, maintaining session continuity despite changes in user participation. Relocating the physics engine to an edge or cloud node reduces strain on local hardware, dedicating more resources to high-quality rendering and unlocking the full potential of untethered HMDs. We present four algorithms that decouple the physics engine, increasing frame rates and Quality of Experience (QoE) in VR simulations and supporting advanced interactions, numerous physics objects, and multiuser sessions with over 100 concurrent users. Incorporating a Geometric Algebra interpolator reduces inter-calls between dissected parts, maintaining QoE and easing network stress. Experimental validation, with more than 100 concurrent users, 10,000 physics objects, and softbody simulations, confirms the technical viability of the proposed architecture, showcasing transformative capabilities for more immersive and collaborative XR applications without compromising performance.
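
As a minimal sketch of the N−1 idea (one authoritative physics loop serving several render clients, with the network send reduced to a method call and all class names hypothetical), clients interpolate between the last two state snapshots so rendering can outpace physics ticks:

```python
# Hedged sketch, not the paper's system: a single physics loop broadcasts
# state snapshots; N render clients interpolate between snapshots.
class PhysicsServer:
    def __init__(self):
        self.t, self.pos, self.vel = 0.0, 0.0, 1.0
        self.clients = []

    def step(self, dt):
        self.t += dt
        self.pos += self.vel * dt                 # trivial 1-D "physics"
        snapshot = {"t": self.t, "pos": self.pos}
        for c in self.clients:
            c.receive(snapshot)                   # stands in for a network send

class RenderClient:
    def __init__(self):
        self.prev = self.curr = None

    def receive(self, snap):
        self.prev, self.curr = self.curr, snap

    def render(self, t):
        """Interpolate between snapshots so frames outpace physics ticks."""
        if not self.prev:
            return self.curr["pos"] if self.curr else 0.0
        a, b = self.prev, self.curr
        u = min(max((t - a["t"]) / (b["t"] - a["t"]), 0.0), 1.0)
        return a["pos"] + u * (b["pos"] - a["pos"])

server = PhysicsServer()
server.clients = [RenderClient() for _ in range(3)]   # one server, N clients
for _ in range(4):
    server.step(dt=0.1)                               # 10 Hz physics tick
print(server.clients[0].render(t=0.35))               # interpolated position
```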

Citations: 0