
Computer Animation and Virtual Worlds: Latest Publications

Crowd evacuation simulation based on hierarchical agent model and physics-based character control
IF 1.1 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-05-27 | DOI: 10.1002/cav.2263
Jianming Ye, Zhen Liu, Tingting Liu, Yanhui Wu, Yuanyi Wang

Crowd evacuation has gained increasing attention in recent years. Agent-based methods have shown a superior capability to simulate the complex behaviors that arise during evacuation. For agent modeling, most existing methods only consider the decision process but ignore the detailed physical motion. In this article, we propose a hierarchical framework for crowd evacuation simulation that combines an agent decision model with an agent motion model. In the decision model, we integrate emotional contagion and scene information to determine global path planning and local collision avoidance. In the motion model, we introduce a physics-based character control method and control agent motion using deep reinforcement learning. Based on the decision strategy, the decision model uses a signal to control agent motion in the motion model. Compared with existing methods, our framework can simulate physical interactions between agents and the environment. Simulation results demonstrate that our framework can reproduce crowd evacuation with physical fidelity.
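As a rough illustration of the hierarchical design described above (not the authors' implementation), the sketch below separates a decision layer, which emits a goal-velocity signal from global path planning, local avoidance, and an emotion term, from a motion layer standing in for the physics-based controller learned with reinforcement learning. All names and the plain integration step are assumptions.

```python
import numpy as np

class DecisionModel:
    """Hypothetical decision layer: global goal plus local collision avoidance."""
    def __init__(self, exit_pos, panic=0.0):
        self.exit_pos = np.asarray(exit_pos, dtype=float)
        self.panic = panic  # stand-in for an emotional-contagion state in [0, 1]

    def signal(self, pos, neighbors, speed=1.4):
        to_exit = self.exit_pos - pos
        direction = to_exit / (np.linalg.norm(to_exit) + 1e-9)  # global term
        for n in neighbors:                                     # local avoidance
            offset = pos - n
            dist = np.linalg.norm(offset)
            if dist < 1.0:
                direction += offset / (dist**2 + 1e-9)
        direction /= np.linalg.norm(direction) + 1e-9
        return direction * speed * (1.0 + self.panic)  # panic raises commanded speed

class MotionModel:
    """Placeholder for the physics-based RL controller: plain integration here;
    a trained policy would instead output joint torques for a simulated body."""
    def step(self, pos, vel_cmd, dt=0.1):
        return pos + vel_cmd * dt

decision = DecisionModel(exit_pos=[10.0, 0.0], panic=0.3)
motion = MotionModel()
pos = np.array([0.0, 0.0])
for _ in range(5):
    cmd = decision.signal(pos, neighbors=[np.array([0.5, 0.2])])
    pos = motion.step(pos, cmd)
print(pos)
```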

Citations: 0
Two-particle debris flow simulation based on SPH
IF 1.1 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-05-27 | DOI: 10.1002/cav.2261
Jiaxiu Zhang, Meng Yang, Xiaomin Li, Qun'ou Jiang, Heng Zhang, Weiliang Meng

Debris flow is a highly destructive natural disaster, necessitating accurate simulation and prediction. Existing simulation methods tend to be overly simplified, neglecting three-dimensional complexity and multiphase fluid interactions, and lacking comprehensive consideration of soil conditions. We propose a novel two-particle debris flow simulation method based on smoothed particle hydrodynamics (SPH) for enhanced accuracy. Our method employs a two-particle model that couples debris flow dynamics with SPH to simulate fluid-solid interaction effectively; it accounts for various soil factors, divides terrain into variable and fixed areas, and incorporates soil impact factors for realistic simulation. By dynamically updating positions and reconstructing surfaces, and by employing GPU and hash-lookup acceleration, we achieve accurate simulation with significantly improved efficiency. Experimental results validate the effectiveness of our method across different conditions, making it valuable for debris flow risk assessment in natural disaster management.
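The abstract builds on standard SPH machinery; as background, here is a minimal sketch of the common 3D cubic-spline kernel and a hash-grid neighbor lookup for the density summation. This is generic SPH, not the paper's two-particle model, and every name is illustrative.

```python
import numpy as np
from collections import defaultdict

def cubic_spline_W(r, h):
    """Standard 3D cubic-spline SPH kernel with support radius h."""
    q = r / h
    sigma = 8.0 / (np.pi * h**3)
    if q <= 0.5:
        return sigma * (6.0 * (q**3 - q**2) + 1.0)
    elif q <= 1.0:
        return sigma * 2.0 * (1.0 - q)**3
    return 0.0

def hash_grid(positions, h):
    """Bucket particle indices by integer cell; cell size equals h."""
    grid = defaultdict(list)
    for i, p in enumerate(positions):
        grid[tuple((p // h).astype(int))].append(i)
    return grid

def densities(positions, masses, h):
    grid = hash_grid(positions, h)
    rho = np.zeros(len(positions))
    for i, p in enumerate(positions):
        cell = (p // h).astype(int)
        # Only the 27 surrounding cells can hold particles within range h.
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in grid.get((cell[0] + dx, cell[1] + dy, cell[2] + dz), ()):
                        r = np.linalg.norm(p - positions[j])
                        if r < h:
                            rho[i] += masses[j] * cubic_spline_W(r, h)
    return rho

pos = np.random.rand(100, 3)
print(densities(pos, masses=np.full(100, 0.01), h=0.2)[:5])
```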

Citations: 0
Multiagent trajectory prediction with global-local scene-enhanced social interaction graph network
IF 1.1 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-05-27 | DOI: 10.1002/cav.2237
Xuanqi Lin, Yong Zhang, Shun Wang, Xinglin Piao, Baocai Yin

Trajectory prediction is essential for intelligent autonomous systems such as autonomous driving, behavior analysis, and service robotics. Deep learning has emerged as the predominant technique due to its superior capability for modeling trajectory data. However, deep learning-based models face challenges in effectively utilizing scene information and accurately modeling agent interactions, largely due to the complexity and uncertainty of real-world scenarios. To mitigate these challenges, this study presents a novel multiagent trajectory prediction model, termed the global-local scene-enhanced social interaction graph network (GLSESIGN), which incorporates two pivotal strategies: global-local scene information utilization and a social adaptive attention graph network. The model hierarchically learns scene information relevant to multiple intelligent agents, enhancing its understanding of complex scenes, and adaptively captures social interactions through sparse graph structures, improving adaptability to diverse interaction patterns. By flexibly modeling these intricate interactions, it accurately predicts the future trajectories of multiple agents. Experimental validation on public datasets substantiates the efficacy of the proposed model, which addresses the complexity and uncertainty of multiagent trajectory prediction and provides more accurate predictive support in practical application scenarios.
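As one way to picture a social attention layer over a sparse interaction graph (an assumption in the spirit of the abstract, not GLSESIGN itself), the toy below masks dot-product attention between agent embeddings to pairs of agents within a contact radius.

```python
import torch
import torch.nn.functional as F

def sparse_social_attention(h, pos, radius=5.0):
    """Attention over agent embeddings h (N, D), masked to nearby agents.

    Edges exist only between agents within `radius`, giving a sparse
    interaction graph; attention weights are computed only along edges.
    """
    N, D = h.shape
    dist = torch.cdist(pos, pos)                  # (N, N) pairwise distances
    mask = dist < radius                          # sparse adjacency (incl. self)
    scores = h @ h.t() / D**0.5                   # dot-product affinities
    scores = scores.masked_fill(~mask, float('-inf'))
    attn = F.softmax(scores, dim=-1)              # rows sum to 1 over neighbors
    return attn @ h                               # aggregated social context

h = torch.randn(6, 16)          # 6 agents, 16-dim state embeddings
pos = torch.rand(6, 2) * 10.0   # agent positions in a 10 m x 10 m scene
print(sparse_social_attention(h, pos).shape)      # torch.Size([6, 16])
```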

Citations: 0
Highlight mask-guided adaptive residual network for single image highlight detection and removal
IF 1.1 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-05-27 | DOI: 10.1002/cav.2271
Shuaibin Wang, Li Li, Juan Wang, Tao Peng, Zhenwei Li

Specular highlight detection and removal is a challenging task. Although various methods exist for removing specular highlights, they often fail to preserve the color and texture details of objects after highlight removal, owing to the high brightness and nonuniform distribution of highlights. Furthermore, when processing scenes with complex highlight properties, existing methods frequently encounter performance bottlenecks that restrict their applicability. We therefore introduce a highlight mask-guided adaptive residual network (HMGARN). HMGARN comprises three main components: detection-net, an adaptive-removal network (AR-Net), and reconstruct-net. Specifically, detection-net accurately predicts a highlight mask from a single RGB image. The predicted mask is then fed into AR-Net, which adaptively guides the model to remove specular highlights and estimate a highlight-free image. Subsequently, reconstruct-net progressively refines this result, removing any residual specular highlights and constructing the final high-quality, highlight-free image. We evaluated our method on the public SHIQ dataset and confirmed its superiority through comparative experiments.
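A mask-guided residual update can be pictured as gating the residual branch by the predicted highlight mask, so the network edits highlighted pixels more aggressively than clean ones. The block below is a guess at that general pattern, not HMGARN's actual architecture.

```python
import torch
import torch.nn as nn

class MaskGuidedResBlock(nn.Module):
    """Illustrative residual block whose update is gated by a highlight mask."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feat, mask):
        # mask: (B, 1, H, W) in [0, 1], broadcast over feature channels;
        # the residual correction is strongest where the mask says "highlight".
        return feat + mask * self.body(feat)

block = MaskGuidedResBlock(64)
feat = torch.randn(1, 64, 32, 32)
mask = torch.rand(1, 1, 32, 32)
print(block(feat, mask).shape)  # torch.Size([1, 64, 32, 32])
```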

Citations: 0
Key-point-guided adaptive convolution and instance normalization for continuous transitive face reenactment of any person
IF 1.1 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-05-24 | DOI: 10.1002/cav.2256
Shibiao Xu, Miao Hua, Jiguang Zhang, Zhaohui Zhang, Xiaopeng Zhang

Face reenactment technology is widely used in many applications. However, the reconstructions produced by existing methods are often not realistic enough. This paper therefore proposes a progressive face reenactment method. First, to make full use of the key information, we propose adaptive convolution and instance normalization that encode the key information into all learnable parameters of the network, including the convolution kernel weights and the means and variances in the normalization layer. Second, we present continuous transitive facial expression generation: because all network weights are generated from the key points, the generated image changes continuously with them. Third, in contrast to classical convolution, we apply a combination of depthwise and pointwise convolutions, which greatly reduces the number of weights and improves training efficiency. Finally, we extend the proposed face reenactment method to face editing. Comprehensive experiments demonstrate the effectiveness of the proposed method, which generates clearer and more realistic faces for any person and is more generic and applicable than other methods.
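One plausible reading of key-point-guided normalization plus the depthwise and pointwise combination is sketched below: an MLP maps landmark coordinates to per-channel scale and shift for an instance-normalization layer, and a depthwise convolution is followed by a 1x1 pointwise convolution. The module names, the 68-landmark convention, and the MLP conditioning are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class KeypointAdaIN(nn.Module):
    """Sketch: instance normalization whose scale/shift come from key points."""
    def __init__(self, channels, n_keypoints):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.mlp = nn.Linear(n_keypoints * 2, channels * 2)

    def forward(self, feat, keypoints):
        # Map flattened (x, y) landmarks to per-channel gain and bias.
        gamma, beta = self.mlp(keypoints.flatten(1)).chunk(2, dim=1)
        out = self.norm(feat)
        return out * (1 + gamma[..., None, None]) + beta[..., None, None]

def depthwise_separable(cin, cout):
    """Depthwise conv followed by pointwise conv, as the abstract describes."""
    return nn.Sequential(
        nn.Conv2d(cin, cin, 3, padding=1, groups=cin),  # per-channel spatial filter
        nn.Conv2d(cin, cout, 1),                        # 1x1 channel mixing
    )

feat = torch.randn(1, 64, 32, 32)
kps = torch.rand(1, 68, 2)  # 68 facial landmarks, a common convention
adain = KeypointAdaIN(64, 68)
print(depthwise_separable(64, 64)(adain(feat, kps)).shape)
```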

Citations: 0
SocialVis: Dynamic social visualization in dense scenes via real-time multi-object tracking and proximity graph construction
IF 1.1 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-05-24 | DOI: 10.1002/cav.2272
Bowen Li, Wei Li, Jingqi Wang, Weiliang Meng, Jiguang Zhang, Xiaopeng Zhang

To monitor and assess social dynamics and risks at large gatherings, we propose “SocialVis,” a comprehensive monitoring system based on multi-object tracking and graph analysis techniques. Our SocialVis includes a camera detection system that operates in two modes: a real-time mode, which enables participants to track and identify close contacts instantly, and an offline mode that allows for more comprehensive post-event analysis. The dual functionality not only aids in preventing mass gatherings or overcrowding by enabling the issuance of alerts and recommendations to organizers, but also allows for the generation of proximity-based graphs that map participant interactions, thereby enhancing the understanding of social dynamics and identifying potential high-risk areas. It also provides tools for analyzing pedestrian flow statistics and visualizing paths, offering valuable insights into crowd density and interaction patterns. To enhance system performance, we designed the SocialDetect algorithm in conjunction with the BYTE tracking algorithm. This combination is specifically engineered to improve detection accuracy and minimize ID switches among tracked objects, leveraging the strengths of both algorithms. Experiments on both public and real-world datasets validate that our SocialVis outperforms existing methods, showing 1.2% improvement in detection accuracy and 45.4% reduction in ID switches in dense pedestrian scenarios.
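Proximity-graph construction of the kind the abstract describes can be reduced to thresholding pairwise distances between tracked IDs in one frame; the sketch below shows that generic construction (the threshold value and ground-plane coordinates are assumptions, not the paper's exact algorithm).

```python
from itertools import combinations

def proximity_graph(tracks, threshold=2.0):
    """Build an undirected close-contact graph from tracked positions.

    tracks: {track_id: (x, y)} for one frame; an edge links any pair of
    IDs closer than `threshold` (e.g., meters in ground-plane coordinates).
    """
    edges = set()
    for (ida, pa), (idb, pb) in combinations(tracks.items(), 2):
        if (pa[0] - pb[0])**2 + (pa[1] - pb[1])**2 < threshold**2:
            edges.add((ida, idb))
    return edges

frame = {1: (0.0, 0.0), 2: (1.0, 0.5), 3: (8.0, 8.0)}
print(proximity_graph(frame))  # {(1, 2)}: only IDs 1 and 2 are in contact
```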

Citations: 0
DSANet: A lightweight hybrid network for human action recognition in virtual sports
IF 1.1 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-05-24 | DOI: 10.1002/cav.2274
Zhiyong Xiao, Feng Yu, Li Liu, Tao Peng, Xinrong Hu, Minghua Jiang

Human activity recognition (HAR) has significant potential in virtual sports applications. However, current HAR networks often prioritize high accuracy at the expense of practical requirements, resulting in networks with large parameter counts and high computational complexity, which poses challenges for real-time, efficient recognition. This paper proposes a hybrid lightweight DSANet network designed to address real-time performance and algorithmic complexity. The network uses a multi-scale depthwise separable convolutional (multi-scale DWCNN) module to extract spatial information and a multi-layer gated recurrent unit (multi-layer GRU) module for temporal feature extraction. It also incorporates an improved channel-spatial attention module, RCSFA, to enhance feature extraction. By leveraging channel, spatial, and temporal information, the network achieves high accuracy with a small number of parameters. Experimental evaluations on the UCIHAR, WISDM, and PAMAP2 datasets show that, compared with state-of-the-art networks, it reduces parameter counts while achieving accuracy rates of 97.55%, 98.99%, and 98.67%, respectively. This research provides valuable insights for virtual sports and presents a novel network for deploying real-time activity recognition on embedded devices.
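A loose sketch of the recipe named in the abstract (parallel depthwise-separable 1D convolutions at several kernel scales feeding a GRU) is given below; the layer sizes, kernel choices, and the omitted attention module make this an assumption-laden toy, not DSANet.

```python
import torch
import torch.nn as nn

class TinyDSANet(nn.Module):
    """Multi-scale depthwise-separable 1D convs over sensor channels, then a GRU."""
    def __init__(self, channels=6, hidden=64, classes=6, kernels=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(channels, channels, k, padding=k // 2, groups=channels),
                nn.Conv1d(channels, hidden, 1),  # pointwise channel mixing
                nn.ReLU(inplace=True),
            )
            for k in kernels
        )
        self.gru = nn.GRU(hidden * len(kernels), hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)

    def forward(self, x):            # x: (batch, channels, time)
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        seq = feats.transpose(1, 2)  # GRU expects (batch, time, features)
        _, h = self.gru(seq)
        return self.head(h[-1])      # logits per activity class

net = TinyDSANet()
print(net(torch.randn(2, 6, 128)).shape)  # torch.Size([2, 6])
```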

Citations: 0
FrseGAN: Free-style editable facial makeup transfer based on GAN combined with transformer
IF 1.1 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-05-24 | DOI: 10.1002/cav.2235
Weifeng Xu, Pengjie Wang, Xiaosong Yang

Makeup in real life varies widely and is highly personalized, which presents a key challenge for makeup transfer. Most previous makeup transfer techniques divide the face into distinct regions for color transfer, frequently neglecting details such as eyeshadow and facial contours. Given the success of Transformers in various visual tasks, we believe this technology holds great potential for addressing differences in pose, expression, and occlusion. To explore this, we propose a novel pipeline that combines a well-designed convolutional neural network with a Transformer, leveraging the advantages of both for high-quality facial makeup transfer. This enables hierarchical extraction of local and global facial features and encodes facial attributes into pyramid feature maps. Furthermore, a low-frequency information fusion module is proposed to address large pose and expression variations between the source and reference faces by extracting makeup features from the reference and adapting them to the source. Experiments demonstrate that our method produces makeup faces that are visually more detailed and realistic, yielding superior results.
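The low-frequency fusion idea can be illustrated with a cheap low-pass filter: downsample and upsample to isolate each image's smooth layer, then combine the reference's low frequencies (where makeup color mostly lives) with the source's high-frequency structure. This is one plausible reading, not the paper's module; the pooling scale is arbitrary.

```python
import torch
import torch.nn.functional as F

def low_frequency_fusion(source, reference, scale=8):
    """Keep the source's detail layer, take the reference's smooth color layer."""
    def low_pass(img):
        small = F.avg_pool2d(img, scale)          # crude low-pass via pooling
        return F.interpolate(small, size=img.shape[-2:],
                             mode='bilinear', align_corners=False)

    source_high = source - low_pass(source)       # identity/structure detail
    return source_high + low_pass(reference)      # reference's makeup tones

src = torch.rand(1, 3, 256, 256)
ref = torch.rand(1, 3, 256, 256)
print(low_frequency_fusion(src, ref).shape)       # torch.Size([1, 3, 256, 256])
```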

Citations: 0
GAN-Based Multi-Decomposition Photo Cartoonization
IF 1.1 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-05-23 | DOI: 10.1002/cav.2248
Wenqing Zhao, Jianlin Zhu, Jin Huang, Ping Li, Bin Sheng

Background

Cartoon images play a vital role in film production, scientific and educational animation, video games, and other fields, and are one of the key visual forms of artistic creation. However, since hand-crafted cartoon images demand a great deal of time and effort from professional artists, it is desirable to automatically transform real-world images into cartoon images of different styles. Although cartoon styles vary from artist to artist, cartoon images generally share unique characteristics: they are highly simplified and abstract, with clear edges, smooth color shading, and relatively simple textures. Existing image cartoonization methods, however, tend to suffer from several problems when performing style transfer: (1) the generated images lack obvious cartoon-style textures; and (2) they are prone to structural confusion, color artifacts, and loss of the original image content. Striking a good balance between style transfer and content preservation therefore remains a major challenge in image cartoonization.

Methods

In this paper, we propose a GAN-based multi-attention mechanism for image cartoonization to address the above issues. The method combines the residual blocks used to extract deep features in the generator with an attention mechanism, and further strengthens the generative model's perception of cartoon images through the attention module's adaptive feature correction, improving the cartoon characteristics of the generated images. At the same time, we introduce an attention mechanism into the convolution blocks of the discriminator to further reduce the visual-quality degradation caused by the style transfer process. By introducing attention into both the generator and discriminator of the generative adversarial network, our method enables the generated images to exhibit obvious cartoon-style features while effectively improving visual quality.
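Squeeze-and-excitation-style channel attention inside a residual block is one common way to realize "residual features plus adaptive feature correction"; the block below sketches that pattern as an assumption, not the paper's exact module.

```python
import torch
import torch.nn as nn

class AttentionResBlock(nn.Module):
    """Residual block with squeeze-and-excitation channel attention."""
    def __init__(self, channels=64, reduction=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attn = nn.Sequential(          # squeeze: global pooling; excite: MLP
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        feat = self.body(x)
        return x + feat * self.attn(feat)   # reweight channels, then add skip

print(AttentionResBlock()(torch.randn(1, 64, 64, 64)).shape)
```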

Results

Extensive quantitative, qualitative, and ablation experiments demonstrate the advantages of our method in image cartoonization and the contribution of each module.

Citations: 0
Momentum-preserving inversion alleviation for elastic material simulation
IF 1.1 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-05-17 | DOI: 10.1002/cav.2249
Heejo Jeong, Seung-wook Kim, JaeHyun Lee, Kiwon Um, Min Hyung Kee, JungHyun Han

This paper proposes a novel method that enhances optimization-based elastic body solvers. The proposed method tackles the element inversion problem, which is prevalent in the prediction-projection approach to the numerical simulation of elastic bodies. At the prediction stage, our method alleviates inversions so that the subsequent projection solver benefits in both stability and efficiency. To prevent excessive suppression of the predicted inertial motion during alleviation, we introduce a velocity decomposition method and adapt only the non-rigid motion while preserving the rigid motion, that is, the linear and angular momenta. Because the inertial motion is respected in the prediction stage, our method produces lively motion while keeping the entire simulation more stable. Experiments demonstrate that our alleviation method successfully stabilizes the simulation and improves efficiency, particularly when large deformations hamper the solver.
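The velocity decomposition can be made concrete with standard rigid-mode extraction: compute the center-of-mass velocity from linear momentum and the angular velocity from the angular momentum and inertia tensor, then damp only the residual. The sketch below follows that textbook construction under the abstract's description; the damping factor and particle setup are illustrative, not the authors' code.

```python
import numpy as np

def damp_nonrigid(x, v, m, damping=0.5):
    """Scale only the non-rigid part of a particle velocity field.

    Extracts the best-fit rigid motion (linear + angular momentum) and damps
    only the residual, so both momenta are preserved exactly.
    """
    M = m.sum()
    x_cm = (m[:, None] * x).sum(0) / M
    v_cm = (m[:, None] * v).sum(0) / M           # linear momentum / total mass
    r = x - x_cm
    # Angular momentum L and inertia tensor I about the center of mass.
    L = (m[:, None] * np.cross(r, v - v_cm)).sum(0)
    I = np.zeros((3, 3))
    for mi, ri in zip(m, r):
        I += mi * ((ri @ ri) * np.eye(3) - np.outer(ri, ri))
    omega = np.linalg.solve(I, L)                # rigid angular velocity
    v_rigid = v_cm + np.cross(omega, r)          # rigid velocity field
    return v_rigid + (1.0 - damping) * (v - v_rigid)

x = np.random.rand(50, 3)
v = np.random.randn(50, 3)
m = np.ones(50)
out = damp_nonrigid(x, v, m)
# Linear and angular momenta match before and after (up to round-off):
print(np.allclose((m[:, None] * v).sum(0), (m[:, None] * out).sum(0)),
      np.allclose((m[:, None] * np.cross(x, v)).sum(0),
                  (m[:, None] * np.cross(x, out)).sum(0)))
```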

Citations: 0