
The Visual Computer: Latest Publications

Point cloud upsampling via a coarse-to-fine network with transformer-encoder
Pub Date : 2024-06-21 DOI: 10.1007/s00371-024-03535-8
Yixi Li, Yanzhe Liu, Rong Chen, Hui Li, Na Zhao

Point clouds provide a common geometric representation for burgeoning 3D graphics and vision tasks. To deal with the sparse, noisy and non-uniform output of most 3D data acquisition devices, this paper presents a novel coarse-to-fine learning framework that incorporates the Transformer-encoder and positional feature fusion. Its long-range dependencies with sensitive positional information allow robust feature embedding and fusion of points, especially noisy elements and non-regular outliers. The proposed network consists of a Coarse Points Generator and a Points Offsets Refiner. The generator combines a multi-feature Transformer-encoder with an EdgeConv-based feature reshaping to infer coarse but dense upsampled point sets, whereas the refiner further learns the positions of the upsampled points based on a multi-feature fusion strategy that adaptively adjusts the fused feature weights of the coarse points and point offsets. Extensive qualitative and quantitative results on both synthetic and real-scanned datasets demonstrate the superiority of our method over the state of the art. Our code is publicly available at https://github.com/Superlyxi/CFT-PU.
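
As a concrete starting point, the sketch below shows the coarse-to-fine idea in PyTorch: a generator that runs a Transformer encoder over per-point features and expands them into a dense coarse set, followed by a refiner that predicts per-point offsets. It is a minimal illustration only; module names and sizes are placeholders, not the authors' CFT-PU implementation (which is available at the repository linked above).

```python
# Minimal coarse-to-fine upsampling sketch (illustrative, not the authors' CFT-PU code).
import torch
import torch.nn as nn

class CoarseGenerator(nn.Module):
    """Embeds N input points, runs a Transformer encoder, and expands to r*N coarse points."""
    def __init__(self, ratio=4, dim=64):
        super().__init__()
        self.embed = nn.Linear(3, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.expand = nn.Linear(dim, ratio * 3)      # reshape features into r coarse points per input point

    def forward(self, xyz):                          # xyz: (B, N, 3)
        feat = self.encoder(self.embed(xyz))         # long-range dependencies via self-attention
        return self.expand(feat).reshape(xyz.shape[0], -1, 3)   # (B, r*N, 3)

class OffsetRefiner(nn.Module):
    """Predicts per-point offsets that refine the coarse upsampled set."""
    def __init__(self, dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, 3))

    def forward(self, coarse):
        return coarse + self.mlp(coarse)             # refined points = coarse points + learned offsets

if __name__ == "__main__":
    pts = torch.rand(2, 256, 3)                      # a sparse, noisy input patch
    coarse = CoarseGenerator()(pts)
    refined = OffsetRefiner()(coarse)
    print(coarse.shape, refined.shape)               # torch.Size([2, 1024, 3]) twice
```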

Citations: 0
ZMNet: feature fusion and semantic boundary supervision for real-time semantic segmentation
Pub Date : 2024-06-20 DOI: 10.1007/s00371-024-03448-6
Ya Li, Ziming Li, Huiwang Liu, Qing Wang

The feature fusion module is an essential component of real-time semantic segmentation networks, bridging the semantic gap among different feature layers. However, many networks are inefficient at multi-level feature fusion. In this paper, we propose a simple yet effective decoder that consists of a series of multi-level attention feature fusion modules (MLA-FFMs) aimed at fusing multi-level features in a top-down manner. Specifically, MLA-FFM is a lightweight attention-based module, so it not only efficiently fuses features to bridge the semantic gap at different levels but can also be applied to real-time segmentation tasks. In addition, to address the low accuracy of existing real-time segmentation methods at semantic boundaries, we propose a semantic boundary supervision module (BSM) that improves accuracy by supervising the prediction of semantic boundaries. Extensive experiments demonstrate that our network achieves a state-of-the-art trade-off between segmentation accuracy and inference speed on both the Cityscapes and CamVid datasets. On a single NVIDIA GeForce 1080Ti GPU, our model achieves 77.4% mIoU at 97.5 FPS on the Cityscapes test dataset and 74% mIoU at 156.6 FPS on the CamVid test dataset, which is superior to most state-of-the-art real-time methods.
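
The sketch below illustrates the general idea of a lightweight top-down attention fusion step in PyTorch: the high-level map is upsampled, a channel gate derived from it re-weights the low-level map, and the two are merged. It is an assumed, simplified stand-in, not the published MLA-FFM code.

```python
# A minimal top-down attention fusion sketch in PyTorch (illustrative, not ZMNet's MLA-FFM).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    """Fuses a high-resolution low-level map with a low-resolution high-level map
    using a lightweight channel-attention gate."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                 # global semantic context
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, low, high):                    # low: (B,C,H,W), high: (B,C,h,w), h < H
        high = F.interpolate(high, size=low.shape[2:], mode="bilinear", align_corners=False)
        w = self.gate(high)                          # per-channel weights from semantic features
        return self.proj(low * w + high)             # gated low-level detail + upsampled semantics

if __name__ == "__main__":
    low, high = torch.rand(1, 64, 64, 64), torch.rand(1, 64, 32, 32)
    print(AttentionFusion(64)(low, high).shape)      # torch.Size([1, 64, 64, 64])
```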

Citations: 0
UTE-CrackNet: transformer-guided and edge feature extraction U-shaped road crack image segmentation
Pub Date : 2024-06-20 DOI: 10.1007/s00371-024-03531-y
Huaping Zhou, Bin Deng, Kelei Sun, Shunxiang Zhang, Yongqi Zhang

Cracks in the road surface can cause significant harm, and road crack detection, segmentation, and timely repair help reduce these risks. Methods based on convolutional neural networks still suffer from problems such as fuzzy edge information, small receptive fields, and insufficient perception of local information. To solve these problems, this paper presents UTE-CrackNet, a novel road crack segmentation network that aims to increase the generalization ability and segmentation accuracy of road crack segmentation networks. Our design adopts a U-shaped structure that enables the model to learn richer features. Given the lack of skip connections, we design a multi-convolution coordinate attention block to reduce semantic differences in cascaded features and a gated residual attention block to capture more local features. Because most cracks have strip-like characteristics, we propose the transformer edge atlas spatial pyramid pooling module, which applies a transformer module and an edge detection module to the network so that it can better capture the edge and context information of the crack region. In addition, we use focal loss in training to address the imbalance between positive and negative samples. Experiments were conducted on four publicly available road crack segmentation datasets: Rissbilder, GAPS384, CFD, and CrackTree200. The experimental results show that the network outperforms standard road crack segmentation models. The code and models are publicly available at https://github.com/mushan0929/UTE-crackNet.
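
Since part of the improvement is attributed to focal loss for the positive/negative sample imbalance, a standard binary focal loss is sketched below in PyTorch; the alpha and gamma values are common defaults, not necessarily those used in the paper.

```python
# Standard binary focal loss sketch (PyTorch); hyperparameters are illustrative defaults.
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """logits, targets: (B, 1, H, W); targets in {0, 1} (crack vs. background)."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)          # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()    # down-weights easy background pixels

if __name__ == "__main__":
    logits = torch.randn(2, 1, 64, 64)
    targets = (torch.rand(2, 1, 64, 64) > 0.95).float()  # sparse crack pixels
    print(binary_focal_loss(logits, targets).item())
```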

Citations: 0
DAMAF: dual attention network with multi-level adaptive complementary fusion for medical image segmentation
Pub Date : 2024-06-20 DOI: 10.1007/s00371-024-03543-8
Yueqian Pan, Qiaohong Chen, Xian Fang

Transformers have been widely applied in medical image segmentation due to their ability to establish excellent long-distance dependencies through self-attention. However, relying solely on self-attention makes it difficult to effectively extract rich spatial and channel information from adjacent levels. To address this issue, we propose a novel dual attention model based on a multi-level adaptive complementary fusion mechanism, namely DAMAF. We first employ efficient attention and transpose attention to synchronously capture robust spatial and channel cues in a lightweight manner. Then, we design a multi-level fusion attention block to expand the complementarity of features at each level and enrich the contextual information, thereby gradually enhancing the correlation between high-level and low-level features. In addition, we develop a multi-level skip attention block to strengthen the adjacent-level information of the model through mutual fusion, which improves the feature expression ability of the model. Extensive experiments on the Synapse, ACDC, and ISIC-2018 datasets demonstrate that the proposed DAMAF achieves significantly superior results compared to competitors. Our code is publicly available at https://github.com/PanYging/DAMAF.
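
As context for the "transpose attention" mentioned above, the sketch below computes attention across channels rather than spatial positions in PyTorch; it is an illustrative block under assumed shapes, not the DAMAF implementation.

```python
# Illustrative channel ("transpose") attention in PyTorch: the C x C attention map mixes
# channels instead of spatial positions. A sketch, not the DAMAF code.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.qkv = nn.Conv2d(channels, channels * 3, 1)
        self.out = nn.Conv2d(channels, channels, 1)

    def forward(self, x):                            # x: (B, C, H, W)
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).reshape(b, 3, c, h * w).unbind(1)
        attn = torch.softmax(q @ k.transpose(-2, -1) / (h * w) ** 0.5, dim=-1)  # (B, C, C)
        return self.out((attn @ v).reshape(b, c, h, w)) + x                     # residual connection

if __name__ == "__main__":
    print(ChannelAttention(32)(torch.rand(2, 32, 16, 16)).shape)  # torch.Size([2, 32, 16, 16])
```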

Citations: 0
Agent-based crowd simulation: an in-depth survey of determining factors for heterogeneous behavior
Pub Date : 2024-06-19 DOI: 10.1007/s00371-024-03503-2
Saba Khan, Zhigang Deng

In recent years, the field of crowd simulation has experienced significant advancements, attributed in part to the improvement of hardware performance, coupled with a notable emphasis on agent-based characteristics. Agent-based simulations stand out as the preferred methodology when researchers seek to model agents with unique behavioral traits and purpose-driven actions, a crucial aspect for simulating diverse and realistic crowd movements. This survey adopts a systematic approach, meticulously delving into the array of factors vital for simulating a heterogeneous microscopic crowd. The emphasis is placed on scrutinizing low-level behavioral details and individual features of virtual agents to capture a nuanced understanding of their interactions. The survey is based on studies published in reputable peer-reviewed journals and conferences. The primary aim of this survey is to present the diverse advancements in the realm of agent-based crowd simulations, with a specific emphasis on the various aspects of agent behavior that researchers take into account when developing crowd simulation models. Additionally, the survey suggests future research directions with the objective of developing new applications that focus on achieving more realistic and efficient crowd simulations.
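
As background for the agent-based setting the survey discusses, the toy loop below steps a set of agents with heterogeneous preferred speeds toward individual goals, with a simple pairwise repulsion term; it is a generic illustration, not a model taken from any surveyed paper.

```python
# Toy agent-based update loop with heterogeneous per-agent parameters (generic illustration).
import numpy as np

rng = np.random.default_rng(0)
n = 50
pos = rng.uniform(0, 10, (n, 2))
goal = rng.uniform(0, 10, (n, 2))
pref_speed = rng.uniform(0.8, 1.6, n)               # heterogeneity: each agent has its own pace

def step(pos, dt=0.1, repulse=0.5):
    to_goal = goal - pos
    dist = np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9
    vel = pref_speed[:, None] * to_goal / dist       # goal-seeking term
    diff = pos[:, None, :] - pos[None, :, :]         # simple pairwise repulsion for avoidance
    d = np.maximum(np.linalg.norm(diff, axis=-1, keepdims=True), 0.3)  # clamp to avoid blow-up
    vel += repulse * (diff / d ** 2).sum(axis=1)
    return pos + dt * vel

for _ in range(100):
    pos = step(pos)
print(np.linalg.norm(goal - pos, axis=1).mean())     # agents end up close to their goals
```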

Citations: 0
ROMOT: Referring-expression-comprehension open-set multi-object tracking
Pub Date : 2024-06-19 DOI: 10.1007/s00371-024-03544-7
Wei Li, Bowen Li, Jingqi Wang, Weiliang Meng, Jiguang Zhang, Xiaopeng Zhang

Traditional multi-object tracking is limited to tracking a predefined set of categories, whereas open-vocabulary tracking expands its capabilities to track novel categories. In this paper, we propose ROMOT (referring-expression-comprehension open-set multi-object tracking), which not only tracks objects from novel categories not included in the training data, but also enables tracking based on referring expression comprehension (REC). REC describes targets solely by their attributes, such as “the person running at the front” or “the bird flying in the air rather than on the ground,” making it particularly relevant for real-world multi-object tracking scenarios. Our ROMOT achieves this by harnessing the exceptional capabilities of a visual language model and employing multi-stage cross-modal attention to handle tracking for novel categories and REC tasks. Integrating RSM (reconstruction similarity metric) and OCM (observation-centric momentum) in our ROMOT eliminates the need for task-specific training, addressing the challenge of insufficient datasets. Our ROMOT enhances efficiency and adaptability in handling tracking requirements without relying on extensive tracking training data.
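
For readers unfamiliar with the tracking-by-detection machinery such a tracker builds on, the sketch below shows a generic IoU-plus-Hungarian association step using SciPy; it is standard background only and does not reproduce ROMOT's RSM or OCM components.

```python
# Generic detection-to-track assignment via IoU + Hungarian matching (a common MOT
# building block shown for context; NOT ROMOT's RSM/OCM similarity).
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):                                       # boxes as (x1, y1, x2, y2)
    x1, y1 = np.maximum(a[:, None, :2], b[None, :, :2]).transpose(2, 0, 1)
    x2, y2 = np.minimum(a[:, None, 2:], b[None, :, 2:]).transpose(2, 0, 1)
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-9)

def associate(tracks, dets, thresh=0.3):
    cost = 1.0 - iou(tracks, dets)                   # low cost for strongly overlapping pairs
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= 1.0 - thresh]

tracks = np.array([[0, 0, 10, 10], [20, 20, 30, 30]], float)
dets = np.array([[1, 1, 11, 11], [50, 50, 60, 60]], float)
print(associate(tracks, dets))                       # [(0, 0)]: only the overlapping pair matches
```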

Citations: 0
Wire rope damage detection based on a uniform-complementary binary pattern with exponentially weighted guide image filtering
Pub Date : 2024-06-18 DOI: 10.1007/s00371-024-03538-5
Qunpo Liu, Qi Tang, Bo Su, Xuhui Bu, Naohiko Hanajima, Manli Wang

To address the problem that complex and uncertain lighting conditions blur the texture structure of steel wire rope images and lead to inconsistent LBP feature values for the same structure, this paper proposes a steel wire surface damage recognition method based on exponentially weighted guided filtering and complementary binary equivalent patterns. Leveraging the Mach band phenomenon in vision, we introduce a guided filtering method based on local exponential weighting that enhances texture details by applying an exponential mapping to evaluate pixel differences within local window regions during filtering. Additionally, we propose complementary binary equivalent pattern descriptors as neighborhood difference sign representation operators to reduce feature dimensionality while making the binary encoding more robust against interference. Experimental results demonstrate that, compared to classical guided filtering algorithms, our image enhancement method improves the mean PSNR and SSIM by more than 32.5% and 18.5%, respectively, effectively removing noise while preserving image edge structures. Moreover, our algorithm achieves a classification accuracy of 99.3% on the steel wire dataset, with a processing time of only 0.606 s per image.
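
As background for the "uniform-complementary binary pattern", the snippet below computes a standard uniform LBP histogram with scikit-image; the complementary encoding and the exponentially weighted guided filter proposed in the paper are not reproduced here.

```python
# Standard uniform LBP texture histogram via scikit-image (background only; not the paper's
# complementary encoding or its exponentially weighted guided filter).
import numpy as np
from skimage.feature import local_binary_pattern

def uniform_lbp_histogram(gray, P=8, R=1.0):
    """gray: 2-D uint8 image. Returns an L1-normalized (P+2)-bin uniform-LBP histogram."""
    codes = local_binary_pattern(gray, P, R, method="uniform")   # code values in [0, P+1]
    hist, _ = np.histogram(codes, bins=np.arange(P + 3))
    return hist / max(hist.sum(), 1)

if __name__ == "__main__":
    img = (np.random.rand(64, 64) * 255).astype(np.uint8)        # stand-in for a filtered wire-rope patch
    print(uniform_lbp_histogram(img).round(3))
```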

Citations: 0
Swin-VEC: Video Swin Transformer-based GAN for video error concealment of VVC
Pub Date : 2024-06-18 DOI: 10.1007/s00371-024-03518-9
Bing Zhang, Ran Ma, Yu Cao, Ping An

Video error concealment can effectively improve the visual perception quality of videos damaged by packet loss during transmission or by erroneous reception at the decoder. The latest Versatile Video Coding (VVC) standard further improves compression performance but lacks an error recovery mechanism, which makes the VVC bitstream highly sensitive to errors. Most existing error concealment algorithms were designed for video coding standards that predate VVC and are not applicable to it; thus, research on video error concealment for VVC is urgently needed. In this paper, a novel deep video error concealment model for VVC is proposed, called Swin-VEC. The model innovatively integrates the Video Swin Transformer into the generator of a generative adversarial network (GAN). Specifically, the generator employs a convolutional neural network (CNN) to extract shallow features and the Video Swin Transformer to extract deep multi-scale features. The designed dual upsampling modules then recover the spatiotemporal dimensions and are combined with the CNN to achieve frame reconstruction. Moreover, an augmented dataset, BVI-DVC-VVC, is constructed for model training and verification, and the model is optimized by adversarial training. Extensive experiments on BVI-DVC-VVC and UCF101 demonstrate the effectiveness and superiority of our proposed model for video error concealment of VVC.
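
The final optimization stage is adversarial training; the sketch below shows a minimal generator/discriminator update in PyTorch with a BCE adversarial term plus an L1 reconstruction term. The tiny convolutional networks are placeholders, not the Swin-VEC architecture.

```python
# Minimal adversarial training step (BCE GAN loss + L1 reconstruction); the networks are
# placeholders, not Swin-VEC's generator or discriminator.
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 3, 3, padding=1))
D = nn.Sequential(nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 1, 4, stride=2, padding=1), nn.AdaptiveAvgPool2d(1), nn.Flatten())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(corrupted, target):
    fake = G(corrupted)                              # concealed frame from the corrupted input
    opt_d.zero_grad()                                # discriminator: real frames vs. concealed frames
    real_logit, fake_logit = D(target), D(fake.detach())
    d_loss = bce(real_logit, torch.ones_like(real_logit)) + \
             bce(fake_logit, torch.zeros_like(fake_logit))
    d_loss.backward(); opt_d.step()
    opt_g.zero_grad()                                # generator: fool D and stay close to the target
    fake_logit = D(fake)
    g_loss = bce(fake_logit, torch.ones_like(fake_logit)) + F.l1_loss(fake, target)
    g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

print(train_step(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)))
```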

Citations: 0
Spectral reordering for faster elasticity simulations
Pub Date : 2024-06-18 DOI: 10.1007/s00371-024-03513-0
Alon Flor, Mridul Aanjaneya

We present a novel method for faster physics simulations of elastic solids. Our key idea is to reorder the unknown variables according to the Fiedler vector (i.e., the second-smallest eigenvector) of the combinatorial Laplacian. It is well known in the geometry processing community that the Fiedler vector brings together vertices that are geometrically nearby, causing fewer cache misses when computing differential operators. However, to the best of our knowledge, this idea has not been exploited to accelerate simulations of elastic solids, which require an expensive linear (or non-linear) system solve at every time step. The cost of computing the Fiedler vector is negligible, thanks to an algebraic Multigrid-preconditioned Conjugate Gradients (AMGPCG) solver. We observe that our AMGPCG solver requires approximately 1 s for computing the Fiedler vector for a mesh with approximately 50K vertices or 100K tetrahedra. Our method provides a speed-up between 10% and 30% at every time step, which can lead to considerable savings, noting that even modest simulations of elastic solids require at least 240 time steps. Our method is easy to implement and can be used as a plugin for speeding up existing physics simulators for elastic solids, as we demonstrate through our experiments using the Vega library and the ADMM solver, which use different algorithms for elasticity.
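
The reordering step itself is easy to prototype; the sketch below builds the combinatorial Laplacian from a mesh edge list and sorts vertices by the Fiedler vector using SciPy's eigsh. The paper computes the Fiedler vector with an AMG-preconditioned CG solver, which is not reproduced here; a shift-invert eigensolve is used instead for brevity.

```python
# Fiedler-vector reordering sketch with SciPy (plain eigsh instead of the paper's AMGPCG solver).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def fiedler_order(num_vertices, edges):
    """edges: (E, 2) int array of mesh edges. Returns a permutation of vertex indices."""
    i, j = edges[:, 0], edges[:, 1]
    w = np.ones(len(edges))
    A = sp.coo_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])),
                      shape=(num_vertices, num_vertices)).tocsr()
    L = (sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A).tocsc()   # combinatorial Laplacian
    # shift-invert about a small negative value yields the two smallest eigenpairs (0 and lambda_2)
    vals, vecs = eigsh(L, k=2, sigma=-1e-6, which="LM")
    fiedler = vecs[:, np.argsort(vals)[1]]                          # second-smallest eigenvector
    return np.argsort(fiedler)                                      # geometrically nearby vertices end up nearby

if __name__ == "__main__":
    # 10-vertex path graph: the Fiedler ordering recovers the path layout (possibly reversed)
    edges = np.array([(k, k + 1) for k in range(9)])
    print(fiedler_order(10, edges))
```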

Citations: 0
High similarity controllable face anonymization based on dynamic identity perception
Pub Date : 2024-06-18 DOI: 10.1007/s00371-024-03526-9
Jiayi Xu, Xuan Tan, Yixuan Ju, Xiaoyang Mao, Shanqing Zhang

In the metaverse scenario, with the development of personalized social networks, interactive behaviors such as uploading and sharing personal and family photographs are becoming increasingly widespread. Consequently, the risk of being searched or of leaking personal financial information increases. A possible solution is to use anonymized face images instead of real images in public situations. Most existing face anonymization methods attempt to replace a large portion of the face image to modify identity information. However, the resulting faces are often not similar enough to the original faces when viewed with the naked eye. To maintain visual coherence as much as possible while avoiding recognition by face recognition systems, we propose to detect the part of the face that is most relevant to identity based on saliency analysis. Furthermore, we preserve identity-irrelevant face features by re-injecting them into the regenerated face. The proposed model consists of three stages. First, we employ a dynamic identity perception network to detect the identity-relevant facial region and generate a masked face with the identity removed. Second, we apply a feature selection and preservation network that extracts basic semantic attributes from the original face and multilevel identity-irrelevant face features from the masked face, and then fuses them into conditional feature vectors for face regeneration. Finally, a pre-trained StyleGAN2 generator is applied to obtain a high-quality identity-obscured face image. The experimental results show that the proposed method obtains more realistic anonymized face images that retain most of the original facial attributes, while deceiving face recognition systems to protect privacy in modern digital economy and entertainment scenarios.
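
The masking stage can be pictured with the toy NumPy sketch below, which blanks the most identity-relevant pixels according to a saliency map so that a generator could re-synthesize them; the learned networks and the StyleGAN2 generator of the actual pipeline are not reproduced, and the threshold is an arbitrary illustrative choice.

```python
# Toy "mask the identity-relevant region, keep the rest" sketch in NumPy (illustrative only).
import numpy as np

def mask_identity_region(face, saliency, keep_ratio=0.7):
    """face: (H, W, 3) float image; saliency: (H, W) identity-relevance map.
    Pixels above the keep_ratio quantile of saliency are blanked for regeneration."""
    thresh = np.quantile(saliency, keep_ratio)
    mask = saliency > thresh                         # identity-relevant region to be re-synthesized
    masked = face.copy()
    masked[mask] = 0.5                               # placeholder fill; a generator would inpaint this
    return masked, mask

if __name__ == "__main__":
    face = np.random.rand(128, 128, 3)
    saliency = np.random.rand(128, 128)
    masked, mask = mask_identity_region(face, saliency)
    print(mask.mean())                               # about 0.3 of pixels marked identity-relevant
```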

Citations: 0