Pub Date: 2024-06-21 | DOI: 10.1007/s00371-024-03535-8
Point cloud upsampling via a coarse-to-fine network with transformer-encoder
Yixi Li, Yanzhe Liu, Rong Chen, Hui Li, Na Zhao
Point clouds provide a common geometric representation for burgeoning 3D graphics and vision tasks. To deal with the sparse, noisy and non-uniform output of most 3D data acquisition devices, this paper presents a novel coarse-to-fine learning framework that incorporates a Transformer encoder and positional feature fusion. Its long-range dependency modeling with position-sensitive information enables robust feature embedding and fusion of points, especially for noisy points and irregular outliers. The proposed network consists of a Coarse Points Generator and a Points Offsets Refiner. The generator combines a multi-feature Transformer encoder with EdgeConv-based feature reshaping to infer coarse but dense upsampled point sets, whereas the refiner further learns the positions of the upsampled points through a multi-feature fusion strategy that adaptively adjusts the fused feature weights of the coarse points and point offsets. Extensive qualitative and quantitative results on both synthetic and real-scanned datasets demonstrate the superiority of our method over state-of-the-art approaches. Our code is publicly available at https://github.com/Superlyxi/CFT-PU.
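Below is a minimal PyTorch sketch of the coarse-to-fine idea described above: a generator embeds the input points, runs a Transformer encoder, and reshapes features into a denser coarse point set, after which a refiner predicts per-point offsets. All module names, dimensions, and the plain MLP refiner are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a coarse-to-fine point upsampling pass (not the CFT-PU code).
import torch
import torch.nn as nn


class CoarseGenerator(nn.Module):
    """Embeds N input points, runs a Transformer encoder, and reshapes
    features into r*N coarse upsampled points (r = upsampling ratio)."""

    def __init__(self, ratio=4, dim=128):
        super().__init__()
        self.ratio = ratio
        self.embed = nn.Linear(3, dim)                       # per-point embedding
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.expand = nn.Linear(dim, ratio * 3)              # feature reshaping to coarse xyz

    def forward(self, xyz):                                  # xyz: (B, N, 3)
        feat = self.encoder(self.embed(xyz))                 # (B, N, dim)
        coarse = self.expand(feat).reshape(xyz.shape[0], -1, 3)   # (B, r*N, 3)
        return coarse, feat


class OffsetRefiner(nn.Module):
    """Predicts per-point offsets that refine the coarse points."""

    def __init__(self, dim=128, ratio=4):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3 + dim, dim), nn.ReLU(), nn.Linear(dim, 3))
        self.ratio = ratio

    def forward(self, coarse, feat):
        feat_up = feat.repeat_interleave(self.ratio, dim=1)   # align features with coarse points
        offsets = self.mlp(torch.cat([coarse, feat_up], dim=-1))
        return coarse + offsets                               # refined dense points


if __name__ == "__main__":
    pts = torch.rand(2, 256, 3)
    gen, ref = CoarseGenerator(), OffsetRefiner()
    coarse, feat = gen(pts)
    dense = ref(coarse, feat)
    print(dense.shape)                                        # torch.Size([2, 1024, 3])
```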
{"title":"Point cloud upsampling via a coarse-to-fine network with transformer-encoder","authors":"Yixi Li, Yanzhe Liu, Rong Chen, Hui Li, Na Zhao","doi":"10.1007/s00371-024-03535-8","DOIUrl":"https://doi.org/10.1007/s00371-024-03535-8","url":null,"abstract":"<p>Point clouds provide a common geometric representation for burgeoning 3D graphics and vision tasks. To deal with the sparse, noisy and non-uniform output of most 3D data acquisition devices, this paper presents a novel coarse-to-fine learning framework that incorporates the Transformer-encoder and positional feature fusion. Its long-range dependencies with sensitive positional information allow robust feature embedding and fusion of points, especially noising elements and non-regular outliers. The proposed network consists of a Coarse Points Generator and a Points Offsets Refiner. The generator embodies a multi-feature Transformer-encoder and an EdgeConv-based feature reshaping to infer the coarse but dense upsampling point sets, whereas the refiner further learns the positions of upsampled points based on multi-feature fusion strategy that can adaptively adjust the fused features’ weights of coarse points and points offsets. Extensive qualitative and quantitative results on both synthetic and real-scanned datasets demonstrate the superiority of our method over the state-of-the-arts. Our code is publicly available at https://github.com/Superlyxi/CFT-PU.</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":"66 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141532746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-20 | DOI: 10.1007/s00371-024-03448-6
ZMNet: feature fusion and semantic boundary supervision for real-time semantic segmentation
Ya Li, Ziming Li, Huiwang Liu, Qing Wang
The feature fusion module is an essential component of real-time semantic segmentation networks, bridging the semantic gap among different feature layers. However, many networks fuse multi-level features inefficiently. In this paper, we propose a simple yet effective decoder that consists of a series of multi-level attention feature fusion modules (MLA-FFMs) aimed at fusing multi-level features in a top-down manner. Specifically, MLA-FFM is a lightweight attention-based module, so it can not only efficiently fuse features to bridge the semantic gap between levels but also be applied to real-time segmentation tasks. In addition, to address the low accuracy of existing real-time segmentation methods at semantic boundaries, we propose a semantic boundary supervision module (BSM) that improves accuracy by supervising the prediction of semantic boundaries. Extensive experiments demonstrate that our network achieves a state-of-the-art trade-off between segmentation accuracy and inference speed on both the Cityscapes and CamVid datasets. On a single NVIDIA GeForce 1080Ti GPU, our model achieves 77.4% mIoU at 97.5 FPS on the Cityscapes test set and 74% mIoU at 156.6 FPS on the CamVid test set, which is superior to most state-of-the-art real-time methods.
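As a rough illustration of attention-based top-down fusion like the MLA-FFM described above, the sketch below upsamples the high-level map and gates the sum of the two levels with channel-attention weights. The module name, gating design, and dimensions are assumptions for illustration only, not the ZMNet implementation.

```python
# Hedged sketch of lightweight attention-based fusion of two feature levels.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                 # global context
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),                            # per-channel weights in [0, 1]
        )

    def forward(self, low, high):
        # Upsample the semantically strong map to the resolution of the low-level map.
        high = F.interpolate(high, size=low.shape[-2:], mode="bilinear", align_corners=False)
        w = self.gate(low + high)                    # attention from the combined context
        return w * high + (1 - w) * low              # weighted complementary fusion


if __name__ == "__main__":
    low = torch.rand(1, 64, 64, 64)                  # low-level, high-resolution features
    high = torch.rand(1, 64, 32, 32)                 # high-level, low-resolution features
    print(AttentionFusion(64)(low, high).shape)      # torch.Size([1, 64, 64, 64])
```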
{"title":"ZMNet: feature fusion and semantic boundary supervision for real-time semantic segmentation","authors":"Ya Li, Ziming Li, Huiwang Liu, Qing Wang","doi":"10.1007/s00371-024-03448-6","DOIUrl":"https://doi.org/10.1007/s00371-024-03448-6","url":null,"abstract":"<p>Feature fusion module is an essential component of real-time semantic segmentation networks to bridge the semantic gap among different feature layers. However, many networks are inefficient in multi-level feature fusion. In this paper, we propose a simple yet effective decoder that consists of a series of multi-level attention feature fusion modules (MLA-FFMs) aimed at fusing multi-level features in a top-down manner. Specifically, MLA-FFM is a lightweight attention-based module. Therefore, it can not only efficiently fuse features to bridge the semantic gap at different levels, but also be applied to real-time segmentation tasks. In addition, to solve the problem of low accuracy of existing real-time segmentation methods at semantic boundaries, we propose a semantic boundary supervision module (BSM) to improve the accuracy by supervising the prediction of semantic boundaries. Extensive experiments demonstrate that our network achieves a state-of-the-art trade-off between segmentation accuracy and inference speed on both Cityscapes and CamVid datasets. On a single NVIDIA GeForce 1080Ti GPU, our model achieves 77.4% mIoU with a speed of 97.5 FPS on the Cityscapes test dataset, and 74% mIoU with a speed of 156.6 FPS on the CamVid test dataset, which is superior to most state-of-the-art real-time methods.</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":"174 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-20 | DOI: 10.1007/s00371-024-03531-y
UTE-CrackNet: transformer-guided and edge feature extraction U-shaped road crack image segmentation
Huaping Zhou, Bin Deng, Kelei Sun, Shunxiang Zhang, Yongqi Zhang
Cracks in the road surface can cause significant harm, and road crack detection, segmentation, and prompt repair help reduce risk. Methods based on convolutional neural networks still suffer from problems such as fuzzy edge information, small receptive fields, and insufficient perception of local information. To solve these problems, this paper presents UTE-CrackNet, a novel road crack segmentation network that aims to improve the generalization ability and segmentation accuracy of road crack segmentation networks. First, our design adopts a U-shaped structure that enables the model to learn more features. To compensate for the lack of skip connections, we design a multi-convolution coordinate attention block to reduce semantic differences in cascaded features and a gated residual attention block to capture more local features. Because most cracks have strip-like shapes, we propose the transformer edge atlas spatial pyramid pooling module, which applies a transformer module and an edge detection module so that the network can better capture the edge and context information of the crack region. In addition, we use focal loss during training to address the imbalance between positive and negative samples. Experiments were conducted on four publicly available road crack segmentation datasets: Rissbilder, GAPS384, CFD, and CrackTree200. The results show that the network outperforms standard road crack segmentation models. The code and models are publicly available at https://github.com/mushan0929/UTE-crackNet.
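Focal loss, mentioned above for the imbalance between crack and background pixels, down-weights easy examples so that the sparse crack pixels dominate the gradient. Below is a minimal binary version; the alpha and gamma values are common defaults, not values taken from the paper.

```python
# Hedged sketch of a binary focal loss for crack segmentation.
import torch
import torch.nn.functional as F


def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """logits, targets: (B, 1, H, W); targets in {0, 1} (crack vs. background)."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                                  # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()


if __name__ == "__main__":
    logits = torch.randn(2, 1, 64, 64)
    targets = (torch.rand(2, 1, 64, 64) > 0.95).float()   # sparse positives, like cracks
    print(focal_loss(logits, targets).item())
```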
{"title":"UTE-CrackNet: transformer-guided and edge feature extraction U-shaped road crack image segmentation","authors":"Huaping Zhou, Bin Deng, Kelei Sun, Shunxiang Zhang, Yongqi Zhang","doi":"10.1007/s00371-024-03531-y","DOIUrl":"https://doi.org/10.1007/s00371-024-03531-y","url":null,"abstract":"<p>Cracks in the road surface can cause significant harm. Road crack detection, segmentation, and immediate repair can help reduce the occurrence of risks. Some methods based on convolutional neural networks still have some problems, such as fuzzy edge information, small receptive fields, and insufficient perception ability of local information. To solve the above problems, this paper offers UTE-CrackNet, a novel road crack segmentation network that attempts to increase the generalization ability and segmentation accuracy of road crack segmentation networks. To begin, our design combines the U-shaped structure that enables the model to learn more features. Given the lack of skip connections, we designed the multi-convolution coordinate attention block to reduce semantic differences in cascaded features and the gated residual attention block to get more local features. Because most fractures have strip characteristics, we propose the transformer edge atlas spatial pyramid pooling module, which innovatively applies the transformer module and edge detection module to the network so that the network can better capture the edge information and context information of the fracture area. In addition, we use focus loss in training to solve the problem of positive and negative sample imbalances. Experiments were conducted on four publicly available road crack segmentation datasets: Rissbilder, GAPS384, CFD, and CrackTree200. The experimental results reveal that the network outperforms the standard road fracture segmentation models. The code and models are publicly available at https://github.com/mushan0929/UTE-crackNet.</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":"35 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141500569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-20 | DOI: 10.1007/s00371-024-03543-8
DAMAF: dual attention network with multi-level adaptive complementary fusion for medical image segmentation
Yueqian Pan, Qiaohong Chen, Xian Fang
Transformers have been widely applied in medical image segmentation due to their ability to establish excellent long-range dependencies through self-attention. However, relying solely on self-attention makes it difficult to effectively extract rich spatial and channel information from adjacent levels. To address this issue, we propose a novel dual attention model based on a multi-level adaptive complementary fusion mechanism, namely DAMAF. We first employ efficient attention and transpose attention to synchronously capture robust spatial and channel cues in a lightweight manner. Then, we design a multi-level fusion attention block to expand the complementarity of features at each level and enrich the contextual information, thereby gradually enhancing the correlation between high-level and low-level features. In addition, we develop a multi-level skip attention block to strengthen the adjacent-level information of the model through mutual fusion, which improves the feature expression ability of the model. Extensive experiments on the Synapse, ACDC, and ISIC-2018 datasets demonstrate that the proposed DAMAF achieves significantly superior results compared to competitors. Our code is publicly available at https://github.com/PanYging/DAMAF.
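For readers unfamiliar with "transpose attention", the sketch below shows the usual formulation: attention is computed across channels instead of spatial positions, so the attention map is C x C rather than N x N. The head count, scaling factor, and shapes are generic assumptions, not the DAMAF implementation.

```python
# Hedged sketch of channel-wise ("transpose") attention over token features.
import torch
import torch.nn as nn


class TransposeAttention(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = heads
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                                    # x: (B, N, C) token features
        B, N, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (B, heads, C//heads, N): attention is computed channel-vs-channel.
        q = q.transpose(1, 2).reshape(B, self.heads, C // self.heads, N)
        k = k.transpose(1, 2).reshape(B, self.heads, C // self.heads, N)
        v = v.transpose(1, 2).reshape(B, self.heads, C // self.heads, N)
        attn = (q @ k.transpose(-2, -1)) * (N ** -0.5)       # (B, heads, C/h, C/h)
        out = attn.softmax(dim=-1) @ v                       # (B, heads, C/h, N)
        out = out.reshape(B, C, N).transpose(1, 2)           # back to (B, N, C)
        return self.proj(out)


if __name__ == "__main__":
    print(TransposeAttention(64)(torch.rand(2, 196, 64)).shape)   # torch.Size([2, 196, 64])
```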
{"title":"DAMAF: dual attention network with multi-level adaptive complementary fusion for medical image segmentation","authors":"Yueqian Pan, Qiaohong Chen, Xian Fang","doi":"10.1007/s00371-024-03543-8","DOIUrl":"https://doi.org/10.1007/s00371-024-03543-8","url":null,"abstract":"<p>Transformers have been widely applied in medical image segmentation due to their ability to establish excellent long-distance dependency through self-attention. However, relying solely on self-attention makes it difficult to effectively extract rich spatial and channel information from adjacent levels. To address this issue, we propose a novel dual attention model based on a multi-level adaptive complementary fusion mechanism, namely DAMAF. We first employ efficient attention and transpose attention to synchronously capture robust spatial and channel cures in a lightweight manner. Then, we design a multi-level fusion attention block to expand the complementarity of features at each level and enrich the contextual information, thereby gradually enhancing the correlation between high-level and low-level features. In addition, we develop a multi-level skip attention block to strengthen the adjacent-level information of the model through mutual fusion, which improves the feature expression ability of the model. Extensive experiments on the Synapse, ACDC, and ISIC-2018 datasets demonstrate that the proposed DAMAF achieves significantly superior results compared to competitors. Our code is publicly available at https://github.com/PanYging/DAMAF.</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":"12 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-19 | DOI: 10.1007/s00371-024-03503-2
Agent-based crowd simulation: an in-depth survey of determining factors for heterogeneous behavior
Saba Khan, Zhigang Deng
In recent years, the field of crowd simulation has experienced significant advancements, attributed in part to the improvement of hardware performance, coupled with a notable emphasis on agent-based characteristics. Agent-based simulations stand out as the preferred methodology when researchers seek to model agents with unique behavioral traits and purpose-driven actions, a crucial aspect for simulating diverse and realistic crowd movements. This survey adopts a systematic approach, meticulously delving into the array of factors vital for simulating a heterogeneous microscopic crowd. The emphasis is placed on scrutinizing low-level behavioral details and individual features of virtual agents to capture a nuanced understanding of their interactions. The survey is based on studies published in reputable peer-reviewed journals and conferences. The primary aim of this survey is to present the diverse advancements in the realm of agent-based crowd simulations, with a specific emphasis on the various aspects of agent behavior that researchers take into account when developing crowd simulation models. Additionally, the survey suggests future research directions with the objective of developing new applications that focus on achieving more realistic and efficient crowd simulations.
{"title":"Agent-based crowd simulation: an in-depth survey of determining factors for heterogeneous behavior","authors":"Saba Khan, Zhigang Deng","doi":"10.1007/s00371-024-03503-2","DOIUrl":"https://doi.org/10.1007/s00371-024-03503-2","url":null,"abstract":"<p>In recent years, the field of crowd simulation has experienced significant advancements, attributed in part to the improvement of hardware performance, coupled with a notable emphasis on agent-based characteristics. Agent-based simulations stand out as the preferred methodology when researchers seek to model agents with unique behavioral traits and purpose-driven actions, a crucial aspect for simulating diverse and realistic crowd movements. This survey adopts a systematic approach, meticulously delving into the array of factors vital for simulating a heterogeneous microscopic crowd. The emphasis is placed on scrutinizing low-level behavioral details and individual features of virtual agents to capture a nuanced understanding of their interactions. The survey is based on studies published in reputable peer-reviewed journals and conferences. The primary aim of this survey is to present the diverse advancements in the realm of agent-based crowd simulations, with a specific emphasis on the various aspects of agent behavior that researchers take into account when developing crowd simulation models. Additionally, the survey suggests future research directions with the objective of developing new applications that focus on achieving more realistic and efficient crowd simulations.\u0000</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":"73 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-19 | DOI: 10.1007/s00371-024-03544-7
ROMOT: Referring-expression-comprehension open-set multi-object tracking
Wei Li, Bowen Li, Jingqi Wang, Weiliang Meng, Jiguang Zhang, Xiaopeng Zhang
Traditional multi-object tracking is limited to tracking a predefined set of categories, whereas open-vocabulary tracking expands its capabilities to track novel categories. In this paper, we propose ROMOT (referring-expression-comprehension open-set multi-object tracking), which not only tracks objects from novel categories not included in the training data, but also enables tracking based on referring expression comprehension (REC). REC describes targets solely by their attributes, such as “the person running at the front” or “the bird flying in the air rather than on the ground,” making it particularly relevant for real-world multi-object tracking scenarios. ROMOT achieves this by harnessing the capabilities of a visual language model and employing multi-stage cross-modal attention to handle tracking for novel categories and REC tasks. Integrating a reconstruction similarity metric (RSM) and observation-centric momentum (OCM) into ROMOT eliminates the need for task-specific training, addressing the challenge of insufficient datasets. ROMOT enhances efficiency and adaptability in handling tracking requirements without relying on extensive tracking training data.
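The sketch below is a generic, heavily simplified illustration of the two ingredients the abstract combines: selecting detections that match a referring expression by similarity against a vision-language text embedding, and associating the selected detections with existing tracks by IoU. The thresholds, greedy matching, and all inputs are assumptions; this is not the ROMOT pipeline.

```python
# Hedged sketch: referring-expression filtering of detections + IoU association.
import numpy as np


def iou(a, b):
    """a, b: boxes as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def select_and_associate(det_boxes, det_embs, text_emb, track_boxes, sim_thr=0.3, iou_thr=0.5):
    # Keep only detections whose visual embedding matches the referring expression.
    sims = det_embs @ text_emb / (np.linalg.norm(det_embs, axis=1) * np.linalg.norm(text_emb) + 1e-9)
    keep = np.where(sims > sim_thr)[0]
    # Greedy IoU matching of the kept detections to existing tracks.
    matches, used = [], set()
    for d in keep:
        ious = [iou(det_boxes[d], t) if i not in used else -1 for i, t in enumerate(track_boxes)]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] > iou_thr:
            matches.append((d, best))
            used.add(best)
    return matches


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    boxes = np.array([[0, 0, 10, 10], [20, 20, 30, 30]], dtype=float)
    embs = rng.normal(size=(2, 16))
    text = embs[0] + 0.1 * rng.normal(size=16)          # expression matches detection 0
    print(select_and_associate(boxes, embs, text, [np.array([1, 1, 11, 11], dtype=float)]))
```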
{"title":"ROMOT: Referring-expression-comprehension open-set multi-object tracking","authors":"Wei Li, Bowen Li, Jingqi Wang, Weiliang Meng, Jiguang Zhang, Xiaopeng Zhang","doi":"10.1007/s00371-024-03544-7","DOIUrl":"https://doi.org/10.1007/s00371-024-03544-7","url":null,"abstract":"<p>Traditional multi-object tracking is limited to tracking a predefined set of categories, whereas open-vocabulary tracking expands its capabilities to track novel categories. In this paper, we propose ROMOT (referring-expression-comprehension open-set multi-object tracking), which not only tracks objects from novel categories not included in the training data, but also enables tracking based on referring expression comprehension (REC). REC describes targets solely by their attributes, such as “the person running at the front” or “the bird flying in the air rather than on the ground,” making it particularly relevant for real-world multi-object tracking scenarios. Our ROMOT achieves this by harnessing the exceptional capabilities of a visual language model and employing multi-stage cross-modal attention to handle tracking for novel categories and REC tasks. Integrating RSM (reconstruction similarity metric) and OCM (observation-centric momentum) in our ROMOT eliminates the need for task-specific training, addressing the challenge of insufficient datasets. Our ROMOT enhances efficiency and adaptability in handling tracking requirements without relying on extensive tracking training data.</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":"40 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-18 | DOI: 10.1007/s00371-024-03538-5
Wire rope damage detection based on a uniform-complementary binary pattern with exponentially weighted guide image filtering
Qunpo Liu, Qi Tang, Bo Su, Xuhui Bu, Naohiko Hanajima, Manli Wang
Complex and uncertain lighting conditions can obscure the texture structure in steel wire rope images, so the same structure yields inconsistent local binary pattern (LBP) feature values. In response, this paper proposes a steel wire surface damage recognition method based on exponentially weighted guided filtering and complementary binary equivalent patterns. Leveraging the Mach band phenomenon in human vision, we introduce a guided filtering method based on local exponential weighting that enhances texture details by applying an exponential mapping to pixel differences within local window regions during filtering. Additionally, we propose complementary binary equivalent pattern descriptors as operators that represent the sign information of neighborhood differences, reducing feature dimensionality while enhancing the robustness of the binary encoding against interference. Experimental results demonstrate that, compared to classical guided filtering algorithms, our image enhancement method improves mean PSNR and SSIM by more than 32.5% and 18.5%, respectively, effectively removing noise while preserving image edge structures. Moreover, our algorithm achieves a classification accuracy of 99.3% on the steel wire dataset, with a processing time of only 0.606 s per image.
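As background for the filtering step, the sketch below implements a standard guided image filter (He et al.) in NumPy and adds one assumed form of exponential weighting, where windows with larger local variance receive a smaller effective regularization so edges are smoothed less. The exact way the exponential mapping enters the paper's filter is not reproduced here; this part is an illustrative guess.

```python
# Hedged sketch of a guided filter with an assumed exponentially weighted eps.
import numpy as np
from scipy.ndimage import uniform_filter


def guided_filter(I, p, radius=8, eps=1e-3):
    """I: guide image, p: input image, both 2D float arrays in [0, 1]."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size)        # box-filter local means
    mean_I, mean_p = mean(I), mean(p)
    corr_I, corr_Ip = mean(I * I), mean(I * p)
    var_I = corr_I - mean_I ** 2
    cov_Ip = corr_Ip - mean_I * mean_p
    # Assumed exponential weighting: windows with larger local variation (edges)
    # get a smaller effective eps, so edges are preserved more strongly.
    w = np.exp(-var_I / (var_I.mean() + 1e-12))
    a = cov_Ip / (var_I + eps * w)
    b = mean_p - a * mean_I
    return mean(a) * I + mean(b)


if __name__ == "__main__":
    img = np.clip(np.random.rand(128, 128) + 0.1 * np.random.randn(128, 128), 0, 1)
    out = guided_filter(img, img)                    # self-guided smoothing
    print(out.shape, float(out.min()), float(out.max()))
```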
{"title":"Wire rope damage detection based on a uniform-complementary binary pattern with exponentially weighted guide image filtering","authors":"Qunpo Liu, Qi Tang, Bo Su, Xuhui Bu, Naohiko Hanajima, Manli Wang","doi":"10.1007/s00371-024-03538-5","DOIUrl":"https://doi.org/10.1007/s00371-024-03538-5","url":null,"abstract":"<p>In response to the problem of unclear texture structure in steel wire rope images caused by complex and uncertain lighting conditions, resulting in inconsistent LBP feature values for the same structure, this paper proposes a steel wire surface damage recognition method based on exponential weighted guided filtering and complementary binary equivalent patterns. Leveraging the phenomenon of Mach bands in vision, we introduce a guided filtering method based on local exponential weighting to enhance texture details by applying exponential mapping to evaluate pixel differences within local window regions during image filtering. Additionally, we propose complementary binary equivalent pattern descriptors as neighborhood difference symbol information representation operators to reduce feature dimensionality while enhancing the robustness of binary encoding against interference. Experimental results demonstrate that compared to classical guided filtering algorithms, our image enhancement method achieves improvements in PSNR and SSIM mean values by more than 32.5% and 18.5%, respectively, effectively removing noise while preserving image edge structures. Moreover, our algorithm achieves a classification accuracy of 99.3% on the steel wire dataset, with a processing time of only 0.606 s per image.</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":"43 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-18 | DOI: 10.1007/s00371-024-03518-9
Swin-VEC: Video Swin Transformer-based GAN for video error concealment of VVC
Bing Zhang, Ran Ma, Yu Cao, Ping An
Video error concealment can effectively improve the visual perception quality of videos damaged by packet loss during transmission or by reception errors at the decoder. The latest Versatile Video Coding (VVC) standard further improves compression performance but lacks an error recovery mechanism, which makes the VVC bitstream highly sensitive to errors. Most existing error concealment algorithms were designed for video coding standards that predate VVC and are not applicable to it; thus, research on video error concealment for VVC is urgently needed. In this paper, a novel deep video error concealment model for VVC is proposed, called Swin-VEC. The model innovatively integrates the Video Swin Transformer into the generator of a generative adversarial network (GAN). Specifically, the generator employs a convolutional neural network (CNN) to extract shallow features and the Video Swin Transformer to extract deep multi-scale features. Subsequently, the designed dual upsampling modules recover the spatiotemporal dimensions and are combined with CNNs to achieve frame reconstruction. Moreover, an augmented dataset, BVI-DVC-VVC, is constructed for model training and verification, and the model is optimized via adversarial training. Extensive experiments on BVI-DVC-VVC and UCF101 demonstrate the effectiveness and superiority of the proposed model for video error concealment of VVC.
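The adversarial training mentioned above follows the usual GAN recipe: the generator reconstructs a corrupted frame and is trained with a reconstruction loss plus an adversarial term, while a discriminator learns to distinguish real frames from reconstructions. The sketch below shows one such training step with tiny stand-in CNNs; the loss weights and architectures are assumptions, not Swin-VEC's.

```python
# Hedged sketch of one adversarial training step for frame error concealment.
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.Conv2d(32, 3, 3, padding=1))
disc = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

frame = torch.rand(4, 3, 64, 64)                      # ground-truth frames
mask = (torch.rand(4, 1, 64, 64) > 0.7).float()       # lost regions (e.g., dropped packets)
corrupted = frame * (1 - mask)

# Discriminator step: real frames vs. generated reconstructions.
fake = gen(corrupted).detach()
loss_d = bce(disc(frame), torch.ones(4, 1)) + bce(disc(fake), torch.zeros(4, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: reconstruction loss plus a small adversarial term.
fake = gen(corrupted)
loss_g = nn.functional.l1_loss(fake, frame) + 0.01 * bce(disc(fake), torch.ones(4, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(float(loss_d), float(loss_g))
```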
{"title":"Swin-VEC: Video Swin Transformer-based GAN for video error concealment of VVC","authors":"Bing Zhang, Ran Ma, Yu Cao, Ping An","doi":"10.1007/s00371-024-03518-9","DOIUrl":"https://doi.org/10.1007/s00371-024-03518-9","url":null,"abstract":"<p>Video error concealment can effectively improve the visual perception quality of videos damaged by packet loss in video transmission or error reception at the decoder. The latest versatile video coding (VVC) standard further improves the compression performance and lacks error recovery mechanism, which makes the VVC bitstream highly sensitive to errors. Most of the existing error concealment algorithms are designed for the video coding standards before VVC and are not applicable to VVC; thus, the research on video error concealment for VVC is urgently needed. In this paper, a novel deep video error concealment model for VVC is proposed, called Swin-VEC. The model innovatively integrates Video Swin Transformer into the generator of generative adversarial network (GAN). Specifically, the generator of the model employs convolutional neural network (CNN) to extract shallow features, and utilizes the Video Swin Transformer to extract deep multi-scale features. Subsequently, the designed dual upsampling modules are used to accomplish the recovery of spatiotemporal dimensions, and combined with CNN to achieve frame reconstruction. Moreover, an augmented dataset BVI-DVC-VVC is constructed for model training and verification. The optimization of the model is realized by adversarial training. Extensive experiments on BVI-DVC-VVC and UCF101 demonstrate the effectiveness and superiority of our proposed model for the video error concealment of VVC.\u0000</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141500568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-18 | DOI: 10.1007/s00371-024-03513-0
Spectral reordering for faster elasticity simulations
Alon Flor, Mridul Aanjaneya
We present a novel method for faster physics simulations of elastic solids. Our key idea is to reorder the unknown variables according to the Fiedler vector (i.e., the second-smallest eigenvector) of the combinatorial Laplacian. It is well known in the geometry processing community that the Fiedler vector brings together vertices that are geometrically nearby, causing fewer cache misses when computing differential operators. However, to the best of our knowledge, this idea has not been exploited to accelerate simulations of elastic solids, which require an expensive linear (or non-linear) system solve at every time step. The cost of computing the Fiedler vector is negligible, thanks to an algebraic Multigrid-preconditioned Conjugate Gradients (AMGPCG) solver. We observe that our AMGPCG solver requires approximately 1 s to compute the Fiedler vector for a mesh with approximately 50K vertices or 100K tetrahedra. Our method provides a speed-up between 10% and 30% at every time step, which can lead to considerable savings, noting that even modest simulations of elastic solids require at least 240 time steps. Our method is easy to implement and can be used as a plugin for speeding up existing physics simulators for elastic solids, as we demonstrate through our experiments using the Vega library and the ADMM solver, which use different algorithms for elasticity.
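Below is a minimal SciPy sketch of the reordering step described above: build the combinatorial Laplacian of the mesh's vertex-adjacency graph, compute its second-smallest eigenvector, and sort the vertices by its entries. A generic shift-invert sparse eigensolver stands in for the paper's AMG-preconditioned CG solver; the permutation is then applied to the stiffness system.

```python
# Hedged sketch of Fiedler-vector reordering for a mesh vertex graph.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh


def fiedler_permutation(num_vertices, edges):
    """edges: (E, 2) integer array of vertex-index pairs (e.g., tetrahedron edges)."""
    i, j = edges[:, 0], edges[:, 1]
    A = sp.coo_matrix((np.ones(len(edges)), (i, j)), shape=(num_vertices, num_vertices))
    A = ((A + A.T) > 0).astype(float)           # symmetric 0/1 adjacency matrix
    L = laplacian(A.tocsr())                    # combinatorial Laplacian (degree - adjacency)
    # Two eigenpairs closest to a small negative shift = the two smallest eigenvalues;
    # the eigenvector of the second-smallest eigenvalue is the Fiedler vector.
    vals, vecs = eigsh(L.tocsc(), k=2, sigma=-1e-3)
    fiedler = vecs[:, np.argsort(vals)[1]]
    return np.argsort(fiedler)                  # new-to-old vertex ordering


if __name__ == "__main__":
    # Path graph 0-1-2-3-4: the Fiedler vector is monotone along the path.
    edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])
    perm = fiedler_permutation(5, edges)
    print(perm)                                 # [0 1 2 3 4] or its reverse
    # To reorder a stiffness system K x = b: K[perm][:, perm] and b[perm]
```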
{"title":"Spectral reordering for faster elasticity simulations","authors":"Alon Flor, Mridul Aanjaneya","doi":"10.1007/s00371-024-03513-0","DOIUrl":"https://doi.org/10.1007/s00371-024-03513-0","url":null,"abstract":"<p>We present a novel method for faster physics simulations of elastic solids. Our key idea is to reorder the unknown variables according to the Fiedler vector (i.e., the second-smallest eigenvector) of the combinatorial Laplacian. It is well known in the geometry processing community that the Fiedler vector brings together vertices that are geometrically nearby, causing fewer cache misses when computing differential operators. However, to the best of our knowledge, this idea has not been exploited to accelerate simulations of elastic solids which require an expensive linear (or non-linear) system solve at every time step. The cost of computing the Fiedler vector is negligible, thanks to an algebraic Multigrid-preconditioned Conjugate Gradients (AMGPCG) solver. We observe that our AMGPCG solver requires approximately 1 s for computing the Fiedler vector for a mesh with approximately 50<i>K</i> vertices or 100<i>K</i> tetrahedra. Our method provides a speed-up between <span>(10%)</span> – <span>(30%)</span> at every time step, which can lead to considerable savings, noting that even modest simulations of elastic solids require at least 240 time steps. Our method is easy to implement and can be used as a plugin for speeding up existing physics simulators for elastic solids, as we demonstrate through our experiments using the Vega library and the ADMM solver, which use different algorithms for elasticity.</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-18 | DOI: 10.1007/s00371-024-03526-9
High similarity controllable face anonymization based on dynamic identity perception
Jiayi Xu, Xuan Tan, Yixuan Ju, Xiaoyang Mao, Shanqing Zhang
In the metaverse scenario, with the development of personalized social networks, interactive behaviors such as uploading and sharing personal and family photographs are becoming increasingly widespread. Consequently, the risk of being searched for or of leaking personal financial information increases. A possible solution is to use anonymized face images instead of real images in public situations. Most existing face anonymization methods replace a large portion of the face image to modify identity information. However, the resulting faces are often not similar enough to the original faces to the naked eye. To maintain visual coherence as much as possible while avoiding recognition by face recognition systems, we propose to detect the part of the face that is most relevant to identity based on saliency analysis. Furthermore, we preserve identity-irrelevant facial features by re-injecting them into the regenerated face. The proposed model consists of three stages. First, we employ a dynamic identity perception network to detect the identity-relevant facial region and generate a masked face with the identity removed. Second, a feature selection and preservation network extracts basic semantic attributes from the original face and multilevel identity-irrelevant features from the masked face, and then fuses them into conditional feature vectors for face regeneration. Finally, a pre-trained StyleGAN2 generator produces a high-quality identity-obscured face image. Experimental results show that the proposed method obtains more realistic anonymized face images that retain most of the original facial attributes, while deceiving face recognition systems to protect privacy in modern digital economy and entertainment scenarios.
{"title":"High similarity controllable face anonymization based on dynamic identity perception","authors":"Jiayi Xu, Xuan Tan, Yixuan Ju, Xiaoyang Mao, Shanqing Zhang","doi":"10.1007/s00371-024-03526-9","DOIUrl":"https://doi.org/10.1007/s00371-024-03526-9","url":null,"abstract":"<p>In the meta-universe scenario, with the development of personalized social networks, interactive behaviors such as uploading and sharing personal and family photographs are becoming increasingly widespread. Consequently, the risk of being searched or leaking personal financial information increases. A possible solution is to use anonymized face images instead of real images in the public situations. Most of the existing face anonymization methods attempt to replace a large portion of the face image to modify identity information. However, the resulted faces are often not similar enough to the original faces as seen with the naked eyes. To maintain visual coherence as much as possible while avoiding recognition by face recognition systems, we propose to detect part of the face that is most relevant to the identity based on saliency analysis. Furthermore, we preserve the identification of irrelevant face features by re-injecting them into the regenerated face. The proposed model consists of three stages. Firstly, we employ a dynamic identity perception network to detect the identity-relevant facial region and generate a masked face with removed identity. Secondly, we apply feature selection and preservation network that extracts basic semantic attributes from the original face and also extracts multilevel identity-irrelevant face features from the masked face, and then fuses them into conditional feature vectors for face regeneration. Finally, a pre-trained StyleGAN2 generator is applied to obtain a high-quality identity-obscured face image. The experimental results show that the proposed method can obtain more realistic anonymized face images that retain most of the original facial attributes, while it can deceive face recognition system to protect privacy in the modern digital economy and entertainment scenarios.</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":"125 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}