Deep neural networks are now in extensive use, yet they remain vulnerable to adversarial perturbations that can lead to unexpected results. In this paper, we introduce an adversarial purification model rooted in latent representation compression (LRCM), aimed at enhancing the robustness of deep learning models. First, we employ a U-net-inspired encoder-decoder architecture to extract features from input samples. These features then undergo information compression to remove adversarial perturbations from the latent space. Because the model tends to focus excessively on fine-grained details of input samples, which renders adversarial sample purification ineffective, we introduce an early freezing mechanism during encoder training. We tested our model's ability to purify adversarial samples generated from the CIFAR-10, CIFAR-100 and ImageNet datasets using various attack methods; the purified samples were then used to test ResNet, an image recognition classifier. Our experiments covered different resolutions and attack types to fully assess LRCM's effectiveness against adversarial attacks. We also compared LRCM with other defence strategies, demonstrating its strong defensive capabilities.
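The information-compression step described above can be illustrated with a minimal, dependency-free sketch: the encoder's latent vector is truncated to its largest-magnitude coefficients, on the assumption that small coefficients are more likely to carry adversarial noise than semantic content. The function name, the `keep_ratio` parameter and the thresholding rule are our own illustrative choices, not details from the paper, which compresses a U-net-style latent representation learned end to end.

```python
def compress_latent(z, keep_ratio=0.25):
    """Zero out all but the largest-magnitude latent coefficients.

    A toy stand-in for latent information compression: weak coefficients,
    assumed to carry perturbation noise rather than semantics, are removed.
    `keep_ratio` is an illustrative hyperparameter, not a value from the paper.
    """
    k = max(1, int(len(z) * keep_ratio))
    threshold = sorted((abs(v) for v in z), reverse=True)[k - 1]
    return [v if abs(v) >= threshold else 0.0 for v in z]

# A latent vector with a few strong (semantic) and many weak (noisy) entries.
z = [3.0, -0.1, 0.05, 2.5, -0.02, 0.08, -2.0, 0.01]
purified = compress_latent(z, keep_ratio=0.375)
# -> [3.0, 0.0, 0.0, 2.5, 0.0, 0.0, -2.0, 0.0]
```

In the full model, the compressed representation is passed to the decoder, which reconstructs a purified image for the downstream classifier.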
"LRCM: Enhancing Adversarial Purification Through Latent Representation Compression"
Yixin Li, Xintao Luo, Weijie Wu, Minjia Zheng
IET Computer Vision, vol. 19, no. 1 (published 19 May 2025). doi:10.1049/cvi2.70030
Recently, the accuracy of self-supervised deep learning models for indoor depth estimation has approached that of supervised models through improved supervision in planar regions. However, a common issue with integrating multiple planar priors is the generation of oversmooth depth maps, leading to unrealistic and erroneous depth representations at edges. Although edge pixels cover only a small part of the image, they are highly significant for downstream tasks such as visual odometry, where the image features essential for motion computation are mostly located at edges. To improve erroneous depth predictions at edge regions, we analyse the self-supervised training process, identify its limitations and use these insights to develop a geometric edge model. Building on this, we introduce a novel algorithm that uses the smooth depth predictions of existing models together with colour image data to accurately identify edge pixels. Our approach then generates targeted self-supervision at these pixels by interpolating depth values from adjacent planar areas towards the edges. We integrate the proposed algorithms into a novel loss function that encourages neural networks to predict sharper and more accurate depth edges in indoor scenes. To validate our methodology, we incorporated the proposed edge-enhancing loss function into a state-of-the-art self-supervised depth estimation framework. Our results demonstrate a notable improvement in the accuracy of edge depth predictions and a 19% improvement in visual odometry when using our depth model to generate RGB-D input, compared to the baseline model.
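The core idea, detecting edges from the colour image and replacing the oversmooth predicted depth there with values from the adjacent planar regions, can be sketched on a 1-D scanline. The function name, the threshold and the nearest-plane assignment rule below are illustrative assumptions; the paper's method operates on full images inside a self-supervised loss.

```python
def edge_supervision_target(colour, depth, colour_thresh=30.0):
    """Build a sharpened depth target along a 1-D scanline (illustrative).

    Pixels where the colour signal jumps sharply are treated as geometric
    edges; the oversmooth predicted depth there is snapped to the nearer
    of the two adjacent planar depths, producing a sharp step.
    """
    target = list(depth)
    for i in range(1, len(colour) - 1):
        if abs(colour[i + 1] - colour[i - 1]) > colour_thresh:
            left, right = depth[i - 1], depth[i + 1]
            # Assign the edge pixel to whichever plane it already lies closer to.
            target[i] = left if abs(depth[i] - left) < abs(depth[i] - right) else right
    return target

# A colour edge between pixels 2 and 3 with a blurred depth transition:
colour = [10, 10, 10, 200, 200, 200]
depth = [1.0, 1.0, 1.2, 1.8, 2.0, 2.0]
sharp = edge_supervision_target(colour, depth)
# -> [1.0, 1.0, 1.0, 2.0, 2.0, 2.0]
```

In the actual loss, such targets would supervise the network only at detected edge pixels, leaving the planar supervision elsewhere untouched.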
"Geometric Edge Modelling in Self-Supervised Learning for Enhanced Indoor Depth Estimation"
Niclas Joswig, Laura Ruotsalainen
IET Computer Vision, vol. 19, no. 1 (published 12 May 2025). doi:10.1049/cvi2.70026
Avirath Sundaresan, Jason Parham, Jonathan Crall, Rosemary Warungu, Timothy Muthami, Jackson Miliko, Margaret Mwangi, Jason Holmberg, Tanya Berger-Wolf, Daniel Rubenstein, Charles Stewart, Sara Beery
The Grévy's zebra, an endangered species native to Kenya and southern Ethiopia, has been the target of sustained conservation efforts in recent years. Accurately monitoring Grévy's zebra populations is essential for ecologists to evaluate ongoing conservation initiatives. In 2016 and 2018, full censuses of the Grévy's zebra population were enabled by the Great Grévy's Rally (GGR), a citizen science event that pairs teams of volunteers, who capture the data, with computer vision algorithms that help experts estimate the number of individuals in the population. A complementary, scalable, cost-effective and long-term approach to monitoring the Grévy's population involves deploying a network of camera traps, which we have done at the Mpala Research Centre in Laikipia County, Kenya. In both scenarios, a substantial majority of the images of zebras are unusable for individual identification due to 'in-the-wild' imaging conditions: occlusions from vegetation or other animals, oblique views, low image quality and animals that appear in the far background and are thus too small to identify. Camera trap images, captured without an intelligent human photographer to select the framing and focus on the animals of interest, are of even poorer quality, with high rates of occlusion and high spatiotemporal similarity within image bursts. To compensate for these factors, we employ an image filtering pipeline incorporating animal detection, species identification, viewpoint estimation, quality evaluation and temporal subsampling, obtaining individual crops from camera trap and GGR images of suitable quality for re-identification (re-ID). We then apply the local clusterings and their alternatives (LCA) algorithm, a hybrid computer vision and graph clustering method for animal re-ID, to the resulting high-quality crops.
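The filtering pipeline described above amounts to running every candidate detection through an ordered sequence of filters and keeping only the crops that pass all of them. The stage names, field names and thresholds below are our own illustrative assumptions, not values from the paper, and the temporal-subsampling step, which deduplicates bursts rather than testing single detections, is omitted.

```python
def filter_pipeline(detections, stages):
    """Keep only detections that pass every (name, predicate) stage in order."""
    kept = list(detections)
    for _name, predicate in stages:
        kept = [d for d in kept if predicate(d)]
    return kept

# Illustrative stages mirroring the pipeline: species identification,
# viewpoint estimation and quality evaluation. Field names and the
# quality threshold are hypothetical.
stages = [
    ("species", lambda d: d["species"] == "grevys_zebra"),
    ("viewpoint", lambda d: d["viewpoint"] in {"left", "right"}),
    ("quality", lambda d: d["quality"] >= 0.5),
]

detections = [
    {"species": "grevys_zebra", "viewpoint": "left", "quality": 0.9},
    {"species": "plains_zebra", "viewpoint": "left", "quality": 0.8},
    {"species": "grevys_zebra", "viewpoint": "rear", "quality": 0.7},
    {"species": "grevys_zebra", "viewpoint": "right", "quality": 0.2},
]
usable = filter_pipeline(detections, stages)  # only the first detection survives
```

Only the crops that survive all stages are handed to the LCA re-ID step, which keeps the number of human same-vs-different decisions small.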
Our method processed images taken during GGR-16 and GGR-18 in Meru County, Kenya, into 4142 highly comparable annotations, requiring only 120 contrastive same-vs-different-individual decisions from a human reviewer to produce a population estimate of 349 individuals (within 4.6