{"title":"Low-Resolution Object Recognition with Cross-Resolution Relational Contrastive Distillation","authors":"Kangkai Zhang, Shiming Ge, Ruixin Shi, Dan Zeng","doi":"arxiv-2409.02555","DOIUrl":null,"url":null,"abstract":"Recognizing objects in low-resolution images is a challenging task due to the\nlack of informative details. Recent studies have shown that knowledge\ndistillation approaches can effectively transfer knowledge from a\nhigh-resolution teacher model to a low-resolution student model by aligning\ncross-resolution representations. However, these approaches still face\nlimitations in adapting to the situation where the recognized objects exhibit\nsignificant representation discrepancies between training and testing images.\nIn this study, we propose a cross-resolution relational contrastive\ndistillation approach to facilitate low-resolution object recognition. Our\napproach enables the student model to mimic the behavior of a well-trained\nteacher model which delivers high accuracy in identifying high-resolution\nobjects. To extract sufficient knowledge, the student learning is supervised\nwith contrastive relational distillation loss, which preserves the similarities\nin various relational structures in contrastive representation space. In this\nmanner, the capability of recovering missing details of familiar low-resolution\nobjects can be effectively enhanced, leading to a better knowledge transfer.\nExtensive experiments on low-resolution object classification and\nlow-resolution face recognition clearly demonstrate the effectiveness and\nadaptability of our approach.","PeriodicalId":501480,"journal":{"name":"arXiv - CS - Multimedia","volume":"67 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.02555","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Recognizing objects in low-resolution images is challenging due to the lack of informative detail. Recent studies have shown that knowledge distillation can effectively transfer knowledge from a high-resolution teacher model to a low-resolution student model by aligning cross-resolution representations. However, these approaches still struggle to adapt when the recognized objects exhibit significant representation discrepancies between training and testing images. In this study, we propose a cross-resolution relational contrastive distillation approach to facilitate low-resolution object recognition. Our approach enables the student model to mimic the behavior of a well-trained teacher model that delivers high accuracy in identifying high-resolution objects. To extract sufficient knowledge, the student's learning is supervised with a contrastive relational distillation loss, which preserves the similarities among various relational structures in the contrastive representation space. In this manner, the student's capability to recover missing details of familiar low-resolution objects is effectively enhanced, leading to better knowledge transfer. Extensive experiments on low-resolution object classification and low-resolution face recognition demonstrate the effectiveness and adaptability of our approach.
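
The abstract does not spell out the loss, but a common way to realize relational contrastive distillation is to match batch-wise similarity structures between teacher and student embeddings. The PyTorch sketch below illustrates one such formulation; the function name, temperature value, and the cross-entropy-style relation matching are illustrative assumptions, not the paper's exact objective.

```python
# A minimal sketch of a batch-wise relational contrastive distillation loss,
# assuming the student mimics the teacher's pairwise relation distributions.
# Names, temperature, and the matching objective are hypothetical.
import torch
import torch.nn.functional as F

def relational_contrastive_distillation_loss(
    student_feats: torch.Tensor,  # (B, D) embeddings of low-resolution inputs
    teacher_feats: torch.Tensor,  # (B, D) embeddings of high-resolution inputs
    tau: float = 0.1,             # temperature for the relation distributions
) -> torch.Tensor:
    # L2-normalize so pairwise dot products are cosine similarities.
    zs = F.normalize(student_feats, dim=1)
    zt = F.normalize(teacher_feats, dim=1).detach()  # teacher stays frozen

    # Pairwise relation (similarity) matrices; row i holds sample i's
    # relations to every other sample in the batch.
    rel_s = zs @ zs.t() / tau
    rel_t = zt @ zt.t() / tau

    # Suppress trivial self-similarities with a large negative fill so they
    # receive (near-)zero probability under softmax.
    mask = torch.eye(zs.size(0), dtype=torch.bool, device=zs.device)
    rel_s = rel_s.masked_fill(mask, -1e9)
    rel_t = rel_t.masked_fill(mask, -1e9)

    # Cross-entropy between teacher and student relation distributions;
    # equivalent to KL divergence up to the (constant) teacher entropy.
    p_t = F.softmax(rel_t, dim=1)
    log_p_s = F.log_softmax(rel_s, dim=1)
    return -(p_t * log_p_s).sum(dim=1).mean()
```

In practice, a term like this would typically be combined with the task loss (e.g., total = L_cls + lambda * L_distill), with the high-resolution teacher frozen and the low-resolution student trained end to end.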