Guo-Sen Xie;Zheng Zhang;Huan Xiong;Ling Shao;Xuelong Li
{"title":"Towards Zero-Shot Learning: A Brief Review and an Attention-Based Embedding Network","authors":"Guo-Sen Xie;Zheng Zhang;Huan Xiong;Ling Shao;Xuelong Li","doi":"10.1109/TCSVT.2022.3208071","DOIUrl":null,"url":null,"abstract":"Zero-shot learning (ZSL), an emerging topic in recent years, targets at distinguishing unseen class images by taking images from seen classes for training the classifier. Existing works often build embeddings between global feature space and attribute space, which, however, neglect the treasure in image parts. Discrimination information is usually contained in the image parts, e.g., black and white striped area of a zebra is the key difference from a horse. As such, image parts can facilitate the transfer of knowledge among the seen and unseen categories. In this paper, we first conduct a brief review on ZSL with detailed descriptions of these methods. Next, to discover meaningful parts, we propose an end-to-end attention-based embedding network for ZSL, which contains two sub-streams: the attention part embedding (APE) stream, and the attention second-order embedding (ASE) stream. APE is used to discover multiple image parts based on attention. ASE is introduced for ensuring knowledge transfer stably by second-order collaboration. Furthermore, an adaptive thresholding strategy is proposed to suppress noise and redundant parts. Finally, a global branch is incorporated for the full use of global information. Experiments on four benchmarks demonstrate that our models achieve superior results under both ZSL and GZSL settings.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":null,"pages":null},"PeriodicalIF":8.3000,"publicationDate":"2022-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/9895459/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Cited by: 5
Abstract
Zero-shot learning (ZSL), an emerging topic in recent years, aims to recognize images of unseen classes while training the classifier only on images from seen classes. Existing works often build embeddings between the global feature space and the attribute space, which, however, neglects the rich information contained in image parts. Discriminative information is usually carried by image parts; e.g., the black-and-white striped area of a zebra is the key difference from a horse. As such, image parts can facilitate the transfer of knowledge between seen and unseen categories. In this paper, we first conduct a brief review of ZSL with detailed descriptions of representative methods. Next, to discover meaningful parts, we propose an end-to-end attention-based embedding network for ZSL, which contains two sub-streams: the attention part embedding (APE) stream and the attention second-order embedding (ASE) stream. APE discovers multiple image parts via attention, while ASE ensures stable knowledge transfer through second-order collaboration. Furthermore, an adaptive thresholding strategy is proposed to suppress noisy and redundant parts. Finally, a global branch is incorporated to make full use of global information. Experiments on four benchmarks demonstrate that our models achieve superior results under both the ZSL and generalized ZSL (GZSL) settings.
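To make the attention-based part idea concrete, the following is a minimal PyTorch sketch of what an attention part embedding stream with adaptive thresholding could look like. It is not the authors' implementation: the module name `APEStream`, the choice of sigmoid attention, the per-image mean as the adaptive threshold, and dimensions such as `num_parts=8` and `attr_dim=312` (the CUB attribute dimensionality) are all illustrative assumptions.

```python
# Hedged sketch of an attention part embedding (APE) stream, assuming a
# CNN backbone feature map as input. Names and the thresholding rule are
# illustrative assumptions, not the paper's exact method.
import torch
import torch.nn as nn

class APEStream(nn.Module):
    """Learns K part-attention maps, pools one feature vector per part,
    and suppresses weak parts with an adaptive threshold (here: parts
    whose attention mass falls below the per-image mean are zeroed)."""
    def __init__(self, in_channels=2048, num_parts=8, attr_dim=312):
        super().__init__()
        self.attn = nn.Conv2d(in_channels, num_parts, kernel_size=1)
        self.embed = nn.Linear(in_channels, attr_dim)  # parts -> attribute space

    def forward(self, feat):                       # feat: (B, C, H, W)
        B, C, H, W = feat.shape
        a = torch.sigmoid(self.attn(feat))         # (B, K, H, W) part attention
        a_flat = a.view(B, -1, H * W)              # (B, K, N)
        # Adaptive thresholding: drop parts with below-average attention mass
        # (one plausible reading of "suppress noisy and redundant parts").
        mass = a_flat.sum(-1)                      # (B, K)
        keep = (mass >= mass.mean(dim=1, keepdim=True)).float().unsqueeze(-1)
        a_flat = a_flat * keep
        f_flat = feat.view(B, C, H * W)            # (B, C, N)
        parts = torch.einsum('bkn,bcn->bkc', a_flat, f_flat)   # (B, K, C)
        parts = parts / (a_flat.sum(-1, keepdim=True) + 1e-6)  # weighted pooling
        return self.embed(parts)                   # (B, K, attr_dim)

# Usage: score per-part embeddings against class attribute vectors.
feat = torch.randn(4, 2048, 7, 7)                  # hypothetical backbone output
part_emb = APEStream()(feat)                       # (4, 8, 312)
class_attrs = torch.randn(200, 312)                # seen/unseen class semantics
scores = torch.einsum('bkd,cd->bc', part_emb, class_attrs) / part_emb.size(1)
```

In this reading, classification reduces to compatibility scores between part embeddings and class attribute vectors, so unseen classes can be scored at test time using only their semantic descriptions; the ASE stream and global branch described in the abstract would be additional scoring streams fused with this one.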
Journal Introduction
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.