SG-Shuffle: Multi-aspect Shuffle Transformer for Scene Graph Generation
Anh Duc Bui, S. Han, Josiah Poon
Applied informatics, vol. 1, pp. 87-101, published 2022-11-09
DOI: 10.48550/arXiv.2211.04773 (https://doi.org/10.48550/arXiv.2211.04773)
Citations: 1
Abstract
Scene Graph Generation (SGG) serves as a comprehensive representation of images for human understanding as well as visual understanding tasks. Due to the long-tail bias of object and predicate labels in the available annotated data, scene graphs generated by current methods can be biased toward common, non-informative relationship labels. Relationships can also be non-mutually exclusive: the same object pair can be described from multiple perspectives, such as geometric or semantic relationships, making it even more challenging to predict the most suitable relationship label. In this work, we propose the SG-Shuffle pipeline for scene graph generation with three components: 1) a Parallel Transformer Encoder, which learns to predict object relationships in a more exclusive manner by grouping relationship labels into groups of similar purpose; 2) a Shuffle Transformer, which learns to select the final relationship labels from the category-specific features generated in the previous step; and 3) a weighted cross-entropy (CE) loss, used to alleviate the training bias caused by the imbalanced dataset.
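The third component, the weighted CE loss, counteracts the long-tail predicate distribution by up-weighting rare labels. As a minimal sketch (the paper's exact weighting scheme is not given in the abstract; inverse label frequency is assumed here as a common choice):

```python
import numpy as np

def class_weights(labels, num_classes):
    """Inverse-frequency class weights, normalized to sum to num_classes.

    Rare predicate labels receive larger weights, so mistakes on tail
    classes contribute more to the loss than mistakes on head classes.
    """
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    counts[counts == 0] = 1.0  # avoid division by zero for unseen classes
    w = 1.0 / counts
    return w * num_classes / w.sum()

def weighted_ce(logits, labels, weights):
    """Weighted cross-entropy over a batch of logits with shape (N, C)."""
    z = logits - logits.max(axis=1, keepdims=True)      # numerically stable softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    per_sample = -log_probs[np.arange(len(labels)), labels]
    return (weights[labels] * per_sample).mean()
```

With a 3:1 label imbalance over two classes, the rare class gets three times the weight of the common one, which is the intended rebalancing effect.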