Tian Zheng, Wenhua Qian, Rencan Nie, Jinde Cao, Dan Xu
{"title":"图结构关注和图像标题的全局关注","authors":"Tian Zheng, Wenhua Qian, Rencan Nie, Jinde Cao, Dan Xu","doi":"10.1109/ICICIP53388.2021.9642211","DOIUrl":null,"url":null,"abstract":"Attention mechanism plays a significant role in the current encoder-decoder framework of image captioning. Nevertheless, many attention mechanisms only fuse textual feature and image feature once, failing to adequately integrate the feature between context and image. Furthermore, many image captioning networks based on scene graphs only consider the node information but ignore the structure, which is insufficient in grasping the spatial object relationship. To address the above problems, we propose structural attention and increased global attention. Two attentions select critical image features from image detail and global image. The increased global attention, focusing on global image features, enhances integration between text and image via fusing detailed image features into global attention. To better describe the relationship among image objects, our network allows for both the node information by content attention and the structure information by structural attention. Structural attention computes the similarity between the structure information of scene graph and local attention, building the image objects relationship differing from content attention. We evaluate the performance of our image captioning network in MS COCO and Visual Genome datasets. 
The results of the experiments show that our method achieves superior performance compared with the existing methods.","PeriodicalId":435799,"journal":{"name":"2021 11th International Conference on Intelligent Control and Information Processing (ICICIP)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Graph Structural Attention and Increased Global Attention for Image Captioning\",\"authors\":\"Tian Zheng, Wenhua Qian, Rencan Nie, Jinde Cao, Dan Xu\",\"doi\":\"10.1109/ICICIP53388.2021.9642211\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Attention mechanism plays a significant role in the current encoder-decoder framework of image captioning. Nevertheless, many attention mechanisms only fuse textual feature and image feature once, failing to adequately integrate the feature between context and image. Furthermore, many image captioning networks based on scene graphs only consider the node information but ignore the structure, which is insufficient in grasping the spatial object relationship. To address the above problems, we propose structural attention and increased global attention. Two attentions select critical image features from image detail and global image. The increased global attention, focusing on global image features, enhances integration between text and image via fusing detailed image features into global attention. To better describe the relationship among image objects, our network allows for both the node information by content attention and the structure information by structural attention. Structural attention computes the similarity between the structure information of scene graph and local attention, building the image objects relationship differing from content attention. We evaluate the performance of our image captioning network in MS COCO and Visual Genome datasets. 
The results of the experiments show that our method achieves superior performance compared with the existing methods.\",\"PeriodicalId\":435799,\"journal\":{\"name\":\"2021 11th International Conference on Intelligent Control and Information Processing (ICICIP)\",\"volume\":\"60 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 11th International Conference on Intelligent Control and Information Processing (ICICIP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICICIP53388.2021.9642211\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 11th International Conference on Intelligent Control and Information Processing (ICICIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICICIP53388.2021.9642211","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Graph Structural Attention and Increased Global Attention for Image Captioning
Attention mechanisms play a significant role in current encoder-decoder frameworks for image captioning. However, many attention mechanisms fuse textual and image features only once, failing to adequately integrate features between context and image. Furthermore, many image captioning networks based on scene graphs consider only node information and ignore graph structure, which is insufficient for capturing spatial relationships among objects. To address these problems, we propose structural attention and increased global attention, two attention mechanisms that select critical image features from image detail and from the global image, respectively. Increased global attention, which focuses on global image features, strengthens the integration between text and image by fusing detailed image features into the global attention. To better describe the relationships among image objects, our network accounts for both node information, via content attention, and structure information, via structural attention. Structural attention computes the similarity between the scene graph's structure information and local attention, modeling object relationships in a way that differs from content attention. We evaluate our image captioning network on the MS COCO and Visual Genome datasets. The experimental results show that our method achieves superior performance compared with existing methods.
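The fusion of detailed image features into a global attention step, as described in the abstract, might be sketched roughly as follows. This is a purely illustrative toy in NumPy, not the paper's actual formulation: all function and variable names are hypothetical, and the fusion here (mean-pooling detail features and adding them to global region features before scoring) is one simple way such an integration could look.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def fused_global_attention(global_feats, detail_feats, query):
    """Toy sketch (hypothetical names): score global image regions against
    a decoder query after fusing in pooled detailed features."""
    detail_ctx = detail_feats.mean(axis=0)   # pool detailed (object-level) features
    fused = global_feats + detail_ctx        # fuse detail into each global region
    scores = softmax(fused @ query)          # relevance of each region to the query
    return scores @ global_feats             # attended global image feature

rng = np.random.default_rng(0)
g = rng.normal(size=(5, 8))   # 5 global image regions, 8-dim features
d = rng.normal(size=(3, 8))   # 3 detailed object features
q = rng.normal(size=8)        # decoder hidden state acting as the query
ctx = fused_global_attention(g, d, q)
print(ctx.shape)
```

The attended vector has the same dimensionality as a single region feature and could then be fed to the caption decoder alongside the textual context.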