{"title":"BraIN: A Bidirectional Generative Adversarial Networks for image captions","authors":"Yuhui Wang, D. Cook","doi":"10.1145/3446132.3446406","DOIUrl":null,"url":null,"abstract":"Although progress has been made in image captioning, machine-generated captions and human-generated captions are still quite distinct. Machine-generated captions perform well based on automated metrics. However, they lack naturalness, an essential characteristic of human language, because they maximize the likelihood of training samples. We propose a novel model to generate more human-like captions than has been accomplished with prior methods. Our model includes an attention mechanism, a bidirectional language generation model, and a conditional generative adversarial network. Specifically, the attention mechanism captures image details by segmenting important information into smaller pieces. The bidirectional language generation model produces human-like sentences by considering multiple perspectives. Simultaneously, the conditional generative adversarial network increases sentence quality by comparing a set of captions. To evaluate the performance of our model, we compare human preferences for BraIN-generated captions with baseline methods. We also compare results with actual human-generated captions using automated metrics. Results show our model is capable of producing more human-like captions than baseline methods.","PeriodicalId":125388,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Algorithms, Computing and Artificial Intelligence","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 3rd International Conference on Algorithms, Computing and Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3446132.3446406","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Although progress has been made in image captioning, machine-generated captions remain quite distinct from human-generated ones. Machine-generated captions score well on automated metrics, but because they are trained to maximize the likelihood of training samples, they lack naturalness, an essential characteristic of human language. We propose a novel model that generates more human-like captions than prior methods. Our model combines an attention mechanism, a bidirectional language generation model, and a conditional generative adversarial network. Specifically, the attention mechanism captures image details by segmenting important information into smaller pieces. The bidirectional language generation model produces human-like sentences by considering the sentence from multiple perspectives. Simultaneously, the conditional generative adversarial network improves sentence quality by comparing sets of captions. To evaluate the model, we compare human preferences for BraIN-generated captions against captions from baseline methods, and we also compare the results with actual human-generated captions using automated metrics. The results show that our model produces more human-like captions than the baseline methods.
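To make the adversarial setup concrete, below is a minimal sketch of one plausible component, a conditional discriminator that judges whether a caption matches an image. This is not the authors' code: the class name, layer sizes, and fusion scheme are all assumptions for illustration. It reflects two ideas named in the abstract, reading the sentence bidirectionally and conditioning the real/fake judgment on the image.

```python
# Hypothetical sketch of a conditional caption discriminator (NOT the
# paper's implementation): a bidirectional LSTM reads a candidate
# caption, its summary is fused with a CNN image feature, and an MLP
# scores how "human-like" the (image, caption) pair is. All dimensions
# and names are illustrative assumptions.
import torch
import torch.nn as nn

class CaptionDiscriminator(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512, img_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Bidirectional LSTM: reads the caption left-to-right and
        # right-to-left, echoing the abstract's "multiple perspectives".
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.score = nn.Sequential(
            nn.Linear(2 * hidden_dim + img_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # real/fake logit, conditioned on the image
        )

    def forward(self, captions, img_feat):
        # captions: (batch, seq_len) token ids; img_feat: (batch, img_dim)
        emb = self.embed(captions)
        _, (h, _) = self.lstm(emb)
        # Concatenate the final forward and backward hidden states.
        sent = torch.cat([h[0], h[1]], dim=-1)
        return self.score(torch.cat([sent, img_feat], dim=-1))

# Usage: score 4 captions of length 12 against random image features.
disc = CaptionDiscriminator(vocab_size=10000)
logits = disc(torch.randint(0, 10000, (4, 12)), torch.randn(4, 2048))
print(logits.shape)  # torch.Size([4, 1])
```

In a GAN training loop, a discriminator of this shape would be shown human captions as positives and generator samples as negatives for the same image, pushing the generator toward captions that are hard to tell apart from human ones.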