{"title":"Feature Fusion Attention Visual Question Answering","authors":"Chunlin Wang, Jianyong Sun, Xiaolin Chen","doi":"10.1145/3318299.3318305","DOIUrl":null,"url":null,"abstract":"Visual Question Answering (VQA) is the multitask research field of computer vision and natural language processing and is one of the most intelligent applications among machine learning applications at present. It firstly analyzes and copes with the problem sentences to extract the core key words as well as then seeking out the answers from the figure. In our research, it extracts characteristic values from problem sentences and images by adopting the BI-LSTM and VGG_19 algorithms. Then, after integrating the values into new feature vectors, the paper correlates them into the attention through the attention mechanism and finally predicts the answers finally. Also, the VQA1.0 data set is adopted to train the model. After conducting the training, the accuracy of the test by using the test set reached up to 54.8%.","PeriodicalId":164987,"journal":{"name":"International Conference on Machine Learning and Computing","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Machine Learning and Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3318299.3318305","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Visual Question Answering (VQA) is a multitask research field spanning computer vision and natural language processing, and it is one of the most intelligent machine-learning applications at present. A VQA system first analyzes the question sentence to extract its core keywords, and then seeks the answer in the image. In our research, feature values are extracted from the question sentences and the images using the Bi-LSTM and VGG_19 models, respectively. These features are then fused into new feature vectors, weighted through an attention mechanism, and finally used to predict the answers. The model is trained on the VQA 1.0 dataset; after training, its accuracy on the test set reaches 54.8%.
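To make the described pipeline concrete, below is a minimal PyTorch sketch of this kind of architecture: a Bi-LSTM encodes the question, the VGG-19 convolutional trunk extracts image region features, and a soft attention mechanism fuses the two before an answer classifier. The layer sizes, vocabulary sizes, attention form, and the class name FeatureFusionAttentionVQA are illustrative assumptions, not the authors' exact model.

```python
# A minimal sketch of a Bi-LSTM + VGG-19 + attention-fusion VQA model.
# All hyperparameters (hidden sizes, vocab sizes, answer set) are assumed
# for illustration and are not taken from the paper.
import torch
import torch.nn as nn
import torchvision

class FeatureFusionAttentionVQA(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=300,
                 hidden_dim=512, num_answers=1000):  # hypothetical sizes
        super().__init__()
        # VGG-19 convolutional trunk: a 224x224 image -> 512 x 7 x 7 feature map
        self.cnn = torchvision.models.vgg19(weights=None).features
        # Bi-LSTM question encoder over embedded word indices
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.q_proj = nn.Linear(2 * hidden_dim, 512)  # match CNN channel dim
        # Attention: score each of the 49 image regions against the question
        self.att = nn.Linear(512 + 512, 1)
        # Classifier over the fused (question + attended image) vector
        self.classifier = nn.Linear(512 + 512, num_answers)

    def forward(self, image, question):
        # Image regions: (B, 512, 7, 7) -> (B, 49, 512)
        v = self.cnn(image).flatten(2).transpose(1, 2)
        # Question vector: final forward/backward Bi-LSTM states, projected
        _, (h, _) = self.lstm(self.embed(question))
        q = self.q_proj(torch.cat([h[0], h[1]], dim=1))         # (B, 512)
        # Soft attention over regions, then a weighted sum of region features
        q_tiled = q.unsqueeze(1).expand(-1, v.size(1), -1)      # (B, 49, 512)
        alpha = torch.softmax(self.att(torch.cat([v, q_tiled], dim=2)), dim=1)
        v_att = (alpha * v).sum(dim=1)                          # (B, 512)
        # Fuse the attended image features with the question vector, classify
        return self.classifier(torch.cat([q, v_att], dim=1))

model = FeatureFusionAttentionVQA()
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 1000])
```

The attention step is the fusion the title refers to: rather than pooling the whole image into one vector, the question representation scores each image region, so the answer classifier sees only the regions most relevant to the question.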