A Deep Learning-Based Bengali Visual Question Answering System
Mahamudul Hasan Rafi, Shifat Islam, S. M. Hasan Imtiaz Labib, SM Sajid Hasan, F. Shah, Sifat Ahmed
2022 25th International Conference on Computer and Information Technology (ICCIT), published 2022-12-17
DOI: 10.1109/ICCIT57492.2022.10055205
Visual Question Answering (VQA) is a challenging task in Artificial Intelligence (AI), in which an AI agent answers questions about the visual content of a given image. A VQA system must therefore reason jointly over the visual aspects of the image and the textual content of the question to predict the correct answer. Although there is a good deal of VQA research in English, the area remains largely unexplored for Bengali. To address this, we have constructed a Bengali VQA dataset by preparing human-annotated question-answer pairs for a small portion of the images from the VQA v2.0 dataset. To counter strong linguistic priors, which obscure the importance of precise visual information in visual question answering, we used real-life scenarios to build a balanced Bengali VQA dataset. This is the first human-annotated dataset of its kind in Bengali. We propose a Top-Down Attention-based approach and conduct several experiments to assess our model's performance.
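The abstract does not detail the model's architecture. For orientation only, the core idea of top-down attention in VQA is to let the encoded question weight a set of image-region features before answer prediction. The sketch below is a minimal NumPy illustration of that mechanism, not the authors' implementation; the feature dimensions, the dot-product scoring, and the function names are all illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def top_down_attention(region_feats, question_vec):
    """Question-guided attention over image regions (illustrative sketch).

    region_feats: (k, d) array of k image-region features
                  (e.g., object-detector outputs in real systems).
    question_vec: (d,) encoded question vector
                  (e.g., from an RNN/Transformer encoder).
    Returns the attended visual summary and the attention weights.
    """
    scores = region_feats @ question_vec   # relevance of each region to the question
    weights = softmax(scores)              # attention distribution over the k regions
    attended = weights @ region_feats      # question-weighted visual feature (d,)
    return attended, weights

# Toy example with random features (dimensions are arbitrary).
rng = np.random.default_rng(0)
k, d = 36, 8
regions = rng.normal(size=(k, d))
q = rng.normal(size=d)
attended, weights = top_down_attention(regions, q)
```

In a full VQA pipeline, the attended visual feature would be fused with the question encoding (e.g., by element-wise product or concatenation) and passed to a classifier over candidate answers.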