A Deep Learning-Based Bengali Visual Question Answering System

Mahamudul Hasan Rafi, Shifat Islam, S. M. Hasan Imtiaz Labib, SM Sajid Hasan, F. Shah, Sifat Ahmed
{"title":"基于深度学习的孟加拉语视觉问答系统","authors":"Mahamudul Hasan Rafi, Shifat Islam, S. M. Hasan Imtiaz Labib, SM Sajid Hasan, F. Shah, Sifat Ahmed","doi":"10.1109/ICCIT57492.2022.10055205","DOIUrl":null,"url":null,"abstract":"Visual Question Answering (VQA) is a challenging task in Artificial Intelligence (AI), where an AI agent answers questions regarding visual content based on images provided. Therefore, to implement a VQA system, a computer system requires complex reasoning over visual aspects of images and textual parts of the questions to anticipate the correct answer. Although there is a good deal of VQA research in English, Bengali still needs to thoroughly explore this area of artificial intelligence. To address this, we have constructed a Bengali VQA dataset by preparing human-annotated question-answers using a small portion of the images from the VQA v2.0 dataset. To overcome high linguistic priors that hide the importance of precise visual information in visual question answering, we have used real-life scenarios to construct a balanced Bengali VQA dataset. This is the first human-annotated dataset of this kind in Bengali. We have proposed a Top-Down Attention-based approach in this study and conducted several studies to assess our model’s performance.","PeriodicalId":255498,"journal":{"name":"2022 25th International Conference on Computer and Information Technology (ICCIT)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"A Deep Learning-Based Bengali Visual Question Answering System\",\"authors\":\"Mahamudul Hasan Rafi, Shifat Islam, S. M. Hasan Imtiaz Labib, SM Sajid Hasan, F. Shah, Sifat Ahmed\",\"doi\":\"10.1109/ICCIT57492.2022.10055205\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Visual Question Answering (VQA) is a challenging task in Artificial Intelligence (AI), where an AI agent answers questions regarding visual content based on images provided. Therefore, to implement a VQA system, a computer system requires complex reasoning over visual aspects of images and textual parts of the questions to anticipate the correct answer. Although there is a good deal of VQA research in English, Bengali still needs to thoroughly explore this area of artificial intelligence. To address this, we have constructed a Bengali VQA dataset by preparing human-annotated question-answers using a small portion of the images from the VQA v2.0 dataset. To overcome high linguistic priors that hide the importance of precise visual information in visual question answering, we have used real-life scenarios to construct a balanced Bengali VQA dataset. This is the first human-annotated dataset of this kind in Bengali. 
We have proposed a Top-Down Attention-based approach in this study and conducted several studies to assess our model’s performance.\",\"PeriodicalId\":255498,\"journal\":{\"name\":\"2022 25th International Conference on Computer and Information Technology (ICCIT)\",\"volume\":\"27 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 25th International Conference on Computer and Information Technology (ICCIT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCIT57492.2022.10055205\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 25th International Conference on Computer and Information Technology (ICCIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCIT57492.2022.10055205","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Visual Question Answering (VQA) is a challenging task in Artificial Intelligence (AI), where an AI agent answers questions regarding visual content based on images provided. Therefore, to implement a VQA system, a computer system requires complex reasoning over visual aspects of images and textual parts of the questions to anticipate the correct answer. Although there is a good deal of VQA research in English, Bengali still needs to thoroughly explore this area of artificial intelligence. To address this, we have constructed a Bengali VQA dataset by preparing human-annotated question-answers using a small portion of the images from the VQA v2.0 dataset. To overcome high linguistic priors that hide the importance of precise visual information in visual question answering, we have used real-life scenarios to construct a balanced Bengali VQA dataset. This is the first human-annotated dataset of this kind in Bengali. We have proposed a Top-Down Attention-based approach in this study and conducted several studies to assess our model’s performance.
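
The abstract does not detail the Top-Down Attention architecture, so the following is a minimal sketch of how such a VQA model is commonly structured: pre-extracted image region features are scored against an encoded question, the attended visual feature is fused with the question representation, and a classifier predicts the answer. All class names, dimensions, the GRU question encoder, and the element-wise fusion are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a top-down attention VQA model (assumed design, not the paper's code).
import torch
import torch.nn as nn

class TopDownAttentionVQA(nn.Module):
    def __init__(self, vocab_size, num_answers, q_dim=512, v_dim=2048, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 300, padding_idx=0)
        self.gru = nn.GRU(300, q_dim, batch_first=True)  # question encoder
        # Attention MLP: scores each image region conditioned on the question.
        self.att = nn.Sequential(
            nn.Linear(v_dim + q_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        self.v_proj = nn.Linear(v_dim, hidden)
        self.q_proj = nn.Linear(q_dim, hidden)
        self.classifier = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, num_answers)
        )

    def forward(self, regions, question):
        # regions: (B, K, v_dim) pre-extracted region features; question: (B, T) token ids
        _, q = self.gru(self.embed(question))                    # (1, B, q_dim)
        q = q.squeeze(0)                                         # (B, q_dim)
        q_exp = q.unsqueeze(1).expand(-1, regions.size(1), -1)   # broadcast over regions
        scores = self.att(torch.cat([regions, q_exp], dim=-1))   # (B, K, 1)
        weights = torch.softmax(scores, dim=1)                   # attention over regions
        v_att = (weights * regions).sum(dim=1)                   # (B, v_dim) attended feature
        joint = self.v_proj(v_att) * self.q_proj(q)              # element-wise fusion
        return self.classifier(joint)                            # answer logits
```

In this sketch the answer vocabulary would be built from the most frequent Bengali answers in the dataset, and the question tokenizer and region-feature extractor are left unspecified since the abstract does not name them.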