Optimizing MRC Tasks: Understanding and Resolving Ambiguities
Flewin Dsouza, Aditi Bodade, Hrugved Kolhe, Paresh Chaudhari, M. Madankar
2023 2nd International Conference on Paradigm Shifts in Communications Embedded Systems, Machine Learning and Signal Processing (PCEMS), 5 April 2023
DOI: 10.1109/PCEMS58491.2023.10136031
Abstract
The attention mechanism allows a model to focus flexibly on only those components of the input that contribute to the effective execution of the task at hand. Machine Reading Comprehension (MRC) is an artificial intelligence task in which machines must answer questions based on passages they are given. The primary purpose of this research is to answer questions drawn from the Stanford Question Answering Dataset (SQuAD), which consists of paragraphs together with questions and their corresponding answers. This study focuses on the implementation of various approaches that take advantage of the attention mechanism, and provides a thorough examination of emerging methods for producing word embeddings, feature extraction, attention mechanisms, and answer selection. Flaws and concerns regarding the models' fairness and trustworthiness are also noted.
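To make the role of attention concrete, the sketch below shows a minimal scaled dot-product attention computation in Python with NumPy. It is an illustrative example only, not the implementation used in the paper, and the toy query/key/value arrays are assumptions for demonstration.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Weight each value by how well its key matches the query,
        so the model attends only to input components relevant to the task."""
        d_k = K.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to every key
        # Softmax over the key dimension to get attention weights that sum to 1
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)
        return weights @ V, weights      # context vector = weighted sum of values

    # Toy example: one question token attending over three passage tokens.
    Q = np.random.randn(1, 4)   # query embedding
    K = np.random.randn(3, 4)   # key embeddings for passage tokens
    V = np.random.randn(3, 4)   # value embeddings for passage tokens
    context, attn = scaled_dot_product_attention(Q, K, V)
    print(attn)                 # attention weights over the passage tokens

In an MRC setting, the attention weights indicate which passage tokens the model treats as most relevant to the question, which is the basis for the answer-selection approaches surveyed in the paper.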