{"title":"ASMR: Aggregated Semantic Matching Retrieval Unleashing Commonsense Ability of LLM through Open-Ended Question Answering","authors":"Pei-Ying Lin, Erick Chandra, Jane Yung-jen Hsu","doi":"10.1609/aaaiss.v3i1.31195","DOIUrl":null,"url":null,"abstract":"Commonsense reasoning refers to the ability to make inferences, draw conclusions, and understand the world based on general knowledge and commonsense. Whether Large Language Models (LLMs) have commonsense reasoning ability remains a topic of debate among researchers and experts. When confronted with multiple-choice commonsense reasoning tasks, humans typically rely on their prior knowledge and commonsense to formulate a preliminary answer in mind. Subsequently, they compare this preliminary answer to the provided choices, and select the most likely choice as the final answer. We introduce Aggregated Semantic Matching Retrieval (ASMR) as a solution for multiple-choice commonsense reasoning tasks. To mimic the process of humans solving commonsense reasoning tasks with multiple choices, we leverage the capabilities of LLMs to first generate the preliminary possible answers through open-ended question which aids in enhancing the process of retrieving relevant answers to the question from the given choices. Our experiments demonstrate the effectiveness of ASMR on popular commonsense reasoning benchmark datasets, including CSQA, SIQA, and ARC (Easy and Challenge). ASMR achieves state-of-the-art (SOTA) performance with a peak of +15.3% accuracy improvement over the previous SOTA on SIQA dataset.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"30 13","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the AAAI Symposium Series","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1609/aaaiss.v3i1.31195","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Commonsense reasoning refers to the ability to make inferences, draw conclusions, and understand the world based on general knowledge and common sense. Whether Large Language Models (LLMs) possess commonsense reasoning ability remains a topic of debate among researchers and experts. When confronted with multiple-choice commonsense reasoning tasks, humans typically rely on their prior knowledge and common sense to formulate a preliminary answer in mind. They then compare this preliminary answer with the provided choices and select the most likely choice as the final answer. We introduce Aggregated Semantic Matching Retrieval (ASMR) as a solution for multiple-choice commonsense reasoning tasks. To mimic how humans solve multiple-choice commonsense reasoning tasks, we leverage LLMs to first generate preliminary possible answers through open-ended questioning, which enhances the retrieval of relevant answers to the question from the given choices. Our experiments demonstrate the effectiveness of ASMR on popular commonsense reasoning benchmark datasets, including CSQA, SIQA, and ARC (Easy and Challenge). ASMR achieves state-of-the-art (SOTA) performance, with a peak of +15.3% accuracy improvement over the previous SOTA on the SIQA dataset.
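
The abstract describes a two-stage pipeline: sample preliminary free-text answers from an LLM via open-ended prompting, then semantically match and aggregate them against the candidate choices. The sketch below illustrates that general idea only; the specific encoder ("all-MiniLM-L6-v2"), the generate_open_ended stub, and the mean-aggregated cosine-similarity scoring are illustrative assumptions, not the paper's exact method.

```python
# Minimal sketch of an ASMR-style answer-retrieval pipeline (assumptions noted above).
from sentence_transformers import SentenceTransformer, util

# Hypothetical choice of sentence encoder; the paper may use a different matcher.
encoder = SentenceTransformer("all-MiniLM-L6-v2")


def generate_open_ended(question: str, n_samples: int = 3) -> list[str]:
    """Placeholder: sample preliminary free-text answers from an LLM of your choice."""
    raise NotImplementedError("call your LLM here and return n_samples answer strings")


def asmr_answer(question: str, choices: list[str]) -> str:
    # 1. Ask the question open-endedly to obtain preliminary answers.
    preliminary = generate_open_ended(question)
    # 2. Embed both the preliminary answers and the candidate choices.
    prelim_emb = encoder.encode(preliminary, convert_to_tensor=True)
    choice_emb = encoder.encode(choices, convert_to_tensor=True)
    # 3. Semantic matching: cosine similarity between each preliminary answer
    #    and each choice, aggregated (here, by mean) per choice.
    scores = util.cos_sim(prelim_emb, choice_emb).mean(dim=0)
    # 4. Return the choice with the highest aggregated similarity score.
    return choices[int(scores.argmax())]
```

In this sketch, sampling several open-ended answers and averaging their similarities is one plausible reading of "aggregated" matching; other aggregation schemes (e.g., max or voting) would fit the same outline.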