Authors: Jihyung Lee, Gary Geunbae Lee
Knowledge-Based Systems, Volume 311, Article 113091 (published 2025-02-28; Epub 2025-02-05)
DOI: 10.1016/j.knosys.2025.113091
Impact Factor: 7.6 · JCR Q1 (Computer Science, Artificial Intelligence)
Structured reasoning and answer verification: Enhancing question answering system accuracy and explainability
The performance of question-answering (QA) models has significantly advanced, yet challenges remain in verifying the accuracy of generated answers and providing clear explanations of the reasoning behind them. In response, this study introduces a novel answer verification model that detects inaccuracies in QA system outputs and offers structured, multi-step explanations to enhance both understanding and reliability. We built an answer verification system consisting of a stepwise prover and two types of verifiers and tested the proposed system on the EntailmentBank dataset as well as the ARC, AQUA-RAT, and AR-LSAT datasets from the STREET benchmark. By correcting the answers generated by the T5-large and GPT-3.5 QA models and comparing the results before and after correction, we observed notable improvements in answer accuracy and explanation clarity. Specifically, the proposed model increased the exact match score of the T5-large model by 1.76% and that of GPT-3.5 by 3.53% on the EntailmentBank dataset. Additionally, to address potential data scarcity, the study proposes a data augmentation technique that employs large language models and multi-hop datasets to generate reasoning chains, thereby enriching the training data. Although the augmented data did not match the quality of the gold data, which is manually curated and verified by humans, our experiments demonstrated that combining gold data with augmented data resulted in better performance than using only a subset of the gold data.
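The abstract describes a verification system built from a stepwise prover and two kinds of verifiers that check a QA model's answer and flag it for correction. A minimal sketch of that control flow is below; all class and function names (`stepwise_prove`, `step_verifier`, `chain_verifier`, etc.) are illustrative assumptions, not the paper's actual components, and the toy checks stand in for the learned prover and verifier models.

```python
# Hypothetical sketch of a stepwise-prover + verifier pipeline in the spirit
# of the abstract. A "prover" builds a reasoning chain toward a candidate
# answer; a step-level verifier checks each step and a chain-level verifier
# checks the chain as a whole. Names and logic are illustrative only.
from dataclasses import dataclass, field


@dataclass
class ReasoningStep:
    premises: list        # supporting statements for this step
    conclusion: str       # what this step asserts


@dataclass
class VerificationResult:
    accepted: bool
    failed_steps: list = field(default_factory=list)  # indices of bad steps


def stepwise_prove(question: str, candidate_answer: str, facts: list) -> list:
    """Toy prover: restate each fact as a step, then conclude the answer."""
    steps = [ReasoningStep(premises=[f], conclusion=f) for f in facts]
    steps.append(ReasoningStep(premises=list(facts), conclusion=candidate_answer))
    return steps


def step_verifier(step: ReasoningStep) -> bool:
    """Toy step-level check: every premise and the conclusion must be non-empty."""
    return all(isinstance(p, str) and p for p in step.premises) and bool(step.conclusion)


def chain_verifier(steps: list, candidate_answer: str) -> bool:
    """Toy chain-level check: the final conclusion must equal the answer."""
    return bool(steps) and steps[-1].conclusion == candidate_answer


def verify_answer(question: str, candidate_answer: str, facts: list) -> VerificationResult:
    """Run the prover, then both verifiers; reject if any check fails."""
    steps = stepwise_prove(question, candidate_answer, facts)
    failed = [i for i, s in enumerate(steps) if not step_verifier(s)]
    ok = not failed and chain_verifier(steps, candidate_answer)
    return VerificationResult(accepted=ok, failed_steps=failed)
```

In the paper's actual system, a rejected answer would trigger correction of the QA model's output; here a rejection simply surfaces which steps failed, e.g. `verify_answer("q", "A", ["fact one", "fact two"]).accepted` is `True`, while passing an empty fact yields `accepted=False` with the offending step indices.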
Journal overview:
Knowledge-Based Systems is an international, interdisciplinary journal in artificial intelligence that publishes original, innovative, and creative research results. It focuses on systems built with knowledge-based and other artificial-intelligence techniques. The journal aims to support human prediction and decision-making through data science and computational techniques, to provide balanced coverage of theory and practical studies, and to encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.