{"title":"CauseJudger: Identifying the Cause with LLMs for Abductive Logical Reasoning","authors":"Jinwei He, Feng Lu","doi":"arxiv-2409.05559","DOIUrl":null,"url":null,"abstract":"Large language models (LLMs) have been utilized in solving diverse reasoning\ntasks, encompassing common sense, arithmetic and deduction tasks. However, with\ndifficulties of reversing thinking patterns and irrelevant premises, how to\ndetermine the authenticity of the cause in abductive logical reasoning remains\nunderexplored. Inspired by hypothesis and verification method and\nidentification of irrelevant information in human thinking process, we propose\na new framework for LLMs abductive logical reasoning called CauseJudger (CJ),\nwhich identifies the authenticity of possible cause by transforming thinking\nfrom reverse to forward and removing irrelevant information. In addition, we\nconstruct an abductive logical reasoning dataset for decision task called\nCauseLogics, which contains 200,000 tasks of varying reasoning lengths. Our\nexperiments show the efficiency of CJ with overall experiments and ablation\nexperiments as well as case studies on our dataset and reconstructed public\ndataset. Notably, CJ's implementation is efficient, requiring only two calls to\nLLM. Its impact is profound: when using gpt-3.5, CJ achieves a maximum\ncorrectness improvement of 41% compared to Zero-Shot-CoT. Moreover, with gpt-4,\nCJ attains an accuracy exceeding 90% across all datasets.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":"62 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.05559","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Large language models (LLMs) have been applied to diverse reasoning tasks, including common-sense, arithmetic, and deductive tasks. However, because of the difficulty of reversing thinking patterns and the presence of irrelevant premises, how to determine the authenticity of a cause in abductive logical reasoning remains underexplored. Inspired by the hypothesis-and-verification method and the identification of irrelevant information in the human thinking process, we propose a new framework for abductive logical reasoning with LLMs called CauseJudger (CJ), which judges the authenticity of a possible cause by transforming thinking from reverse to forward and removing irrelevant information. In addition, we construct an abductive logical reasoning dataset for the decision task, called CauseLogics, which contains 200,000 tasks of varying reasoning lengths. Our experiments demonstrate the effectiveness of CJ through overall experiments, ablation experiments, and case studies on our dataset and a reconstructed public dataset. Notably, CJ is efficient, requiring only two calls to the LLM. Its impact is substantial: with gpt-3.5, CJ achieves a maximum correctness improvement of 41% over Zero-Shot-CoT, and with gpt-4, CJ attains accuracy exceeding 90% across all datasets.
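The abstract gives only a high-level description of CJ, but the two-call structure it mentions suggests a simple pipeline: one LLM call to prune irrelevant premises, and one to verify the candidate cause by forward reasoning. The sketch below is a minimal illustration under that assumption; the `llm` callable, the prompt wording, the `cause_judger` function name, and the answer parsing are all hypothetical, not the paper's actual implementation.

```python
# A minimal sketch of a two-call CauseJudger-style pipeline, as suggested by
# the abstract. Only the two-stage structure (prune irrelevant premises, then
# verify the cause by reasoning forward) comes from the paper's summary; all
# prompts and parsing here are illustrative assumptions.

from typing import Callable, List

def cause_judger(
    llm: Callable[[str], str],   # hypothetical: maps a prompt to a completion
    premises: List[str],
    candidate_cause: str,
    observation: str,
) -> bool:
    # Call 1: ask the model to keep only the premises relevant to judging
    # whether the candidate cause explains the observation.
    filter_prompt = (
        "Keep only the premises relevant to deciding whether the cause "
        "explains the observation.\nPremises:\n" + "\n".join(premises) +
        f"\nCause: {candidate_cause}\nObservation: {observation}\n"
        "Relevant premises:"
    )
    relevant = llm(filter_prompt)

    # Call 2: reverse-to-forward transformation -- assume the candidate cause
    # holds, reason forward with the pruned premises, and check whether the
    # observation follows.
    verify_prompt = (
        f"Assume: {candidate_cause}\nPremises:\n{relevant}\n"
        f"Reason step by step. Does it follow that: {observation}? "
        "Answer True or False."
    )
    verdict = llm(verify_prompt)
    return "true" in verdict.lower()
```

Under this reading, CJ's cost is constant in the number of premises (two completions per task), and the forward-verification step is what lets an ordinary chain-of-thought-style model handle what is originally a reverse (abductive) query.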