ActionReasoningBench: Reasoning about Actions with and without Ramification Constraints
{"title":"行动推理平台(ActionReasoningBench):推理有拉姆约束和无拉姆约束的行动","authors":"Divij Handa, Pavel Dolin, Shrinidhi Kumbhar, Chitta Baral, Tran Cao Son","doi":"arxiv-2406.04046","DOIUrl":null,"url":null,"abstract":"Reasoning about actions and change (RAC) has historically driven the\ndevelopment of many early AI challenges, such as the frame problem, and many AI\ndisciplines, including non-monotonic and commonsense reasoning. The role of RAC\nremains important even now, particularly for tasks involving dynamic\nenvironments, interactive scenarios, and commonsense reasoning. Despite the\nprogress of Large Language Models (LLMs) in various AI domains, their\nperformance on RAC is underexplored. To address this gap, we introduce a new\nbenchmark, ActionReasoningBench, encompassing 13 domains and rigorously\nevaluating LLMs across eight different areas of RAC. These include - Object\nTracking, Fluent Tracking, State Tracking, Action Executability, Effects of\nActions, Numerical RAC, Hallucination Detection, and Composite Questions.\nFurthermore, we also investigate the indirect effect of actions due to\nramification constraints for every domain. Finally, we evaluate our benchmark\nusing open-sourced and commercial state-of-the-art LLMs, including GPT-4o,\nGemini-1.0-Pro, Llama2-7b-chat, Llama2-13b-chat, Llama3-8b-instruct,\nGemma-2b-instruct, and Gemma-7b-instruct. Our findings indicate that these\nmodels face significant challenges across all categories included in our\nbenchmark.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"33 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ActionReasoningBench: Reasoning about Actions with and without Ramification Constraints\",\"authors\":\"Divij Handa, Pavel Dolin, Shrinidhi Kumbhar, Chitta Baral, Tran Cao Son\",\"doi\":\"arxiv-2406.04046\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Reasoning about actions and change (RAC) has historically driven the\\ndevelopment of many early AI challenges, such as the frame problem, and many AI\\ndisciplines, including non-monotonic and commonsense reasoning. The role of RAC\\nremains important even now, particularly for tasks involving dynamic\\nenvironments, interactive scenarios, and commonsense reasoning. Despite the\\nprogress of Large Language Models (LLMs) in various AI domains, their\\nperformance on RAC is underexplored. To address this gap, we introduce a new\\nbenchmark, ActionReasoningBench, encompassing 13 domains and rigorously\\nevaluating LLMs across eight different areas of RAC. These include - Object\\nTracking, Fluent Tracking, State Tracking, Action Executability, Effects of\\nActions, Numerical RAC, Hallucination Detection, and Composite Questions.\\nFurthermore, we also investigate the indirect effect of actions due to\\nramification constraints for every domain. Finally, we evaluate our benchmark\\nusing open-sourced and commercial state-of-the-art LLMs, including GPT-4o,\\nGemini-1.0-Pro, Llama2-7b-chat, Llama2-13b-chat, Llama3-8b-instruct,\\nGemma-2b-instruct, and Gemma-7b-instruct. 
Our findings indicate that these\\nmodels face significant challenges across all categories included in our\\nbenchmark.\",\"PeriodicalId\":501024,\"journal\":{\"name\":\"arXiv - CS - Computational Complexity\",\"volume\":\"33 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-06-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computational Complexity\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2406.04046\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computational Complexity","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2406.04046","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Divij Handa, Pavel Dolin, Shrinidhi Kumbhar, Chitta Baral, Tran Cao Son
Reasoning about actions and change (RAC) has historically motivated many early AI challenges, such as the frame problem, and shaped many AI disciplines, including non-monotonic and commonsense reasoning. RAC remains important today, particularly for tasks involving dynamic environments, interactive scenarios, and commonsense reasoning. Despite the progress of Large Language Models (LLMs) in various AI domains, their performance on RAC remains underexplored. To address this gap, we introduce a new benchmark, ActionReasoningBench, which spans 13 domains and rigorously evaluates LLMs across eight areas of RAC: Object Tracking, Fluent Tracking, State Tracking, Action Executability, Effects of Actions, Numerical RAC, Hallucination Detection, and Composite Questions. Furthermore, for every domain we investigate the indirect effects of actions that arise from ramification constraints. Finally, we evaluate our benchmark using state-of-the-art open-source and commercial LLMs, including GPT-4o, Gemini-1.0-Pro, Llama2-7b-chat, Llama2-13b-chat, Llama3-8b-instruct, Gemma-2b-instruct, and Gemma-7b-instruct. Our findings indicate that these models face significant challenges across all categories in our benchmark.
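
To make the notion of a ramification constraint concrete, below is a minimal Python sketch of the underlying idea. The two-switch domain, the fluent names, and the functions are hypothetical illustrations, not the benchmark's actual domains or data format: an action's direct effect, combined with a state constraint, entails an additional indirect effect that no effect axiom of the action mentions, and reasoning tasks of this kind are what the ramification-constraint questions probe.

```python
# Hypothetical two-switch circuit domain (illustrative only, not from the paper).
# Fluents: switch_a_up, switch_b_up, light_on.
# Ramification (state) constraint: the light is on iff both switches are up.

def apply_direct_effects(state: dict, action: str) -> dict:
    """Apply only the action's direct effects to a copy of the state."""
    new_state = dict(state)
    if action == "toggle_switch_a":
        new_state["switch_a_up"] = not new_state["switch_a_up"]
    return new_state

def apply_ramifications(state: dict) -> dict:
    """Enforce the ramification constraint, deriving indirect effects."""
    new_state = dict(state)
    new_state["light_on"] = new_state["switch_a_up"] and new_state["switch_b_up"]
    return new_state

state = {"switch_a_up": False, "switch_b_up": True, "light_on": False}
state = apply_ramifications(apply_direct_effects(state, "toggle_switch_a"))

# Direct effect:   switch_a_up becomes True.
# Indirect effect: light_on becomes True via the constraint, even though
# no direct effect of toggle_switch_a mentions the light.
print(state)  # {'switch_a_up': True, 'switch_b_up': True, 'light_on': True}
```

A model answering a Fluent Tracking question in such a domain must infer the derived fluent (here, `light_on`) and not just the fluent the action changes directly; that is the extra inference step the ramification-constraint variants of each domain introduce.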