Cognitive LLMs: Towards Integrating Cognitive Architectures and Large Language Models for Manufacturing Decision-making

Siyu Wu, Alessandro Oltramari, Jonathan Francis, C. Lee Giles, Frank E. Ritter
{"title":"认知 LLMs:为制造决策整合认知架构和大型语言模型","authors":"Siyu Wu, Alessandro Oltramari, Jonathan Francis, C. Lee Giles, Frank E. Ritter","doi":"arxiv-2408.09176","DOIUrl":null,"url":null,"abstract":"Resolving the dichotomy between the human-like yet constrained reasoning\nprocesses of Cognitive Architectures and the broad but often noisy inference\nbehavior of Large Language Models (LLMs) remains a challenging but exciting\npursuit, for enabling reliable machine reasoning capabilities in production\nsystems. Because Cognitive Architectures are famously developed for the purpose\nof modeling the internal mechanisms of human cognitive decision-making at a\ncomputational level, new investigations consider the goal of informing LLMs\nwith the knowledge necessary for replicating such processes, e.g., guided\nperception, memory, goal-setting, and action. Previous approaches that use LLMs\nfor grounded decision-making struggle with complex reasoning tasks that require\nslower, deliberate cognition over fast and intuitive inference -- reporting\nissues related to the lack of sufficient grounding, as in hallucination. To\nresolve these challenges, we introduce LLM-ACTR, a novel neuro-symbolic\narchitecture that provides human-aligned and versatile decision-making by\nintegrating the ACT-R Cognitive Architecture with LLMs. Our framework extracts\nand embeds knowledge of ACT-R's internal decision-making process as latent\nneural representations, injects this information into trainable LLM adapter\nlayers, and fine-tunes the LLMs for downstream prediction. Our experiments on\nnovel Design for Manufacturing tasks show both improved task performance as\nwell as improved grounded decision-making capability of our approach, compared\nto LLM-only baselines that leverage chain-of-thought reasoning strategies.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"322 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cognitive LLMs: Towards Integrating Cognitive Architectures and Large Language Models for Manufacturing Decision-making\",\"authors\":\"Siyu Wu, Alessandro Oltramari, Jonathan Francis, C. Lee Giles, Frank E. Ritter\",\"doi\":\"arxiv-2408.09176\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Resolving the dichotomy between the human-like yet constrained reasoning\\nprocesses of Cognitive Architectures and the broad but often noisy inference\\nbehavior of Large Language Models (LLMs) remains a challenging but exciting\\npursuit, for enabling reliable machine reasoning capabilities in production\\nsystems. Because Cognitive Architectures are famously developed for the purpose\\nof modeling the internal mechanisms of human cognitive decision-making at a\\ncomputational level, new investigations consider the goal of informing LLMs\\nwith the knowledge necessary for replicating such processes, e.g., guided\\nperception, memory, goal-setting, and action. Previous approaches that use LLMs\\nfor grounded decision-making struggle with complex reasoning tasks that require\\nslower, deliberate cognition over fast and intuitive inference -- reporting\\nissues related to the lack of sufficient grounding, as in hallucination. 
To\\nresolve these challenges, we introduce LLM-ACTR, a novel neuro-symbolic\\narchitecture that provides human-aligned and versatile decision-making by\\nintegrating the ACT-R Cognitive Architecture with LLMs. Our framework extracts\\nand embeds knowledge of ACT-R's internal decision-making process as latent\\nneural representations, injects this information into trainable LLM adapter\\nlayers, and fine-tunes the LLMs for downstream prediction. Our experiments on\\nnovel Design for Manufacturing tasks show both improved task performance as\\nwell as improved grounded decision-making capability of our approach, compared\\nto LLM-only baselines that leverage chain-of-thought reasoning strategies.\",\"PeriodicalId\":501033,\"journal\":{\"name\":\"arXiv - CS - Symbolic Computation\",\"volume\":\"322 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Symbolic Computation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.09176\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Symbolic Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.09176","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Resolving the dichotomy between the human-like yet constrained reasoning processes of Cognitive Architectures and the broad but often noisy inference behavior of Large Language Models (LLMs) remains a challenging but exciting pursuit, for enabling reliable machine reasoning capabilities in production systems. Because Cognitive Architectures are famously developed for the purpose of modeling the internal mechanisms of human cognitive decision-making at a computational level, new investigations consider the goal of informing LLMs with the knowledge necessary for replicating such processes, e.g., guided perception, memory, goal-setting, and action. Previous approaches that use LLMs for grounded decision-making struggle with complex reasoning tasks that require slower, deliberate cognition over fast and intuitive inference -- reporting issues related to the lack of sufficient grounding, as in hallucination. To resolve these challenges, we introduce LLM-ACTR, a novel neuro-symbolic architecture that provides human-aligned and versatile decision-making by integrating the ACT-R Cognitive Architecture with LLMs. Our framework extracts and embeds knowledge of ACT-R's internal decision-making process as latent neural representations, injects this information into trainable LLM adapter layers, and fine-tunes the LLMs for downstream prediction. Our experiments on novel Design for Manufacturing tasks show both improved task performance as well as improved grounded decision-making capability of our approach, compared to LLM-only baselines that leverage chain-of-thought reasoning strategies.
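The abstract's description of injecting ACT-R-derived latent representations into trainable LLM adapter layers can be illustrated with a minimal sketch. The PyTorch snippet below is a hypothetical illustration only, not the paper's implementation: the `CognitiveAdapter` class, the bottleneck size, and the fusion-by-concatenation scheme are assumptions made for the example. It shows one plausible way to fuse a per-example cognitive latent vector with (otherwise frozen) LLM hidden states through a small trainable residual bottleneck before a downstream prediction head.

```python
# Hypothetical sketch (not the authors' code): fuse a latent vector that encodes
# a cognitive model's decision trace with LLM hidden states via a small trainable
# adapter, so only the adapter (and a task head) need fine-tuning downstream.
import torch
import torch.nn as nn


class CognitiveAdapter(nn.Module):
    """Bottleneck adapter that mixes a cognitive-model embedding into hidden states."""

    def __init__(self, hidden_dim: int, cog_dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim + cog_dim, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_dim)
        self.act = nn.GELU()

    def forward(self, hidden: torch.Tensor, cog_latent: torch.Tensor) -> torch.Tensor:
        # Broadcast the per-example cognitive latent across the sequence length,
        # concatenate it with the LLM hidden states, and apply a residual bottleneck.
        cog = cog_latent.unsqueeze(1).expand(-1, hidden.size(1), -1)
        fused = torch.cat([hidden, cog], dim=-1)
        return hidden + self.up(self.act(self.down(fused)))


# Toy usage: batch of 2 sequences of length 16, hidden size 768, 32-dim cognitive latent.
adapter = CognitiveAdapter(hidden_dim=768, cog_dim=32)
hidden_states = torch.randn(2, 16, 768)    # stand-in for frozen LLM activations
cog_latents = torch.randn(2, 32)           # stand-in for ACT-R-derived embeddings
out = adapter(hidden_states, cog_latents)  # shape: (2, 16, 768)
print(out.shape)
```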