Agent Workflow Memory

Zora Zhiruo Wang, Jiayuan Mao, Daniel Fried, Graham Neubig
{"title":"Agent Workflow Memory","authors":"Zora Zhiruo Wang, Jiayuan Mao, Daniel Fried, Graham Neubig","doi":"arxiv-2409.07429","DOIUrl":null,"url":null,"abstract":"Despite the potential of language model-based agents to solve real-world\ntasks such as web navigation, current methods still struggle with long-horizon\ntasks with complex action trajectories. In contrast, humans can flexibly solve\ncomplex tasks by learning reusable task workflows from past experiences and\nusing them to guide future actions. To build agents that can similarly benefit\nfrom this process, we introduce Agent Workflow Memory (AWM), a method for\ninducing commonly reused routines, i.e., workflows, and selectively providing\nworkflows to the agent to guide subsequent generations. AWM flexibly applies to\nboth offline and online scenarios, where agents induce workflows from training\nexamples beforehand or from test queries on the fly. We experiment on two major\nweb navigation benchmarks -- Mind2Web and WebArena -- that collectively cover\n1000+ tasks from 200+ domains across travel, shopping, and social media, among\nothers. AWM substantially improves the baseline results by 24.6% and 51.1%\nrelative success rate on Mind2Web and WebArena while reducing the number of\nsteps taken to solve WebArena tasks successfully. Furthermore, online AWM\nrobustly generalizes in cross-task, website, and domain evaluations, surpassing\nbaselines from 8.9 to 14.0 absolute points as train-test task distribution gaps\nwiden.","PeriodicalId":501030,"journal":{"name":"arXiv - CS - Computation and Language","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computation and Language","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07429","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Despite the potential of language model-based agents to solve real-world tasks such as web navigation, current methods still struggle with long-horizon tasks with complex action trajectories. In contrast, humans can flexibly solve complex tasks by learning reusable task workflows from past experiences and using them to guide future actions. To build agents that can similarly benefit from this process, we introduce Agent Workflow Memory (AWM), a method for inducing commonly reused routines, i.e., workflows, and selectively providing workflows to the agent to guide subsequent generations. AWM flexibly applies to both offline and online scenarios, where agents induce workflows from training examples beforehand or from test queries on the fly. We experiment on two major web navigation benchmarks -- Mind2Web and WebArena -- that collectively cover 1000+ tasks from 200+ domains across travel, shopping, and social media, among others. AWM substantially improves the baseline results by 24.6% and 51.1% relative success rate on Mind2Web and WebArena while reducing the number of steps taken to solve WebArena tasks successfully. Furthermore, online AWM robustly generalizes in cross-task, website, and domain evaluations, surpassing baselines from 8.9 to 14.0 absolute points as train-test task distribution gaps widen.
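The abstract describes AWM only at a high level. The sketch below is a hypothetical Python illustration, not the authors' released code: it shows how the two pieces fit together, a memory that induces reusable workflows from past action trajectories and a retrieval step that selectively provides relevant workflows to the agent before it acts. All names here (Workflow, WorkflowMemory, run_agent, solve_online) are invented for this example, and the frequency-based induction is a simple stand-in for the LM-based workflow induction used in the paper.

```python
# Hypothetical sketch of the AWM loop; not the authors' implementation.
from dataclasses import dataclass, field


@dataclass
class Workflow:
    """A reusable routine induced from past action trajectories."""
    description: str
    steps: tuple[str, ...]


@dataclass
class WorkflowMemory:
    workflows: list[Workflow] = field(default_factory=list)

    def induce(self, trajectories: list[list[str]]) -> None:
        # Stand-in for the paper's LM-based induction: keep any action
        # subsequence (length >= 2) that occurs in more than one trajectory.
        counts: dict[tuple[str, ...], int] = {}
        for traj in trajectories:
            for i in range(len(traj)):
                for j in range(i + 2, len(traj) + 1):
                    key = tuple(traj[i:j])
                    counts[key] = counts.get(key, 0) + 1
        for steps, n in counts.items():
            if n > 1 and all(w.steps != steps for w in self.workflows):
                self.workflows.append(Workflow(" then ".join(steps), steps))

    def retrieve(self, query: str, k: int = 3) -> list[Workflow]:
        # Naive keyword overlap; selectively provides the top-k workflows.
        tokens = query.lower().split()
        scored = sorted(
            ((sum(t in w.description.lower() for t in tokens), w)
             for w in self.workflows),
            key=lambda pair: pair[0],
            reverse=True,
        )
        return [w for score, w in scored[:k] if score > 0]


def run_agent(task: str, guidance: list[Workflow]) -> list[str]:
    # Placeholder for the web agent; a real agent would condition its actions
    # on the task and on the retrieved workflows in its prompt.
    return ["search site", "click first result", "read answer"]


def solve_online(tasks: list[str], memory: WorkflowMemory) -> list[list[str]]:
    # Online AWM: retrieve relevant workflows before acting on each test query,
    # then re-induce workflows from all trajectories produced so far.
    history: list[list[str]] = []
    for task in tasks:
        guidance = memory.retrieve(task)
        trajectory = run_agent(task, guidance)
        history.append(trajectory)
        memory.induce(history)
    return history


if __name__ == "__main__":
    memory = WorkflowMemory()
    solve_online(["find a bus route", "find a train route"], memory)
    print(f"{len(memory.workflows)} workflows in memory")
```

In the offline setting the abstract also mentions, induce would instead run once over training examples before any test task is attempted, and only retrieve would be used at test time.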