Interface agents as surrogate users

R. Amant
{"title":"Interface agents as surrogate users","authors":"R. Amant","doi":"10.1145/337897.337998","DOIUrl":null,"url":null,"abstract":"Interactive applications extend human abilities along an enormous number of dimensions. What can we learn from agents that use these same software tools? Artificial intelligence and human-computer interaction have close historical ties, going back to Newell and Simon's work on human problem solving [3], and farther. We see the influence of AI on HCI, for example, in the notion of the user as a rational problem-solving agent and task analysis concepts that match the goals and actions of planning representations. Conversely, user interface issues have given AI developers challenging problems in realistic environments, leading to results in automatic interface adaptation, multi-modal interaction, interface generation, and agent interaction, among a wide range of other areas [2]. The relationship is natural. Both fields are concerned with facilitating the interaction of agents with their environments-humans in software environments, artificial agents in a variety of problem-solving domains. In a sense, agent developers and user interface designers see opposite sides of the same problem. As AI developers, we build better and better agents, driven by the complexity of an environment or problem domain we are given. As user interface designers, in contrast, we canÕt simply build better human beings. Fortunately, the environment of the user interface is not fixed; we can tailor it to the capabilities and limitations of its human users. Though the means differ, the goal in both cases is effective interaction between the agent and the environment. Research and development toward intelligent interface agents can contribute to this goal in many ways. This article examines two approaches. The first is a modeling approach, in which we treat interface agents as surrogate users. Building engineering models of a user, or programmable user models [5], lets us predict some aspects of the usability of an interface through analysis or simulation, rather than testing with real users, a more expensive and time-consuming process. In the second approach, which has a more traditional agents flavor, we treat the user interface as a tool-using environment for an autonomous agent. The tools provided by a general-purpose software environment significantly extend the capabilities of a software agent, ideally to approach the competence we would ordinarily expect of human users.","PeriodicalId":8272,"journal":{"name":"Appl. Intell.","volume":"16 1","pages":"28-38"},"PeriodicalIF":0.0000,"publicationDate":"2000-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"22","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Appl. Intell.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/337897.337998","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 22

Abstract

Interactive applications extend human abilities along an enormous number of dimensions. What can we learn from agents that use these same software tools? Artificial intelligence and human-computer interaction have close historical ties, going back to Newell and Simon's work on human problem solving [3], and farther. We see the influence of AI on HCI, for example, in the notion of the user as a rational problem-solving agent and in task analysis concepts that match the goals and actions of planning representations. Conversely, user interface issues have given AI developers challenging problems in realistic environments, leading to results in automatic interface adaptation, multi-modal interaction, interface generation, and agent interaction, among a wide range of other areas [2]. The relationship is natural. Both fields are concerned with facilitating the interaction of agents with their environments: humans in software environments, artificial agents in a variety of problem-solving domains. In a sense, agent developers and user interface designers see opposite sides of the same problem. As AI developers, we build better and better agents, driven by the complexity of an environment or problem domain we are given. As user interface designers, in contrast, we can't simply build better human beings. Fortunately, the environment of the user interface is not fixed; we can tailor it to the capabilities and limitations of its human users. Though the means differ, the goal in both cases is effective interaction between the agent and the environment. Research and development toward intelligent interface agents can contribute to this goal in many ways. This article examines two approaches. The first is a modeling approach, in which we treat interface agents as surrogate users. Building engineering models of a user, or programmable user models [5], lets us predict some aspects of the usability of an interface through analysis or simulation, rather than testing with real users, a more expensive and time-consuming process. In the second approach, which has a more traditional agents flavor, we treat the user interface as a tool-using environment for an autonomous agent. The tools provided by a general-purpose software environment significantly extend the capabilities of a software agent, ideally to approach the competence we would ordinarily expect of human users.
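To make the first, modeling approach concrete, the following is a minimal sketch (not taken from the article, and not the author's system) of a programmable user model acting as a surrogate user: a simulated user drives an abstract interface model with primitive actions, and a crude usability estimate falls out of the simulation instead of a test with real users. All class and method names are illustrative assumptions.

```python
# Minimal sketch of a "surrogate user" simulation (illustrative only).
# A programmable user model executes a task against an abstract interface
# model; the number of primitive actions serves as a rough usability metric.
from dataclasses import dataclass, field


@dataclass
class InterfaceModel:
    """Abstract model of a dialog: named widgets the surrogate user can act on."""
    widgets: dict = field(default_factory=dict)  # widget name -> current value

    def set_value(self, widget: str, value) -> None:
        self.widgets[widget] = value

    def click(self, widget: str) -> None:
        self.widgets[widget] = "clicked"


@dataclass
class SurrogateUser:
    """Programmable user model: turns a task into primitive interface actions."""
    interface: InterfaceModel
    action_log: list = field(default_factory=list)

    def fill_field(self, widget: str, value) -> None:
        self.interface.set_value(widget, value)
        self.action_log.append(("type", widget, value))

    def press(self, widget: str) -> None:
        self.interface.click(widget)
        self.action_log.append(("click", widget))

    def perform_task(self, form_values: dict, submit_widget: str) -> int:
        """Simulate filling out a form and submitting it; return the number
        of primitive actions as a crude usability estimate."""
        for widget, value in form_values.items():
            self.fill_field(widget, value)
        self.press(submit_widget)
        return len(self.action_log)


if __name__ == "__main__":
    ui = InterfaceModel()
    user = SurrogateUser(ui)
    actions = user.perform_task(
        {"name": "Ada", "email": "ada@example.org"}, submit_widget="ok_button"
    )
    print(f"Simulated task completed in {actions} primitive actions")
```

A real programmable user model would replace the simple action count with a cognitively grounded estimate (for example, per-action time predictions), but the structure above shows how usability questions can be asked of a simulation rather than of human participants.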