{"title":"Interface agents as surrogate users","authors":"R. Amant","doi":"10.1145/337897.337998","DOIUrl":null,"url":null,"abstract":"Interactive applications extend human abilities along an enormous number of dimensions. What can we learn from agents that use these same software tools? Artificial intelligence and human-computer interaction have close historical ties, going back to Newell and Simon's work on human problem solving [3], and farther. We see the influence of AI on HCI, for example, in the notion of the user as a rational problem-solving agent and task analysis concepts that match the goals and actions of planning representations. Conversely, user interface issues have given AI developers challenging problems in realistic environments, leading to results in automatic interface adaptation, multi-modal interaction, interface generation, and agent interaction, among a wide range of other areas [2]. The relationship is natural. Both fields are concerned with facilitating the interaction of agents with their environments-humans in software environments, artificial agents in a variety of problem-solving domains. In a sense, agent developers and user interface designers see opposite sides of the same problem. As AI developers, we build better and better agents, driven by the complexity of an environment or problem domain we are given. As user interface designers, in contrast, we canÕt simply build better human beings. Fortunately, the environment of the user interface is not fixed; we can tailor it to the capabilities and limitations of its human users. Though the means differ, the goal in both cases is effective interaction between the agent and the environment. Research and development toward intelligent interface agents can contribute to this goal in many ways. This article examines two approaches. The first is a modeling approach, in which we treat interface agents as surrogate users. Building engineering models of a user, or programmable user models [5], lets us predict some aspects of the usability of an interface through analysis or simulation, rather than testing with real users, a more expensive and time-consuming process. In the second approach, which has a more traditional agents flavor, we treat the user interface as a tool-using environment for an autonomous agent. The tools provided by a general-purpose software environment significantly extend the capabilities of a software agent, ideally to approach the competence we would ordinarily expect of human users.","PeriodicalId":8272,"journal":{"name":"Appl. Intell.","volume":"16 1","pages":"28-38"},"PeriodicalIF":0.0000,"publicationDate":"2000-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"22","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Appl. Intell.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/337897.337998","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 22
Abstract
Interactive applications extend human abilities along an enormous number of dimensions. What can we learn from agents that use these same software tools? Artificial intelligence and human-computer interaction have close historical ties, going back to Newell and Simon's work on human problem solving [3], and farther. We see the influence of AI on HCI, for example, in the notion of the user as a rational problem-solving agent and in task analysis concepts that match the goals and actions of planning representations. Conversely, user interface issues have given AI developers challenging problems in realistic environments, leading to results in automatic interface adaptation, multi-modal interaction, interface generation, and agent interaction, among a wide range of other areas [2]. The relationship is natural. Both fields are concerned with facilitating the interaction of agents with their environments: humans in software environments, artificial agents in a variety of problem-solving domains. In a sense, agent developers and user interface designers see opposite sides of the same problem. As AI developers, we build better and better agents, driven by the complexity of the environment or problem domain we are given. As user interface designers, in contrast, we can't simply build better human beings. Fortunately, the environment of the user interface is not fixed; we can tailor it to the capabilities and limitations of its human users. Though the means differ, the goal in both cases is effective interaction between the agent and the environment. Research and development toward intelligent interface agents can contribute to this goal in many ways. This article examines two approaches. The first is a modeling approach, in which we treat interface agents as surrogate users. Building engineering models of a user, or programmable user models [5], lets us predict some aspects of the usability of an interface through analysis or simulation, rather than testing with real users, a more expensive and time-consuming process. In the second approach, which has a more traditional agents flavor, we treat the user interface as a tool-using environment for an autonomous agent. The tools provided by a general-purpose software environment significantly extend the capabilities of a software agent, ideally to approach the competence we would ordinarily expect of human users.
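
As a loose illustration of the modeling approach, the sketch below shows how an engineering model of a user can compare two candidate interface designs by simulation rather than by testing with real users. It is not code from the article: the operator codes, timing constants, and example designs are assumptions in the spirit of keystroke-level models of skilled performance.

```python
# Illustrative sketch only (not from the article): a tiny "surrogate user"
# that predicts task completion time from a sequence of low-level operators,
# in the style of a keystroke-level model. The timing constants below are
# commonly cited approximations and are assumptions for this example.

OPERATOR_TIMES = {
    "K": 0.28,  # press a key or button (average typist)
    "P": 1.10,  # point with the mouse to a target on the display
    "H": 0.40,  # move hands between keyboard and mouse ("homing")
    "M": 1.35,  # mental preparation before an action
}

def predict_task_time(operators):
    """Sum operator times for a sequence of codes such as ['M', 'P', 'K']."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Two hypothetical designs for "save the current document":
menu_design = ["M", "P", "K", "P", "K"]      # think, point to File, click, point to Save, click
shortcut_design = ["M", "H", "K", "K"]       # think, home to keyboard, press Ctrl, press S

for name, seq in [("menu", menu_design), ("keyboard shortcut", shortcut_design)]:
    print(f"{name}: predicted time {predict_task_time(seq):.2f} s")
```

Even a simulation this crude lets a designer rank alternatives before committing to user testing, which is the role the article assigns to programmable user models; richer surrogate users would also model perception, memory limits, and error.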