{"title":"人类行为的建模、复制和预测:一项调查","authors":"Andrew Fuchs, Andrea Passarella, Marco Conti","doi":"https://dl.acm.org/doi/10.1145/3580492","DOIUrl":null,"url":null,"abstract":"<p>Given the popular presupposition of human reasoning as the standard for learning and decision making, there have been significant efforts and a growing trend in research to replicate these innate human abilities in artificial systems. As such, topics including Game Theory, Theory of Mind, and Machine Learning, among others, integrate concepts that are assumed components of human reasoning. These serve as techniques to replicate and understand the behaviors of humans. In addition, next-generation autonomous and adaptive systems will largely include AI agents and humans working together as teams. To make this possible, autonomous agents will require the ability to embed practical models of human behavior, allowing them not only to replicate human models as a technique to “learn” but also to understand the actions of users and anticipate their behavior, so as to truly operate in symbiosis with them. The main objective of this article is to provide a succinct yet systematic review of important approaches in two areas dealing with quantitative models of human behaviors. Specifically, we focus on (i) techniques that learn a model or policy of behavior through exploration and feedback, such as Reinforcement Learning, and (ii) directly model mechanisms of human reasoning, such as beliefs and bias, without necessarily learning via trial and error.</p>","PeriodicalId":50919,"journal":{"name":"ACM Transactions on Autonomous and Adaptive Systems","volume":"6 6","pages":""},"PeriodicalIF":2.2000,"publicationDate":"2023-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Modeling, Replicating, and Predicting Human Behavior: A Survey\",\"authors\":\"Andrew Fuchs, Andrea Passarella, Marco Conti\",\"doi\":\"https://dl.acm.org/doi/10.1145/3580492\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Given the popular presupposition of human reasoning as the standard for learning and decision making, there have been significant efforts and a growing trend in research to replicate these innate human abilities in artificial systems. As such, topics including Game Theory, Theory of Mind, and Machine Learning, among others, integrate concepts that are assumed components of human reasoning. These serve as techniques to replicate and understand the behaviors of humans. In addition, next-generation autonomous and adaptive systems will largely include AI agents and humans working together as teams. To make this possible, autonomous agents will require the ability to embed practical models of human behavior, allowing them not only to replicate human models as a technique to “learn” but also to understand the actions of users and anticipate their behavior, so as to truly operate in symbiosis with them. The main objective of this article is to provide a succinct yet systematic review of important approaches in two areas dealing with quantitative models of human behaviors. 
Specifically, we focus on (i) techniques that learn a model or policy of behavior through exploration and feedback, such as Reinforcement Learning, and (ii) directly model mechanisms of human reasoning, such as beliefs and bias, without necessarily learning via trial and error.</p>\",\"PeriodicalId\":50919,\"journal\":{\"name\":\"ACM Transactions on Autonomous and Adaptive Systems\",\"volume\":\"6 6\",\"pages\":\"\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2023-05-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Autonomous and Adaptive Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/https://dl.acm.org/doi/10.1145/3580492\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Autonomous and Adaptive Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/https://dl.acm.org/doi/10.1145/3580492","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Modeling, Replicating, and Predicting Human Behavior: A Survey
Given the popular presupposition of human reasoning as the standard for learning and decision making, there have been significant efforts and a growing trend in research to replicate these innate human abilities in artificial systems. As such, topics including Game Theory, Theory of Mind, and Machine Learning, among others, integrate concepts that are assumed components of human reasoning. These serve as techniques to replicate and understand the behaviors of humans. In addition, next-generation autonomous and adaptive systems will largely include AI agents and humans working together as teams. To make this possible, autonomous agents will require the ability to embed practical models of human behavior, allowing them not only to replicate human models as a technique to “learn” but also to understand the actions of users and anticipate their behavior, so as to truly operate in symbiosis with them. The main objective of this article is to provide a succinct yet systematic review of important approaches in two areas dealing with quantitative models of human behaviors. Specifically, we focus on (i) techniques that learn a model or policy of behavior through exploration and feedback, such as Reinforcement Learning, and (ii) directly model mechanisms of human reasoning, such as beliefs and bias, without necessarily learning via trial and error.
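To make the two families of approaches concrete, the sketch below contrasts them on a toy problem: q_learning_policy learns a policy of behavior purely from exploration and reward feedback (a minimal tabular Q-learning loop, approach i), while noisy_rational_choice models the reasoning mechanism directly, with subjective beliefs and a softmax "rationality" parameter standing in for belief and bias (approach ii). The ChainWorld environment, the function names, and all parameter values are illustrative assumptions made for this sketch; they are not taken from the surveyed work.

```python
import math
import random
from collections import defaultdict


class ChainWorld:
    """Hypothetical 4-state chain: move "left"/"right"; reward 1.0 on reaching the last state."""

    n_states = 4

    def reset(self):
        return 0

    def actions(self, state):
        return ["left", "right"]

    def step(self, state, action):
        next_state = min(state + 1, self.n_states - 1) if action == "right" else max(state - 1, 0)
        done = next_state == self.n_states - 1
        return next_state, (1.0 if done else 0.0), done


def q_learning_policy(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.2):
    """Approach (i): learn a policy of behavior purely from exploration and reward feedback."""
    q = defaultdict(float)  # (state, action) -> estimated action value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < epsilon:  # epsilon-greedy exploration
                action = random.choice(env.actions(state))
            else:  # greedy action, ties broken at random
                action = max(env.actions(state), key=lambda a: (q[(state, a)], random.random()))
            next_state, reward, done = env.step(state, action)
            # one-step temporal-difference update toward the bootstrapped target
            best_next = max(q[(next_state, a)] for a in env.actions(next_state))
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q


def noisy_rational_choice(option_utilities, beliefs, rationality=1.0):
    """Approach (ii): model the reasoning mechanism directly. Expected utility is taken
    under the agent's subjective beliefs over outcomes, and the softmax 'rationality'
    parameter captures systematic deviation (bias) from perfectly rational choice."""
    expected = [sum(p * u for p, u in zip(beliefs, outcomes)) for outcomes in option_utilities]
    weights = [math.exp(rationality * e) for e in expected]
    total = sum(weights)
    return [w / total for w in weights]


if __name__ == "__main__":
    q = q_learning_policy(ChainWorld())
    greedy = {s: max(["left", "right"], key=lambda a: q[(s, a)]) for s in range(ChainWorld.n_states)}
    print("learned policy:", greedy)  # typically "right" everywhere once values propagate
    probs = noisy_rational_choice(
        option_utilities=[[1.0, 0.0], [0.4, 0.6]],  # two options x two possible outcomes
        beliefs=[0.7, 0.3],                          # subjective outcome probabilities
        rationality=2.0,
    )
    print("predicted choice probabilities:", probs)
```

The two functions illustrate the survey's distinction: the first produces behavior as a by-product of trial-and-error optimization, while the second parameterizes the human's internal quantities (beliefs, degree of rationality) and predicts choices without any learning loop.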
Journal description:
TAAS addresses research on autonomous and adaptive systems being undertaken by an increasingly interdisciplinary research community, and provides a common platform under which this work can be published and disseminated. TAAS encourages contributions aimed at supporting the understanding, development, and control of such systems and of their behaviors. Contributions are expected to be based on sound and innovative theoretical models, algorithms, engineering and programming techniques, infrastructures and systems, or technological and application experiences.