Effect of Human Biases on Human-Agent Teams
P. Paruchuri, Pradeep Varakantham, K. Sycara, P. Scerri
2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology. Published 2010-08-31. DOI: 10.1109/WI-IAT.2010.104 (https://doi.org/10.1109/WI-IAT.2010.104). Citations: 1.
Abstract: As human-agent teams are increasingly deployed in the real world, agent designers need to take into account that humans and agents differ in their ability to specify preferences. In this paper, we focus on how human biases in specifying preferences for resources impact the performance of large, heterogeneous teams. In particular, we model the inclination of humans to simplify their preference functions and to exaggerate their utility for desired resources, and we show the effect of these biases on team performance. We demonstrate this on two different problems that are representative of many resource allocation problems addressed in the literature. In both problems, the agents and humans optimize their constraints in a distributed manner. This paper makes two key contributions: (a) it proves theoretical properties of the algorithm used (DSA) for solving distributed constraint optimization problems, properties that ensure robustness against human biases; and (b) it shows empirically that the effect of human biases on team performance is not significant across different problem settings and team sizes. Both our theoretical and empirical studies support the finding that the solutions DSA provides for mid- to large-sized teams are very robust to the common types of human biases.
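For orientation only, below is a minimal sketch of DSA on a toy binary constraint problem, assuming "DSA" in the abstract refers to the Distributed Stochastic Algorithm (its usual referent in the DCOP literature), in the DSA-B variant. The constraint graph, value domains, activation probability p, and the "exaggeration" bias transform at the end are illustrative assumptions, not details taken from the paper.

```python
import random

# Sketch of DSA-B for a binary DCOP, under the assumptions stated above.
# The activation probability p and round count are illustrative defaults.

def dsa_b(neighbors, cost, domains, p=0.6, rounds=100, seed=0):
    """neighbors: agent -> list of adjacent agents (symmetric graph)
    cost:      (a, b, va, vb) -> nonnegative cost of constraint (a, b)
    domains:   agent -> list of candidate values
    p:         probability of actually taking an improving move (DSA parameter)
    """
    rng = random.Random(seed)
    value = {a: rng.choice(domains[a]) for a in neighbors}

    def local_cost(a, v, view):
        # Cost agent a incurs by taking value v against neighbors' values.
        return sum(cost(a, b, v, view[b]) for b in neighbors[a])

    for _ in range(rounds):
        view = dict(value)  # all agents decide in parallel on the same snapshot
        for a in neighbors:
            best = min(domains[a], key=lambda v: local_cost(a, v, view))
            # DSA-B rule: move to a strictly better value with probability p.
            if (local_cost(a, best, view) < local_cost(a, value[a], view)
                    and rng.random() < p):
                value[a] = best
    return value

if __name__ == "__main__":
    # Toy resource conflict: adjacent agents should not pick the same resource.
    nbrs = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
    doms = {x: ["r1", "r2"] for x in nbrs}
    conflict = lambda a, b, va, vb: 1 if va == vb else 0
    print(dsa_b(nbrs, conflict, doms))  # e.g. {'a': 'r1', 'b': 'r2', 'c': 'r1'}

    def exaggerated(a, b, va, vb):
        # Hypothetical human bias in the abstract's sense: agent "a" inflates
        # its stake in resource r1, making any assignment where it loses r1
        # look far costlier than it really is (illustrative only).
        return conflict(a, b, va, vb) + (10 if a == "a" and va != "r1" else 0)

    print(dsa_b(nbrs, exaggerated, doms))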