Human-Like Learning of Social Reasoning via Analogy

Irina Rabkina

Proceedings of the AAAI Symposium Series, published 2024-05-20
DOI: 10.1609/aaaiss.v3i1.31284
Citations: 0
Abstract
Neurotypical adult humans are remarkably good social reasoners. Despite the occasional faux pas, we know how to interact in most social settings and how to consider others' points of view. Young children, on the other hand, do not. Social reasoning, like many of our most important skills, is learned.
Much like human children, AI agents are not good social reasoners. While some algorithms can perform some aspects of social reasoning, we are a long way off from AI that can interact naturally and appropriately in the broad range of settings that people can. In this talk, I will argue that learning social reasoning via the same processes used by people will help AI agents reason, and interact, more like people do. Specifically, I will argue that children learn social reasoning via analogy, and that AI agents should, too. I will present evidence from cognitive modeling experiments demonstrating the former and AI experiments demonstrating the latter. I will also propose future directions for social reasoning research that both demonstrate the need for robust, human-like social reasoning in AI and test the utility of common approaches.