AI bias in Human-Robot Interaction: An evaluation of the Risk in Gender Biased Robots

Tom Hitron, Benny Megidish, Etay Todress, Noa Morag, H. Erel

2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). Published: 2022-08-29. DOI: 10.1109/RO-MAN53752.2022.9900673

Abstract: With recent advancements in AI, there are growing concerns about human biases embedded in AI decisions. The threats posed by AI bias may be even more drastic when it is applied to robots, which are perceived as independent entities whose actions are not mediated by humans. Furthermore, technology is typically perceived as objective, and there is a risk that people will embrace its decisions without considering possible biases. To understand the extent of the threat posed by such biases, we evaluated participants’ responses to a gender-biased robot mediating a debate between two participants (one male, one female). The vast majority of participants did not associate the robot’s behavior with bias, despite being informed that the robot’s algorithm was based on human examples. Participants attributed the robot’s decisions to their own performance and offered explanations involving gender stereotypes. Our findings suggest that robots’ biased behaviors can serve as validation for common human stereotypes.