How guilty is a robot who kills other robots?
O. Parlangeli, Stefano Guidi, E. Marchigiani, P. Palmitesta, A. Andreadis, S. Roncato
DOI: 10.1109/IISA50023.2020.9284338
Published: 2020-07-15, in 2020 11th International Conference on Information, Intelligence, Systems and Applications (IISA)
Citations: 0
Abstract
Safety may depend crucially on making moral judgments. To date, little is known about the possibility of intervening in the processes that lead to moral judgments about the behavior of artificial agents. The study reported here involved 293 students from the University of Siena, who made moral judgments after reading the description of an event in which a person or a robot killed other people or robots. The study was conducted through an online questionnaire. The results suggest that moral judgments depend essentially on the type of victim, and that they differ depending on whether the agents involved are human or artificial. Furthermore, some characteristics of the evaluators, such as a greater or lesser disposition to attribute mental states to artificial agents, influence these evaluations. On the other hand, the level of familiarity with these systems seems to have a limited effect.