{"title":"Algebraic Modeling of Trolley Problems on a Boolean Multivalued Logic","authors":"Jiaqi Peng, Rintaro Mizutani, Kujira Suzuki, Akira Midorikawa, Hisashi Suzuki","doi":"10.1109/IAI55780.2022.9976864","DOIUrl":null,"url":null,"abstract":"Instead of the well-known three laws of robotics that seem difficult to be applied to solving the trolley problems in the context of frame problems, this paper proposes algebraic modeling of the trolley problems on a Boolean multivalued logic so that we can analyze psychologically any knowledge simply by quasi-optimizing the truth values of logic formulae for inference in a class of Boolean algebra. Some simulation results suggest a possibility that, by introducing an atom that takes the truth values of directly killing person(s), we can control the utilitarian over-rationalization of sacrificing person(s) on AI machines.","PeriodicalId":138951,"journal":{"name":"2022 4th International Conference on Industrial Artificial Intelligence (IAI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 4th International Conference on Industrial Artificial Intelligence (IAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IAI55780.2022.9976864","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
The well-known three laws of robotics seem difficult to apply to trolley problems in the context of frame problems. Instead, this paper proposes an algebraic modeling of trolley problems on a Boolean multivalued logic, so that any knowledge can be analyzed psychologically simply by quasi-optimizing the truth values of logic formulae for inference in a class of Boolean algebras. Simulation results suggest that, by introducing an atom whose truth value represents directly killing a person or persons, the utilitarian over-rationalization of sacrificing people can be controlled on AI machines.
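To make the abstract's idea concrete, the sketch below is a minimal, hypothetical illustration (not the paper's actual model): truth values are taken from the interval [0, 1], the Boolean connectives are replaced by min/max/complement, and an extra atom for "directly killing person(s)" suppresses the purely utilitarian choice. The function names, the `kill_weight` parameter, and the example truth values are all assumptions introduced here for illustration only.

```python
# Hypothetical sketch of a [0, 1]-valued "Boolean multivalued logic" used to
# score trolley-problem actions. All names and numbers are illustrative.

def t_and(*values):
    """Multivalued conjunction (minimum), a common choice for [0, 1]-valued logics."""
    return min(values)

def t_not(value):
    """Multivalued negation (complement)."""
    return 1.0 - value

def acceptability(saves_many, directly_kills, kill_weight=0.8):
    """Score an action: high if it saves many AND does not directly kill.

    `directly_kills` plays the role of the extra atom the abstract mentions;
    `kill_weight` is an illustrative parameter controlling how strongly that
    atom counteracts the purely utilitarian choice.
    """
    return t_and(saves_many, t_not(kill_weight * directly_kills))

# Classic switch case: diverting the trolley saves five and kills one only indirectly.
switch = acceptability(saves_many=1.0, directly_kills=0.2)
# Footbridge case: pushing a person saves five but kills that person directly.
push = acceptability(saves_many=1.0, directly_kills=1.0)

print(f"switch: {switch:.2f}, push: {push:.2f}")  # switch (0.84) scores higher than push (0.20)
```

In this toy setup, raising the truth value of the "directly kills" atom lowers the acceptability of the footbridge-style action even though both actions save the same number of people, which is the kind of control over utilitarian over-rationalization the abstract describes.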