Akratic robots and the computational logic thereof

S. Bringsjord, Naveen Sundar G., Daniel P. Thero, Mei Si

2014 IEEE International Symposium on Ethics in Science, Technology and Engineering, May 23, 2014. DOI: 10.1109/ETHICS.2014.6893436

Abstract: Alas, there are akratic persons. We know this from the human case, and our knowledge is nothing new, since for instance Plato analyzed rather long ago a phenomenon that all human persons, at one point or another, experience: (1) Jones knows that he ought not to, say, drink to the point of passing out, (2) earnestly desires that he not imbibe to this point, but (3) nonetheless (in the pleasant, seductive company of his fun and hard-drinking buddies) slips into a series of decisions to have highball upon highball, until collapse. Now, could a robot suffer from akrasia? Thankfully, no: only persons can be plagued by this disease (since only persons can have full-blown P-consciousness, and robots can't be persons (Bringsjord 1992)). But could a robot be afflicted by a purely "intellectual" version of akrasia, to follow Pollock (1995)? Yes, and for robots collaborating with American human soldiers, even this version isn't a savory prospect in warfare: a robot that knows it ought not to torture or execute enemy prisoners in order to exact revenge, desires to refrain from firing upon them, but nonetheless slips into a decision to ruthlessly do so - well, this is probably not the kind of robot the U.S. military is keen on deploying. Unfortunately, for reasons explained below, unless the engineering we recommend is supported and deployed, this might well be the kind of robot that our future holds.
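The tripartite pattern in the abstract, knowing one ought not to act, desiring not to act, yet deciding to act, can be rendered schematically in an epistemic-deontic notation. This is only an illustrative sketch, not the authors' own calculus; the operators K (knowledge), O (obligation), D (desire), and Decides are assumptions introduced here for exposition:

```latex
% Intellectual akrasia for an agent $a$ and action $\alpha$, schematically:
%   (1) $a$ knows it is obligatory not to perform $\alpha$,
%   (2) $a$ desires not to perform $\alpha$,
%   (3) yet $a$ nonetheless decides to perform $\alpha$.
\mathbf{K}_a\,\mathbf{O}\,\neg\alpha
  \;\wedge\; \mathbf{D}_a\,\neg\alpha
  \;\wedge\; \mathbf{Decides}_a\,\alpha
```

On this reading, the "intellectual" akrasia that a robot could exhibit is just the joint satisfiability of these three conditions in its cognitive state, with no appeal to phenomenal consciousness.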