{"title":"解释我们不了解的技术","authors":"Greg Adamson","doi":"10.1109/TTS.2023.3240107","DOIUrl":null,"url":null,"abstract":"Since 2016 a significant program of work has been initiated by the U.S. Defense Advanced Research Projects Agency (DARPA) under the title of explainable artificial intelligence (XAI). This program is seen as important for AI adoption, in this case to include the needs of warfighters to effectively collaborate with AI “partners.” Technology adoption is often promoted based on beliefs, which bears little relationship to the benefit a technology will provide. These beliefs include “progress,” technology superiority, and technology as cornucopia. The XAI program has widely promoted a new belief: that AI is in general explainable. As AI systems often have concealed or black box characteristics, the problem of explainability is significant. This paper argues that due to their complexity, AI systems should be approached in a way similar to the way the scientific method is used to approach natural phenomena. One approach encouraged by DARPA, model induction, is based on post-hoc reasoning. Such inductive reasoning is consistent with the scientific method. However, that method has a history of controls that are applied to create confidence in an uncertain, inductive, outcome. The paper proposes some controls consistent with a philosophical examination of black boxes. As AI systems are being used to determine who should have access to scarce resources and who should be punished and in what way, the claim that AI can be explained is important. Widespread recent experimentation with ChatGPT has also highlighted the challenges and expectations of AI systems.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 1","pages":"34-45"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Explaining Technology We Do Not Understand\",\"authors\":\"Greg Adamson\",\"doi\":\"10.1109/TTS.2023.3240107\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Since 2016 a significant program of work has been initiated by the U.S. Defense Advanced Research Projects Agency (DARPA) under the title of explainable artificial intelligence (XAI). This program is seen as important for AI adoption, in this case to include the needs of warfighters to effectively collaborate with AI “partners.” Technology adoption is often promoted based on beliefs, which bears little relationship to the benefit a technology will provide. These beliefs include “progress,” technology superiority, and technology as cornucopia. The XAI program has widely promoted a new belief: that AI is in general explainable. As AI systems often have concealed or black box characteristics, the problem of explainability is significant. This paper argues that due to their complexity, AI systems should be approached in a way similar to the way the scientific method is used to approach natural phenomena. One approach encouraged by DARPA, model induction, is based on post-hoc reasoning. Such inductive reasoning is consistent with the scientific method. However, that method has a history of controls that are applied to create confidence in an uncertain, inductive, outcome. The paper proposes some controls consistent with a philosophical examination of black boxes. 
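The “model induction” approach mentioned in the abstract refers to post-hoc explanation techniques that infer an approximate, interpretable model of a black-box system from its observed inputs and outputs. The sketch below is a minimal, hypothetical illustration of that general idea, not the paper’s own method: it uses scikit-learn to fit a small decision-tree surrogate to the predictions of an opaque random-forest classifier, then measures the surrogate’s fidelity (how often it agrees with the black box on held-out data). The dataset and all parameter choices are illustrative assumptions.

```python
# Illustrative sketch of post-hoc "model induction": induce an interpretable
# surrogate from a black-box model's own predictions, then check fidelity.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Example data (assumption: any tabular classification task would do).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The opaque ("black box") model whose behaviour we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Induce a small, human-readable surrogate from the black box's outputs,
# not from the ground-truth labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on unseen data.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"surrogate fidelity to black box: {fidelity:.3f}")

# The surrogate's rules serve as an approximate, inspectable "explanation".
feature_names = list(load_breast_cancer().feature_names)
print(export_text(surrogate, feature_names=feature_names))
```

In the paper’s terms, a fidelity check of this kind is one candidate control for building confidence in an explanation that is, like any inductive inference, inherently uncertain.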