{"title":"ML 可解释性:简单并不容易","authors":"Tim Räz","doi":"10.1016/j.shpsa.2023.12.007","DOIUrl":null,"url":null,"abstract":"<div><p>The interpretability of ML models is important, but it is not clear what it amounts to. So far, most philosophers have discussed the lack of interpretability of black-box models such as neural networks, and methods such as explainable AI that aim to make these models more transparent. The goal of this paper is to clarify the nature of interpretability by focussing on the other end of the “interpretability spectrum”. The reasons why some models, linear models and decision trees, are highly interpretable will be examined, and also how more general models, MARS and GAM, retain some degree of interpretability. It is found that while there is heterogeneity in how we gain interpretability, what interpretability is in particular cases can be explicated in a clear manner.</p></div>","PeriodicalId":49467,"journal":{"name":"Studies in History and Philosophy of Science","volume":null,"pages":null},"PeriodicalIF":1.4000,"publicationDate":"2024-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0039368123001723/pdfft?md5=762e303beb843a4a645f0e00470c907b&pid=1-s2.0-S0039368123001723-main.pdf","citationCount":"0","resultStr":"{\"title\":\"ML interpretability: Simple isn't easy\",\"authors\":\"Tim Räz\",\"doi\":\"10.1016/j.shpsa.2023.12.007\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The interpretability of ML models is important, but it is not clear what it amounts to. So far, most philosophers have discussed the lack of interpretability of black-box models such as neural networks, and methods such as explainable AI that aim to make these models more transparent. The goal of this paper is to clarify the nature of interpretability by focussing on the other end of the “interpretability spectrum”. The reasons why some models, linear models and decision trees, are highly interpretable will be examined, and also how more general models, MARS and GAM, retain some degree of interpretability. 
It is found that while there is heterogeneity in how we gain interpretability, what interpretability is in particular cases can be explicated in a clear manner.</p></div>\",\"PeriodicalId\":49467,\"journal\":{\"name\":\"Studies in History and Philosophy of Science\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2024-01-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S0039368123001723/pdfft?md5=762e303beb843a4a645f0e00470c907b&pid=1-s2.0-S0039368123001723-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Studies in History and Philosophy of Science\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0039368123001723\",\"RegionNum\":2,\"RegionCategory\":\"哲学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"HISTORY & PHILOSOPHY OF SCIENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Studies in History and Philosophy of Science","FirstCategoryId":"98","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0039368123001723","RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"HISTORY & PHILOSOPHY OF SCIENCE","Score":null,"Total":0}
Citations: 0
Abstract
The interpretability of ML models is important, but it is not clear what it amounts to. So far, most philosophers have discussed the lack of interpretability of black-box models such as neural networks, and methods such as explainable AI that aim to make these models more transparent. The goal of this paper is to clarify the nature of interpretability by focussing on the other end of the "interpretability spectrum". The paper examines the reasons why some models (linear models and decision trees) are highly interpretable, and how more general models (MARS and GAM) retain some degree of interpretability. It is found that while there is heterogeneity in how we gain interpretability, what interpretability is in particular cases can be explicated in a clear manner.
Journal introduction:
Studies in History and Philosophy of Science is devoted to the integrated study of the history, philosophy and sociology of the sciences. The editors encourage contributions both in the long-established areas of the history of the sciences and the philosophy of the sciences and in the topical areas of historiography of the sciences, the sciences in relation to gender, culture and society and the sciences in relation to arts. The Journal is international in scope and content and publishes papers from a wide range of countries and cultural traditions.