{"title":"机器学习中的公平性、可解释性和可解释性:以PRIM为例","authors":"Rym Nassih, A. Berrado","doi":"10.1145/3419604.3419776","DOIUrl":null,"url":null,"abstract":"The adoption of complex machine learning (ML) models in recent years has brought along a new challenge related to how to interpret, understand, and explain the reasoning behind these complex models' predictions. Treating complex ML systems as trustworthy black boxes without domain knowledge checking has led to some disastrous outcomes. In this context, interpretability and explainability are often used unintelligibly, and fairness, on the other hand, has become lately popular due to some discrimination problems in ML. While closely related, interpretability and explainability denote different features of prediction. In this sight, the aim of this paper is to give an overview of the interpretability, explainability and the fairness concepts in the literature and to evaluate the performance of the Patient Rule Induction Method (PRIM) concerning these aspects.","PeriodicalId":250715,"journal":{"name":"Proceedings of the 13th International Conference on Intelligent Systems: Theories and Applications","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"State of the art of Fairness, Interpretability and Explainability in Machine Learning: Case of PRIM\",\"authors\":\"Rym Nassih, A. Berrado\",\"doi\":\"10.1145/3419604.3419776\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The adoption of complex machine learning (ML) models in recent years has brought along a new challenge related to how to interpret, understand, and explain the reasoning behind these complex models' predictions. Treating complex ML systems as trustworthy black boxes without domain knowledge checking has led to some disastrous outcomes. In this context, interpretability and explainability are often used unintelligibly, and fairness, on the other hand, has become lately popular due to some discrimination problems in ML. While closely related, interpretability and explainability denote different features of prediction. 
In this sight, the aim of this paper is to give an overview of the interpretability, explainability and the fairness concepts in the literature and to evaluate the performance of the Patient Rule Induction Method (PRIM) concerning these aspects.\",\"PeriodicalId\":250715,\"journal\":{\"name\":\"Proceedings of the 13th International Conference on Intelligent Systems: Theories and Applications\",\"volume\":\"38 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-09-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 13th International Conference on Intelligent Systems: Theories and Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3419604.3419776\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 13th International Conference on Intelligent Systems: Theories and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3419604.3419776","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
State of the art of Fairness, Interpretability and Explainability in Machine Learning: Case of PRIM
The adoption of complex machine learning (ML) models in recent years has raised a new challenge: how to interpret, understand, and explain the reasoning behind these models' predictions. Treating complex ML systems as trustworthy black boxes, without checking them against domain knowledge, has led to some disastrous outcomes. In this context, the terms interpretability and explainability are often used interchangeably, while fairness has lately gained attention because of discrimination problems observed in ML applications. Although closely related, interpretability and explainability denote different properties of a prediction. With this in mind, the aim of this paper is to review the concepts of interpretability, explainability and fairness in the literature and to evaluate the Patient Rule Induction Method (PRIM) with respect to these aspects.
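For readers unfamiliar with PRIM, the sketch below illustrates the general idea of its top-down "peeling" search, which greedily shrinks an axis-aligned box so that the mean of the target inside the box increases; the resulting box bounds read directly as a rule, which is why PRIM is often discussed as an interpretable method. This is only a minimal illustration of the bump-hunting idea, not the authors' implementation or an exact reproduction of the original algorithm; the arrays X and y, the peel fraction alpha, and the minimum support min_support are illustrative assumptions.

# Minimal sketch of PRIM-style top-down peeling (illustrative, not the paper's code).
# Assumptions: numeric features only, numpy arrays X (n_samples, n_features) and
# y (n_samples,), peel fraction alpha and minimum box support min_support.
import numpy as np

def prim_peel(X, y, alpha=0.05, min_support=0.1):
    """Greedily shrink an axis-aligned box so that the mean of y inside it grows."""
    n = len(y)
    inside = np.ones(n, dtype=bool)                        # start from the full data set
    box = [(-np.inf, np.inf) for _ in range(X.shape[1])]   # one (low, high) bound per feature

    while inside.sum() / n > min_support:
        best_gain, best_update = -np.inf, None
        for j in range(X.shape[1]):
            xj = X[inside, j]
            # Candidate peels: trim roughly an alpha fraction from either side of feature j.
            lo, hi = np.quantile(xj, alpha), np.quantile(xj, 1 - alpha)
            for side, keep in (("lo", X[:, j] >= lo), ("hi", X[:, j] <= hi)):
                trial = inside & keep
                if trial.sum() == 0 or trial.sum() == inside.sum():
                    continue                               # peel removes nothing or everything
                gain = y[trial].mean()
                if gain > best_gain:
                    best_gain = gain
                    best_update = (j, side, lo if side == "lo" else hi, trial)
        if best_update is None or best_gain <= y[inside].mean():
            break                                          # no peel improves the box mean
        j, side, value, inside = best_update
        lo_b, hi_b = box[j]
        box[j] = (value, hi_b) if side == "lo" else (lo_b, value)
    return box, inside

On a toy data set, the returned box can be read as a conjunction of conditions such as x1 >= 0.7 and x2 <= 0.3 together with the mean of y inside it, i.e. a rule that a domain expert can inspect directly, which is the property the paper evaluates when discussing PRIM's interpretability and explainability.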