Explanation-by-Example Based on Item Response Theory

Lucas F. F. Cardoso, Joseph Ribeiro, Vitor Santos, Raíssa Silva, M. Mota, R. Prudêncio, Ronnie Alves

Brazilian Conference on Intelligent Systems, October 4, 2022. DOI: 10.48550/arXiv.2210.01638
Abstract. Intelligent systems that use Machine Learning classification algorithms are increasingly common in everyday society. However, many of these systems rely on black-box models that cannot explain their own predictions. This situation leads researchers in the field, and society at large, to the following question: how can I trust the prediction of a model I cannot understand? In this sense, XAI emerges as a field of AI that aims to create techniques capable of explaining a classifier's decisions to the end user. As a result, several techniques have emerged, such as Explanation-by-Example, which still has few consolidated initiatives in the community currently working with XAI. This research explores Item Response Theory (IRT) as a tool for explaining models and measuring the reliability of the Explanation-by-Example approach. To this end, four datasets with different levels of complexity were used, with a Random Forest model as the hypothesis under test. On the test set, 83.8% of the model's errors came from instances for which IRT flagged the model as unreliable.

Keywords: Machine Learning (ML) · Item Response Theory (IRT) · Classification.
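To make the core idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: it treats each test instance as an IRT "item", uses a pool of classifiers as respondents, takes the pool's error rate as a crude stand-in for IRT item difficulty, and then checks what share of a Random Forest's errors fall on instances flagged as unreliable. The dataset, classifier pool, difficulty proxy, and the 0.5 threshold are all assumptions made for the example.

```python
# Illustrative sketch only; a full IRT treatment would fit a parametric
# model (e.g. 2PL or Beta3-IRT) instead of the error-rate proxy used here.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset (assumption)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Pool of respondents: each classifier "answers" every test item.
pool = [
    LogisticRegression(max_iter=5000),
    GaussianNB(),
    KNeighborsClassifier(),
    DecisionTreeClassifier(random_state=0),
]
responses = np.array(
    [clf.fit(X_tr, y_tr).predict(X_te) == y_te for clf in pool]
)  # shape: (n_classifiers, n_test_instances), True = correct answer

# Crude difficulty proxy: fraction of the pool that misclassifies an item.
difficulty = 1.0 - responses.mean(axis=0)
unreliable = difficulty >= 0.5  # assumed threshold for "unreliable" items

# Random Forest as the hypothesis under test.
rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
rf_errors = rf.predict(X_te) != y_te

# Share of the model's errors that land on flagged instances.
share = (rf_errors & unreliable).sum() / max(rf_errors.sum(), 1)
print(f"Share of RF errors on instances flagged unreliable: {share:.1%}")
```

The design point the abstract relies on is that instance difficulty is estimated independently of the model being evaluated, so a high overlap between the model's errors and the flagged instances (83.8% in the paper's experiments) is evidence that the difficulty signal is a usable reliability measure.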