{"title":"基于Fβ测度的谨慎分类器决策规则的超参数建模研究","authors":"Abdelhak Imoussaten","doi":"10.1016/j.array.2023.100310","DOIUrl":null,"url":null,"abstract":"<div><p>In some sensitive domains where data imperfections are present, standard classification techniques reach their limits. To avoid misclassifications that have serious consequences, recent works propose cautious classification algorithms to handle this problem. Despite of the presence of uncertainty and/or imprecision, a point prediction classifier is forced to bet on a single class. While a cautious classifier proposes the appropriate subset of candidate classes that can be assigned to the sample in the presence of imperfect information. On the other hand, cautiousness should not be at the expense of precision and a trade-off has to be made between these two criteria. Among the existing cautious classifiers, two classifiers propose to manage this trade-off in the decision step by the mean of a parametrized objective function. The first one is the non-deterministic classifier (ndc) proposed within the framework of probability theory and the second one is “evidential classifier based on imprecise relabelling” (eclair) proposed within the framework of belief functions. The theoretical aim of the mentioned hyper-parameters is to control the size of predictions for both classifiers. This paper proposes to study this hyper-parameter in order to select the “best” value in a classification task. First the utility for each candidate subset is studied related to the values of the hyper-parameter and some thresholds are proposed to control the size of the predictions. Then two illustrations are proposed where a method to choose this hyper-parameters based on the calibration data is proposed. The first illustration concerns randomly generated data and the second one concerns the images data of fashion mnist. These illustrations show how to control the size of the predictions and give a comparison between the performances of the two classifiers for a tuning based on our proposition and the one based on grid search method.</p></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":null,"pages":null},"PeriodicalIF":2.3000,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The study of the hyper-parameter modelling the decision rule of the cautious classifiers based on the Fβ measure\",\"authors\":\"Abdelhak Imoussaten\",\"doi\":\"10.1016/j.array.2023.100310\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>In some sensitive domains where data imperfections are present, standard classification techniques reach their limits. To avoid misclassifications that have serious consequences, recent works propose cautious classification algorithms to handle this problem. Despite of the presence of uncertainty and/or imprecision, a point prediction classifier is forced to bet on a single class. While a cautious classifier proposes the appropriate subset of candidate classes that can be assigned to the sample in the presence of imperfect information. On the other hand, cautiousness should not be at the expense of precision and a trade-off has to be made between these two criteria. Among the existing cautious classifiers, two classifiers propose to manage this trade-off in the decision step by the mean of a parametrized objective function. 
The first one is the non-deterministic classifier (ndc) proposed within the framework of probability theory and the second one is “evidential classifier based on imprecise relabelling” (eclair) proposed within the framework of belief functions. The theoretical aim of the mentioned hyper-parameters is to control the size of predictions for both classifiers. This paper proposes to study this hyper-parameter in order to select the “best” value in a classification task. First the utility for each candidate subset is studied related to the values of the hyper-parameter and some thresholds are proposed to control the size of the predictions. Then two illustrations are proposed where a method to choose this hyper-parameters based on the calibration data is proposed. The first illustration concerns randomly generated data and the second one concerns the images data of fashion mnist. These illustrations show how to control the size of the predictions and give a comparison between the performances of the two classifiers for a tuning based on our proposition and the one based on grid search method.</p></div>\",\"PeriodicalId\":8417,\"journal\":{\"name\":\"Array\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2023-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Array\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2590005623000358\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Array","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2590005623000358","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
The study of the hyper-parameter modelling the decision rule of the cautious classifiers based on the Fβ measure
In some sensitive domains where data imperfections are present, standard classification techniques reach their limits. To avoid misclassifications with serious consequences, recent works propose cautious classification algorithms to handle this problem. Despite the presence of uncertainty and/or imprecision, a point-prediction classifier is forced to bet on a single class, whereas a cautious classifier proposes the appropriate subset of candidate classes that can be assigned to the sample in the presence of imperfect information. On the other hand, cautiousness should not come at the expense of precision, and a trade-off has to be made between these two criteria. Among the existing cautious classifiers, two manage this trade-off in the decision step by means of a parametrized objective function. The first is the non-deterministic classifier (ndc), proposed within the framework of probability theory, and the second is the "evidential classifier based on imprecise relabelling" (eclair), proposed within the framework of belief functions. The theoretical aim of the corresponding hyper-parameters is to control the size of the predictions of both classifiers. This paper studies this hyper-parameter in order to select the "best" value for a classification task. First, the utility of each candidate subset is studied as a function of the hyper-parameter value, and thresholds are proposed to control the size of the predictions. Then, two illustrations are presented in which a method for choosing this hyper-parameter from the calibration data is proposed. The first illustration concerns randomly generated data and the second concerns the image data of Fashion-MNIST. These illustrations show how to control the size of the predictions and compare the performances of the two classifiers when tuned with our proposition versus the grid-search method.
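As a rough illustration of the kind of decision rule the abstract refers to, the sketch below implements an expected-Fβ rule for set-valued predictions in the spirit of ndc. It assumes that a prediction set A containing the true class scores precision 1/|A| and recall 1, giving F_β(A) = (1 + β²)/(β² + |A|), and utility 0 otherwise; the posterior values, function names and printed results are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def f_beta_utility(subset_size, beta):
    """F_beta utility of a prediction set that contains the true class.

    With precision 1/|A| and recall 1, F_beta = (1 + beta^2) / (beta^2 + |A|);
    a set that misses the true class gets utility 0.
    """
    return (1.0 + beta ** 2) / (beta ** 2 + subset_size)


def cautious_predict(probs, beta):
    """Return the subset of class indices maximizing the expected F_beta utility.

    Only subsets made of the top-k most probable classes need to be checked,
    because for a fixed size k the best subset is the one carrying the largest
    probability mass.
    """
    order = np.argsort(probs)[::-1]        # class indices, most probable first
    best_subset, best_value = None, -np.inf
    mass = 0.0                             # P(true class lies in the current top-k set)
    for k in range(1, len(probs) + 1):
        mass += probs[order[k - 1]]
        value = mass * f_beta_utility(k, beta)
        if value > best_value:
            best_subset, best_value = order[:k], value
    return set(int(c) for c in best_subset), best_value


# A small beta favours precise (small) predictions; a large beta tolerates
# bigger subsets when the posterior is ambiguous.
posterior = np.array([0.60, 0.30, 0.07, 0.03])
print(cautious_predict(posterior, beta=0.5))   # -> ({0}, 0.6)
print(cautious_predict(posterior, beta=3.0))   # -> ({0, 1}, ~0.82)
```

With this utility, increasing β makes larger subsets relatively more attractive, which is one way such a hyper-parameter controls the size of the predictions.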