{"title":"关于距离指标性能的实证研究","authors":"F. Aydin","doi":"10.35414/akufemubid.1325843","DOIUrl":null,"url":null,"abstract":"Metrics are used to measure the distance, similarity, or dissimilarity between two points in a metric space. Metric learning algorithms perform the finding task of data points that are closest or furthest to a query point in m-dimensional metric space. Some metrics take into account the assumption that the whole dimensions are of equal importance, and vice versa. However, this assumption does not incorporate a number of real-world problems that classification algorithms tackle. In this research, the existing information gain, the information gain ratio, and some well-known conventional metrics have been compared by each other. The 1-Nearest Neighbor algorithm taking these metrics as its meta-parameter has been applied to forty-nine benchmark datasets. Only the accuracy rate criterion has been employed in order to quantify the performance of the metrics. The experimental results show that each metric is successful on datasets corresponding to its own domain. In other words, each metric is favorable on datasets overlapping its own assumption. In addition, there also exists incompleteness in classification tasks for metrics just like there is for learning algorithms.","PeriodicalId":7433,"journal":{"name":"Afyon Kocatepe University Journal of Sciences and Engineering","volume":"31 6","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Uzaklık Metriklerinin Performansı Üzerine Ampirik Bir Çalışma\",\"authors\":\"F. Aydin\",\"doi\":\"10.35414/akufemubid.1325843\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Metrics are used to measure the distance, similarity, or dissimilarity between two points in a metric space. Metric learning algorithms perform the finding task of data points that are closest or furthest to a query point in m-dimensional metric space. Some metrics take into account the assumption that the whole dimensions are of equal importance, and vice versa. However, this assumption does not incorporate a number of real-world problems that classification algorithms tackle. In this research, the existing information gain, the information gain ratio, and some well-known conventional metrics have been compared by each other. The 1-Nearest Neighbor algorithm taking these metrics as its meta-parameter has been applied to forty-nine benchmark datasets. Only the accuracy rate criterion has been employed in order to quantify the performance of the metrics. The experimental results show that each metric is successful on datasets corresponding to its own domain. In other words, each metric is favorable on datasets overlapping its own assumption. 
In addition, there also exists incompleteness in classification tasks for metrics just like there is for learning algorithms.\",\"PeriodicalId\":7433,\"journal\":{\"name\":\"Afyon Kocatepe University Journal of Sciences and Engineering\",\"volume\":\"31 6\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-12-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Afyon Kocatepe University Journal of Sciences and Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.35414/akufemubid.1325843\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Afyon Kocatepe University Journal of Sciences and Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.35414/akufemubid.1325843","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract
Metrics are used to measure the distance, similarity, or dissimilarity between two points in a metric space. Metric learning algorithms find the data points that are closest to, or furthest from, a query point in an m-dimensional metric space. Some metrics rest on the assumption that all dimensions are equally important, while others do not. However, this assumption does not hold for many of the real-world problems that classification algorithms tackle. In this study, the information gain, the information gain ratio, and several well-known conventional metrics are compared with one another. The 1-Nearest Neighbor algorithm, taking each of these metrics as its meta-parameter, is applied to forty-nine benchmark datasets, and the accuracy rate is the only criterion used to quantify the performance of the metrics. The experimental results show that each metric succeeds on the datasets that correspond to its own domain; in other words, each metric is favorable on datasets whose characteristics overlap with its underlying assumption. In addition, metrics exhibit the same kind of incompleteness in classification tasks that learning algorithms do.
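The abstract gives no implementation details. As a minimal, hypothetical sketch of the experimental setup it describes (a 1-Nearest Neighbor classifier whose distance metric is a meta-parameter, evaluated by accuracy only), the following Python snippet may help; the function names, the specific metrics shown, and the weighted-distance variant are illustrative assumptions, not the paper's actual code.

# Minimal sketch (not the paper's code): 1-Nearest Neighbor with the distance
# metric supplied as a meta-parameter, evaluated by classification accuracy.
import numpy as np

def euclidean(a, b):
    # Treats every dimension as equally important.
    return np.sqrt(np.sum((a - b) ** 2))

def manhattan(a, b):
    # Another metric that weights all dimensions equally.
    return np.sum(np.abs(a - b))

def weighted_euclidean(a, b, w):
    # Hypothetical example of a metric that weights dimensions unequally,
    # e.g. with weights derived from information gain (the weighting scheme
    # here is an assumption, not taken from the paper).
    return np.sqrt(np.sum(w * (a - b) ** 2))

def knn1_predict(X_train, y_train, x_query, metric):
    # 1-NN: return the label of the single closest training point
    # under the chosen metric.
    distances = [metric(x, x_query) for x in X_train]
    return y_train[int(np.argmin(distances))]

def accuracy(X_train, y_train, X_test, y_test, metric):
    # Accuracy rate: the only performance criterion used in the study.
    predictions = [knn1_predict(X_train, y_train, x, metric) for x in X_test]
    return float(np.mean(np.array(predictions) == np.array(y_test)))

Swapping the metric argument between functions such as euclidean, manhattan, or a weighted variant corresponds to changing the meta-parameter of the 1-NN classifier; which choice performs best would be expected to depend on how well the metric's assumptions match the dataset, which is the point the abstract makes.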