{"title":"Community detection in feature-rich networks to meet K-means","authors":"S. Shalileh, B. Mirkin","doi":"10.1145/3487351.3488356","DOIUrl":null,"url":null,"abstract":"We derive two extensions of the celebrated K-means algorithm as a tool for community detection in feature-rich networks. We define a data-recovery criterion additively combining conventional least-squares criteria for approximation of the network link data and the feature data at network nodes by a partition along with its within-cluster \"centers\". The dimension of the space at which the method operates is the sum of the number of nodes and the number of features, which may be high indeed. To tackle the so-called curse of dimensionality, we may replace the innate Euclidean distance with cosine distance sometimes. We experimentally validate our proposed methods and demonstrate their efficiency by comparing them to most popular approaches.","PeriodicalId":320904,"journal":{"name":"Proceedings of the 2021 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2021 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3487351.3488356","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
We derive two extensions of the celebrated K-means algorithm as a tool for community detection in feature-rich networks. We define a data-recovery criterion that additively combines conventional least-squares criteria for the approximation of the network link data and of the feature data at network nodes by a partition along with its within-cluster "centers". The dimension of the space in which the method operates is the sum of the number of nodes and the number of features, which may be quite high. To tackle the so-called curse of dimensionality, we optionally replace the innate Euclidean distance with the cosine distance. We experimentally validate our proposed methods and demonstrate their efficiency by comparing them with the most popular approaches.
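The sketch below illustrates the general idea described in the abstract, not the authors' exact algorithm: each node is represented by the concatenation of its row of link data and its feature vector, so that a K-means-style procedure minimizes a sum of least-squares approximation errors over both data sources, and row normalization stands in for the cosine-distance variant. The weights rho and xi, the function name, and the use of scikit-learn's KMeans are assumptions made for illustration.

```python
# Minimal sketch of a combined link-plus-feature K-means criterion.
# NOT the paper's exact method: the weighting scheme, initialization,
# and reliance on sklearn's KMeans are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def feature_rich_kmeans(A, Y, n_clusters, rho=1.0, xi=1.0,
                        use_cosine=False, random_state=0):
    """Cluster nodes of a feature-rich network.

    A : (n, n) network link (adjacency) matrix
    Y : (n, v) node-feature matrix
    rho, xi : assumed weights of the link and feature least-squares terms
    use_cosine : if True, row-normalize so that Euclidean K-means on the
                 normalized rows mimics a cosine-distance criterion
    """
    # Each node lives in an (n + v)-dimensional space: links plus features.
    X = np.hstack([rho * A, xi * Y])
    if use_cosine:
        X = normalize(X)  # unit-length rows: Euclidean distance ~ cosine
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state)
    return km.fit_predict(X)

# Toy usage: two 3-node cliques with distinct node features.
A = np.block([[np.ones((3, 3)), np.zeros((3, 3))],
              [np.zeros((3, 3)), np.ones((3, 3))]])
Y = np.vstack([np.tile([1.0, 0.0], (3, 1)),
               np.tile([0.0, 1.0], (3, 1))])
print(feature_rich_kmeans(A, Y, n_clusters=2, use_cosine=True))
```

Row normalization is used here because Euclidean K-means on unit-length vectors ranks points by cosine similarity to the centers, which is one simple way to realize the cosine-distance option mentioned in the abstract.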