Stability estimation for unsupervised clustering: A review.

Wiley Interdisciplinary Reviews-Computational Statistics (IF 4.4, Region 2 Mathematics, Q1 Statistics & Probability)
Pub Date: 2022-11-01 · Epub Date: 2022-01-09 · DOI: 10.1002/wics.1575
Tianmou Liu, Han Yu, Rachael Hageman Blair
Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/0e/84/WICS-14-e1575.PMC9787023.pdf
Citations: 0

Abstract

Cluster analysis remains one of the most challenging yet fundamental tasks in unsupervised learning. This is due in part to the fact that there are no labels or gold standards by which performance can be measured. Moreover, the wide range of clustering methods available is governed by different objective functions, parameters, and dissimilarity measures. The purpose of clustering is versatile, often playing critical roles in the early stages of exploratory data analysis and as an endpoint for knowledge and discovery. Thus, understanding the quality of a clustering is of critical importance. The concept of stability has emerged as a strategy for assessing the performance and reproducibility of data clustering. The key idea is to produce perturbed data sets that are very close to the original, and cluster them. If the clustering is stable, then the clusters from the original data will be preserved in the perturbed data clustering. The nature of the perturbation, and the methods for quantifying similarity between clusterings, are nontrivial, and are ultimately what set many of the stability estimation methods apart. In this review, we provide an overview of the very active research area of cluster stability estimation and discuss some of the open questions and challenges that remain in the field. This article is categorized under: Statistical Learning and Exploratory Methods of the Data Sciences > Clustering and Classification.
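The perturb-and-recluster recipe described in the abstract can be made concrete in a few lines. The sketch below is a minimal illustration, not the paper's own procedure: it assumes bootstrap resampling as the perturbation, k-means (scikit-learn's KMeans) as the clustering method, and the Adjusted Rand Index (adjusted_rand_score) as the measure of agreement between the original and perturbed clusterings; the helper name bootstrap_stability is ours.

```python
# Minimal sketch of perturbation-based cluster stability estimation.
# Assumptions (not from the paper): bootstrap resampling as the perturbation,
# k-means as the clustering method, and the Adjusted Rand Index (ARI) to
# quantify agreement between the reference and perturbed clusterings.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score


def bootstrap_stability(X, n_clusters, n_boot=50, random_state=0):
    """Average ARI between the reference clustering of X and clusterings of
    bootstrap-perturbed copies, compared on the resampled observations."""
    rng = np.random.default_rng(random_state)
    reference = KMeans(n_clusters=n_clusters, n_init=10,
                       random_state=random_state).fit_predict(X)
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap sample indices
        perturbed = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X[idx])
        # Agreement between perturbed labels and the reference labels of the
        # same (resampled) points: high ARI means the clusters were preserved.
        scores.append(adjusted_rand_score(reference[idx], perturbed))
    return float(np.mean(scores))


# Toy usage: stability tends to peak near the true number of clusters.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
for k in (2, 3, 4, 5):
    print(k, round(bootstrap_stability(X, n_clusters=k), 3))
```

Higher average agreement indicates a clustering that survives perturbation. As the review emphasizes, the choice of perturbation scheme and of the similarity measure between clusterings is precisely what distinguishes the various stability estimation methods from one another.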
