Detecting the Number of Clusters during Expectation-Maximization Clustering Using Information Criterion

U. Gupta, Vinay Menon, Uday Babbar
DOI: 10.1109/ICMLC.2010.47
Published in: 2010 Second International Conference on Machine Learning and Computing, 2010-02-09
Citations: 17

Abstract

This paper presents an algorithm to automatically determine the number of clusters in a given input data set, under a mixture of Gaussians assumption. Our algorithm extends the Expectation-Maximization clustering approach by starting with a single cluster assumption for the data, and recursively splitting one of the clusters in order to find a tighter fit. An Information Criterion parameter is used to make a selection between the current and previous model after each split. We build this approach upon prior work done on both the K-Means and Expectation-Maximization algorithms. We also present a novel idea for intelligent cluster splitting which minimizes convergence time and substantially improves accuracy.