Partitioning the variables for alternating optimization of real-valued scalar fields

J. Bezdek, R. Hathaway
DOI: 10.1109/NAFIPS.2002.1018067
Published in: 2002 Annual Meeting of the North American Fuzzy Information Processing Society Proceedings. NAFIPS-FLINT 2002 (Cat. No. 02TH8622)
Publication date: 2002-08-07
Citations: 1

Abstract

Summary form only given, as follows. Let x be a real-valued scalar field, partitioned into t subsets of non-overlapping variables X_i (i = 1, ..., t). Alternating optimization (AO) is an iterative procedure for minimizing (or maximizing) the function f(x) = f(X_1, X_2, ..., X_t) jointly over all variables by alternating restricted minimizations (or maximizations) over the individual subsets of variables X_1, ..., X_t. AO is the basis for the c-means clustering algorithm (t = 2), many forms of vector quantization (t = 2, 3 and 4), and the expectation-maximization algorithm (t = 4) for normal mixture decomposition. First we review where and how AO fits into the overall optimization landscape. Then we state (without proofs) two new theorems that give very general local and global convergence and rate-of-convergence results which hold for all partitionings of x. Finally, we discuss the important problem of choosing a "best" partitioning of the input variables for the AO approach. We show that the number of possible AO iteration schemes is larger than the number of standard partitions of the input variables. Two numerical examples illustrate that the selection of the t subsets of x has an important effect on the rate of convergence of the corresponding AO algorithm to a solution.
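The alternating restricted minimizations described above can be sketched in a few lines. This is a minimal illustrative example, not the paper's own algorithm or test problem: the objective f(x, y) = x^2 + y^2 + xy, the two-block partition X_1 = {x}, X_2 = {y} (the t = 2 case), and the closed-form block updates argmin_x f = -y/2 and argmin_y f = -x/2 are all assumptions chosen so each restricted minimization is exact.

```python
# Sketch of alternating optimization (AO) for the t = 2 case.
# Objective and partition are illustrative assumptions, not from the paper:
#   minimize f(x, y) = x^2 + y^2 + x*y  over blocks X_1 = {x}, X_2 = {y}.
# Each restricted minimization has a closed form:
#   argmin_x f(x, y) = -y / 2,   argmin_y f(x, y) = -x / 2.

def alternating_optimization(x0, y0, tol=1e-12, max_iter=1000):
    x, y = x0, y0
    for k in range(max_iter):
        x_new = -y / 2.0          # restricted minimization over X_1
        y_new = -x_new / 2.0      # restricted minimization over X_2
        if abs(x_new - x) < tol and abs(y_new - y) < tol:
            return x_new, y_new, k + 1
        x, y = x_new, y_new
    return x, y, max_iter

x_star, y_star, iters = alternating_optimization(1.0, 1.0)
print(x_star, y_star, iters)  # iterates contract toward the joint minimizer (0, 0)
```

For this strictly convex quadratic the AO iterates contract by a constant factor per sweep, so the scheme converges linearly to the joint minimizer, matching the kind of rate-of-convergence behavior the abstract's theorems address.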
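The claim that AO iteration schemes outnumber standard partitions of the input variables can be made concrete with a small counting sketch. This is illustrative arithmetic under an assumed reading, not the paper's own enumeration: unordered partitions of n variables are counted by the Bell numbers, while ordered partitions, which additionally fix the order in which the t subsets are visited within an AO sweep, are counted by the Fubini (ordered Bell) numbers, which are strictly larger for n >= 2.

```python
# Hedged counting illustration (assumed reading, not the paper's enumeration):
# compare unordered set partitions of n variables (Bell numbers) with ordered
# partitions (Fubini numbers), which also fix the block-visiting order.
from math import comb

def bell(n):
    # B_0 = 1; B_{m+1} = sum_{k=0}^{m} C(m, k) * B_k
    b = [1]
    for m in range(n):
        b.append(sum(comb(m, k) * b[k] for k in range(m + 1)))
    return b[n]

def fubini(n):
    # a_0 = 1; a_m = sum_{k=1}^{m} C(m, k) * a_{m-k}
    a = [1]
    for m in range(1, n + 1):
        a.append(sum(comb(m, k) * a[m - k] for k in range(1, m + 1)))
    return a[n]

for n in range(1, 6):
    print(n, bell(n), fubini(n))
# e.g. n = 4: 15 unordered partitions vs. 75 ordered ones
```

Even at n = 4 variables there are five times as many ordered schemes as plain partitions, consistent with the abstract's point that choosing an AO scheme involves more freedom than choosing a partition alone.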