Pareto neuro-evolution: constructing ensemble of neural networks using multi-objective optimization

Hussein A. Abbass
{"title":"Pareto神经进化:用多目标优化构造神经网络集合","authors":"Hussein A. Abbass","doi":"10.1109/CEC.2003.1299928","DOIUrl":null,"url":null,"abstract":"In this paper, we present a comparison between two multiobjective formulations to the formation of neuro-ensembles. The first formulation splits the training set into two nonoverlapping stratified subsets and form an objective to minimize the training error on each subset, while the second formulation adds random noise to the training set to form a second objective. A variation of the memetic Pareto artificial neural network (MPANN) algorithm is used. MPANN is based on differential evolution for continuous optimization. The ensemble is formed from all networks on the Pareto frontier. It is found that the first formulation outperformed the second. The first formulation is also found to be competitive to other methods in the literature.","PeriodicalId":416243,"journal":{"name":"The 2003 Congress on Evolutionary Computation, 2003. CEC '03.","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2003-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Pareto neuro-evolution: constructing ensemble of neural networks using multi-objective optimization\",\"authors\":\"Hussein A. Abbass\",\"doi\":\"10.1109/CEC.2003.1299928\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we present a comparison between two multiobjective formulations to the formation of neuro-ensembles. The first formulation splits the training set into two nonoverlapping stratified subsets and form an objective to minimize the training error on each subset, while the second formulation adds random noise to the training set to form a second objective. A variation of the memetic Pareto artificial neural network (MPANN) algorithm is used. MPANN is based on differential evolution for continuous optimization. The ensemble is formed from all networks on the Pareto frontier. It is found that the first formulation outperformed the second. The first formulation is also found to be competitive to other methods in the literature.\",\"PeriodicalId\":416243,\"journal\":{\"name\":\"The 2003 Congress on Evolutionary Computation, 2003. CEC '03.\",\"volume\":\"38 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2003-12-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The 2003 Congress on Evolutionary Computation, 2003. CEC '03.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CEC.2003.1299928\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The 2003 Congress on Evolutionary Computation, 2003. CEC '03.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CEC.2003.1299928","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5

Abstract

In this paper, we present a comparison between two multi-objective formulations for the formation of neuro-ensembles. The first formulation splits the training set into two non-overlapping stratified subsets and forms an objective to minimize the training error on each subset, while the second formulation adds random noise to the training set to form a second objective. A variation of the memetic Pareto artificial neural network (MPANN) algorithm is used. MPANN is based on differential evolution for continuous optimization. The ensemble is formed from all networks on the Pareto frontier. The first formulation is found to outperform the second, and to be competitive with other methods in the literature.
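As a rough illustration of the first formulation and of the Pareto-frontier ensemble step, the Python sketch below is a minimal, assumed reconstruction rather than the paper's MPANN implementation: it uses synthetic data, a toy linear "network" in place of the evolved neural networks, a randomly sampled candidate population in place of differential evolution, and scikit-learn's train_test_split for the stratified split.

```python
# Sketch (assumed, not the authors' MPANN code): the first formulation's
# two-objective fitness -- training error on each of two non-overlapping,
# stratified halves of the training set -- followed by extraction of the
# non-dominated (Pareto-frontier) candidates as the ensemble.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy binary-classification data (made up for illustration).
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Formulation 1: split the training set into two non-overlapping stratified subsets.
X1, X2, y1, y2 = train_test_split(X, y, test_size=0.5, stratify=y, random_state=0)

def predict(weights, X):
    """Toy linear 'network': thresholded linear projection (stand-in for an ANN)."""
    return (X @ weights[:-1] + weights[-1] > 0).astype(int)

def objectives(weights):
    """Two objectives to minimize: training error on each stratified subset."""
    e1 = np.mean(predict(weights, X1) != y1)
    e2 = np.mean(predict(weights, X2) != y2)
    return e1, e2

# Candidate population (in MPANN these would be evolved with differential
# evolution; here they are random weight vectors purely for illustration).
population = [rng.normal(size=X.shape[1] + 1) for _ in range(30)]
scores = np.array([objectives(w) for w in population])

def pareto_front(scores):
    """Indices of non-dominated solutions (minimization on both objectives)."""
    front = []
    for i, s in enumerate(scores):
        dominated = any(
            np.all(t <= s) and np.any(t < s)
            for j, t in enumerate(scores) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# The ensemble is formed from all candidates on the Pareto frontier.
ensemble = [population[i] for i in pareto_front(scores)]

# Ensemble prediction by majority vote over the Pareto-frontier candidates.
votes = np.stack([predict(w, X) for w in ensemble])
ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int)
print(f"ensemble size: {len(ensemble)}, "
      f"training accuracy: {np.mean(ensemble_pred == y):.2f}")
```

The sketch only shows how the two-objective fitness and the Pareto-frontier ensemble fit together; in the paper, the candidates are neural networks evolved with differential evolution, and the second formulation would instead replace one of the two objectives with the error on a noise-corrupted copy of the training set.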