{"title":"Improving the performance of the LMS algorithm via cooperative learning","authors":"R. Das, B. K. Das, M. Chakraborty","doi":"10.1109/NCC.2013.6487980","DOIUrl":null,"url":null,"abstract":"Combination of two adaptive filters working in parallel for achieving better performance both in term of convergence speed and excess mean square error (EMSE) has been considered by several researchers in recent past. Prominent among these include convex combination (where combinational weight factors are within the range [0 1], while summing up to one), affine combination (where the combinational weight factors are free from any range constraint, while still summing up to one) and unconstrained model combination (where the output of constituent filters are combined using another adaptive algorithm). In this paper, we propose a novel way of using two adaptive filters for achieving better performance, using the cooperative learning approach. For this, we employ one LMS based adaptive filter that uses a larger step size and thus has a faster rate of convergence at the expense of higher EMSE. The other filter employed uses a modified version of the LMS algorithm, which employs a much lesser step size, but has one extra update term in the weight update relation that helps in learning from the faster filter its filter weight information. The learning takes place during the transient phase, while, in the steady state, two filters become almost independent of each other. Presence of the learning component in the weight update recursion enables the filter to converge much faster while a smaller step size ensures much less steady state EMSE. 
The claims are supported by theoretical as well as detailed simulation studies.","PeriodicalId":202526,"journal":{"name":"2013 National Conference on Communications (NCC)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 National Conference on Communications (NCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NCC.2013.6487980","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
The combination of two adaptive filters working in parallel to achieve better performance, both in terms of convergence speed and excess mean square error (EMSE), has been considered by several researchers in the recent past. Prominent among these approaches are the convex combination (where the combination weight factors lie within the range [0, 1] and sum to one), the affine combination (where the combination weight factors are free from any range constraint but still sum to one), and the unconstrained model combination (where the outputs of the constituent filters are combined using another adaptive algorithm). In this paper, we propose a novel way of using two adaptive filters to achieve better performance, based on a cooperative learning approach. For this, we employ one LMS-based adaptive filter that uses a larger step size and thus converges faster, at the expense of a higher EMSE. The other filter uses a modified version of the LMS algorithm with a much smaller step size, but with one extra term in its weight update relation that helps it learn the weight information of the faster filter. The learning takes place during the transient phase, while in the steady state the two filters become almost independent of each other. The presence of the learning term in the weight update recursion enables the filter to converge much faster, while the smaller step size ensures a much lower steady-state EMSE. The claims are supported by theoretical analysis as well as detailed simulation studies.
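The abstract describes the mechanism but not the exact update equations. The sketch below illustrates the general idea in a system-identification setting: a fast LMS filter with a large step size, and a slow LMS filter whose update carries an extra term pulling its weights toward the fast filter's weights. The specific form of that coupling term (here a simple transfer gain `lam` times the weight difference) and all parameter values are assumptions for illustration, not the paper's actual recursion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown length-8 FIR system to identify; white Gaussian input, mild noise.
M = 8
w_true = rng.standard_normal(M)
N = 5000
x = rng.standard_normal(N + M)
d = np.convolve(x, w_true, mode="full")[M - 1 : M - 1 + N] \
    + 0.01 * rng.standard_normal(N)

mu_fast = 0.05    # large step size: fast convergence, higher EMSE
mu_slow = 0.005   # small step size: slow convergence, lower EMSE
lam = 0.01        # hypothetical transfer gain for the learning term (assumed form)

w_fast = np.zeros(M)
w_slow = np.zeros(M)

for n in range(N):
    u = x[n : n + M][::-1]          # regressor, most recent sample first
    e_fast = d[n] - w_fast @ u
    e_slow = d[n] - w_slow @ u
    # Plain LMS update for the fast filter.
    w_fast += mu_fast * e_fast * u
    # Slow filter: standard LMS update plus a term that pulls its weights
    # toward the fast filter's weights during the transient phase.
    w_slow += mu_slow * e_slow * u + lam * (w_fast - w_slow)

print("fast misalignment:", np.linalg.norm(w_fast - w_true))
print("slow misalignment:", np.linalg.norm(w_slow - w_true))
```

With the coupling term active, the slow filter reaches the vicinity of the true weights far sooner than an uncoupled small-step LMS filter would; the paper additionally arranges for the learning to fade in steady state so the small step size determines the final EMSE, a detail this sketch does not model.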