{"title":"离散时间细胞神经网络的连续学习算法","authors":"H. Magnussen, G. Papoutsis, J. Nossek","doi":"10.1109/CNNA.1994.381689","DOIUrl":null,"url":null,"abstract":"The SGN-type nonlinearity of a standard discrete-time cellular neural network (DTCNN) is replaced by a smooth, sigmoidal nonlinearity with variable gain. Therefore, the resulting dynamical system is fully differentiable. Bounds on gain of the sigmoidal function are given, so that the new smooth system approximates the standard DTCNN within certain limits. A learning algorithm is proposed, which finds the template parameters for the standard DTCNN by gradually increasing the gain of the sigmoidal function.<<ETX>>","PeriodicalId":248898,"journal":{"name":"Proceedings of the Third IEEE International Workshop on Cellular Neural Networks and their Applications (CNNA-94)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1994-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Continuation-based learning algorithm for discrete-time cellular neural networks\",\"authors\":\"H. Magnussen, G. Papoutsis, J. Nossek\",\"doi\":\"10.1109/CNNA.1994.381689\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The SGN-type nonlinearity of a standard discrete-time cellular neural network (DTCNN) is replaced by a smooth, sigmoidal nonlinearity with variable gain. Therefore, the resulting dynamical system is fully differentiable. Bounds on gain of the sigmoidal function are given, so that the new smooth system approximates the standard DTCNN within certain limits. A learning algorithm is proposed, which finds the template parameters for the standard DTCNN by gradually increasing the gain of the sigmoidal function.<<ETX>>\",\"PeriodicalId\":248898,\"journal\":{\"name\":\"Proceedings of the Third IEEE International Workshop on Cellular Neural Networks and their Applications (CNNA-94)\",\"volume\":\"13 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1994-12-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Third IEEE International Workshop on Cellular Neural Networks and their Applications (CNNA-94)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CNNA.1994.381689\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Third IEEE International Workshop on Cellular Neural Networks and their Applications (CNNA-94)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CNNA.1994.381689","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Continuation-based learning algorithm for discrete-time cellular neural networks
The sgn-type nonlinearity of a standard discrete-time cellular neural network (DTCNN) is replaced by a smooth sigmoidal nonlinearity with variable gain, so that the resulting dynamical system is fully differentiable. Bounds on the gain of the sigmoidal function are given under which the new smooth system approximates the standard DTCNN within certain limits. A learning algorithm is proposed that finds the template parameters for the standard DTCNN by gradually increasing the gain of the sigmoidal function.
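The abstract does not give the algorithm in detail, but the continuation idea can be sketched as follows: run the DTCNN recursion with sgn(x) replaced by tanh(gain * x), fit the template parameters by gradient descent at a fixed gain, then raise the gain and repeat so the smooth system gradually approaches the hard-limiting DTCNN. The code below is a minimal illustrative sketch under these assumptions; the 3x3 templates, the toy target, the numerical gradient, and the gain schedule are all hypothetical choices, not the authors' implementation.

```python
# Hypothetical sketch of continuation-based template learning for a smooth DTCNN.
# sgn(x) is replaced by tanh(gain * x); the gain is increased between training phases.
import numpy as np
from scipy.signal import convolve2d

def smooth_dtcnn(u, y0, A, B, bias, gain, n_iter=5):
    """Iterate the smooth DTCNN: y <- tanh(gain * (A*y + B*u + bias))."""
    y = y0
    for _ in range(n_iter):
        net = convolve2d(y, A, mode="same") + convolve2d(u, B, mode="same") + bias
        y = np.tanh(gain * net)
    return y

def loss(params, u, y0, target, gain):
    """Mean-squared error between the smooth DTCNN output and the target image."""
    A = params[:9].reshape(3, 3)
    B = params[9:18].reshape(3, 3)
    bias = params[18]
    y = smooth_dtcnn(u, y0, A, B, bias, gain)
    return np.mean((y - target) ** 2)

def numerical_grad(params, *args, eps=1e-5):
    """Central-difference gradient of the loss with respect to the template parameters."""
    g = np.zeros_like(params)
    for i in range(len(params)):
        p_plus, p_minus = params.copy(), params.copy()
        p_plus[i] += eps
        p_minus[i] -= eps
        g[i] = (loss(p_plus, *args) - loss(p_minus, *args)) / (2 * eps)
    return g

# Toy training pair: a random binary input and a binary target image.
rng = np.random.default_rng(0)
u = np.sign(rng.standard_normal((8, 8)))
target = np.sign(convolve2d(u, np.ones((3, 3)), mode="same"))
y0 = np.zeros_like(u)

params = 0.1 * rng.standard_normal(19)   # 3x3 feedback template A, 3x3 control template B, bias
lr = 0.2
for gain in [1.0, 2.0, 4.0, 8.0, 16.0]:  # continuation: gradually increase the sigmoid gain
    for _ in range(200):                 # gradient descent at this fixed gain
        params -= lr * numerical_grad(params, u, y0, target, gain)
    print(f"gain {gain:5.1f}  loss {loss(params, u, y0, target, gain):.4f}")
```

As the gain grows, tanh(gain * x) approaches sgn(x), so templates found at high gain are candidates for the original hard-limiting DTCNN; the gain bounds mentioned in the abstract make this approximation argument precise.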