{"title":"完全稳定细胞神经网络的循环感知器学习算法","authors":"C. Guzelis, S. Karamahmut","doi":"10.1109/CNNA.1994.381688","DOIUrl":null,"url":null,"abstract":"A supervised learning algorithm for obtaining the template coefficients in completely stable cellular neural networks (CNNs) is presented. The proposed algorithm resembles the well-known perceptron learning algorithm and hence is called as recurrent perceptron learning algorithm (RPLA) as applied to a dynamical network, CNN. The RPLA can be described as the following set of rules: (i) increase each feedback template coefficient which defines the connection to a mismatching cell from its neighbor whose steady-state output is same with the mismatching cell's desired output. On the contrary, decrease each feedback template coefficient which defines the connection to a mismatching cell from its neighbor whose steady-state is different from the mismatching cell's desired output. (ii) Change the input template coefficients according to the rule stated in (i) by only replacing the word of \"neighbor\" with \"input\". (iii) Retain the template coefficients unchanged if the actual outputs match the desired outputs. The proposed algorithm RPLA has been applied for training CNNs to perform several image processing tasks such as edge detection, hole filling and corner detection. The performance of the templates obtained for the chosen input-(desired)output training pairs has been tested on a set of images which are different from the input images used in the training phase.<<ETX>>","PeriodicalId":248898,"journal":{"name":"Proceedings of the Third IEEE International Workshop on Cellular Neural Networks and their Applications (CNNA-94)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1994-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"40","resultStr":"{\"title\":\"Recurrent perceptron learning algorithm for completely stable cellular neural networks\",\"authors\":\"C. Guzelis, S. Karamahmut\",\"doi\":\"10.1109/CNNA.1994.381688\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A supervised learning algorithm for obtaining the template coefficients in completely stable cellular neural networks (CNNs) is presented. The proposed algorithm resembles the well-known perceptron learning algorithm and hence is called as recurrent perceptron learning algorithm (RPLA) as applied to a dynamical network, CNN. The RPLA can be described as the following set of rules: (i) increase each feedback template coefficient which defines the connection to a mismatching cell from its neighbor whose steady-state output is same with the mismatching cell's desired output. On the contrary, decrease each feedback template coefficient which defines the connection to a mismatching cell from its neighbor whose steady-state is different from the mismatching cell's desired output. (ii) Change the input template coefficients according to the rule stated in (i) by only replacing the word of \\\"neighbor\\\" with \\\"input\\\". (iii) Retain the template coefficients unchanged if the actual outputs match the desired outputs. The proposed algorithm RPLA has been applied for training CNNs to perform several image processing tasks such as edge detection, hole filling and corner detection. 
The performance of the templates obtained for the chosen input-(desired)output training pairs has been tested on a set of images which are different from the input images used in the training phase.<<ETX>>\",\"PeriodicalId\":248898,\"journal\":{\"name\":\"Proceedings of the Third IEEE International Workshop on Cellular Neural Networks and their Applications (CNNA-94)\",\"volume\":\"32 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1994-12-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"40\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Third IEEE International Workshop on Cellular Neural Networks and their Applications (CNNA-94)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CNNA.1994.381688\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Third IEEE International Workshop on Cellular Neural Networks and their Applications (CNNA-94)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CNNA.1994.381688","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A supervised learning algorithm for obtaining the template coefficients of completely stable cellular neural networks (CNNs) is presented. The proposed algorithm resembles the well-known perceptron learning algorithm and is therefore called the recurrent perceptron learning algorithm (RPLA), as it is applied to a dynamical network, the CNN. The RPLA can be described by the following set of rules: (i) increase each feedback template coefficient that defines the connection to a mismatching cell from a neighbor whose steady-state output is the same as the mismatching cell's desired output; conversely, decrease each feedback template coefficient that defines the connection to a mismatching cell from a neighbor whose steady-state output differs from the mismatching cell's desired output. (ii) Change the input template coefficients according to the rule stated in (i), replacing the word "neighbor" with "input". (iii) Leave the template coefficients unchanged if the actual outputs match the desired outputs. The RPLA has been applied to train CNNs to perform several image processing tasks, such as edge detection, hole filling, and corner detection. The performance of the templates obtained for the chosen input-(desired) output training pairs has been tested on a set of images different from the input images used in the training phase.
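Rules (i)-(iii) amount to a sign-based, perceptron-like correction of the feedback and input templates at every mismatching cell. The sketch below illustrates one such update in Python under assumptions not stated in the abstract: binary (+1/-1) images, a 3x3 template, a small learning rate eta, and a steady-state output y_ss obtained by simulating the CNN separately (the simulation itself is not shown). The function name rpla_update and all parameter choices are illustrative, not the authors' implementation.

```python
import numpy as np

def rpla_update(A, B, u, y_ss, d, eta=0.05):
    """One RPLA-style template update (illustrative sketch, not the paper's code).

    A, B  : feedback / input templates, shape (3, 3), float
    u     : input image, shape (H, W), values in {-1, +1} (assumed encoding)
    y_ss  : steady-state CNN output for the current templates, {-1, +1}
    d     : desired output image, {-1, +1}
    eta   : learning rate (assumed hyperparameter)
    """
    H, W = d.shape
    dA = np.zeros_like(A)
    dB = np.zeros_like(B)
    r = 1  # neighborhood radius of a 3x3 template
    for i in range(r, H - r):
        for j in range(r, W - r):
            if y_ss[i, j] == d[i, j]:
                continue  # rule (iii): matching cells leave the templates unchanged
            for k in range(-r, r + 1):
                for l in range(-r, r + 1):
                    # rule (i): contribution is +1 if the neighbor's steady-state
                    # output agrees with the mismatching cell's desired output,
                    # -1 if it differs (increase vs. decrease the coefficient)
                    dA[k + r, l + r] += d[i, j] * y_ss[i + k, j + l]
                    # rule (ii): same rule with the neighbor's input value instead
                    dB[k + r, l + r] += d[i, j] * u[i + k, j + l]
    return A + eta * dA, B + eta * dB
```

In this +1/-1 encoding, the product d[i, j] * y_ss[i + k, j + l] is +1 exactly when the neighbor's steady-state output agrees with the mismatching cell's desired output, which reproduces the increase/decrease rule stated in (i); after each update the CNN would be run to steady state again with the new templates before the next correction.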