{"title":"一种新的学习方法来提高Hopfield模型的存储容量","authors":"H. Oh, S. Kothari","doi":"10.1109/IJCNN.1991.170650","DOIUrl":null,"url":null,"abstract":"A new learning technique is introduced to solve the problem of the small and restrictive storage capacity of the Hopfield model. The technique exploits the maximum storage capacity. It fails only if appropriate weights do not exist to store the given set of patterns. The technique is not based on the concept of function minimization. Thus, there is no danger of getting stuck in local minima. The technique is free from the step size and moving target problems. Learning speed is very fast and depends on difficulties presented by the training patterns and not so much on the parameters of the algorithm. The technique is scalable. Its performance does not degrade as the problem size increases. An extensive analysis of the learning technique is provided through simulation results.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":"{\"title\":\"A new learning approach to enhance the storage capacity of the Hopfield model\",\"authors\":\"H. Oh, S. Kothari\",\"doi\":\"10.1109/IJCNN.1991.170650\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A new learning technique is introduced to solve the problem of the small and restrictive storage capacity of the Hopfield model. The technique exploits the maximum storage capacity. It fails only if appropriate weights do not exist to store the given set of patterns. The technique is not based on the concept of function minimization. Thus, there is no danger of getting stuck in local minima. The technique is free from the step size and moving target problems. 
Learning speed is very fast and depends on difficulties presented by the training patterns and not so much on the parameters of the algorithm. The technique is scalable. Its performance does not degrade as the problem size increases. An extensive analysis of the learning technique is provided through simulation results.<<ETX>>\",\"PeriodicalId\":211135,\"journal\":{\"name\":\"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1991-11-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"15\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.1991.170650\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.1991.170650","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A new learning approach to enhance the storage capacity of the Hopfield model
A new learning technique is introduced to overcome the small, restrictive storage capacity of the Hopfield model. The technique exploits the maximum storage capacity: it fails only if no weights exist that can store the given set of patterns. Because the technique is not based on function minimization, there is no danger of getting stuck in local minima, and it is free of the step-size and moving-target problems. Learning is very fast, and its speed depends on the difficulty of the training patterns rather than on the parameters of the algorithm. The technique is scalable: its performance does not degrade as the problem size increases. An extensive analysis of the learning technique is provided through simulation results.