A neural network based on LVQ2 with dynamic building of the map
E. Maillard, B. Solaiman
Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94), 27 June 1994
DOI: 10.1109/ICNN.1994.374274
The HLVQ network achieves a synthesis of supervised and unsupervised learning. Promising results have been reported elsewhere. A dynamic map-building technique for HLVQ is introduced. During learning, the creation of neurons follows a loose KD-tree algorithm. A criterion is presented for detecting where the network fails to match the topology of the training set; this information is localized in the input space. When the weakness criterion is met, a neuron is added to the existing map in a way that preserves the topology of the network. The new algorithm frees the network almost entirely from a crucial external parameter: the size of the neuron map. Furthermore, it is shown that the network achieves its highest classification score when a constant learning rate and neighborhood size are employed.
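The abstract does not reproduce the HLVQ update equations or the map-growing rule. For orientation only, a minimal sketch of the standard LVQ2.1 prototype update (Kohonen), on which LVQ2-based networks build, might look like the following; the function name, parameters, and defaults are illustrative, not taken from the paper:

```python
import numpy as np

def lvq2_update(x, label, protos, proto_labels, lr=0.1, window=0.3):
    """One standard LVQ2.1 update step (illustrative sketch; the paper's
    HLVQ-specific dynamic map building is not reproduced here).

    x            -- input vector
    label        -- class of x
    protos       -- array of prototype vectors, modified in place
    proto_labels -- class of each prototype
    """
    d = np.linalg.norm(protos - x, axis=1)
    i, j = np.argsort(d)[:2]              # the two nearest prototypes
    s = (1 - window) / (1 + window)       # window threshold
    # Update only if exactly one of the pair has the correct class
    # and x falls inside the window around the decision midplane.
    if (proto_labels[i] == label) != (proto_labels[j] == label):
        if min(d[i] / d[j], d[j] / d[i]) > s:
            correct, wrong = (i, j) if proto_labels[i] == label else (j, i)
            protos[correct] += lr * (x - protos[correct])  # attract
            protos[wrong]   -= lr * (x - protos[wrong])    # repel
    return protos
```

In this formulation the correct prototype is pulled toward the sample and the incorrect one is pushed away, sharpening the decision boundary between the two classes; the dynamic map building described in the abstract would additionally insert new prototypes where this boundary is poorly represented.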