J. D. Fernández-Rodríguez, E. Palomo, J. M. Ortiz-de-Lazcano-Lobato, G. Ramos-Jiménez, Ezequiel López-Rubio
{"title":"连续无监督学习的动态学习率","authors":"J. D. Fernández-Rodríguez, E. Palomo, J. M. Ortiz-de-Lazcano-Lobato, G. Ramos-Jiménez, Ezequiel López-Rubio","doi":"10.3233/ica-230701","DOIUrl":null,"url":null,"abstract":"The dilemma between stability and plasticity is crucial in machine learning, especially when non-stationary input distributions are considered. This issue can be addressed by continual learning in order to alleviate catastrophic forgetting. This strategy has been previously proposed for supervised and reinforcement learning models. However, little attention has been devoted to unsupervised learning. This work presents a dynamic learning rate framework for unsupervised neural networks that can handle non-stationary distributions. In order for the model to adapt to the input as it changes its characteristics, a varying learning rate that does not merely depend on the training step but on the reconstruction error has been proposed. In the experiments, different configurations for classical competitive neural networks, self-organizing maps and growing neural gas with either per-neuron or per-network dynamic learning rate have been tested. Experimental results on document clustering tasks demonstrate the suitability of the proposal for real-world problems.","PeriodicalId":50358,"journal":{"name":"Integrated Computer-Aided Engineering","volume":"3 1","pages":"257-273"},"PeriodicalIF":5.8000,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Dynamic learning rates for continual unsupervised learning\",\"authors\":\"J. D. Fernández-Rodríguez, E. Palomo, J. M. Ortiz-de-Lazcano-Lobato, G. Ramos-Jiménez, Ezequiel López-Rubio\",\"doi\":\"10.3233/ica-230701\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The dilemma between stability and plasticity is crucial in machine learning, especially when non-stationary input distributions are considered. This issue can be addressed by continual learning in order to alleviate catastrophic forgetting. This strategy has been previously proposed for supervised and reinforcement learning models. However, little attention has been devoted to unsupervised learning. This work presents a dynamic learning rate framework for unsupervised neural networks that can handle non-stationary distributions. In order for the model to adapt to the input as it changes its characteristics, a varying learning rate that does not merely depend on the training step but on the reconstruction error has been proposed. In the experiments, different configurations for classical competitive neural networks, self-organizing maps and growing neural gas with either per-neuron or per-network dynamic learning rate have been tested. 
Experimental results on document clustering tasks demonstrate the suitability of the proposal for real-world problems.\",\"PeriodicalId\":50358,\"journal\":{\"name\":\"Integrated Computer-Aided Engineering\",\"volume\":\"3 1\",\"pages\":\"257-273\"},\"PeriodicalIF\":5.8000,\"publicationDate\":\"2023-02-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Integrated Computer-Aided Engineering\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.3233/ica-230701\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Integrated Computer-Aided Engineering","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.3233/ica-230701","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Dynamic learning rates for continual unsupervised learning
The dilemma between stability and plasticity is crucial in machine learning, especially when non-stationary input distributions are considered. Continual learning addresses this issue in order to alleviate catastrophic forgetting. This strategy has previously been proposed for supervised and reinforcement learning models, but little attention has been devoted to unsupervised learning. This work presents a dynamic learning rate framework for unsupervised neural networks that can handle non-stationary distributions. So that the model can adapt as the characteristics of the input change, a varying learning rate is proposed that depends not merely on the training step but also on the reconstruction error. In the experiments, different configurations of classical competitive neural networks, self-organizing maps, and growing neural gas with either per-neuron or per-network dynamic learning rates are tested. Experimental results on document clustering tasks demonstrate the suitability of the proposal for real-world problems.
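To make the idea concrete, below is a minimal, illustrative sketch (Python/NumPy) of a winner-take-all competitive network whose per-neuron learning rate is driven by the reconstruction (quantization) error rather than by a fixed step-based decay schedule, in the spirit of the abstract above. The specific error-to-rate mapping (a tanh squashing of the error normalized by a per-neuron running average), the bounds eta_min and eta_max, and all class and method names are hypothetical choices for illustration, not the formulation used in the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's exact formulation): a competitive
# network whose per-neuron learning rate is computed from the reconstruction
# (quantization) error of each sample instead of a fixed decay schedule.

class DynamicRateCompetitiveNet:
    def __init__(self, n_units, dim, eta_min=0.01, eta_max=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(n_units, dim))   # prototype vectors
        self.eta_min, self.eta_max = eta_min, eta_max
        self.err_ema = np.ones(n_units)            # running error per neuron

    def _rate(self, err, unit):
        # Hypothetical mapping: normalize the current error by the neuron's
        # running error average, then squash into [eta_min, eta_max].
        ratio = err / (self.err_ema[unit] + 1e-12)
        return self.eta_min + (self.eta_max - self.eta_min) * np.tanh(ratio)

    def partial_fit(self, x):
        # Winner-take-all step: find the best matching unit (BMU).
        dists = np.linalg.norm(self.w - x, axis=1)
        bmu = int(np.argmin(dists))
        err = dists[bmu]                           # reconstruction error
        eta = self._rate(err, bmu)                 # error-driven learning rate
        self.w[bmu] += eta * (x - self.w[bmu])     # prototype update
        # Track the neuron's typical error so the rate adapts to drift.
        self.err_ema[bmu] = 0.9 * self.err_ema[bmu] + 0.1 * err
        return bmu, err, eta

# Usage: stream samples from a non-stationary source; after the abrupt shift
# the larger reconstruction errors raise the learning rate automatically.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    net = DynamicRateCompetitiveNet(n_units=8, dim=2)
    for t in range(2000):
        shift = 0.0 if t < 1000 else 3.0           # abrupt distribution change
        x = rng.normal(loc=shift, scale=1.0, size=2)
        net.partial_fit(x)
```

The design intent of such a scheme is that a large reconstruction error signals either novelty or distribution drift, so the affected prototype moves faster (plasticity), while well-fitted prototypes keep a small rate and remain stable.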
Journal description:
Integrated Computer-Aided Engineering (ICAE) was founded in 1993. As noted in the inaugural issue, the journal is based on the premise that "interdisciplinary thinking and synergistic collaboration of disciplines can solve complex problems, open new frontiers, and lead to true innovations and breakthroughs, the cornerstone of industrial competitiveness and advancement of the society".
The focus of ICAE is the integration of leading-edge and emerging computer and information technologies for the innovative solution of engineering problems. The journal fosters interdisciplinary research and presents a unique forum for innovative computer-aided engineering. It also publishes novel industrial applications of CAE, thus helping to bring new computational paradigms from research labs and classrooms to reality. Areas covered by the journal include (but are not limited to) artificial intelligence, advanced signal processing, biologically inspired computing, cognitive modeling, concurrent engineering, database management, distributed computing, evolutionary computing, fuzzy logic, genetic algorithms, geometric modeling, intelligent and adaptive systems, internet-based technologies, knowledge discovery and engineering, machine learning, mechatronics, mobile computing, multimedia technologies, networking, neural network computing, object-oriented systems, optimization and search, parallel processing, robotics, virtual reality, and visualization techniques.