A systematic and effective parameter and network tuning method by utilizing Jacobian rank deficiency
G. Zhou, J. Si, S. Lin
DOI: 10.1109/ISCAS.1997.608830
电路与系统学报 (Journal of Circuits and Systems), pp. 597-600, vol. 1, published 1997-06-09
Most neural network applications rely on the fundamental approximation property of feed-forward networks. In a realistic problem setting, a mechanism is needed to devise a learning process that implements this approximate mapping from available data: choosing an appropriate set of parameters to avoid overfitting, applying a learning algorithm that is efficient in computation and memory while maintaining training accuracy, and performing testing and cross-validation to assess generalization. In this paper we develop a comprehensive procedure that addresses these issues in a systematic manner. The procedure is based on a common observation of Jacobian rank deficiency. We introduce a new numerical procedure for solving the nonlinear optimization problem in supervised learning that not only reduces training time and overall complexity but also achieves good training accuracy and generalization.
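The central observation — that the Jacobian of a feed-forward network's training residuals with respect to its parameters is often rank-deficient — can be illustrated concretely. Below is a minimal NumPy sketch, not the authors' algorithm: the network size, data, deliberate unit duplication, and tolerances are all illustrative assumptions. It builds the analytic Jacobian of a tiny 1-4-1 network's residuals, shows how a duplicated hidden unit makes that Jacobian lose rank, and forms a rank-truncated Gauss-Newton-style step that ignores the deficient directions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 1-4-1 network: y(x) = w2 . tanh(w1 * x + b1) + b2, fit to 20 samples.
n_hidden = 4
x = np.linspace(-1.0, 1.0, 20)
t = np.sin(np.pi * x)

def unpack(p):
    w1 = p[:n_hidden]
    b1 = p[n_hidden:2 * n_hidden]
    w2 = p[2 * n_hidden:3 * n_hidden]
    b2 = p[3 * n_hidden]
    return w1, b1, w2, b2

def residuals(p):
    w1, b1, w2, b2 = unpack(p)
    h = np.tanh(np.outer(x, w1) + b1)      # (20, 4) hidden activations
    return h @ w2 + b2 - t                 # (20,) residual vector

def jacobian(p):
    """Analytic Jacobian of the residuals w.r.t. all 13 parameters."""
    w1, b1, w2, b2 = unpack(p)
    h = np.tanh(np.outer(x, w1) + b1)
    dh = 1.0 - h ** 2                      # derivative of tanh
    J_w1 = dh * w2 * x[:, None]            # d r / d w1_j
    J_b1 = dh * w2                         # d r / d b1_j
    J_w2 = h                               # d r / d w2_j
    J_b2 = np.ones((x.size, 1))            # d r / d b2
    return np.hstack([J_w1, J_b1, J_w2, J_b2])

p = rng.normal(scale=0.5, size=3 * n_hidden + 1)
# Duplicate one hidden unit: a typical source of Jacobian rank deficiency.
p[1] = p[0]                  # w1[1] = w1[0]
p[n_hidden + 1] = p[n_hidden]  # b1[1] = b1[0]

J = jacobian(p)
s = np.linalg.svd(J, compute_uv=False)
rank = int(np.sum(s > s[0] * 1e-8))
print(f"Jacobian shape {J.shape}, numerical rank {rank}")  # rank < 13

# A rank-truncated pseudoinverse yields a Gauss-Newton-style step that
# discards the deficient directions instead of amplifying them.
step = -np.linalg.pinv(J, rcond=1e-8) @ residuals(p)
```

Because hidden units 0 and 1 share identical incoming weights, the Jacobian columns for their outgoing weights are identical and the columns for their incoming weights and biases are pairwise proportional, so the numerical rank drops well below the 13 parameters — the kind of redundancy a tuning procedure can detect and exploit.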