An Empirical Study of the Hidden Matrix Rank for Neural Networks with Random Weights
Pablo A. Henríquez, G. A. Ruz
2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 883-888, December 2017
DOI: 10.1109/ICMLA.2017.00-44
Citations: 4
Abstract
Neural networks with random weights can be regarded as feed-forward neural networks built with a specific randomized algorithm: the input weights and biases are randomly assigned and fixed during the training phase, and the output weights are evaluated analytically by the least squares method. This paper presents an empirical study of the hidden matrix rank for neural networks with random weights. We study the impact of the range of the random parameters on the model's performance, and show that assigning the input weights in the range [-1,1] is misleading. Experiments were conducted using two types of neural networks, yielding insights not only into the input weights but also into how these relate to different architectures.
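To make the setup described in the abstract concrete, the following is a minimal sketch of a random-weight network in NumPy: input weights and biases are drawn uniformly and held fixed, the hidden matrix H is formed by a nonlinear activation, the output weights are solved by least squares via the pseudoinverse, and the rank of H (the quantity the paper studies) is inspected. The data, layer size, activation, and weight range are illustrative assumptions, not the paper's exact experimental configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical, for illustration only)
X = rng.uniform(-1, 1, size=(200, 5))
y = np.sin(X.sum(axis=1, keepdims=True))

n_hidden = 50
scale = 1.0  # range of the random parameters; the paper studies varying this

# Input weights and biases: randomly assigned and fixed during training
W = rng.uniform(-scale, scale, size=(X.shape[1], n_hidden))
b = rng.uniform(-scale, scale, size=(1, n_hidden))

# Hidden matrix H (one nonlinear hidden layer)
H = np.tanh(X @ W + b)

# Output weights solved analytically by least squares (Moore-Penrose pseudoinverse)
beta = np.linalg.pinv(H) @ y

# Rank of the hidden matrix -- the quantity examined empirically in the paper
print("rank(H) =", np.linalg.matrix_rank(H), "of at most", min(H.shape))
```

A rank-deficient H indicates redundant hidden units, which is why the range chosen for the random input weights (e.g. [-1,1] versus a wider or narrower interval) can affect the effective capacity of the model.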