{"title":"神经网络学习的有限精度误差分析","authors":"J. L. Holt, Jenq-Neng Hwang","doi":"10.1109/ANN.1991.213471","DOIUrl":null,"url":null,"abstract":"The high speed desired in the implementation of many neural network algorithms, such as backpropagation learning in a multilayer perceptron (MLP), may be attained through the use of finite precision hardware. This finite precision hardware, however, is prone to errors. A method of theoretically deriving and statistically evaluating this error is presented and could be used as a guide to the details of hardware design and algorithm implementation. The paper is devoted to the derivation of the techniques involved as well as the details of the backpropagation example. The intent is to provide a general framework by which most neural network algorithms under any set of hardware constraints may be evaluated.<<ETX>>","PeriodicalId":119713,"journal":{"name":"Proceedings of the First International Forum on Applications of Neural Networks to Power Systems","volume":"81 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1991-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Finite precision error analysis for neural network learning\",\"authors\":\"J. L. Holt, Jenq-Neng Hwang\",\"doi\":\"10.1109/ANN.1991.213471\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The high speed desired in the implementation of many neural network algorithms, such as backpropagation learning in a multilayer perceptron (MLP), may be attained through the use of finite precision hardware. This finite precision hardware, however, is prone to errors. A method of theoretically deriving and statistically evaluating this error is presented and could be used as a guide to the details of hardware design and algorithm implementation. The paper is devoted to the derivation of the techniques involved as well as the details of the backpropagation example. The intent is to provide a general framework by which most neural network algorithms under any set of hardware constraints may be evaluated.<<ETX>>\",\"PeriodicalId\":119713,\"journal\":{\"name\":\"Proceedings of the First International Forum on Applications of Neural Networks to Power Systems\",\"volume\":\"81 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1991-07-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the First International Forum on Applications of Neural Networks to Power Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ANN.1991.213471\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the First International Forum on Applications of Neural Networks to Power Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ANN.1991.213471","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Finite precision error analysis for neural network learning
The high speed desired when implementing many neural network algorithms, such as backpropagation learning in a multilayer perceptron (MLP), can be attained through the use of finite precision hardware. Such hardware, however, is prone to errors. A method for theoretically deriving and statistically evaluating this error is presented; it can serve as a guide to the details of hardware design and algorithm implementation. The paper derives the techniques involved and works through the backpropagation example in detail. The intent is to provide a general framework by which most neural network algorithms, under any set of hardware constraints, may be evaluated.
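To make the idea concrete, the sketch below empirically samples the kind of finite-precision error the paper analyses: it runs one backpropagation weight update in full floating point and again with every intermediate quantized to a fixed-point grid, then compares the resulting weights. The uniform-rounding quantizer, the tiny 2-2-1 network, and the function names are illustrative assumptions, not the authors' derivation or their hardware model.

```python
import numpy as np

def quantize(x, frac_bits=8):
    """Round to a fixed-point grid with `frac_bits` fractional bits.
    (A simple uniform-rounding model, assumed here for illustration.)"""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(W1, W2, x, t, lr, q=lambda v: v):
    """One backpropagation weight update for a 2-2-1 MLP; q() is applied
    after each arithmetic stage to mimic finite-precision storage."""
    h = q(sigmoid(q(W1 @ x)))                      # hidden activations
    y = q(sigmoid(q(W2 @ h)))                      # output activation
    delta_out = q((y - t) * y * (1.0 - y))         # output-layer error term
    delta_hid = q((W2.T @ delta_out) * h * (1.0 - h))  # hidden-layer error term
    W2_new = q(W2 - lr * np.outer(delta_out, h))
    W1_new = q(W1 - lr * np.outer(delta_hid, x))
    return W1_new, W2_new

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 2))
W2 = rng.normal(size=(1, 2))
x = rng.normal(size=2)
t = np.array([1.0])

# Full-precision reference vs. quantized run: the weight differences are
# samples of the finite-precision error that the paper evaluates statistically.
ref1, ref2 = backprop_step(W1, W2, x, t, lr=0.1)
fp1, fp2 = backprop_step(W1, W2, x, t, lr=0.1, q=lambda v: quantize(v, frac_bits=8))
print("max |error| in W1 update:", np.max(np.abs(ref1 - fp1)))
print("max |error| in W2 update:", np.max(np.abs(ref2 - fp2)))
```

Repeating the comparison over many random inputs and weight settings would give an empirical error distribution to set against a theoretical derivation of the kind the paper proposes.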