{"title":"利用动态可重构fpga的随机神经结构","authors":"M. V. Daalen, P. Jeavons, J. Shawe-Taylor","doi":"10.1109/FPGA.1993.279462","DOIUrl":null,"url":null,"abstract":"The authors present an expandable digital architecture that provides an efficient real time implementation platform for large neural networks. The architecture makes heavy use of the techniques of bit serial stochastic computing to carry out the large number of required parallel synaptic calculations. In this design all real valued quantities are encoded on to stochastic bit streams in which the '1' density is proportional to the given quantity. The actual digital circuitry is simple and highly regular thus allowing very efficient space usage of fine grained FPGAs. Another feature of the design is that the large number of weights required by a neural network are generated by circuitry tailored to each of their specific values, thus saving valuable cells. Whenever one of these values is required to change, the appropriate circuitry must be dynamically reconfigured. This may always be achieved in a fixed and minimum number of cells for a given bit stream resolution.<<ETX>>","PeriodicalId":104383,"journal":{"name":"[1993] Proceedings IEEE Workshop on FPGAs for Custom Computing Machines","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1993-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"69","resultStr":"{\"title\":\"A stochastic neural architecture that exploits dynamically reconfigurable FPGAs\",\"authors\":\"M. V. Daalen, P. Jeavons, J. Shawe-Taylor\",\"doi\":\"10.1109/FPGA.1993.279462\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The authors present an expandable digital architecture that provides an efficient real time implementation platform for large neural networks. The architecture makes heavy use of the techniques of bit serial stochastic computing to carry out the large number of required parallel synaptic calculations. In this design all real valued quantities are encoded on to stochastic bit streams in which the '1' density is proportional to the given quantity. The actual digital circuitry is simple and highly regular thus allowing very efficient space usage of fine grained FPGAs. Another feature of the design is that the large number of weights required by a neural network are generated by circuitry tailored to each of their specific values, thus saving valuable cells. Whenever one of these values is required to change, the appropriate circuitry must be dynamically reconfigured. 
This may always be achieved in a fixed and minimum number of cells for a given bit stream resolution.<<ETX>>\",\"PeriodicalId\":104383,\"journal\":{\"name\":\"[1993] Proceedings IEEE Workshop on FPGAs for Custom Computing Machines\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1993-04-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"69\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"[1993] Proceedings IEEE Workshop on FPGAs for Custom Computing Machines\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/FPGA.1993.279462\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"[1993] Proceedings IEEE Workshop on FPGAs for Custom Computing Machines","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/FPGA.1993.279462","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: The authors present an expandable digital architecture that provides an efficient real-time implementation platform for large neural networks. The architecture makes heavy use of bit-serial stochastic computing techniques to carry out the large number of parallel synaptic calculations required. In this design, all real-valued quantities are encoded onto stochastic bit streams in which the '1' density is proportional to the given quantity. The digital circuitry is simple and highly regular, allowing very efficient use of the cells of fine-grained FPGAs. Another feature of the design is that the large number of weights required by a neural network are generated by circuitry tailored to each specific weight value, saving valuable cells. Whenever one of these values needs to change, the corresponding circuitry must be dynamically reconfigured. This can always be achieved in a fixed, minimal number of cells for a given bit-stream resolution.
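
The encoding scheme described in the abstract, representing a real value by the '1' density of a random bit stream, can be illustrated with a minimal, self-contained Python sketch. The encode/decode helpers and the bitwise-AND multiplication at the end are standard stochastic-computing idioms assumed here for illustration only; they are not taken from the paper's actual FPGA circuitry.

import random

def encode(value, length, rng):
    """Encode a real value in [0, 1] as a stochastic bit stream whose
    '1' density is, on average, proportional to the value."""
    return [1 if rng.random() < value else 0 for _ in range(length)]

def decode(stream):
    """Recover the encoded value as the fraction of '1' bits."""
    return sum(stream) / len(stream)

if __name__ == "__main__":
    n = 4096                                  # bit-stream resolution
    w = encode(0.75, n, random.Random(1))     # hypothetical synaptic weight
    x = encode(0.40, n, random.Random(2))     # hypothetical input activation

    # For independent unipolar streams, a bitwise AND yields a stream whose
    # '1' density approximates the product w * x (the standard stochastic
    # multiplier; the paper's own synapse circuits are not reproduced here).
    product = [a & b for a, b in zip(w, x)]

    print(decode(w), decode(x), decode(product))  # approx. 0.75, 0.40, 0.30

The longer the bit streams, the lower the variance of the recovered values, which is the sense in which cell cost is fixed for a given bit-stream resolution while accuracy improves with stream length.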