T. Washburne, M. Okamura, D. Specht, W. A. Fisher
[1991 Proceedings] IEEE Conference on Neural Networks for Ocean Engineering, August 15, 1991
DOI: 10.1109/ICNN.1991.163367
The Lockheed probabilistic neural network processor
The probabilistic neural network processor (PNNP) is a custom neural network parallel processor optimized for high-speed execution (three billion connections per second) of the probabilistic neural network (PNN) paradigm. The performance goals for the hardware processor were established to provide a three-order-of-magnitude increase in processing speed over existing neural net accelerator cards (HNC, FORD, SAIC). The PNN algorithm compares an input vector with training vectors previously stored in local memory. Each training vector belongs to one of 256 categories indicated by a descriptor table previously filled in by the user. The result of each comparison/conversion is accumulated in bins according to the original training vector's descriptor byte. The result is a vector of 256 floating-point words that is used in the final probability density function calculations.
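The category-binning step the abstract describes can be sketched in software. The following is a minimal NumPy illustration, not the PNNP hardware pipeline: it assumes a Gaussian (Parzen) kernel as the comparison/conversion, a descriptor byte per training vector, and a smoothing parameter `sigma`; the function name and signature are hypothetical.

```python
import numpy as np

def pnn_category_sums(x, training, categories, sigma=0.5, n_categories=256):
    """Accumulate kernel activations into per-category bins.

    x          : input vector, shape (d,)
    training   : stored training vectors, shape (n, d)
    categories : descriptor byte per training vector (0..n_categories-1)
    Returns a length-n_categories vector of accumulated activations,
    analogous to the 256-word result fed to the PDF calculations.
    """
    # Squared Euclidean distance from the input to every stored pattern
    d2 = np.sum((training - x) ** 2, axis=1)
    # Gaussian (Parzen) kernel: the assumed comparison/conversion step
    k = np.exp(-d2 / (2.0 * sigma ** 2))
    # Sum each activation into the bin named by its vector's descriptor byte
    bins = np.zeros(n_categories)
    np.add.at(bins, categories, k)  # unbuffered add handles repeated bins
    return bins
```

Classification would then pick the category with the largest (optionally prior-weighted) bin value, since each bin is proportional to that category's estimated probability density at `x`.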