The Lockheed probabilistic neural network processor

T. Washburne, M. Okamura, D. Specht, W. A. Fisher
DOI: 10.1109/ICNN.1991.163367
Published in: [1991 Proceedings] IEEE Conference on Neural Networks for Ocean Engineering
Publication date: 1991-08-15
Citations: 4

Abstract

The probabilistic neural network processor (PNNP) is a custom neural network parallel processor optimized for high-speed execution (three billion connections per second) of the probabilistic neural network (PNN) paradigm. The performance goals for the hardware processor were established to provide a three-order-of-magnitude increase in processing speed over existing neural net accelerator cards (HNC, FORD, SAIC). The PNN algorithm compares an input vector with a training vector previously stored in local memory. Each training vector belongs to one of 256 categories indicated by a descriptor table, which is filled in advance by the user. The result of the comparison/conversion is accumulated in bins according to the original training vector's descriptor byte. The result is a vector of 256 floating-point words that is used in the final probability density function calculations.
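The compare-and-accumulate scheme the abstract describes can be sketched in software. The sketch below is not the PNNP hardware pipeline; it is a minimal reading of the PNN paradigm, assuming a Gaussian (Parzen) kernel on the input-to-training-vector distance and a user-supplied descriptor byte per training vector. The function name `pnn_classify` and the smoothing parameter `sigma` are illustrative choices, not from the paper.

```python
import numpy as np

def pnn_classify(x, train_vectors, descriptors, sigma=0.5, num_categories=256):
    """Compare the input vector x against every stored training vector and
    accumulate each kernel result in the bin named by that training
    vector's descriptor byte (one of 256 categories)."""
    bins = np.zeros(num_categories)
    for t, d in zip(train_vectors, descriptors):
        diff = x - t
        # Gaussian kernel of the squared Euclidean distance (Parzen window);
        # in the PNNP this comparison/conversion step runs in parallel.
        bins[d] += np.exp(-diff.dot(diff) / (2.0 * sigma ** 2))
    # bins is the 256-word vector used for the probability density
    # function estimates; the largest accumulated activation wins.
    return bins, int(np.argmax(bins))
```

With training vectors clustered by category, an input near one cluster accumulates most of its kernel mass in that category's bin, so the per-category sums act as unnormalized density estimates.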