A Neurobiologically Plausible Vector Symbolic Architecture

Daniel E. Padilla, M. McDonnell

2014 IEEE International Conference on Semantic Computing, 2014-06-16. DOI: 10.1109/ICSC.2014.40
Vector Symbolic Architectures (VSAs) are approaches to representing symbols, and structured combinations of symbols, as high-dimensional vectors. They have applications both in machine learning and in understanding information processing in neurobiology. VSAs are typically described in abstract mathematical form, in terms of vectors and operations on vectors. In this work, we show that a machine learning approach known as hierarchical temporal memory, which is based on the anatomy and function of the mammalian neocortex, is inherently capable of supporting important VSA functionality. This follows because the approach learns sequences of semantics-preserving sparse distributed representations.
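To make the abstract's phrase "vectors and operations on vectors" concrete, the following is a minimal illustrative sketch of one common VSA family (dense binary hypervectors with XOR binding and majority-vote bundling, in the style of binary spatter codes). This is not the hierarchical temporal memory mechanism the paper studies; all names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high dimensionality makes independent random vectors nearly orthogonal

def rand_vec():
    # A random dense binary hypervector, used as an atomic symbol
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    # XOR binding: the result is dissimilar to both inputs and is its own
    # inverse, so binding again with one input recovers the other (with noise)
    return a ^ b

def bundle(*vs):
    # Majority-vote superposition: the result stays similar to every input
    s = np.sum(np.stack(vs), axis=0)
    return (s > len(vs) / 2).astype(np.uint8)

def sim(a, b):
    # Normalized Hamming similarity in [0, 1]; about 0.5 for unrelated vectors
    return 1.0 - np.mean(a != b)

# Encode the record {colour: red, shape: square, size: big} as ONE vector
colour, red = rand_vec(), rand_vec()
shape, square = rand_vec(), rand_vec()
size, big = rand_vec(), rand_vec()
record = bundle(bind(colour, red), bind(shape, square), bind(size, big))

# Query: unbind the 'colour' role, then compare against candidate fillers
probe = bind(record, colour)
print(sim(probe, red))     # high: the stored filler survives superposition
print(sim(probe, square))  # near 0.5: unrelated fillers look like noise
```

The key property on display is that structured combinations (role-filler records) live in the same vector space as the atomic symbols, so the same similarity measure applies at every level of structure.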