Modelling Supra-Classical Logic in a Boltzmann Neural Network: II Incongruence

IF 0.7 · JCR Q3 (Computer Science, Theory & Methods) · CAS Region 4 (Mathematics) · Journal of Logic and Computation · Pub Date: 2023-03-11 · DOI: 10.1093/logcom/exac104
G. Blanchette, A. Robins
{"title":"Boltzmann神经网络中的超经典逻辑建模:II不协调","authors":"G. Blanchette, A. Robins","doi":"10.1093/logcom/exac104","DOIUrl":null,"url":null,"abstract":"\n Information present in any training set of vectors for machine learning can be interpreted in two different ways, either as whole states or as individual atomic units. In this paper, we show that these alternative information distributions are often inherently incongruent within the training set. When learning with a Boltzmann machine, modifications in the network architecture can select one type of distributional information over the other; favouring the activation of either state exemplar or atomic characteristics.\n This choice of distributional information is of relevance when considering the representation of knowledge in logic. Traditional logic only utilises preference that is the correlate of whole state exemplar frequency. We propose that knowledge representation derived from atomic characteristic activation frequencies is the correlate of compositional typicality, which currently has limited formal definition or application in logic. Further, we argue by counter-example, that any representation of typicality by ‘most preferred model semantics’ is inadequate. We provide a definition of typicality derived from the probability of characteristic features; based on neural network modelling.","PeriodicalId":50162,"journal":{"name":"Journal of Logic and Computation","volume":" ","pages":""},"PeriodicalIF":0.7000,"publicationDate":"2023-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Modelling Supra-Classical Logic in a Boltzmann Neural Network: II Incongruence\",\"authors\":\"G. Blanchette, A. Robins\",\"doi\":\"10.1093/logcom/exac104\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n Information present in any training set of vectors for machine learning can be interpreted in two different ways, either as whole states or as individual atomic units. In this paper, we show that these alternative information distributions are often inherently incongruent within the training set. When learning with a Boltzmann machine, modifications in the network architecture can select one type of distributional information over the other; favouring the activation of either state exemplar or atomic characteristics.\\n This choice of distributional information is of relevance when considering the representation of knowledge in logic. Traditional logic only utilises preference that is the correlate of whole state exemplar frequency. We propose that knowledge representation derived from atomic characteristic activation frequencies is the correlate of compositional typicality, which currently has limited formal definition or application in logic. Further, we argue by counter-example, that any representation of typicality by ‘most preferred model semantics’ is inadequate. 
We provide a definition of typicality derived from the probability of characteristic features; based on neural network modelling.\",\"PeriodicalId\":50162,\"journal\":{\"name\":\"Journal of Logic and Computation\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.7000,\"publicationDate\":\"2023-03-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Logic and Computation\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1093/logcom/exac104\",\"RegionNum\":4,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Logic and Computation","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1093/logcom/exac104","RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Information present in any training set of vectors for machine learning can be interpreted in two different ways, either as whole states or as individual atomic units. In this paper, we show that these alternative information distributions are often inherently incongruent within the training set. When learning with a Boltzmann machine, modifications in the network architecture can select one type of distributional information over the other; favouring the activation of either state exemplar or atomic characteristics. This choice of distributional information is of relevance when considering the representation of knowledge in logic. Traditional logic only utilises preference that is the correlate of whole state exemplar frequency. We propose that knowledge representation derived from atomic characteristic activation frequencies is the correlate of compositional typicality, which currently has limited formal definition or application in logic. Further, we argue by counter-example, that any representation of typicality by ‘most preferred model semantics’ is inadequate. We provide a definition of typicality derived from the probability of characteristic features; based on neural network modelling.
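The contrast the abstract draws between whole-state exemplar frequencies and atomic activation frequencies can be made concrete with a short sketch. The Python snippet below is not taken from the paper; the three-atom training vectors are hypothetical. It counts both distributions over the same set of binary vectors and shows that the state assembled from the most probable atomic features can be absent from the exemplars altogether, which is the kind of incongruence the paper describes.

# Minimal sketch (not the authors' model): two readings of one training set
# of binary vectors, over three hypothetical atoms (a, b, c).
from collections import Counter

training_set = [
    (1, 1, 0), (1, 1, 0), (1, 1, 0),   # most frequent whole-state exemplar
    (1, 0, 1), (1, 0, 1),
    (0, 1, 1), (0, 1, 1),
]

# Reading 1: whole-state (exemplar) distribution, the correlate of
# classical preference in the paper's terms.
state_freq = Counter(training_set)
most_preferred_state = state_freq.most_common(1)[0][0]

# Reading 2: atomic (characteristic) distribution, i.e. per-unit activation
# probabilities, the proposed correlate of compositional typicality.
n = len(training_set)
atom_prob = [sum(v[i] for v in training_set) / n for i in range(3)]

# State assembled from independently likely atomic features.
typical_state = tuple(int(p > 0.5) for p in atom_prob)

print("state frequencies:       ", dict(state_freq))
print("most preferred state:    ", most_preferred_state)   # (1, 1, 0)
print("atomic activation probs: ", atom_prob)               # [5/7, 5/7, 4/7]
print("compositional typical state:", typical_state)        # (1, 1, 1)

Here the most frequent exemplar is (1, 1, 0), yet each atom is individually active with probability above one half, so the compositionally "typical" state (1, 1, 1) never occurs in the training set: the two distributions are incongruent.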
Source journal
Journal of Logic and Computation (Engineering & Technology - Computer Science: Theory & Methods)
CiteScore: 1.90
Self-citation rate: 14.30%
Articles published: 82
Review time: 6-12 weeks
Journal description: Logic has found application in virtually all aspects of Information Technology, from software engineering and hardware to programming and artificial intelligence. Indeed, logic, artificial intelligence and theoretical computing are influencing each other to the extent that a new interdisciplinary area of Logic and Computation is emerging.

The Journal of Logic and Computation aims to promote the growth of logic and computing, including, among others, the following areas of interest: Logical Systems, such as classical and non-classical logic, constructive logic, categorical logic, modal logic, type theory, feasible maths.... Logical issues in logic programming, knowledge-based systems and automated reasoning; logical issues in knowledge representation, such as non-monotonic reasoning and systems of knowledge and belief; logics and semantics of programming; specification and verification of programs and systems; applications of logic in hardware and VLSI, natural language, concurrent computation, planning, and databases.

The bulk of the content is technical scientific papers, although letters, reviews, and discussions, as well as relevant conference reviews, are included.
Latest articles in this journal
ASPECT: Answer Set rePresentation as vEctor graphiCs in laTex
Modal weak Kleene logics: axiomatizations and relational semantics
A Gödel-Dugundji-style theorem for the minimal structural logic
Perfect proofs at first order
Intuitionistic S4 as a logic of topological spaces