Query languages for neural networks

Martin Grohe, Christoph Standke, Juno Steegmans, Jan Van den Bussche
arXiv - CS - Logic in Computer Science, published 2024-08-19. DOI: arxiv-2408.10362 (https://doi.org/arxiv-2408.10362). Citations: 0.

Abstract

We lay the foundations for a database-inspired approach to interpreting and understanding neural network models by querying them using declarative languages. Towards this end we study different query languages, based on first-order logic, that mainly differ in their access to the neural network model. First-order logic over the reals naturally yields a language which views the network as a black box; only the input--output function defined by the network can be queried. This is essentially the approach of constraint query languages. On the other hand, a white-box language can be obtained by viewing the network as a weighted graph, and extending first-order logic with summation over weight terms. The latter approach is essentially an abstraction of SQL. In general, the two approaches are incomparable in expressive power, as we will show. Under natural circumstances, however, the white-box approach can subsume the black-box approach; this is our main result. We prove the result concretely for linear constraint queries over real functions definable by feedforward neural networks with a fixed number of hidden layers and piecewise linear activation functions.
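The two access modes contrasted in the abstract can be illustrated with a small sketch. The network, the queries, and all names below are hypothetical examples chosen for illustration, not taken from the paper: the black-box view may only evaluate the input-output function (here approximated by sampling, where a constraint query language would reason symbolically over the reals), while the white-box view can aggregate over the weighted graph itself, e.g. summing weight terms in the style of SQL's SUM.

```python
import numpy as np

# A tiny feedforward ReLU network with one hidden layer (illustrative only):
# f(x) = W2 @ relu(W1 @ [x] + b1) + b2
W1 = np.array([[1.0], [-1.0]])
b1 = np.array([0.0, 1.0])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([-0.5])

def f(x: float) -> float:
    """Black-box view: only the input-output function is observable."""
    h = np.maximum(W1 @ np.array([x]) + b1, 0.0)  # piecewise linear activation
    return float(W2 @ h + b2)

# Black-box query, constraint-query style: "is there an x in [0, 2] with
# f(x) >= 1?"  Approximated here by sampling; a genuine constraint query
# language would decide this symbolically over the reals.
black_box_answer = any(f(x) >= 1.0 for x in np.linspace(0.0, 2.0, 201))

# White-box query, SQL style: aggregate over the network seen as a weighted
# graph -- e.g. the sum of all edge weights, which the black-box view cannot
# express in general (distinct weight matrices can define the same function).
total_weight = float(W1.sum() + W2.sum())
```

The point of the contrast is that `black_box_answer` depends only on the function the network computes, whereas `total_weight` depends on the particular weighted graph realizing it; the paper's main result concerns when the second kind of language can nonetheless express queries of the first kind.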