VeriFlow: Modeling Distributions for Neural Network Verification
Faried Abu Zaid, Daniel Neider, Mustafa Yalçıner
arXiv:2406.14265 · arXiv - CS - Symbolic Computation · 2024-06-20
Formal verification has emerged as a promising method to ensure the safety
and reliability of neural networks. Naively verifying a safety property amounts
to ensuring the safety of a neural network over the whole input space,
irrespective of any training or test set. However, this also implies that the
safety of the neural network is checked even for inputs that do not occur in
the real world and have no meaning at all, often resulting in spurious errors.
To tackle this shortcoming, we propose the VeriFlow architecture, a flow-based
density model tailored to allow any verification approach to restrict its
search to some data distribution of interest. We argue that our architecture
is particularly well suited for this purpose because of two major properties.
First, we show that the transformation and log-density function defined by our
model are piecewise affine. Therefore, the model allows the use of verifiers
based on SMT with linear arithmetic. Second, upper density level sets (UDLs)
of the data distribution take the shape of an $L^p$-ball in the latent space.
As a consequence, representations of UDLs specified by a given probability are
effectively computable in the latent space. This gives SMT and
abstract-interpretation approaches fine-grained, probabilistically
interpretable control over how (a)typical the inputs subject to verification
are.
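The two properties claimed in the abstract can be illustrated with a minimal one-dimensional sketch. This is not the authors' implementation: the toy flow below (affine layers alternating with LeakyReLU), the Laplace latent density, and all function names are assumptions chosen only to make the two ideas concrete — that a composition of affine maps and piecewise-linear activations is itself piecewise affine (hence amenable to SMT with linear arithmetic), and that for a radially decreasing latent density the UDL of a given probability mass is a ball whose radius has a closed form.

```python
import math

# --- Property 1: piecewise affinity (illustrative toy flow, 1-D) ---
# Each layer is an invertible affine map followed by LeakyReLU. On any
# input region where no activation changes sign, the whole composition
# is exactly affine, so its slope is constant there.

def leaky_relu(x, alpha=0.5):
    return x if x >= 0.0 else alpha * x

def flow_1d(x, layers):
    """Apply alternating (scale, shift) affine maps and LeakyReLU."""
    for scale, shift in layers:
        x = leaky_relu(scale * x + shift)
    return x

layers = [(2.0, 1.0), (0.5, -1.0)]

# Finite differences on inputs that stay in the positive branch of every
# layer agree exactly: the map is affine on that region.
xs = [3.0, 3.1, 3.2]
ys = [flow_1d(x, layers) for x in xs]
slope_a = (ys[1] - ys[0]) / 0.1
slope_b = (ys[2] - ys[1]) / 0.1
assert abs(slope_a - slope_b) < 1e-9

# --- Property 2: UDLs as balls in latent space (1-D illustration) ---
# Assume a radially decreasing latent density, here Laplace(0, 1) with
# density 0.5 * exp(-|z|). Its upper density level sets are the L^1-balls
# |z| <= r, and P(|z| <= r) = 1 - exp(-r), so the radius holding
# probability mass p is computable in closed form.

def udl_radius(p):
    """Radius r of the L^1-ball {|z| <= r} with P(|z| <= r) = p."""
    return -math.log(1.0 - p)

r = udl_radius(0.9)
assert abs((1.0 - math.exp(-r)) - 0.9) < 1e-12
```

In higher dimensions the same picture holds under the paper's claims: the verifier restricts its search to the preimage of a latent $L^p$-ball whose radius is determined by the desired probability mass, which is what makes the "how (a)typical" knob probabilistically interpretable.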