{"title":"Neural transition system abstraction for neural network dynamical system models and its application to Computational Tree Logic verification","authors":"Yejiang Yang , Tao Wang , Weiming Xiang","doi":"10.1016/j.neunet.2025.107261","DOIUrl":null,"url":null,"abstract":"<div><div>This paper proposes an explainable abstraction-based verification method that prioritizes user interaction and enhances interpretability. By partitioning the system’s state space using a data-driven process, we can abstract the dynamics into words consisting of state labels. When given a trained neural network model, a set-valued reachability analysis method is introduced to estimate the relationship between each subsystem. We construct the neural transition system abstraction with the neural network model and the relationships between partitions. Then, the abstracted model can be verified through Computational Tree Logic (CTL), enabling formal verification of the system’s behavior. This approach greatly enhances the interpretability and verification of data-driven models, as well as the ability to validate against the specification. Finally, examples of the Maglev model and handwritten model abstractions are given to illustrate our proposed model verification framework, which demonstrates that the proposed framework has advantages in enhancing model interpretability and verifying user-specified properties based on CTL.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"186 ","pages":"Article 107261"},"PeriodicalIF":6.0000,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025001406","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
This paper proposes an explainable abstraction-based verification method that prioritizes user interaction and enhances interpretability. By partitioning the system’s state space through a data-driven process, we abstract the dynamics into words consisting of state labels. Given a trained neural network model, a set-valued reachability analysis method is introduced to estimate the relationships between subsystems. We construct the neural transition system abstraction from the neural network model and the relationships between partitions. The abstracted model can then be verified through Computational Tree Logic (CTL), enabling formal verification of the system’s behavior. This approach greatly enhances the interpretability and verifiability of data-driven models, as well as the ability to validate them against specifications. Finally, abstractions of the Maglev model and a handwritten model are given to illustrate the proposed verification framework, demonstrating its advantages in enhancing model interpretability and in verifying user-specified properties expressed in CTL.
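To make the pipeline described in the abstract concrete, the sketch below is a minimal illustration, not the paper's implementation: it partitions a 2-D state space into grid cells (the state labels), estimates cell-to-cell transitions of a discrete-time model by sampling (a crude stand-in for the paper's set-valued reachability analysis, which would over-approximate reachable sets), and checks a simple CTL-style property (EF target) on the resulting finite transition system. The dynamics function `f`, the grid size, and the target cell are all hypothetical placeholders; `f` stands in for a trained neural network forward pass.

```python
import itertools
import numpy as np

# Toy discrete-time dynamics x_{k+1} = f(x_k); a trained neural network model
# would be used here in the paper's setting.
def f(x):
    A = np.array([[0.9, 0.2], [-0.1, 0.8]])
    return np.tanh(A @ x)

# Partition the box [-1, 1]^2 into an n x n grid of cells (the "state labels").
n = 8
edges = np.linspace(-1.0, 1.0, n + 1)

def cell_of(x):
    """Map a state to its cell index (i, j), clipping to the grid."""
    i = int(np.clip(np.searchsorted(edges, x[0], side="right") - 1, 0, n - 1))
    j = int(np.clip(np.searchsorted(edges, x[1], side="right") - 1, 0, n - 1))
    return (i, j)

# Estimate transitions between partitions: sample points in each cell, push them
# through f, and record which cells the images land in. A sound set-valued
# reachability method would over-approximate the image set instead of sampling.
rng = np.random.default_rng(0)
transitions = {c: set() for c in itertools.product(range(n), repeat=2)}
for (i, j) in transitions:
    lo = np.array([edges[i], edges[j]])
    hi = np.array([edges[i + 1], edges[j + 1]])
    samples = rng.uniform(lo, hi, size=(50, 2))
    for x in samples:
        transitions[(i, j)].add(cell_of(f(x)))

# Check the CTL formula EF target ("some execution eventually reaches a target
# cell") by a backward fixed-point over the abstract transition graph.
target = {(n - 1, n - 1)}  # hypothetical labelled region of interest
sat = set(target)
changed = True
while changed:
    changed = False
    for c, succs in transitions.items():
        if c not in sat and succs & sat:
            sat.add(c)
            changed = True

print(f"{len(sat)} of {n * n} abstract states satisfy EF target")
```

Other CTL operators (AG, AF, EU, ...) can be evaluated by analogous fixed-point computations over the same finite abstraction, which is what makes the abstracted model amenable to standard model-checking machinery.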
Journal Introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.