Simplicial sets generalize many categories of graphs. In this paper, we give a complete characterization of the Lawvere-Tierney topologies on (semi-)simplicial sets, on bicolored graphs, and on fuzzy sets. We apply our results to establish that 'partially simple' simplicial sets and 'partially simple' graphs form quasitoposes.
{"title":"Characterisation of Lawvere-Tierney Topologies on Simplicial Sets, Bicolored Graphs, and Fuzzy Sets","authors":"Aloïs Rosset, Helle Hvid Hansen, Jörg Endrullis","doi":"arxiv-2407.04535","DOIUrl":"https://doi.org/arxiv-2407.04535","url":null,"abstract":"Simplicial sets generalize many categories of graphs. In this paper, we give\u0000a complete characterization of the Lawvere-Tierney topologies on\u0000(semi-)simplicial sets, on bicolored graphs, and on fuzzy sets. We apply our\u0000results to establish that 'partially simple' simplicial sets and 'partially\u0000simple' graphs form quasitoposes.","PeriodicalId":501135,"journal":{"name":"arXiv - MATH - Category Theory","volume":"14 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141573758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Central to near-term quantum machine learning is the use of hybrid quantum-classical algorithms. This paper develops a formal framework for describing these algorithms in terms of string diagrams: a key step towards integrating these hybrid algorithms into existing work using string diagrams for machine learning and differentiable programming. A notable feature of our string diagrams is the use of functor boxes, which correspond to quantum-classical interfaces. The functor used is a lax monoidal functor embedding quantum systems into classical ones, and the lax monoidality imposes restrictions on the string diagrams when extracting classical data from quantum systems via measurement. In this way, our framework provides initial steps toward a denotational semantics for hybrid quantum machine learning algorithms that captures important features of quantum-classical interactions.
{"title":"Hybrid Quantum-Classical Machine Learning with String Diagrams","authors":"Alexander Koziell-Pipe, Aleks Kissinger","doi":"arxiv-2407.03673","DOIUrl":"https://doi.org/arxiv-2407.03673","url":null,"abstract":"Central to near-term quantum machine learning is the use of hybrid\u0000quantum-classical algorithms. This paper develops a formal framework for\u0000describing these algorithms in terms of string diagrams: a key step towards\u0000integrating these hybrid algorithms into existing work using string diagrams\u0000for machine learning and differentiable programming. A notable feature of our\u0000string diagrams is the use of functor boxes, which correspond to a\u0000quantum-classical interfaces. The functor used is a lax monoidal functor\u0000embedding the quantum systems into classical, and the lax monoidality imposes\u0000restrictions on the string diagrams when extracting classical data from quantum\u0000systems via measurement. In this way, our framework provides initial steps\u0000toward a denotational semantics for hybrid quantum machine learning algorithms\u0000that captures important features of quantum-classical interactions.","PeriodicalId":501135,"journal":{"name":"arXiv - MATH - Category Theory","volume":"45 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141577932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dagur Asgeirsson (IMJ-PRG), Riccardo Brasca (IMJ-PRG), Nikolas Kuhn (UiO), Filippo Alberto Edoardo Nuccio Mortarino Majno Di Capriglio (ICJ, UJM, CTN), Adam Topaz
Condensed mathematics, developed by Clausen and Scholze over the last few years, proposes a generalization of topology with better categorical properties. It replaces the concept of a topological space by that of a condensed set, which can be defined as a sheaf for the coherent topology on a certain category of compact Hausdorff spaces. In this case, the sheaf condition has a fairly simple explicit description, which arises from studying the relationship between the coherent, regular and extensive topologies. In this paper, we establish this relationship under minimal assumptions on the category, going beyond the case of compact Hausdorff spaces. Along the way, we also provide a characterization of sheaves and covering sieves for these categories. All results in this paper have been fully formalized in the Lean proof assistant.
{"title":"Categorical Foundations of Formalized Condensed Mathematics","authors":"Dagur AsgeirssonIMJ-PRG, Riccardo BrascaIMJ-PRG, Nikolas KuhnUiO, Filippo Alberto Edoardo Nuccio Mortarino Majno Di CapriglioICJ, UJM, CTN, Adam Topaz","doi":"arxiv-2407.12840","DOIUrl":"https://doi.org/arxiv-2407.12840","url":null,"abstract":"Condensed mathematics, developed by Clausen and Scholze over the last few\u0000years, proposes a generalization of topology with better categorical\u0000properties. It replaces the concept of a topological space by that of a\u0000condensed set, which can be defined as a sheaf for the coherent topology on a\u0000certain category of compact Hausdorff spaces. In this case, the sheaf condition\u0000has a fairly simple explicit description, which arises from studying the\u0000relationship between the coherent, regular and extensive topologies. In this\u0000paper, we establish this relationship under minimal assumptions on the\u0000category, going beyond the case of compact Hausdorff spaces. Along the way, we\u0000also provide a characterizations of sheaves and covering sieves for these\u0000categories. All results in this paper have been fully formalized in the Lean\u0000proof assistant.","PeriodicalId":501135,"journal":{"name":"arXiv - MATH - Category Theory","volume":"36 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141740716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Benjamin Rodatz, Ian Fan, Tuomas Laakkonen, Neil John Ortega, Thomas Hoffman, Vincent Wang-Mascianica
Idealised as universal approximators, learners such as neural networks can be viewed as "variable functions" that may become one of a range of concrete functions after training. In the same way that equations constrain the possible values of variables in algebra, we may view objective functions as constraints on the behaviour of learners. We extract the equivalences perfectly optimised objective functions impose, calling them "tasks". For these tasks, we develop a formal graphical language that allows us to: (1) separate the core tasks of a behaviour from its implementation details; (2) reason about and design behaviours model-agnostically; and (3) simply describe and unify approaches in machine learning across domains. As proof-of-concept, we design a novel task that enables converting classifiers into generative models we call "manipulators", which we implement by directly translating task specifications into code. The resulting models exhibit capabilities such as style transfer and interpretable latent-space editing, without the need for custom architectures, adversarial training or random sampling. We formally relate the behaviour of manipulators to GANs, and empirically demonstrate their competitive performance with VAEs. We report on experiments across vision and language domains aiming to characterise manipulators as approximate Bayesian inversions of discriminative classifiers.
{"title":"A Pattern Language for Machine Learning Tasks","authors":"Benjamin Rodatz, Ian Fan, Tuomas Laakkonen, Neil John Ortega, Thomas Hoffman, Vincent Wang-Mascianica","doi":"arxiv-2407.02424","DOIUrl":"https://doi.org/arxiv-2407.02424","url":null,"abstract":"Idealised as universal approximators, learners such as neural networks can be\u0000viewed as \"variable functions\" that may become one of a range of concrete\u0000functions after training. In the same way that equations constrain the possible\u0000values of variables in algebra, we may view objective functions as constraints\u0000on the behaviour of learners. We extract the equivalences perfectly optimised\u0000objective functions impose, calling them \"tasks\". For these tasks, we develop a\u0000formal graphical language that allows us to: (1) separate the core tasks of a\u0000behaviour from its implementation details; (2) reason about and design\u0000behaviours model-agnostically; and (3) simply describe and unify approaches in\u0000machine learning across domains. As proof-of-concept, we design a novel task that enables converting\u0000classifiers into generative models we call \"manipulators\", which we implement\u0000by directly translating task specifications into code. The resulting models\u0000exhibit capabilities such as style transfer and interpretable latent-space\u0000editing, without the need for custom architectures, adversarial training or\u0000random sampling. We formally relate the behaviour of manipulators to GANs, and\u0000empirically demonstrate their competitive performance with VAEs. 
We report on\u0000experiments across vision and language domains aiming to characterise\u0000manipulators as approximate Bayesian inversions of discriminative classifiers.","PeriodicalId":501135,"journal":{"name":"arXiv - MATH - Category Theory","volume":"72 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141505608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nikhil Khatri, Tuomas Laakkonen, Jonathon Liu, Vincent Wang-Maścianica
We introduce a category-theoretic diagrammatic formalism in order to systematically relate and reason about machine learning models. Our diagrams present architectures intuitively but without loss of essential detail, where natural relationships between models are captured by graphical transformations, and important differences and similarities can be identified at a glance. In this paper, we focus on attention mechanisms: translating folklore into mathematical derivations, and constructing a taxonomy of attention variants in the literature. As a first example of an empirical investigation underpinned by our formalism, we identify recurring anatomical components of attention, which we exhaustively recombine to explore a space of variations on the attention mechanism.
{"title":"On the Anatomy of Attention","authors":"Nikhil Khatri, Tuomas Laakkonen, Jonathon Liu, Vincent Wang-Maścianica","doi":"arxiv-2407.02423","DOIUrl":"https://doi.org/arxiv-2407.02423","url":null,"abstract":"We introduce a category-theoretic diagrammatic formalism in order to\u0000systematically relate and reason about machine learning models. Our diagrams\u0000present architectures intuitively but without loss of essential detail, where\u0000natural relationships between models are captured by graphical transformations,\u0000and important differences and similarities can be identified at a glance. In\u0000this paper, we focus on attention mechanisms: translating folklore into\u0000mathematical derivations, and constructing a taxonomy of attention variants in\u0000the literature. As a first example of an empirical investigation underpinned by\u0000our formalism, we identify recurring anatomical components of attention, which\u0000we exhaustively recombine to explore a space of variations on the attention\u0000mechanism.","PeriodicalId":501135,"journal":{"name":"arXiv - MATH - Category Theory","volume":"2013 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141529264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper has two purposes. The first is to extend the theory of linearly distributive categories by considering the structures that emerge in a special case: the normal duoidal category $(\mathsf{Poly}, \mathcal{y}, \otimes, \triangleleft)$ of polynomial functors under the Dirichlet and substitution products. This is an isomix LDC which is neither $*$-autonomous nor fully symmetric. The additional structures of interest here are a closure for $\otimes$ and a co-closure for $\triangleleft$, making $\mathsf{Poly}$ a bi-closed LDC, which is a notion we introduce in this paper. The second purpose is to use $\mathsf{Poly}$ as a source of examples and intuition about various structures that can occur in the setting of LDCs, including duals, cores, linear monoids, and others, as well as how these generalize to the non-symmetric setting. To that end, we characterize the linearly dual objects in $\mathsf{Poly}$: every linear polynomial has a right dual, which is a representable. It turns out that the linear and representable polynomials also form the left and right cores of $\mathsf{Poly}$. Finally, we provide examples of linear monoids, linear comonoids, and linear bialgebras in $\mathsf{Poly}$.
{"title":"What kind of linearly distributive category do polynomial functors form?","authors":"David I. Spivak, Priyaa Varshinee Srinivasan","doi":"arxiv-2407.01849","DOIUrl":"https://doi.org/arxiv-2407.01849","url":null,"abstract":"This paper has two purposes. The first is to extend the theory of linearly\u0000distributive categories by considering the structures that emerge in a special\u0000case: the normal duoidal category $(mathsf{Poly} ,mathcal{y}, otimes,\u0000triangleleft )$ of polynomial functors under Dirichlet and substitution\u0000product. This is an isomix LDC which is neither $*$-autonomous nor fully\u0000symmetric. The additional structures of interest here are a closure for\u0000$otimes$ and a co-closure for $triangleleft$, making $mathsf{Poly}$ a\u0000bi-closed LDC, which is a notion we introduce in this paper. The second purpose is to use $mathsf{Poly}$ as a source of examples and\u0000intuition about various structures that can occur in the setting of LDCs,\u0000including duals, cores, linear monoids, and others, as well as how these\u0000generalize to the non-symmetric setting. To that end, we characterize the\u0000linearly dual objects in $mathsf{Poly}$: every linear polynomial has a right\u0000dual which is a representable. It turns out that the linear and representable\u0000polynomials also form the left and right cores of $mathsf{Poly}$. 
Finally, we\u0000provide examples of linear monoids, linear comonoids, and linear bialgebras in\u0000$mathsf{Poly}$.","PeriodicalId":501135,"journal":{"name":"arXiv - MATH - Category Theory","volume":"38 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141529266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Game comonads offer a categorical view of a number of model-comparison games central to model theory, such as pebble and Ehrenfeucht-Fraïssé games. Remarkably, the categories of coalgebras for these comonads capture preservation of several fragments of resource-bounded logics, such as (infinitary) first-order logic with n variables or bounded quantifier rank, and corresponding combinatorial parameters such as tree-width and tree-depth. In this way, game comonads provide a new bridge between categorical methods developed for semantics, and the combinatorial and algorithmic methods of resource-sensitive model theory. We give an overview of this framework and outline some of its applications, including the study of homomorphism counting results in finite model theory, and of equi-resource homomorphism preservation theorems in logic using the axiomatic setting of arboreal categories. Finally, we describe some homotopical ideas that arise naturally in the context of game comonads.
{"title":"An invitation to game comonads","authors":"Samson Abramsky, Luca Reggio","doi":"arxiv-2407.00606","DOIUrl":"https://doi.org/arxiv-2407.00606","url":null,"abstract":"Game comonads offer a categorical view of a number of model-comparison games\u0000central to model theory, such as pebble and Ehrenfeucht-Fra\"iss'e games.\u0000Remarkably, the categories of coalgebras for these comonads capture\u0000preservation of several fragments of resource-bounded logics, such as\u0000(infinitary) first-order logic with n variables or bounded quantifier rank, and\u0000corresponding combinatorial parameters such as tree-width and tree-depth. In\u0000this way, game comonads provide a new bridge between categorical methods\u0000developed for semantics, and the combinatorial and algorithmic methods of\u0000resource-sensitive model theory. We give an overview of this framework and outline some of its applications,\u0000including the study of homomorphism counting results in finite model theory,\u0000and of equi-resource homomorphism preservation theorems in logic using the\u0000axiomatic setting of arboreal categories. Finally, we describe some homotopical\u0000ideas that arise naturally in the context of game comonads.","PeriodicalId":501135,"journal":{"name":"arXiv - MATH - Category Theory","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141505609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce string diagrams for physical duoidal categories (normal $\otimes$-symmetric duoidal categories): they consist of string diagrams with wires forming a zigzag-free partial order and order-preserving nodes whose inputs and outputs form intervals.
{"title":"String Diagrams for Physical Duoidal Categories","authors":"Mario Román","doi":"arxiv-2406.19816","DOIUrl":"https://doi.org/arxiv-2406.19816","url":null,"abstract":"We introduce string diagrams for physical duoidal categories (normal\u0000$otimes$-symmetric duoidal categories): they consist of string diagrams with\u0000wires forming a zigzag-free partial order and order-preserving nodes whose\u0000inputs and outputs form intervals.","PeriodicalId":501135,"journal":{"name":"arXiv - MATH - Category Theory","volume":"59 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141505610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We use pointwise Kan extensions to generate new subcategories out of old ones. We investigate the properties of these newly produced categories and give sufficient conditions for their cartesian closedness to hold. Our methods are of general use. Here we apply them particularly to the study of the properties of certain categories of fibrewise topological spaces. In particular, we prove that the categories of fibrewise compactly generated spaces, fibrewise sequential spaces and fibrewise Alexandroff spaces are cartesian closed provided that the base space satisfies the right separation axiom.
{"title":"Kan extendable subcategories and fibrewise topology","authors":"Moncef Ghazel","doi":"arxiv-2406.18399","DOIUrl":"https://doi.org/arxiv-2406.18399","url":null,"abstract":"We use pointwise Kan extensions to generate new subcategories out of old\u0000ones. We investigate the properties of these newly produced categories and give\u0000sufficient conditions for their cartesian closedness to hold. Our methods are\u0000of general use. Here we apply them particularly to the study of the properties\u0000of certain categories of fibrewise topological spaces. In particular, we prove\u0000that the categories of fibrewise compactly generated spaces, fibrewise\u0000sequential spaces and fibrewise Alexandroff spaces are cartesian closed\u0000provided that the base space satisfies the right separation axiom.","PeriodicalId":501135,"journal":{"name":"arXiv - MATH - Category Theory","volume":"154 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141505611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sean Tull, Robin Lorenz, Stephen Clark, Ilyas Khan, Bob Coecke
Artificial intelligence (AI) is currently based largely on black-box machine learning models which lack interpretability. The field of eXplainable AI (XAI) strives to address this major concern, being critical in high-stakes areas such as the finance, legal and health sectors. We present an approach to defining AI models and their interpretability based on category theory. For this we employ the notion of a compositional model, which sees a model in terms of formal string diagrams which capture its abstract structure together with its concrete implementation. This comprehensive view incorporates deterministic, probabilistic and quantum models. We compare a wide range of AI models as compositional models, including linear and rule-based models, (recurrent) neural networks, transformers, VAEs, and causal and DisCoCirc models. Next we give a definition of interpretation of a model in terms of its compositional structure, demonstrating how to analyse the interpretability of a model, and using this to clarify common themes in XAI. We find that what makes the standard 'intrinsically interpretable' models so transparent is brought out most clearly diagrammatically. This leads us to the more general notion of compositionally-interpretable (CI) models, which additionally include, for instance, causal, conceptual space, and DisCoCirc models. We next demonstrate the explainability benefits of CI models. Firstly, their compositional structure may allow the computation of other quantities of interest, and may facilitate inference from the model to the modelled phenomenon by matching its structure. Secondly, they allow for diagrammatic explanations for their behaviour, based on influence constraints, diagram surgery and rewrite explanations. Finally, we discuss many future directions for the approach, raising the question of how to learn such meaningfully structured models in practice.
{"title":"Towards Compositional Interpretability for XAI","authors":"Sean Tull, Robin Lorenz, Stephen Clark, Ilyas Khan, Bob Coecke","doi":"arxiv-2406.17583","DOIUrl":"https://doi.org/arxiv-2406.17583","url":null,"abstract":"Artificial intelligence (AI) is currently based largely on black-box machine\u0000learning models which lack interpretability. The field of eXplainable AI (XAI)\u0000strives to address this major concern, being critical in high-stakes areas such\u0000as the finance, legal and health sectors. We present an approach to defining AI models and their interpretability based\u0000on category theory. For this we employ the notion of a compositional model,\u0000which sees a model in terms of formal string diagrams which capture its\u0000abstract structure together with its concrete implementation. This\u0000comprehensive view incorporates deterministic, probabilistic and quantum\u0000models. We compare a wide range of AI models as compositional models, including\u0000linear and rule-based models, (recurrent) neural networks, transformers, VAEs,\u0000and causal and DisCoCirc models. Next we give a definition of interpretation of a model in terms of its\u0000compositional structure, demonstrating how to analyse the interpretability of a\u0000model, and using this to clarify common themes in XAI. We find that what makes\u0000the standard 'intrinsically interpretable' models so transparent is brought out\u0000most clearly diagrammatically. This leads us to the more general notion of\u0000compositionally-interpretable (CI) models, which additionally include, for\u0000instance, causal, conceptual space, and DisCoCirc models. We next demonstrate the explainability benefits of CI models. Firstly, their\u0000compositional structure may allow the computation of other quantities of\u0000interest, and may facilitate inference from the model to the modelled\u0000phenomenon by matching its structure. 
Secondly, they allow for diagrammatic\u0000explanations for their behaviour, based on influence constraints, diagram\u0000surgery and rewrite explanations. Finally, we discuss many future directions\u0000for the approach, raising the question of how to learn such meaningfully\u0000structured models in practice.","PeriodicalId":501135,"journal":{"name":"arXiv - MATH - Category Theory","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141505643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}