Memory Consistency and Program Transformations
Akshay Gopalakrishnan (McGill University), Clark Verbrugge (McGill University), Mark Batty (University of Kent)
arXiv:2409.12013, 2024-09-18
A memory consistency model specifies the allowed behaviors of shared-memory concurrent programs. At the language level, these models are known to have a non-trivial impact on the safety of program optimizations, limiting the ability to rearrange/refactor code without introducing new behaviors. Existing programming language memory models try to address this by permitting more (relaxed/weak) concurrent behaviors but are still unable to allow all the desired optimizations. A core problem is that weaker consistency models may also render optimizations unsafe, a conclusion that goes against the intuition that they allow more behaviors. This exposes an open problem in the compositional interaction between memory consistency semantics and optimizations: it is unclear which parts of the semantics correspond to allowing or disallowing which sets of optimizations. In this work, we establish a formal foundation for understanding this compositional nature, decomposing optimizations into a finite set of elementary effects on program execution traces, over which aspects of safety can be assessed. We use this decomposition to identify a desirable compositional property (complete) that would guarantee the safety of optimizations from one memory model to another. We showcase its practicality by proving such a property between Sequential Consistency (SC) and $SC_{RR}$, the latter allowing independent read-read reordering over $SC$. Our work potentially paves the way for a new design methodology for programming-language memory models, one that places emphasis on the optimizations we wish to perform.
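To make the read-read reordering concrete, here is a minimal sketch (our own illustration, not the paper's formalism) that enumerates the outcomes of a classic message-passing litmus test under SC, and then under an $SC_{RR}$-style semantics modeled by also permitting the reading thread's two independent reads to run in swapped order:

```python
# Litmus test: T1: x := 1; y := 1   ||   T2: r1 := y; r2 := x
# Under SC, the outcome (r1, r2) = (1, 0) is forbidden: seeing y = 1
# implies the earlier write x = 1 is also visible. Reordering T2's two
# independent reads (as SC_RR allows) re-introduces exactly that outcome.

def interleavings(a, b):
    """All interleavings of two op lists, preserving each program order."""
    if not a:
        yield list(b); return
    if not b:
        yield list(a); return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def run(trace):
    mem, regs = {"x": 0, "y": 0}, {}
    for kind, var, reg in trace:
        if kind == "W":
            mem[var] = 1
        else:
            regs[reg] = mem[var]
    return (regs["r1"], regs["r2"])

T1 = [("W", "x", None), ("W", "y", None)]
T2 = [("R", "y", "r1"), ("R", "x", "r2")]
T2_swapped = [T2[1], T2[0]]            # the elementary read-read reordering

sc = {run(t) for t in interleavings(T1, T2)}
sc_rr = sc | {run(t) for t in interleavings(T1, T2_swapped)}
print("SC   :", sorted(sc))            # (1, 0) is absent
print("SC_RR:", sorted(sc_rr))         # (1, 0) appears
```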
{"title":"Memory Consistency and Program Transformations","authors":"Akshay GopalakrishnanMcGill University, Clark VerbruggeMcGill University, Mark BattyUniversity of Kent","doi":"arxiv-2409.12013","DOIUrl":"https://doi.org/arxiv-2409.12013","url":null,"abstract":"A memory consistency model specifies the allowed behaviors of shared memory\u0000concurrent programs. At the language level, these models are known to have a\u0000non-trivial impact on the safety of program optimizations, limiting the ability\u0000to rearrange/refactor code without introducing new behaviors. Existing\u0000programming language memory models try to address this by permitting more\u0000(relaxed/weak) concurrent behaviors but are still unable to allow all the\u0000desired optimizations. A core problem is that weaker consistency models may\u0000also render optimizations unsafe, a conclusion that goes against the intuition\u0000of them allowing more behaviors. This exposes an open problem of the\u0000compositional interaction between memory consistency semantics and\u0000optimizations: which parts of the semantics correspond to allowing/disallowing\u0000which set of optimizations is unclear. In this work, we establish a formal\u0000foundation suitable enough to understand this compositional nature, decomposing\u0000optimizations into a finite set of elementary effects on program execution\u0000traces, over which aspects of safety can be assessed. We use this decomposition\u0000to identify a desirable compositional property (complete) that would guarantee\u0000the safety of optimizations from one memory model to another. We showcase its\u0000practicality by proving such a property between Sequential Consistency (SC) and\u0000$SC_{RR}$, the latter allowing independent read-read reordering over $SC$. Our\u0000work potentially paves way to a new design methodology of programming-language\u0000memory models, one that places emphasis on the optimizations desired to be\u0000performed.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Towards Quantum Multiparty Session Types
Ivan Lanese, Ugo Dal Lago, Vikraman Choudhury
arXiv:2409.11133, 2024-09-17

Multiparty Session Types (MPSTs) offer a structured way of specifying communication protocols and guarantee relevant communication properties, such as deadlock-freedom. In this paper, we extend a minimal MPST system with quantum data and operations, enabling the specification of quantum protocols. Quantum MPSTs (QMPSTs) provide a formal notation to describe quantum protocols, both at the abstract level of global types, describing which communications can take place in the system and their dependencies, and at the concrete level of local types and quantum processes, describing the expected behavior of each participant in the protocol. Type-checking relates these two levels formally, ensuring that processes behave as prescribed by the global type. Beyond the usual communication properties, QMPSTs also allow us to prove that qubits are owned by a single process at any time, capturing the quantum no-cloning and no-deleting theorems. We use our approach to verify four quantum protocols from the literature, namely Teleportation, Secret Sharing, Bit-Commitment, and Key Distribution.
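As a rough illustration of the two levels and the ownership discipline (a toy Python sketch with invented shapes, not the paper's QMPST calculus), one can view a global type as an ordered list of communications, project it onto each role to obtain a local type, and check linearly that only the current owner of a qubit may send it:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Msg:
    sender: str
    receiver: str
    payload: str      # "bit" for classical data, "q:NAME" for a qubit

def project(global_type, role):
    """Local type of `role`: its sends and receives, in protocol order."""
    local = []
    for m in global_type:
        if m.sender == role:
            local.append(("send", m.receiver, m.payload))
        elif m.receiver == role:
            local.append(("recv", m.sender, m.payload))
    return local

def check_qubit_linearity(global_type, owner):
    """Each qubit has one owner at a time; only the owner may send it."""
    owns = dict(owner)
    for m in global_type:
        if m.payload.startswith("q:"):
            q = m.payload[2:]
            if owns.get(q) != m.sender:
                raise ValueError(f"{m.sender} sends {q} without owning it")
            owns[q] = m.receiver   # ownership transfers: no cloning
    return owns

# Communication skeleton of teleportation: an EPR source distributes an
# entangled pair, then Alice sends Bob her two classical measurement bits.
epr = [Msg("Source", "Alice", "q:e1"), Msg("Source", "Bob", "q:e2")]
teleport = [Msg("Alice", "Bob", "bit"), Msg("Alice", "Bob", "bit")]

print(project(epr + teleport, "Alice"))
print(check_qubit_linearity(epr, {"e1": "Source", "e2": "Source"}))
```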
{"title":"Towards Quantum Multiparty Session Types","authors":"Ivan Lanese, Ugo Dal Lago, Vikraman Choudhury","doi":"arxiv-2409.11133","DOIUrl":"https://doi.org/arxiv-2409.11133","url":null,"abstract":"Multiparty Session Types (MPSTs) offer a structured way of specifying\u0000communication protocols and guarantee relevant communication properties, such\u0000as deadlock-freedom. In this paper, we extend a minimal MPST system with\u0000quantum data and operations, enabling the specification of quantum protocols.\u0000Quantum MPSTs (QMPSTs) provide a formal notation to describe quantum protocols,\u0000both at the abstract level of global types, describing which communications can\u0000take place in the system and their dependencies, and at the concrete level of\u0000local types and quantum processes, describing the expected behavior of each\u0000participant in the protocol. Type-checking relates these two levels formally,\u0000ensuring that processes behave as prescribed by the global type. Beyond usual\u0000communication properties, QMPSTs also allow us to prove that qubits are owned\u0000by a single process at any time, capturing the quantum no-cloning and\u0000no-deleting theorems. We use our approach to verify four quantum protocols from\u0000the literature, respectively Teleportation, Secret Sharing, Bit-Commitment, and\u0000Key Distribution.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"37 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Scheme Pearl: Quantum Continuations
Vikraman Choudhury, Borislav Agapiev, Amr Sabry
arXiv:2409.11106, 2024-09-17

We advance the thesis that the simulation of quantum circuits is fundamentally about the efficient management of a large (potentially exponential) number of delimited continuations. The family of Scheme languages, with its efficient implementations of first-class continuations and with its imperative constructs, provides an elegant host for modeling and simulating quantum circuits.
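A crude Python analogue of the thesis (the paper's host is Scheme with first-class continuations; this sketch manages branches explicitly instead): each Hadamard gate forks the current execution into two weighted branches, and the simulator's job is to manage the resulting, potentially exponential, set of branches and their interference:

```python
import math
from collections import defaultdict

def hadamard(state, q):
    """Fork every branch of `state` (a {bit-string: amplitude} map) at qubit q."""
    s = 1 / math.sqrt(2)
    out = defaultdict(float)
    for basis, amp in state.items():
        b = int(basis[q])
        for b2 in (0, 1):                          # the two forked branches
            nb = basis[:q] + str(b2) + basis[q + 1:]
            out[nb] += amp * s * (-1) ** (b * b2)  # interference on recombine
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def cnot(state, ctrl, tgt):
    out = defaultdict(float)
    for basis, amp in state.items():
        if basis[ctrl] == "1":
            basis = basis[:tgt] + ("1" if basis[tgt] == "0" else "0") + basis[tgt + 1:]
        out[basis] += amp
    return dict(out)

state = {"00": 1.0}                  # two qubits, both |0>
state = cnot(hadamard(state, 0), 0, 1)
print(state)                         # Bell pair: {'00': 0.707..., '11': 0.707...}
```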
{"title":"Scheme Pearl: Quantum Continuations","authors":"Vikraman Choudhury, Borislav Agapiev, Amr Sabry","doi":"arxiv-2409.11106","DOIUrl":"https://doi.org/arxiv-2409.11106","url":null,"abstract":"We advance the thesis that the simulation of quantum circuits is\u0000fundamentally about the efficient management of a large (potentially\u0000exponential) number of delimited continuations. The family of Scheme languages,\u0000with its efficient implementations of first-class continuations and with its\u0000imperative constructs, provides an elegant host for modeling and simulating\u0000quantum circuits.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"40 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Minuska: Towards a Formally Verified Programming Language Framework
Jan Tušil, Jan Obdržálek
arXiv:2409.11530, 2024-09-17

Programming language frameworks allow us to generate language tools (e.g., interpreters) just from a formal description of the syntax and semantics of a programming language. As these frameworks tend to be quite complex, the question arises of whether we can trust the generated tools. To address this issue, we introduce a practical formal programming language framework called Minuska, which always generates a provably correct interpreter given a valid language definition. This is achieved by (1) defining a language MinusLang for expressing programming language definitions and giving it formal semantics, and (2) using the Coq proof assistant to implement an interpreter parametric in a MinusLang definition and to prove it correct. Minuska provides strong correctness guarantees and can support nontrivial languages while performing well. This is the extended version of the SEFM24 paper of the same name.
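The phrase "interpreter parametric in a language definition" has a simple shape that a few lines of Python can suggest (our toy, with an invented rule format; Minuska's MinusLang and its Coq development are far richer): one generic stepping loop is written once, and each language is supplied as data.

```python
def make_interpreter(rules):
    """A generic interpreter: `rules` is the entire language definition."""
    def step(cfg):
        for guard, rewrite in rules:
            if guard(cfg):
                return rewrite(cfg)
        return None                      # no rule applies: cfg is final

    def run(cfg, fuel=10_000):
        for _ in range(fuel):
            nxt = step(cfg)
            if nxt is None:
                return cfg
            cfg = nxt
        raise RuntimeError("out of fuel")

    return run

# A tiny "language": configurations are (n, acc); one rule counts n down,
# accumulating the sum n + (n-1) + ... + 1.
countdown = [
    (lambda c: c[0] > 0, lambda c: (c[0] - 1, c[1] + c[0])),
]
run = make_interpreter(countdown)
print(run((5, 0)))    # (0, 15)
```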
{"title":"Minuska: Towards a Formally Verified Programming Language Framework","authors":"Jan Tušil, Jan Obdržálek","doi":"arxiv-2409.11530","DOIUrl":"https://doi.org/arxiv-2409.11530","url":null,"abstract":"Programming language frameworks allow us to generate language tools (e.g.,\u0000interpreters) just from a formal description of the syntax and semantics of a\u0000programming language. As these frameworks tend to be quite complex, an issue\u0000arises whether we can trust the generated tools. To address this issue, we\u0000introduce a practical formal programming language framework called Minuska,\u0000which always generates a provably correct interpreter given a valid language\u0000definition. This is achieved by (1) defining a language MinusLang for\u0000expressing programming language definitions and giving it formal semantics and\u0000(2) using the Coq proof assistant to implement an interpreter parametric in a\u0000MinusLang definition and to prove it correct. Minuska provides strong\u0000correctness guarantees and can support nontrivial languages while performing\u0000well. This is the extended version of the SEFM24 paper of the same name.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142268129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

No Saved Kaleidosope: an 100% Jitted Neural Network Coding Language with Pythonic Syntax
Augusto Seben da Rosa, Marlon Daniel Angeli, Jorge Aikes Junior, Alef Iury Ferreira, Lucas Rafael Gris, Anderson da Silva Soares, Arnaldo Candido Junior, Frederico Santos de Oliveira, Gabriel Trevisan Damke, Rafael Teixeira Sousa
arXiv:2409.11600, 2024-09-17
We developed a jitted compiler for training Artificial Neural Networks using C++, LLVM and CUDA. It features object-oriented characteristics, strong typing, parallel workers for data pre-processing, pythonic syntax for expressions, PyTorch-like model declaration, and Automatic Differentiation. We implement caching and pooling mechanisms to manage VRAM, and use cuBLAS for high-performance matrix multiplication and cuDNN for convolutional layers. In our experiments with Residual Convolutional Neural Networks on ImageNet, we reach similar speed but degraded performance. Likewise, the GRU network experiments show similar accuracy, but our compiler has degraded speed on that task. However, our compiler demonstrates promising results on the CIFAR-10 benchmark, where we reach the same performance and about the same speed as PyTorch. We make the code publicly available at: https://github.com/NoSavedDATA/NoSavedKaleidoscope
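The caching-and-pooling idea for VRAM can be sketched in a few lines (our simplification in Python, not the compiler's actual C++/CUDA allocator): freed device buffers are binned by size and handed back out before any new allocation is requested from the driver.

```python
from collections import defaultdict

class PoolAllocator:
    def __init__(self, backend_alloc):
        self.free_bins = defaultdict(list)   # size -> list of free buffers
        self.backend_alloc = backend_alloc   # e.g. a cudaMalloc wrapper

    def alloc(self, size):
        if self.free_bins[size]:
            return self.free_bins[size].pop()    # cache hit: reuse VRAM
        return self.backend_alloc(size)          # miss: really allocate

    def free(self, buf, size):
        self.free_bins[size].append(buf)         # return to pool, don't release

# Usage with a fake backend that just counts real allocations:
allocations = []
pool = PoolAllocator(lambda size: allocations.append(size) or object())
a = pool.alloc(1024); pool.free(a, 1024)
b = pool.alloc(1024)                  # reuses a's buffer
print(len(allocations))               # 1: only one real allocation happened
```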
{"title":"No Saved Kaleidosope: an 100% Jitted Neural Network Coding Language with Pythonic Syntax","authors":"Augusto Seben da Rosa, Marlon Daniel Angeli, Jorge Aikes Junior, Alef Iury Ferreira, Lucas Rafael Gris, Anderson da Silva Soares, Arnaldo Candido Junior, Frederico Santos de Oliveira, Gabriel Trevisan Damke, Rafael Teixeira Sousa","doi":"arxiv-2409.11600","DOIUrl":"https://doi.org/arxiv-2409.11600","url":null,"abstract":"We developed a jitted compiler for training Artificial Neural Networks using\u0000C++, LLVM and Cuda. It features object-oriented characteristics, strong typing,\u0000parallel workers for data pre-processing, pythonic syntax for expressions,\u0000PyTorch like model declaration and Automatic Differentiation. We implement the\u0000mechanisms of cache and pooling in order to manage VRAM, cuBLAS for high\u0000performance matrix multiplication and cuDNN for convolutional layers. Our\u0000experiments with Residual Convolutional Neural Networks on ImageNet, we reach\u0000similar speed but degraded performance. Also, the GRU network experiments show\u0000similar accuracy, but our compiler have degraded speed in that task. However,\u0000our compiler demonstrates promising results at the CIFAR-10 benchmark, in which\u0000we reach the same performance and about the same speed as PyTorch. We make the\u0000code publicly available at: https://github.com/NoSavedDATA/NoSavedKaleidoscope","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

The Incredible Shrinking Context... in a decompiler near you
Sifis Lagouvardos, Yannis Bollanos, Neville Grech, Yannis Smaragdakis
arXiv:2409.11157, 2024-09-17

Decompilation of binary code has arisen as a highly important application in the space of Ethereum VM (EVM) smart contracts. Major new decompilers appear nearly every year and attain popularity, for a multitude of reverse-engineering or tool-building purposes. Technically, the problem is fundamental: it consists of recovering high-level control flow from a highly optimized continuation-passing-style (CPS) representation. Architecturally, decompilers can be built using either static analysis or symbolic execution techniques.

We present Shrnkr, a static-analysis-based decompiler succeeding the state-of-the-art Elipmoc decompiler. Shrnkr achieves drastic improvements relative to the state of the art in all significant dimensions: scalability, completeness, and precision. Chief among the techniques employed is a new variant of static analysis context: shrinking context sensitivity. Shrinking context sensitivity performs deep cuts in the static analysis context, eagerly "forgetting" control-flow history, in order to leave room for further precise reasoning.

We compare Shrnkr to state-of-the-art decompilers, both static-analysis- and symbolic-execution-based. In a standard benchmark set, Shrnkr scales to over 99.5% of contracts (compared to ~95%), covers (i.e., reaches and manages to decompile) 67% more code, and reduces key imprecision metrics by over 65%.
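The contrast between a classic k-limited context and a shrinking one can be suggested abstractly (our own toy policy for illustration; the paper's actual cuts are specific to its decompilation analyses):

```python
def push_k_limited(ctx, site, k=3):
    """Classic k-call-site context: keep only the k most recent call sites."""
    return (ctx + (site,))[-k:]

def push_shrinking(ctx, site, k=3):
    """Shrinking context: on overflow, cut from the middle ("forget"
    intermediate control-flow history) instead of dropping the oldest entry."""
    ctx = ctx + (site,)
    while len(ctx) > k:
        ctx = ctx[:1] + ctx[2:]      # keep the entry point and a recent suffix
    return ctx

ctx_a = ctx_b = ()
for site in ["main", "dispatch", "helper", "callback"]:
    ctx_a = push_k_limited(ctx_a, site)
    ctx_b = push_shrinking(ctx_b, site)
print(ctx_a)   # ('dispatch', 'helper', 'callback') -- entry point lost
print(ctx_b)   # ('main', 'helper', 'callback')     -- middle forgotten
```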
{"title":"The Incredible Shrinking Context... in a decompiler near you","authors":"Sifis Lagouvardos, Yannis Bollanos, Neville Grech, Yannis Smaragdakis","doi":"arxiv-2409.11157","DOIUrl":"https://doi.org/arxiv-2409.11157","url":null,"abstract":"Decompilation of binary code has arisen as a highly-important application in\u0000the space of Ethereum VM (EVM) smart contracts. Major new decompilers appear\u0000nearly every year and attain popularity, for a multitude of reverse-engineering\u0000or tool-building purposes. Technically, the problem is fundamental: it consists\u0000of recovering high-level control flow from a highly-optimized\u0000continuation-passing-style (CPS) representation. Architecturally, decompilers\u0000can be built using either static analysis or symbolic execution techniques. We present Shrknr, a static-analysis-based decompiler succeeding the\u0000state-of-the-art Elipmoc decompiler. Shrknr manages to achieve drastic\u0000improvements relative to the state of the art, in all significant dimensions:\u0000scalability, completeness, precision. Chief among the techniques employed is a\u0000new variant of static analysis context: shrinking context sensitivity.\u0000Shrinking context sensitivity performs deep cuts in the static analysis\u0000context, eagerly \"forgetting\" control-flow history, in order to leave room for\u0000further precise reasoning. We compare Shrnkr to state-of-the-art decompilers, both static-analysis- and\u0000symbolic-execution-based. In a standard benchmark set, Shrnkr scales to over\u000099.5% of contracts (compared to ~95%), covers (i.e., reaches and manages to\u0000decompile) 67% more code, and reduces key imprecision metrics by over 65%.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Coordination-free Collaborative Replication based on Operational Transformation
Masato Takeichi
arXiv:2409.09934, 2024-09-16

We introduce Coordination-free Collaborative Replication (CCR), a new method for maintaining consistency across replicas in distributed systems without requiring explicit coordination messages. CCR automates conflict resolution, contrasting with traditional data-sharing systems that typically involve centralized update management or predefined consistency rules.

Operational Transformation (OT), commonly used in collaborative editing, ensures consistency by transforming operations while maintaining document integrity across replicas. However, OT assumes server-based coordination, which is unsuitable for modern, decentralized Peer-to-Peer (P2P) systems.

Conflict-free Replicated Data Types (CRDTs), like Two-Phase Sets (2P-Sets), guarantee eventual consistency by allowing commutative and associative operations, but often result in counterintuitive behaviors, such as failing to re-add an item to a shopping cart once removed.

In contrast, CCR employs a more intuitive approach to replication. It allows for straightforward updates and conflict resolution based on the current data state, enhancing clarity and usability compared to CRDTs. Furthermore, CCR addresses inefficiencies in messaging by developing a versatile protocol based on data stream confluence, thus providing a more efficient and practical solution for collaborative data sharing in distributed systems.
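The 2P-Set behavior the abstract calls counterintuitive is easy to see concretely (a standard textbook 2P-Set in Python, our code rather than the paper's):

```python
class TwoPhaseSet:
    """2P-Set CRDT: an add-set plus a remove-set of tombstones."""
    def __init__(self):
        self.added, self.removed = set(), set()

    def add(self, x):    self.added.add(x)
    def remove(self, x): self.removed.add(x)

    def contains(self, x):
        return x in self.added and x not in self.removed

    def merge(self, other):
        """Commutative, associative, idempotent: replicas converge."""
        self.added |= other.added
        self.removed |= other.removed

cart = TwoPhaseSet()
cart.add("milk"); cart.remove("milk")
cart.add("milk")                       # the user re-adds the item...
print(cart.contains("milk"))           # False: the removal tombstone wins forever
```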
{"title":"Coordination-free Collaborative Replication based on Operational Transformation","authors":"Masato Takeichi","doi":"arxiv-2409.09934","DOIUrl":"https://doi.org/arxiv-2409.09934","url":null,"abstract":"We introduce Coordination-free Collaborative Replication (CCR), a new method\u0000for maintaining consistency across replicas in distributed systems without\u0000requiring explicit coordination messages. CCR automates conflict resolution,\u0000contrasting with traditional Data-sharing systems that typically involve\u0000centralized update management or predefined consistency rules. Operational Transformation (OT), commonly used in collaborative editing,\u0000ensures consistency by transforming operations while maintaining document\u0000integrity across replicas. However, OT assumes server-based coordination, which\u0000is unsuitable for modern, decentralized Peer-to-Peer (P2P) systems. Conflict-free Replicated Data Type (CRDT), like Two-Phase Sets (2P-Sets),\u0000guarantees eventual consistency by allowing commutative and associative\u0000operations but often result in counterintuitive behaviors, such as failing to\u0000re-add an item to a shopping cart once removed. In contrast, CCR employs a more intuitive approach to replication. It allows\u0000for straightforward updates and conflict resolution based on the current data\u0000state, enhancing clarity and usability compared to CRDTs. Furthermore, CCR\u0000addresses inefficiencies in messaging by developing a versatile protocol based\u0000on data stream confluence, thus providing a more efficient and practical\u0000solution for collaborative data sharing in distributed systems.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"36 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Repr Types: One Abstraction to Rule Them All
Viktor Palmkvist, Anders Ågren Thuné, Elias Castegren, David Broman
arXiv:2409.07950, 2024-09-12
The choice of how to represent an abstract type can have a major impact on the performance of a program, yet mainstream compilers cannot perform optimizations at such a high level. When dealing with optimizations of data type representations, an important feature is having extensible representation-flexible data types: the ability for a programmer to add new abstract types and operations, as well as concrete implementations of these, without modifying the compiler or a previously defined library. Many research projects support high-level optimizations through static analysis, instrumentation, or benchmarking, but they are all restricted in at least one aspect of extensibility.

This paper presents a new approach to representation-flexible data types without such restrictions which still finds efficient optimizations. Our approach centers around a single built-in type $\texttt{repr}$ and function overloading with cost annotations for operation implementations. We evaluate our approach (i) by defining a universal collection type as a library, a single type for all conventional collections, and (ii) by designing and implementing a representation-flexible graph library. Programs using $\texttt{repr}$ types are typically faster than programs with idiomatic representation choices -- sometimes dramatically so -- as long as the compiler finds good implementations for all operations. Our compiler performs the analysis efficiently by finding optimized solutions quickly and by reusing previous results to avoid recomputations.
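The selection problem behind cost-annotated overloading can be suggested with a toy (all names and costs invented here; the paper's analysis handles much more, including reuse of previous solutions): each concrete representation advertises per-operation costs, and the cheapest representation implementing every operation the program uses wins.

```python
COSTS = {
    # representation: {operation: relative cost; None = not implemented}
    "array_list":  {"index": 1,    "append": 1, "member": 50},
    "hash_set":    {"index": None, "append": 2, "member": 1},
    "linked_list": {"index": 50,   "append": 1, "member": 50},
}

def choose_repr(used_ops):
    """Pick the cheapest representation implementing every used operation.
    `used_ops` pairs each operation with its estimated use count."""
    best, best_cost = None, float("inf")
    for rep, costs in COSTS.items():
        if any(costs.get(op) is None for op, _ in used_ops):
            continue                      # rep lacks an implementation
        total = sum(costs[op] * count for op, count in used_ops)
        if total < best_cost:
            best, best_cost = rep, total
    return best

print(choose_repr([("append", 100), ("member", 1000)]))  # hash_set
print(choose_repr([("append", 100), ("index", 1000)]))   # array_list
```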
{"title":"Repr Types: One Abstraction to Rule Them All","authors":"Viktor Palmkvist, Anders Ågren Thuné, Elias Castegren, David Broman","doi":"arxiv-2409.07950","DOIUrl":"https://doi.org/arxiv-2409.07950","url":null,"abstract":"The choice of how to represent an abstract type can have a major impact on\u0000the performance of a program, yet mainstream compilers cannot perform\u0000optimizations at such a high level. When dealing with optimizations of data\u0000type representations, an important feature is having extensible\u0000representation-flexible data types; the ability for a programmer to add new\u0000abstract types and operations, as well as concrete implementations of these,\u0000without modifying the compiler or a previously defined library. Many research\u0000projects support high-level optimizations through static analysis,\u0000instrumentation, or benchmarking, but they are all restricted in at least one\u0000aspect of extensibility. This paper presents a new approach to representation-flexible data types\u0000without such restrictions and which still finds efficient optimizations. Our\u0000approach centers around a single built-in type $texttt{repr}$ and function\u0000overloading with cost annotations for operation implementations. We evaluate\u0000our approach (i) by defining a universal collection type as a library, a single\u0000type for all conventional collections, and (ii) by designing and implementing a\u0000representation-flexible graph library. Programs using $texttt{repr}$ types are\u0000typically faster than programs with idiomatic representation choices --\u0000sometimes dramatically so -- as long as the compiler finds good implementations\u0000for all operations. Our compiler performs the analysis efficiently by finding\u0000optimized solutions quickly and by reusing previous results to avoid\u0000recomputations.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

$μλεδ$-Calculus: A Self Optimizing Language that Seems to Exhibit Paradoxical Transfinite Cognitive Capabilities
Ronie Salgado
arXiv:2409.05351, 2024-09-09

Formal mathematics and computer science proofs are formalized using Hilbert-Russell-style logical systems, which are designed not to admit paradoxes and self-referencing reasoning. These logical systems are a natural way to describe and reason syntactically about tree-like data structures. We found that Wittgenstein-style logic is an alternate system whose propositional elements are directed graphs (points and arrows) capable of performing paraconsistent self-referencing reasoning without exploding. Imperative programming languages are typically compiled and optimized with SSA-based graphs whose most general representation is the Sea of Nodes. By restricting the Sea of Nodes to only the data-dependency nodes, we attempted to establish syntactic-semantic correspondences with Lambda-calculus optimization. Surprisingly, when we tested our optimizer of the lambda calculus, we obtained a natural extension onto the $\mu\lambda$-calculus which is always terminating. This always-terminating algorithm is an actual paradox whose resulting graphs are geometrical fractals, which seem to be isomorphic to the original source program. These fractal structures look like a perfect compressor of a program, and seem to resemble an actual physical black hole with a naked singularity. In addition to these surprising results, we propose two additional extensions to the calculus to model the cognitive process of self-aware beings: 1) $\epsilon$-expressions to model syntactic-to-semantic expansion as a general model of macros; 2) $\delta$-functional expressions as a minimal model of input and output. We provide a detailed step-by-step construction of our language interpreter, compiler, and optimizer.
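One standard ingredient behind graph representations that share structure -- hash-consing -- can be sketched briefly (our illustration of the general technique, not the paper's optimizer): structurally identical subterms become a single node, so a term whose tree is exponential can have a small DAG.

```python
table = {}

def node(op, *args):
    """Return the unique node for (op, args), creating it at most once."""
    key = (op, args)
    if key not in table:
        table[key] = key      # in a real IR this would be a node object
    return table[key]

def square(e):                # e * e, with both operands shared
    return node("mul", e, e)

x = node("var", "x")
e = square(square(square(x)))   # the tree has 2**3 leaves; the DAG has 4 nodes
print(len(table))               # 4: one var node plus three mul nodes
```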
{"title":"$μλεδ$-Calculus: A Self Optimizing Language that Seems to Exhibit Paradoxical Transfinite Cognitive Capabilities","authors":"Ronie Salgado","doi":"arxiv-2409.05351","DOIUrl":"https://doi.org/arxiv-2409.05351","url":null,"abstract":"Formal mathematics and computer science proofs are formalized using\u0000Hilbert-Russell-style logical systems which are designed to not admit paradoxes\u0000and self-refencing reasoning. These logical systems are natural way to describe\u0000and reason syntactic about tree-like data structures. We found that\u0000Wittgenstein-style logic is an alternate system whose propositional elements\u0000are directed graphs (points and arrows) capable of performing paraconsistent\u0000self-referencing reasoning without exploding. Imperative programming language\u0000are typically compiled and optimized with SSA-based graphs whose most general\u0000representation is the Sea of Node. By restricting the Sea of Nodes to only the\u0000data dependencies nodes, we attempted to stablish syntactic-semantic\u0000correspondences with the Lambda-calculus optimization. Surprisingly, when we\u0000tested our optimizer of the lambda calculus we performed a natural extension\u0000onto the $mulambda$ which is always terminating. This always terminating\u0000algorithm is an actual paradox whose resulting graphs are geometrical fractals,\u0000which seem to be isomorphic to original source program. These fractal\u0000structures looks like a perfect compressor of a program, which seem to resemble\u0000an actual physical black-hole with a naked singularity. In addition to these\u0000surprising results, we propose two additional extensions to the calculus to\u0000model the cognitive process of self-aware beings: 1) $epsilon$-expressions to\u0000model syntactic to semantic expansion as a general model of macros; 2)\u0000$delta$-functional expressions as a minimal model of input and output. We\u0000provide detailed step-by-step construction of our language interpreter,\u0000compiler and optimizer.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Conversational Concurrency
Tony Garnock-Jones
arXiv:2409.04055, 2024-09-06

Concurrent computations resemble conversations. In a conversation, participants direct utterances at others and, as the conversation evolves, exploit the known common context to advance the conversation. Similarly, collaborating software components share knowledge with each other in order to make progress as a group towards a common goal.

This dissertation studies concurrency from the perspective of cooperative knowledge-sharing, taking the conversational exchange of knowledge as a central concern in the design of concurrent programming languages. In doing so, it makes five contributions:

1. It develops the idea of a common dataspace as a medium for knowledge exchange among concurrent components, enabling a new approach to concurrent programming. While dataspaces loosely resemble both "fact spaces" from the world of Linda-style languages and Erlang's collaborative model, they significantly differ in many details.
2. It offers the first crisp formulation of cooperative, conversational knowledge-exchange as a mathematical model.
3. It describes two faithful implementations of the model for two quite different languages.
4. It proposes a completely novel suite of linguistic constructs for organizing the internal structure of individual actors in a conversational setting. The combination of dataspaces with these constructs is dubbed Syndicate.
5. It presents and analyzes evidence suggesting that the proposed techniques and constructs combine to simplify concurrent programming.

The dataspace concept stands alone in its focus on representation and manipulation of conversational frames and conversational state and in its integral use of explicit epistemic knowledge. The design is particularly suited to integration of general-purpose I/O with otherwise-functional languages, but also applies to actor-like settings more generally.
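A minimal dataspace can be sketched in a handful of lines (our own shapes and names, not Syndicate's actual syntax or semantics): actors assert facts into a shared space, and other actors react to assertions matching their declared interests.

```python
class Dataspace:
    def __init__(self):
        self.assertions = set()
        self.subscribers = []            # (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        """Register an interest; replay already-asserted matching facts."""
        self.subscribers.append((predicate, callback))
        for fact in list(self.assertions):
            if predicate(fact):
                callback(fact)

    def assert_fact(self, fact):
        """Share knowledge with every actor interested in it."""
        if fact not in self.assertions:
            self.assertions.add(fact)
            for predicate, callback in self.subscribers:
                if predicate(fact):
                    callback(fact)

ds = Dataspace()
ds.subscribe(lambda f: f[0] == "present",
             lambda f: print(f"{f[1]} joined the conversation"))
ds.assert_fact(("present", "alice"))
ds.assert_fact(("present", "bob"))
```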
{"title":"Conversational Concurrency","authors":"Tony Garnock-Jones","doi":"arxiv-2409.04055","DOIUrl":"https://doi.org/arxiv-2409.04055","url":null,"abstract":"Concurrent computations resemble conversations. In a conversation,\u0000participants direct utterances at others and, as the conversation evolves,\u0000exploit the known common context to advance the conversation. Similarly,\u0000collaborating software components share knowledge with each other in order to\u0000make progress as a group towards a common goal. This dissertation studies concurrency from the perspective of cooperative\u0000knowledge-sharing, taking the conversational exchange of knowledge as a central\u0000concern in the design of concurrent programming languages. In doing so, it\u0000makes five contributions: 1. It develops the idea of a common dataspace as a\u0000medium for knowledge exchange among concurrent components, enabling a new\u0000approach to concurrent programming. While dataspaces loosely resemble both\u0000\"fact spaces\" from the world of Linda-style languages and Erlang's\u0000collaborative model, they significantly differ in many details. 2. It offers\u0000the first crisp formulation of cooperative, conversational knowledge-exchange\u0000as a mathematical model. 3. It describes two faithful implementations of the\u0000model for two quite different languages. 4. It proposes a completely novel\u0000suite of linguistic constructs for organizing the internal structure of\u0000individual actors in a conversational setting. The combination of dataspaces\u0000with these constructs is dubbed Syndicate. 5. It presents and analyzes evidence\u0000suggesting that the proposed techniques and constructs combine to simplify\u0000concurrent programming. The dataspace concept stands alone in its focus on representation and\u0000manipulation of conversational frames and conversational state and in its\u0000integral use of explicit epistemic knowledge. The design is particularly suited\u0000to integration of general-purpose I/O with otherwise-functional languages, but\u0000also applies to actor-like settings more generally.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}