Memory Consistency and Program Transformations
Akshay Gopalakrishnan (McGill University), Clark Verbrugge (McGill University), Mark Batty (University of Kent)
DOI: arxiv-2409.12013 (https://doi.org/arxiv-2409.12013) · Published 2024-09-18 · arXiv - CS - Programming Languages
Citations: 0
Abstract
A memory consistency model specifies the allowed behaviors of shared memory
concurrent programs. At the language level, these models are known to have a
non-trivial impact on the safety of program optimizations, limiting the ability
to rearrange/refactor code without introducing new behaviors. Existing
programming language memory models try to address this by permitting more
(relaxed/weak) concurrent behaviors but are still unable to allow all the
desired optimizations. A core problem is that weaker consistency models may
also render optimizations unsafe, counter to the intuition that a model
permitting more behaviors should also permit more transformations. This
exposes an open problem in the
compositional interaction between memory consistency semantics and
optimizations: which parts of the semantics correspond to allowing/disallowing
which set of optimizations is unclear. In this work, we establish a formal
foundation for understanding this compositional nature, decomposing
optimizations into a finite set of elementary effects on program execution
traces, over which aspects of safety can be assessed. We use this decomposition
to identify a desirable compositional property, termed complete, that would guarantee
the safety of optimizations from one memory model to another. We showcase its
practicality by proving such a property between Sequential Consistency (SC) and
$SC_{RR}$, the latter additionally allowing independent read-read reordering
over $SC$. Our work potentially paves the way to a new design methodology for
programming-language memory models, one that places emphasis on the
optimizations desired to be performed.
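As a minimal illustration of why independent read-read reordering — the very transformation $SC_{RR}$ is designed to permit — is unsafe under plain SC, the sketch below enumerates all sequentially consistent interleavings of a classic litmus test. One thread reads `x` then `y`; another writes `y` then `x`. Reordering the two independent reads introduces the outcome `r1 = 1, r2 = 0`, which no SC execution of the original program allows. The small simulator and variable names here are illustrative assumptions, not code or notation from the paper.

```python
# Enumerate all SC interleavings of two threads and collect the
# final register values observed by the reading thread.
# Ops: ('r', var, reg) reads var into reg; ('w', var, val) writes val to var.

def sc_outcomes(thread_a, thread_b):
    def interleave(a, b):
        # Yield every interleaving that preserves each thread's program order.
        if not a:
            yield list(b)
            return
        if not b:
            yield list(a)
            return
        for rest in interleave(a[1:], b):
            yield [a[0]] + rest
        for rest in interleave(a, b[1:]):
            yield [b[0]] + rest

    outcomes = set()
    for trace in interleave(thread_a, thread_b):
        mem = {'x': 0, 'y': 0}   # shared memory, zero-initialized
        regs = {}
        for op in trace:
            if op[0] == 'w':
                mem[op[1]] = op[2]
            else:
                regs[op[2]] = mem[op[1]]
        outcomes.add((regs['r1'], regs['r2']))
    return outcomes

writer    = [('w', 'y', 1), ('w', 'x', 1)]        # y = 1; x = 1
original  = [('r', 'x', 'r1'), ('r', 'y', 'r2')]  # r1 = x; r2 = y
reordered = [('r', 'y', 'r2'), ('r', 'x', 'r1')]  # reads swapped

print(sorted(sc_outcomes(original, writer)))   # [(0, 0), (0, 1), (1, 1)]
print(sorted(sc_outcomes(reordered, writer)))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

The transformed program exhibits `(1, 0)`, a behavior the original forbids under SC, so the reordering is unsafe at SC but becomes admissible under a model like $SC_{RR}$ that bakes the extra behavior into its semantics.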