Discovering workflow nets of concurrent iterative processes
Pub Date: 2023-09-14. DOI: 10.1007/s00236-023-00445-5
Tonatiuh Tapia-Flores, Ernesto López-Mellado
A novel and efficient method for discovering concurrent workflow processes is presented. It builds a suitable workflow net (WFN) from a large event log λ that represents the behaviour of complex iterative processes involving concurrency. First, the t-invariants are determined from λ; this allows computing the causal and concurrency relations between events, as well as the implicit causal relations between events that do not appear consecutively in λ. Then a 1-bounded WFN is built, which may be adjusted if its t-invariants do not match those computed from λ. The discovered model can fire all the traces in λ. The procedures derived from the method run in time polynomial in |λ|; they have been implemented and tested on artificial logs.
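As an illustration of the kind of relations such a discovery method works with, the sketch below derives directly-follows, causal and concurrency relations from a toy event log. It is not the authors' t-invariant-based algorithm; the log contents and relation names are illustrative assumptions.

```python
# Toy event log: each trace is a list of event labels.
lam = [
    ["a", "b", "c", "d"],
    ["a", "c", "b", "d"],
    ["a", "e", "d"],
]

# Directly-follows relation: x > y iff y immediately follows x in some trace.
follows = {(x, y) for trace in lam for x, y in zip(trace, trace[1:])}

# Causal relation: x -> y iff x > y is observed but y > x is not.
causal = {(x, y) for (x, y) in follows if (y, x) not in follows}

# Concurrency relation: x || y iff both orders are observed in the log.
concurrent = {tuple(sorted((x, y))) for (x, y) in follows if (y, x) in follows}

print("causal:", sorted(causal))          # e.g. ('a', 'b'), ('b', 'd'), ...
print("concurrent:", sorted(concurrent))  # [('b', 'c')]
```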
{"title":"Discovering workflow nets of concurrent iterative processes","authors":"Tonatiuh Tapia-Flores, Ernesto López-Mellado","doi":"10.1007/s00236-023-00445-5","DOIUrl":"10.1007/s00236-023-00445-5","url":null,"abstract":"<div><p>A novel and efficient method for discovering concurrent workflow processes is presented. It allows building a suitable workflow net (WFN) from a large event log <span>(lambda )</span>, which represents the behaviour of complex iterative processes involving concurrency. First, the <i>t</i>-invariants are determined from <span>(lambda )</span>; this allows computing the causal and concurrent relations between the events and the implicit causal relations between events that do not appear consecutively in <span>(lambda )</span>. Then a 1-bounded WFN is built, which could be eventually adjusted if its <i>t</i>-invariants do not match with those computed from <span>(lambda )</span>. The discovered model allows firing all the traces in <span>(lambda )</span>. The procedures derived from the method are polynomial time on <span>(|lambda |)</span>; they have been implemented and tested on artificial logs.</p></div>","PeriodicalId":7189,"journal":{"name":"Acta Informatica","volume":"61 1","pages":"1 - 21"},"PeriodicalIF":0.4,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00236-023-00445-5.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134911044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The second step in characterizing a three-word code
Pub Date: 2023-09-08. DOI: 10.1007/s00236-023-00444-6
Chunhua Cao, Jiao Xu, Lei Liao, Di Yang, Guichuan Jia, Qian Du
In the fields of combinatorics on words and the theory of codes, a two-word language {x, y} is a code if and only if xy ≠ yx. However, a complete characterization of when a three-word language forms a code has not yet been found. Let X = {x, y, z} be a three-word language and |x|, |y|, |z| the lengths of its words. For the case |x| = |y| < |z|, a necessary and sufficient condition for X to be a code was obtained in 2018. In this paper, we give a necessary and sufficient condition for X to be a code when |x| < |y| = |z| ≤ 2|x|.
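For readers who want to experiment with small examples, the sketch below combines the two-word criterion quoted above with the classical Sardinas–Patterson test for arbitrary finite sets of words; the latter is standard background, not a contribution of the paper.

```python
def residuals(A, B):
    """Non-empty words w with a*w = b for some a in A, b in B."""
    return {b[len(a):] for a in A for b in B
            if len(b) > len(a) and b.startswith(a)}

def is_code(C):
    """Sardinas-Patterson test: the finite set C of non-empty words is a code
    iff no dangling suffix ever coincides with a codeword."""
    C = set(C)
    S, seen = residuals(C, C), set()
    while S and not S <= seen:
        if S & C:                 # the empty word would appear next: not a code
            return False
        seen |= S
        S = residuals(C, S) | residuals(S, C)
    return True

x, y = "ab", "aba"
print((x + y != y + x) == is_code({x, y}))   # True: agrees with the two-word criterion
print(is_code({"a", "ab", "ba"}))            # False: a*ba = ab*a
```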
{"title":"The second step in characterizing a three-word code","authors":"Chunhua Cao, Jiao Xu, Lei Liao, Di Yang, Guichuan Jia, Qian Du","doi":"10.1007/s00236-023-00444-6","DOIUrl":"10.1007/s00236-023-00444-6","url":null,"abstract":"<div><p>In the fields of combinatorics on words and theory of codes, a two-word language <span>({x, y})</span> is a code if and only if <span>(xy not = yx)</span>. But up to now, corresponding characterizations for a three-word language, which forms a code, have not been completely found. Let <span>(X={x, y, z})</span> be a three-word language and <span>(|x|, |y|, |z|)</span> be their lengths. When <span>(|x| = |y| < |z|)</span>, a necessary and sufficient condition for <i>X</i> to be a code was obtained in 2018. If <span>(|x| < |y| = |z| le 2|x|)</span>, a necessary and sufficient condition for <i>X</i> to be a code is proposed in this paper.</p></div>","PeriodicalId":7189,"journal":{"name":"Acta Informatica","volume":"60 4","pages":"453 - 465"},"PeriodicalIF":0.6,"publicationDate":"2023-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48790117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On first-order runtime enforcement of branching-time properties
Pub Date: 2023-08-03. DOI: 10.1007/s00236-023-00441-9
Luca Aceto, Ian Cassar, Adrian Francalanza, Anna Ingólfsdóttir
Runtime enforcement is a dynamic analysis technique that uses monitors to enforce the behaviour specified by some correctness property on an executing system. The enforceability of a logic captures the extent to which the properties expressible in the logic can be enforced at runtime for a specified operational model of enforcing monitors. We study the enforceability of branching-time, first-order properties expressed in the Hennessy–Milner Logic with Recursion (μHML) with respect to monitors that can enforce behaviour involving events that carry data. To this end, we develop an operational framework for first-order enforcement via suppressions, insertions and replacements. We then use this model to formalise the meaning of enforcing a branching-time property. We also show that a safety syntactic fragment of the logic is enforceable within this framework, by providing an automated synthesis function that generates correct suppression monitors from any formula in this fragment.
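To make the suppression idea concrete, here is a minimal sketch of a monitor that enforces a toy safety property by dropping offending events. The property ("no send after close") and the event names are invented for illustration; the paper's contribution is synthesizing such monitors from μHML formulas over data-carrying events.

```python
def suppression_monitor(events):
    """Yield the enforced trace; events violating the property are dropped."""
    closed = False
    for e in events:
        if e == "close":
            closed = True
            yield e
        elif e == "send" and closed:
            continue              # suppress: the offending event is absorbed
        else:
            yield e

trace = ["open", "send", "close", "send", "log"]
print(list(suppression_monitor(trace)))   # ['open', 'send', 'close', 'log']
```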
{"title":"On first-order runtime enforcement of branching-time properties","authors":"Luca Aceto, Ian Cassar, Adrian Francalanza, Anna Ingólfsdóttir","doi":"10.1007/s00236-023-00441-9","DOIUrl":"10.1007/s00236-023-00441-9","url":null,"abstract":"<div><p>Runtime enforcement is a dynamic analysis technique that uses monitors to enforce the behaviour specified by some correctness property on an executing system. The enforceability of a logic captures the extent to which the properties expressible via the logic can be enforced at runtime for a specified operational model of enforcing monitors. We study the enforceability of branching-time, first-order properties expressed in the Hennessy–Milner Logic with Recursion (<span>(mu )</span> <span>HML</span>) with respect to monitors that can enforce behaviour involving events that carry data. To this end, we develop an operational framework for first-order enforcement via suppressions, insertions and replacements. We then use this model to formalise the meaning of enforcing a branching-time property. We also show that a safety syntactic fragment of the logic is enforceable within this framework by providing an automated synthesis function that generates correct suppression monitors from any formula taken from this logical fragment.</p></div>","PeriodicalId":7189,"journal":{"name":"Acta Informatica","volume":"60 4","pages":"385 - 451"},"PeriodicalIF":0.6,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41339460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dot to dot, simple or sophisticated: a survey on shape reconstruction algorithms
Pub Date: 2023-08-01. DOI: 10.1007/s00236-023-00443-7
Farnaz Sheikhi, Behnam Zeraatkar, Sama Hanaie
Dot pattern points are samples taken from all regions of a 2D object, both its interior and its boundary. Given a set of dot pattern points in the plane, the shape reconstruction problem asks for the boundaries of the point set. These boundaries are not mathematically well defined; hence, a superior algorithm is one that produces results closest to human visual perception. Designing such algorithms involves several challenges, such as independence from human supervision and the ability to detect multiple components, holes and sharp corners. In this paper, we present a thorough review of the rich body of research on shape reconstruction, classify the ideas behind the algorithms, and highlight their pros and cons. Moreover, to overcome the barriers to implementing these algorithms, we provide an integrated application that visualizes the outputs of the prominent algorithms for further comparison.
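As a point of contrast with the surveyed algorithms, the sketch below computes only the convex hull of a dot pattern (Andrew's monotone chain). It is a deliberately crude baseline, not one of the surveyed methods: a convex hull can never recover holes, concavities or multiple components, which is precisely what shape reconstruction algorithms aim to do.

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

dots = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1), (2, 2)]
print(convex_hull(dots))   # only the four corners survive; interior detail is lost
```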
{"title":"Dot to dot, simple or sophisticated: a survey on shape reconstruction algorithms","authors":"Farnaz Sheikhi, Behnam Zeraatkar, Sama Hanaie","doi":"10.1007/s00236-023-00443-7","DOIUrl":"10.1007/s00236-023-00443-7","url":null,"abstract":"<div><p><i>Dot pattern</i> points are the samples taken from all regions of a 2D object, either inside or the boundary. Given a set of dot pattern points in the plane, the <i>shape reconstruction</i> problem seeks to find the boundaries of the points. These boundaries are not mathematically well-defined. Hence, a superior algorithm is the one which produces the result closest to the human visual perception. There are different challenges in designing these algorithms, such as the independence from human supervision, and the ability to detect multiple components, holes and sharp corners. In this paper, we present a thorough review on the rich body of research in shape reconstruction, classify the ideas behind the algorithms, and highlight their pros and cons. Moreover, to overcome the barriers of implementing these algorithms, we provide an integrated application to visualize the outputs of the prominent algorithms for further comparison.\u0000</p></div>","PeriodicalId":7189,"journal":{"name":"Acta Informatica","volume":"60 4","pages":"335 - 359"},"PeriodicalIF":0.6,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46914067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Testing membership for timed automata
Pub Date: 2023-07-17. DOI: 10.1007/s00236-023-00442-8
Richard Lassaigne, Michel de Rougemont
Given a timed automaton that admits thick components and a timed word w, we present a tester that decides whether w is in the language of the automaton or w is ε-far from the language, using finitely many samples taken from the weighted time distribution μ associated with the input w. We introduce a distance between timed words, the timed edit distance, which generalizes the classical edit distance. A timed word w is ε-far from a timed language if its relative distance to the language is greater than ε.
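The sketch below computes an edit distance between timed words, viewed as sequences of (action, delay) pairs, under one plausible cost model: unit cost for insertions and deletions, label mismatch plus delay gap for substitutions. This cost model is an assumption made for illustration; the paper defines its own timed edit distance.

```python
def timed_edit_distance(u, v):
    """Dynamic-programming edit distance over timed words u, v
    given as lists of (action, delay) pairs."""
    n, m = len(u), len(v)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i
    for j in range(1, m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            (a, s), (b, t) = u[i - 1], v[j - 1]
            subst = (0 if a == b else 1) + abs(s - t)
            dp[i][j] = min(dp[i - 1][j] + 1,          # delete
                           dp[i][j - 1] + 1,          # insert
                           dp[i - 1][j - 1] + subst)  # substitute / match
    return dp[n][m]

w1 = [("a", 0.5), ("b", 1.2)]
w2 = [("a", 0.7), ("c", 1.2)]
print(timed_edit_distance(w1, w2))   # 1.2 = 0.2 (delay shift) + 1 (relabel b -> c)
```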
{"title":"Testing membership for timed automata","authors":"Richard Lassaigne, Michel de Rougemont","doi":"10.1007/s00236-023-00442-8","DOIUrl":"10.1007/s00236-023-00442-8","url":null,"abstract":"<div><p>Given a timed automaton which admits thick components and a timed word <i>w</i>, we present a tester which decides if <i>w</i> is in the language of the automaton or if <i>w</i> is <span>(epsilon )</span>-far from the language, using finitely many samples taken from the weighted time distribution <span>(mu )</span> associated with the input <i>w</i>. We introduce a distance between timed words, the <i>timed edit distance</i>, which generalizes the classical edit distance. A timed word <i>w</i> is <span>(epsilon )</span>-far from a timed language if its relative distance to the language is greater than <span>(epsilon )</span>.</p></div>","PeriodicalId":7189,"journal":{"name":"Acta Informatica","volume":"60 4","pages":"361 - 384"},"PeriodicalIF":0.6,"publicationDate":"2023-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71910443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simple chain automaton random number generator for IoT devices
Pub Date: 2023-06-08. DOI: 10.1007/s00236-023-00440-w
Pál Dömösi, Géza Horváth, Norbert Tihanyi
Random numbers are very important in many fields of computer science. Generating high-quality random numbers using only basic arithmetic operations is challenging, especially for devices with limited hardware capabilities, such as Internet of Things (IoT) devices. In this paper, we present a novel pseudorandom number generator, the simple chain automaton random number generator (SCARNG), based on compositions of abstract automata. The main advantage of the presented algorithm is its simple structure that can be implemented easily for very low computing capacity IoT systems, FPGAs or GPU hardware. The generated random numbers demonstrate promising statistical behavior and satisfy the NIST statistical suite requirements, highlighting the potential of the SCARNG for practical applications.
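The toy sketch below only illustrates the general idea of chaining simple automata so that each one's output drives the next; it is not the SCARNG construction from the paper, and the transition tables and constants are arbitrary choices with no statistical or cryptographic guarantees.

```python
class Cell:
    """A tiny deterministic automaton over bytes: state' = T[(state + input) mod 256]."""
    def __init__(self, seed):
        # Fixed byte permutation derived from the seed (a stand-in transition table).
        self.table = list(range(256))
        s = seed
        for i in range(255, 0, -1):
            s = (s * 1103515245 + 12345) & 0x7FFFFFFF   # simple LCG used only to shuffle
            j = s % (i + 1)
            self.table[i], self.table[j] = self.table[j], self.table[i]
        self.state = seed & 0xFF

    def step(self, x):
        self.state = self.table[(self.state + x) & 0xFF]
        return self.state

def chain_generator(seed, n, length=4):
    """Feed each cell's output to the next one in the chain; emit the last state."""
    chain = [Cell(seed + 17 * k) for k in range(length)]
    out, x = [], seed & 0xFF
    for _ in range(n):
        for cell in chain:
            x = cell.step(x)
        out.append(x)
    return out

print(chain_generator(seed=42, n=8))
```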
{"title":"Simple chain automaton random number generator for IoT devices","authors":"Pál Dömösi, Géza Horváth, Norbert Tihanyi","doi":"10.1007/s00236-023-00440-w","DOIUrl":"10.1007/s00236-023-00440-w","url":null,"abstract":"<div><p>Random numbers are very important in many fields of computer science. Generating high-quality random numbers using only basic arithmetic operations is challenging, especially for devices with limited hardware capabilities, such as Internet of Things (IoT) devices. In this paper, we present a novel pseudorandom number generator, the simple chain automaton random number generator (SCARNG), based on compositions of abstract automata. The main advantage of the presented algorithm is its simple structure that can be implemented easily for very low computing capacity IoT systems, FPGAs or GPU hardware. The generated random numbers demonstrate promising statistical behavior and satisfy the NIST statistical suite requirements, highlighting the potential of the SCARNG for practical applications.</p></div>","PeriodicalId":7189,"journal":{"name":"Acta Informatica","volume":"60 3","pages":"317 - 329"},"PeriodicalIF":0.6,"publicationDate":"2023-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00236-023-00440-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44369997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Constrained polynomial zonotopes
Pub Date: 2023-05-05. DOI: 10.1007/s00236-023-00437-5
Niklas Kochdumper, Matthias Althoff
We introduce constrained polynomial zonotopes, a novel non-convex set representation that is closed under linear map, Minkowski sum, Cartesian product, convex hull, intersection, union, and quadratic as well as higher-order maps. We show that the computational complexity of the above-mentioned set operations for constrained polynomial zonotopes is at most polynomial in the representation size. The fact that constrained polynomial zonotopes are generalizations of zonotopes, polytopes, polynomial zonotopes, Taylor models, and ellipsoids further substantiates the relevance of this new set representation. In addition, the conversion from other set representations to constrained polynomial zonotopes is at most polynomial with respect to the dimension, and we present efficient methods for representation size reduction and for enclosing constrained polynomial zonotopes by simpler set representations.
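Below is a sketch of what such a representation can look like in code, in the generator/exponent-matrix style commonly used for polynomial zonotopes: a centre c, dependent generators G with exponent matrix E, and constraint generators A with offset b and exponent matrix R, all over factors in [-1, 1]. The field names and the example set are illustrative assumptions, not the paper's exact notation. The point of the example is why a linear map is exact and cheap: only the centre and the generator matrix are transformed.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ConstrainedPolyZonotope:
    c: np.ndarray   # centre, shape (n,)
    G: np.ndarray   # dependent generators, shape (n, h)
    E: np.ndarray   # exponent matrix of G, shape (p, h), nonnegative integers
    A: np.ndarray   # constraint generators, shape (m, q)
    b: np.ndarray   # constraint offset, shape (m,)
    R: np.ndarray   # exponent matrix of A, shape (p, q)

    def linear_map(self, M):
        """Image under x -> M x: the factors, exponents and constraints are untouched."""
        return ConstrainedPolyZonotope(M @ self.c, M @ self.G,
                                       self.E, self.A, self.b, self.R)

# A 2-D example with one constraint coupling the two factors.
cpz = ConstrainedPolyZonotope(
    c=np.array([0.0, 0.0]),
    G=np.array([[1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0]]),
    E=np.array([[1, 0, 2],
                [0, 1, 0]]),
    A=np.array([[1.0, 1.0]]),
    b=np.array([0.5]),
    R=np.array([[1, 0],
                [0, 1]]),
)
rotated = cpz.linear_map(np.array([[0.0, -1.0], [1.0, 0.0]]))
print(rotated.c, rotated.G, sep="\n")
```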
{"title":"Constrained polynomial zonotopes","authors":"Niklas Kochdumper, Matthias Althoff","doi":"10.1007/s00236-023-00437-5","DOIUrl":"10.1007/s00236-023-00437-5","url":null,"abstract":"<div><p>We introduce constrained polynomial zonotopes, a novel non-convex set representation that is closed under linear map, Minkowski sum, Cartesian product, convex hull, intersection, union, and quadratic as well as higher-order maps. We show that the computational complexity of the above-mentioned set operations for constrained polynomial zonotopes is at most polynomial in the representation size. The fact that constrained polynomial zonotopes are generalizations of zonotopes, polytopes, polynomial zonotopes, Taylor models, and ellipsoids further substantiates the relevance of this new set representation. In addition, the conversion from other set representations to constrained polynomial zonotopes is at most polynomial with respect to the dimension, and we present efficient methods for representation size reduction and for enclosing constrained polynomial zonotopes by simpler set representations.</p></div>","PeriodicalId":7189,"journal":{"name":"Acta Informatica","volume":"60 3","pages":"279 - 316"},"PeriodicalIF":0.6,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00236-023-00437-5.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47177535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the undecidability and descriptional complexity of synchronized regular expressions
Pub Date: 2023-04-10. DOI: 10.1007/s00236-023-00439-3
Jingnan Xie, Harry B. Hunt III
In Freydenberger (Theory Comput Syst 53(2):159–193, 2013, https://doi.org/10.1007/s00224-012-9389-0), it is shown that the set of invalid computations of an extended Turing machine can be recognized by a synchronized regular expression, as defined in Della Penna et al. (Acta Informatica 39(1):31–70, 2003, https://doi.org/10.1007/s00236-002-0099-y). Therefore, the widely discussed predicate "= {0,1}*" is not recursively enumerable for synchronized regular expressions (SRE). In this paper, we employ a stronger form of non-recursive enumerability called productiveness and show that the set of invalid computations of a deterministic Turing machine on a single input can be recognized by a synchronized regular expression. Hence, for any polynomial-time decidable subset of SRE in which each expression generates either {0,1}* or {0,1}* \ {w} for some w ∈ {0,1}*, the predicate "= {0,1}*" is productive. Owing to the simplicity of the construction in its proof, this result can easily be applied to other classes of language descriptors. It also implies that many computational problems for SRE, especially promise problems, are productive. These problems include language class comparison problems (e.g., does a given synchronized regular expression generate a context-free language?) and several types of equivalence and containment problems (e.g., does a given synchronized regular expression generate a language equal to a fixed unbounded regular set?). In addition, we study the descriptional complexity of SRE and establish a generalized method for studying trade-offs between SRE and many classes of language descriptors.
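For reference, productiveness is the following standard notion from computability theory (stated here over the natural numbers; it transfers to sets of expressions via an effective encoding):

```latex
% Standard definition; $W_e$ denotes the $e$-th recursively enumerable set in a
% fixed effective enumeration.
A set $P \subseteq \mathbb{N}$ is \emph{productive} if there exists a total
computable function $f$ such that for every index $e$ with $W_e \subseteq P$,
we have $f(e) \in P \setminus W_e$.
% Thus $f$ effectively exhibits, for any r.e. subset of $P$, an element of $P$
% that it misses; in particular, no productive set is recursively enumerable.
```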
{"title":"On the undecidability and descriptional complexity of synchronized regular expressions","authors":"Jingnan Xie, Harry B. Hunt III","doi":"10.1007/s00236-023-00439-3","DOIUrl":"10.1007/s00236-023-00439-3","url":null,"abstract":"<div><p>In Freydenberger (Theory Comput Syst 53(2):159–193, 2013. https://doi.org/10.1007/s00224-012-9389-0), Freydenberger shows that the set of invalid computations of an extended Turing machine can be recognized by a synchronized regular expression [as defined in Della Penna et al. (Acta Informatica 39(1):31–70, 2003. https://doi.org/10.1007/s00236-002-0099-y)]. Therefore, the widely discussed predicate “<span>(={0,1}^*)</span>” is not recursively enumerable for synchronized regular expressions (SRE). In this paper, we employ a stronger form of non-recursive enumerability called <i>productiveness</i> and show that the set of invalid computations of a deterministic Turing machine on a single input can be recognized by a synchronized regular expression. Hence, for a polynomial-time decidable subset of SRE, where each expression generates either <span>({0, 1}^*)</span> or <span>({0, 1}^* -{w})</span> where <span>(w in {0, 1}^*)</span>, the predicate “<span>(={0,1}^*)</span>” is productive. This result can be easily applied to other classes of language descriptors due to the simplicity of the construction in its proof. This result also implies that many computational problems, especially promise problems, for SRE are productive. These problems include language class comparison problems (e.g., does a given synchronized regular expression generate a context-free language?), and equivalence and containment problems of several types (e.g., does a given synchronized regular expression generate a language equal to a fixed unbounded regular set?). In addition, we study the descriptional complexity of SRE. A generalized method for studying trade-offs between SRE and many classes of language descriptors is established.</p></div>","PeriodicalId":7189,"journal":{"name":"Acta Informatica","volume":"60 3","pages":"257 - 278"},"PeriodicalIF":0.6,"publicationDate":"2023-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00236-023-00439-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44264085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward a theory of program repair
Pub Date: 2023-03-27. DOI: 10.1007/s00236-023-00438-4
Besma Khaireddine, Aleksandr Zakharchenko, Matias Martinez, Ali Mili
To repair a program does not mean to make it (absolutely) correct; it only means to make it more-correct than it was originally. This is not a mundane academic distinction: given that programs typically have about a dozen faults per KLOC, it is important for program repair methods and tools to be designed so that they map an incorrect program into a more-correct, albeit still potentially incorrect, program. Yet, in the absence of a concept of relative correctness, many program repair methods and tools resort to approximations of absolute correctness; since these methods and tools are often validated against programs with a single fault, making those programs absolutely correct is indistinguishable from making them more-correct, which has helped to obscure the absence of (and the need for) relative correctness. In this paper, we propose a theory of program repair based on a concept of relative correctness. We aspire to encourage researchers in program repair to specify explicitly what concept of relative correctness their method or tool is based upon, and to validate their method or tool by proving that it does enhance relative correctness, as defined.
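One simple way to make "more-correct" concrete on a finite input domain is to compare the sets of inputs on which each program satisfies a specification; the toy check below uses that reading. It is only an illustration of the flavour of relative correctness, not the paper's formal definition, and all names in it are invented for the example.

```python
def competence_domain(program, spec, domain):
    """Inputs on which the program's output satisfies the specification."""
    return {x for x in domain if spec(x, program(x))}

def at_least_as_correct(q, p, spec, domain):
    """Q is at least as correct as P if Q is correct wherever P is."""
    return competence_domain(p, spec, domain) <= competence_domain(q, spec, domain)

# Specification: return the absolute value of x.
spec = lambda x, y: y == abs(x)
domain = range(-5, 6)

p = lambda x: x                     # buggy: correct only for x >= 0
q = lambda x: -x if x < 0 else x    # a repair: correct everywhere

print(at_least_as_correct(q, p, spec, domain))   # True: q is more-correct than p
print(at_least_as_correct(p, q, spec, domain))   # False
```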
{"title":"Toward a theory of program repair","authors":"Besma Khaireddine, Aleksandr Zakharchenko, Matias Martinez, Ali Mili","doi":"10.1007/s00236-023-00438-4","DOIUrl":"10.1007/s00236-023-00438-4","url":null,"abstract":"<div><p>To repair a program does not mean to make it (absolutely) correct; it only means to make it more-correct than it was originally. This is not a mundane academic distinction: given that programs typically have about a dozen faults per KLOC, it is important for program repair methods and tools to be designed in such a way that they map an incorrect program into a more-correct, albeit still potentially incorrect, program. Yet in the absence of a concept of relative correctness, many program repair methods and tools resort to approximations of absolute correctness; since these methods and tools are often validated against programs with a single fault, making them absolutely correct is indistinguishable from making them more-correct; this has contributed to conceal/obscure the absence of (and the need for) relative correctness. In this paper, we propose a theory of program repair based on a concept of relative correctness. We aspire to encourage researchers in program repair to explicitly specify what concept of relative correctness their method or tool is based upon; and to validate their method or tool by proving that it does enhance relative correctness, as defined.</p></div>","PeriodicalId":7189,"journal":{"name":"Acta Informatica","volume":"60 3","pages":"209 - 255"},"PeriodicalIF":0.6,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41562782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}