Dot to dot, simple or sophisticated: a survey on shape reconstruction algorithms
Pub Date: 2023-08-01 | DOI: 10.1007/s00236-023-00443-7
Farnaz Sheikhi, Behnam Zeraatkar, Sama Hanaie
Dot pattern points are samples taken from all regions of a 2D object, either inside it or on its boundary. Given a set of dot pattern points in the plane, the shape reconstruction problem seeks to find the boundary of the point set. This boundary is not mathematically well-defined; hence, a superior algorithm is one that produces the result closest to human visual perception. Designing such algorithms involves several challenges, such as independence from human supervision and the ability to detect multiple components, holes, and sharp corners. In this paper, we present a thorough review of the rich body of research on shape reconstruction, classify the ideas behind the algorithms, and highlight their pros and cons. Moreover, to overcome the barriers of implementing these algorithms, we provide an integrated application to visualize the outputs of the prominent algorithms for further comparison.
{"title":"Dot to dot, simple or sophisticated: a survey on shape reconstruction algorithms","authors":"Farnaz Sheikhi, Behnam Zeraatkar, Sama Hanaie","doi":"10.1007/s00236-023-00443-7","DOIUrl":"10.1007/s00236-023-00443-7","url":null,"abstract":"<div><p><i>Dot pattern</i> points are the samples taken from all regions of a 2D object, either inside or the boundary. Given a set of dot pattern points in the plane, the <i>shape reconstruction</i> problem seeks to find the boundaries of the points. These boundaries are not mathematically well-defined. Hence, a superior algorithm is the one which produces the result closest to the human visual perception. There are different challenges in designing these algorithms, such as the independence from human supervision, and the ability to detect multiple components, holes and sharp corners. In this paper, we present a thorough review on the rich body of research in shape reconstruction, classify the ideas behind the algorithms, and highlight their pros and cons. Moreover, to overcome the barriers of implementing these algorithms, we provide an integrated application to visualize the outputs of the prominent algorithms for further comparison.\u0000</p></div>","PeriodicalId":7189,"journal":{"name":"Acta Informatica","volume":"60 4","pages":"335 - 359"},"PeriodicalIF":0.6,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46914067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Testing membership for timed automata
Pub Date: 2023-07-17 | DOI: 10.1007/s00236-023-00442-8
Richard Lassaigne, Michel de Rougemont
Given a timed automaton which admits thick components and a timed word w, we present a tester which decides if w is in the language of the automaton or if w is \(\epsilon\)-far from the language, using finitely many samples taken from the weighted time distribution \(\mu\) associated with the input w. We introduce a distance between timed words, the timed edit distance, which generalizes the classical edit distance. A timed word w is \(\epsilon\)-far from a timed language if its relative distance to the language is greater than \(\epsilon\).
{"title":"Testing membership for timed automata","authors":"Richard Lassaigne, Michel de Rougemont","doi":"10.1007/s00236-023-00442-8","DOIUrl":"10.1007/s00236-023-00442-8","url":null,"abstract":"<div><p>Given a timed automaton which admits thick components and a timed word <i>w</i>, we present a tester which decides if <i>w</i> is in the language of the automaton or if <i>w</i> is <span>(epsilon )</span>-far from the language, using finitely many samples taken from the weighted time distribution <span>(mu )</span> associated with the input <i>w</i>. We introduce a distance between timed words, the <i>timed edit distance</i>, which generalizes the classical edit distance. A timed word <i>w</i> is <span>(epsilon )</span>-far from a timed language if its relative distance to the language is greater than <span>(epsilon )</span>.</p></div>","PeriodicalId":7189,"journal":{"name":"Acta Informatica","volume":"60 4","pages":"361 - 384"},"PeriodicalIF":0.6,"publicationDate":"2023-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71910443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simple chain automaton random number generator for IoT devices
Pub Date: 2023-06-08 | DOI: 10.1007/s00236-023-00440-w
Pál Dömösi, Géza Horváth, Norbert Tihanyi
Random numbers are very important in many fields of computer science. Generating high-quality random numbers using only basic arithmetic operations is challenging, especially for devices with limited hardware capabilities, such as Internet of Things (IoT) devices. In this paper, we present a novel pseudorandom number generator, the simple chain automaton random number generator (SCARNG), based on compositions of abstract automata. The main advantage of the presented algorithm is its simple structure that can be implemented easily for very low computing capacity IoT systems, FPGAs or GPU hardware. The generated random numbers demonstrate promising statistical behavior and satisfy the NIST statistical suite requirements, highlighting the potential of the SCARNG for practical applications.
{"title":"Simple chain automaton random number generator for IoT devices","authors":"Pál Dömösi, Géza Horváth, Norbert Tihanyi","doi":"10.1007/s00236-023-00440-w","DOIUrl":"10.1007/s00236-023-00440-w","url":null,"abstract":"<div><p>Random numbers are very important in many fields of computer science. Generating high-quality random numbers using only basic arithmetic operations is challenging, especially for devices with limited hardware capabilities, such as Internet of Things (IoT) devices. In this paper, we present a novel pseudorandom number generator, the simple chain automaton random number generator (SCARNG), based on compositions of abstract automata. The main advantage of the presented algorithm is its simple structure that can be implemented easily for very low computing capacity IoT systems, FPGAs or GPU hardware. The generated random numbers demonstrate promising statistical behavior and satisfy the NIST statistical suite requirements, highlighting the potential of the SCARNG for practical applications.</p></div>","PeriodicalId":7189,"journal":{"name":"Acta Informatica","volume":"60 3","pages":"317 - 329"},"PeriodicalIF":0.6,"publicationDate":"2023-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00236-023-00440-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44369997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Constrained polynomial zonotopes
Pub Date: 2023-05-05 | DOI: 10.1007/s00236-023-00437-5
Niklas Kochdumper, Matthias Althoff
We introduce constrained polynomial zonotopes, a novel non-convex set representation that is closed under linear map, Minkowski sum, Cartesian product, convex hull, intersection, union, and quadratic as well as higher-order maps. We show that the computational complexity of the above-mentioned set operations for constrained polynomial zonotopes is at most polynomial in the representation size. The fact that constrained polynomial zonotopes are generalizations of zonotopes, polytopes, polynomial zonotopes, Taylor models, and ellipsoids further substantiates the relevance of this new set representation. In addition, the conversion from other set representations to constrained polynomial zonotopes is at most polynomial with respect to the dimension, and we present efficient methods for representation size reduction and for enclosing constrained polynomial zonotopes by simpler set representations.
{"title":"Constrained polynomial zonotopes","authors":"Niklas Kochdumper, Matthias Althoff","doi":"10.1007/s00236-023-00437-5","DOIUrl":"10.1007/s00236-023-00437-5","url":null,"abstract":"<div><p>We introduce constrained polynomial zonotopes, a novel non-convex set representation that is closed under linear map, Minkowski sum, Cartesian product, convex hull, intersection, union, and quadratic as well as higher-order maps. We show that the computational complexity of the above-mentioned set operations for constrained polynomial zonotopes is at most polynomial in the representation size. The fact that constrained polynomial zonotopes are generalizations of zonotopes, polytopes, polynomial zonotopes, Taylor models, and ellipsoids further substantiates the relevance of this new set representation. In addition, the conversion from other set representations to constrained polynomial zonotopes is at most polynomial with respect to the dimension, and we present efficient methods for representation size reduction and for enclosing constrained polynomial zonotopes by simpler set representations.</p></div>","PeriodicalId":7189,"journal":{"name":"Acta Informatica","volume":"60 3","pages":"279 - 316"},"PeriodicalIF":0.6,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00236-023-00437-5.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47177535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the undecidability and descriptional complexity of synchronized regular expressions
Pub Date: 2023-04-10 | DOI: 10.1007/s00236-023-00439-3
Jingnan Xie, Harry B. Hunt III
In Freydenberger (Theory Comput Syst 53(2):159–193, 2013. https://doi.org/10.1007/s00224-012-9389-0), it is shown that the set of invalid computations of an extended Turing machine can be recognized by a synchronized regular expression [as defined in Della Penna et al. (Acta Informatica 39(1):31–70, 2003. https://doi.org/10.1007/s00236-002-0099-y)]. Therefore, the widely discussed predicate “\(= \{0,1\}^*\)” is not recursively enumerable for synchronized regular expressions (SRE). In this paper, we employ a stronger form of non-recursive enumerability called productiveness and show that the set of invalid computations of a deterministic Turing machine on a single input can be recognized by a synchronized regular expression. Hence, for a polynomial-time decidable subset of SRE, where each expression generates either \(\{0,1\}^*\) or \(\{0,1\}^* - \{w\}\) for some \(w \in \{0,1\}^*\), the predicate “\(= \{0,1\}^*\)” is productive. This result can be easily applied to other classes of language descriptors due to the simplicity of the construction in its proof. It also implies that many computational problems for SRE, especially promise problems, are productive. These problems include language class comparison problems (e.g., does a given synchronized regular expression generate a context-free language?) and equivalence and containment problems of several types (e.g., does a given synchronized regular expression generate a language equal to a fixed unbounded regular set?). In addition, we study the descriptional complexity of SRE and establish a generalized method for studying trade-offs between SRE and many classes of language descriptors.
{"title":"On the undecidability and descriptional complexity of synchronized regular expressions","authors":"Jingnan Xie, Harry B. Hunt III","doi":"10.1007/s00236-023-00439-3","DOIUrl":"10.1007/s00236-023-00439-3","url":null,"abstract":"<div><p>In Freydenberger (Theory Comput Syst 53(2):159–193, 2013. https://doi.org/10.1007/s00224-012-9389-0), Freydenberger shows that the set of invalid computations of an extended Turing machine can be recognized by a synchronized regular expression [as defined in Della Penna et al. (Acta Informatica 39(1):31–70, 2003. https://doi.org/10.1007/s00236-002-0099-y)]. Therefore, the widely discussed predicate “<span>(={0,1}^*)</span>” is not recursively enumerable for synchronized regular expressions (SRE). In this paper, we employ a stronger form of non-recursive enumerability called <i>productiveness</i> and show that the set of invalid computations of a deterministic Turing machine on a single input can be recognized by a synchronized regular expression. Hence, for a polynomial-time decidable subset of SRE, where each expression generates either <span>({0, 1}^*)</span> or <span>({0, 1}^* -{w})</span> where <span>(w in {0, 1}^*)</span>, the predicate “<span>(={0,1}^*)</span>” is productive. This result can be easily applied to other classes of language descriptors due to the simplicity of the construction in its proof. This result also implies that many computational problems, especially promise problems, for SRE are productive. These problems include language class comparison problems (e.g., does a given synchronized regular expression generate a context-free language?), and equivalence and containment problems of several types (e.g., does a given synchronized regular expression generate a language equal to a fixed unbounded regular set?). In addition, we study the descriptional complexity of SRE. A generalized method for studying trade-offs between SRE and many classes of language descriptors is established.</p></div>","PeriodicalId":7189,"journal":{"name":"Acta Informatica","volume":"60 3","pages":"257 - 278"},"PeriodicalIF":0.6,"publicationDate":"2023-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00236-023-00439-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44264085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward a theory of program repair
Pub Date: 2023-03-27 | DOI: 10.1007/s00236-023-00438-4
Besma Khaireddine, Aleksandr Zakharchenko, Matias Martinez, Ali Mili
To repair a program does not mean to make it (absolutely) correct; it only means to make it more-correct than it was originally. This is not a mundane academic distinction: given that programs typically have about a dozen faults per KLOC, it is important for program repair methods and tools to be designed in such a way that they map an incorrect program into a more-correct, albeit still potentially incorrect, program. Yet in the absence of a concept of relative correctness, many program repair methods and tools resort to approximations of absolute correctness; since these methods and tools are often validated against programs with a single fault, making the programs absolutely correct is indistinguishable from making them more-correct, which has helped to obscure the absence of (and the need for) relative correctness. In this paper, we propose a theory of program repair based on a concept of relative correctness. We aspire to encourage researchers in program repair to explicitly specify what concept of relative correctness their method or tool is based upon, and to validate their method or tool by proving that it does enhance relative correctness, as defined.
{"title":"Toward a theory of program repair","authors":"Besma Khaireddine, Aleksandr Zakharchenko, Matias Martinez, Ali Mili","doi":"10.1007/s00236-023-00438-4","DOIUrl":"10.1007/s00236-023-00438-4","url":null,"abstract":"<div><p>To repair a program does not mean to make it (absolutely) correct; it only means to make it more-correct than it was originally. This is not a mundane academic distinction: given that programs typically have about a dozen faults per KLOC, it is important for program repair methods and tools to be designed in such a way that they map an incorrect program into a more-correct, albeit still potentially incorrect, program. Yet in the absence of a concept of relative correctness, many program repair methods and tools resort to approximations of absolute correctness; since these methods and tools are often validated against programs with a single fault, making them absolutely correct is indistinguishable from making them more-correct; this has contributed to conceal/obscure the absence of (and the need for) relative correctness. In this paper, we propose a theory of program repair based on a concept of relative correctness. We aspire to encourage researchers in program repair to explicitly specify what concept of relative correctness their method or tool is based upon; and to validate their method or tool by proving that it does enhance relative correctness, as defined.</p></div>","PeriodicalId":7189,"journal":{"name":"Acta Informatica","volume":"60 3","pages":"209 - 255"},"PeriodicalIF":0.6,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41562782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On partial information retrieval: the unconstrained 100 prisoner problem
Pub Date: 2022-12-30 | DOI: 10.1007/s00236-022-00436-y
Ivano Lodato, Snehal M. Shekatkar, Tian An Wong
We consider a generalization of the classical 100 prisoner problem and its variant involving empty boxes, whereby winning probabilities for a team depend on the number of attempts as well as on the number of winners. We call this the unconstrained 100 prisoner problem. After introducing the three main classes of strategies, we define a variety of ‘hybrid’ strategies and quantify their winning efficiency. Whenever analytic results are not available, we make use of Monte Carlo simulations to estimate the winning probabilities with high accuracy. Based on the results obtained, we conjecture that all strategies, except for the strategy maximizing the winning probability of the classical (constrained) problem, converge to the random strategy under weak conditions on the number of players or empty boxes. We conclude by commenting on the possible applications of our results in understanding processes of information retrieval, such as “memory” in living organisms.
{"title":"On partial information retrieval: the unconstrained 100 prisoner problem","authors":"Ivano Lodato, Snehal M. Shekatkar, Tian An Wong","doi":"10.1007/s00236-022-00436-y","DOIUrl":"10.1007/s00236-022-00436-y","url":null,"abstract":"<div><p>We consider a generalization of the classical 100 prisoner problem and its variant, involving empty boxes, whereby winning probabilities for a team depend on the number of attempts, as well as on the number of winners. We call this the unconstrained 100 prisoner problem. After introducing the 3 main classes of strategies, we define a variety of ‘hybrid’ strategies and quantify their winning-efficiency. Whenever analytic results are not available, we make use of Monte Carlo simulations to estimate with high accuracy the winning probabilities. Based on the results obtained, we conjecture that <i>all</i> strategies, except for the strategy maximizing the winning probability of the classical (constrained) problem, converge to the random strategy under weak conditions on the number of players or empty boxes. We conclude by commenting on the possible applications of our results in understanding processes of information retrieval, such as “memory” in living organisms.\u0000</p></div>","PeriodicalId":7189,"journal":{"name":"Acta Informatica","volume":"60 2","pages":"179 - 208"},"PeriodicalIF":0.6,"publicationDate":"2022-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49279314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Decentralized runtime verification of message sequences in message-based systems
Pub Date: 2022-10-10 | DOI: 10.1007/s00236-022-00435-z
Mahboubeh Samadi, Fatemeh Ghassemi, Ramtin Khosravi
Message-based systems are usually distributed in nature, and distributed components collaborate via asynchronous message passing. In some cases, particular orderings among the messages may lead to violation of desired properties such as data confidentiality. Due to the absence of a global clock and the use of off-the-shelf components, such unwanted orderings can be neither statically inspected nor verified by revising component code at design time. We propose a choreography-based runtime verification algorithm that, given an automata-based specification of unwanted message sequences, detects the formation of those sequences. Our algorithm is fully decentralized in the sense that each component is equipped with a monitor, as opposed to having a centralized monitor, and the specification of the unwanted sequences is decomposed among the monitors. In this way, when a component sends a message, its monitor inspects whether there is a possibility for the formation of unwanted message sequences. As there is no global clock in message-based systems, monitors cannot determine the exact ordering among messages. In such cases, they decide conservatively and declare a sequence formation even if that sequence has not been formed. We prevent such conservative declarations in our algorithm as much as possible and then characterize its operational guarantees. We evaluate the efficiency and scalability of our algorithm in terms of the communication overhead, the memory consumption, and the latency of the result declaration through simulation.
{"title":"Decentralized runtime verification of message sequences in message-based systems","authors":"Mahboubeh Samadi, Fatemeh Ghassemi, Ramtin Khosravi","doi":"10.1007/s00236-022-00435-z","DOIUrl":"10.1007/s00236-022-00435-z","url":null,"abstract":"<div><p>Message-based systems are usually distributed in nature, and distributed components collaborate via asynchronous message passing. In some cases, particular ordering among the messages may lead to violation of the desired properties such as data confidentiality. Due to the absence of a global clock and usage of off-the-shelf components, such unwanted orderings can be neither statically inspected nor verified by revising their codes at design time. We propose a choreography-based runtime verification algorithm that given an automata-based specification of unwanted message sequences detects the formation of the unwanted sequences. Our algorithm is fully decentralized in the sense that each component is equipped with a monitor, as opposed to having a centralized monitor, and also the specification of the unwanted sequences is decomposed among monitors. In this way, when a component sends a message, its monitor inspects if there is a possibility for the formation of unwanted message sequences. As there is no global clock in message-based systems, monitors cannot determine the exact ordering among messages. In such cases, they decide conservatively and declare a sequence formation even if that sequence has not been formed. We prevent such conservative declarations in our algorithm as much as possible and then characterize its operational guarantees. We evaluate the efficiency and scalability of our algorithm in terms of the communication overhead, the memory consumption, and the latency of the result declaration through simulation.</p></div>","PeriodicalId":7189,"journal":{"name":"Acta Informatica","volume":"60 2","pages":"145 - 178"},"PeriodicalIF":0.6,"publicationDate":"2022-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46797326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploration of k-edge-deficient temporal graphs
Pub Date: 2022-08-27 | DOI: 10.1007/s00236-022-00421-5
Thomas Erlebach, Jakob T. Spooner
A temporal graph with lifetime L is a sequence of L graphs \(G_1, \ldots, G_L\), called layers, all of which have the same vertex set V but can have different edge sets. The underlying graph is the graph with vertex set V that contains all the edges that appear in at least one layer. The temporal graph is always connected if each layer is a connected graph, and it is k-edge-deficient if each layer contains all except at most k edges of the underlying graph. For a given start vertex s, a temporal exploration is a temporal walk that starts at s, traverses at most one edge in each layer, and visits all vertices of the temporal graph. We show that always-connected, k-edge-deficient temporal graphs with sufficient lifetime can always be explored in \(O(kn \log n)\) time steps. We also construct always-connected, k-edge-deficient temporal graphs for which any exploration requires \(\varOmega(n \log k)\) time steps. For always-connected, 1-edge-deficient temporal graphs, we show that O(n) time steps suffice for temporal exploration.
{"title":"Exploration of k-edge-deficient temporal graphs","authors":"Thomas Erlebach, Jakob T. Spooner","doi":"10.1007/s00236-022-00421-5","DOIUrl":"10.1007/s00236-022-00421-5","url":null,"abstract":"<div><p>A temporal graph with lifetime <i>L</i> is a sequence of <i>L</i> graphs <span>(G_1, ldots ,G_L)</span>, called layers, all of which have the same vertex set <i>V</i> but can have different edge sets. The underlying graph is the graph with vertex set <i>V</i> that contains all the edges that appear in at least one layer. The temporal graph is always connected if each layer is a connected graph, and it is <i>k</i>-edge-deficient if each layer contains all except at most <i>k</i> edges of the underlying graph. For a given start vertex <i>s</i>, a temporal exploration is a temporal walk that starts at <i>s</i>, traverses at most one edge in each layer, and visits all vertices of the temporal graph. We show that always-connected, <i>k</i>-edge-deficient temporal graphs with sufficient lifetime can always be explored in <span>(O(kn log n))</span> time steps. We also construct always-connected, <i>k</i>-edge-deficient temporal graphs for which any exploration requires <span>(varOmega (n log k))</span> time steps. For always-connected, 1-edge-deficient temporal graphs, we show that <i>O</i>(<i>n</i>) time steps suffice for temporal exploration.</p></div>","PeriodicalId":7189,"journal":{"name":"Acta Informatica","volume":"59 4","pages":"387 - 407"},"PeriodicalIF":0.6,"publicationDate":"2022-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00236-022-00421-5.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42775206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}