A new solution for the Byzantine generals problem
Rüdiger Reischuk
Pub Date: 1985-01-01 (Epub: 2005-05-05). DOI: 10.1016/S0019-9958(85)80042-5
We define a new model for algorithms that reach Byzantine agreement. It allows one to measure complexity more accurately, to differentiate between processor faults, and to include communication link failures. A deterministic algorithm is presented that exhibits early stopping by phase 2f + 3 in the worst case, where f is the actual number of faults, under less stringent conditions than those of previous algorithms. Its average performance can also be analysed easily under realistic assumptions about the random distribution of faults; we show that it stops with high probability after a small number of phases.
Information and Control 64(1), 1985, pp. 23–42.
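Reischuk's algorithm itself is not reproduced in the abstract. As a hedged illustration of what phased Byzantine agreement looks like, here is a sketch of the later and simpler phase-king protocol of Berman and Garay (n > 3f processors, f + 1 phases), with a deterministic `adversary` function standing in for the faulty processors; the message model and all names here are assumptions of this simulation, not the paper's construction.

```python
# Sketch of the phase-king Byzantine agreement protocol (Berman-Garay).
# NOT Reischuk's algorithm: a later, simpler protocol shown only to
# illustrate phased agreement. Requires n > 3f; runs f + 1 phases.

def adversary(sender, receiver, phase):
    """Hypothetical deterministic Byzantine behaviour: arbitrary bits."""
    return (receiver + phase) % 2

def phase_king(n, f, initial, faulty):
    """initial: dict pid -> bit for honest pids; faulty: set of pids."""
    honest = [p for p in range(n) if p not in faulty]
    v = dict(initial)
    for phase in range(f + 1):          # kings 0..f: at least one is honest
        king = phase
        # Round 1: everyone broadcasts its current value.
        state = {}
        for p in honest:
            msgs = [adversary(q, p, phase) if q in faulty else v[q]
                    for q in range(n)]
            ones = sum(msgs)
            maj, mult = (1, ones) if ones > n - ones else (0, n - ones)
            state[p] = (maj, mult)
        # Round 2: the phase's king broadcasts its majority value;
        # a processor keeps its own majority only if the count was large.
        for p in honest:
            kv = adversary(king, p, phase) if king in faulty else state[king][0]
            maj, mult = state[p]
            v[p] = maj if mult > n // 2 + f else kv
    return {p: v[p] for p in honest}
```

With n = 4 and f = 1, the honest processors always finish with equal values, and a unanimous honest input is preserved.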
Propositional dynamic logic of flowcharts
D. Harel, R. Sherman
Pub Date: 1985-01-01 (Epub: 2005-05-05). DOI: 10.1016/S0019-9958(85)80047-4
Following a suggestion of Pratt, we consider propositional dynamic logic in which programs are nondeterministic finite automata over atomic programs and tests (i.e., flowcharts), rather than regular expressions. While the resulting version of PDL, call it APDL, is clearly equivalent in expressive power to PDL, it is also (in the worst case) exponentially more succinct. In particular, deciding its validity problem by reducing it to that of PDL leads to a double exponential time procedure, although PDL itself is decidable in exponential time. We present an elementary combined proof of the completeness of a simple axiom system for APDL and decidability of the validity problem in exponential time. The results are thus stronger than those for PDL, since PDL can be encoded in APDL with no additional cost, and the proofs simpler, since induction on the structure of programs is virtually eliminated. Our axiom system for APDL relates to the PDL system just as Floyd's proof method for partial correctness relates to Hoare's.
Information and Control 64(1), 1985, pp. 119–135.
Fast triangulation of the plane with respect to simple polygons
Stefan Hertel, Kurt Mehlhorn
Pub Date: 1985-01-01 (Epub: 2005-05-05). DOI: 10.1016/S0019-9958(85)80044-9
Let P1,…, Pk be pairwise non-intersecting simple polygons with a total of n vertices and s start vertices. A start vertex is a vertex both of whose neighbors have larger x coordinates. We present an algorithm for triangulating P1,…, Pk in time O(n + s log s). Here s may be viewed as a measure of non-convexity; in particular, s is always bounded by the number of concave angles plus 1, and is usually much smaller. We also describe two new applications of triangulation. Given a triangulation of the plane with respect to a set of k pairwise non-intersecting simple polygons, the intersection of this set with a convex polygon Q can be computed in time linear in the combined number of vertices of the k + 1 polygons; such a result was previously known only for two convex polygons. The other application improves the bound on the number of convex parts into which a polygon can be decomposed.
Information and Control 64(1), 1985, pp. 52–76.
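The O(n + s log s) algorithm of the paper is involved; as a baseline that makes "triangulation" concrete, here is a hedged sketch of the classical O(n²) ear-clipping triangulation of a single counterclockwise simple polygon. This is not the authors' method, and the function names are ours.

```python
# Ear-clipping triangulation of a simple polygon given in CCW order.
# A classical O(n^2) baseline -- NOT the O(n + s log s) algorithm of
# Hertel and Mehlhorn; shown only to make "triangulation" concrete.

def cross(a, b, c):
    """Twice the signed area of triangle (a, b, c); > 0 iff CCW."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def in_triangle(p, a, b, c):
    """True iff p lies in the closed CCW triangle (a, b, c)."""
    return cross(a, b, p) >= 0 and cross(b, c, p) >= 0 and cross(c, a, p) >= 0

def triangulate(poly):
    idx = list(range(len(poly)))
    tris = []
    while len(idx) > 3:
        n = len(idx)
        for k in range(n):
            a, b, c = (poly[idx[(k - 1) % n]], poly[idx[k]],
                       poly[idx[(k + 1) % n]])
            if cross(a, b, c) <= 0:       # reflex or degenerate corner
                continue
            ear = {(k - 1) % n, k, (k + 1) % n}
            if any(in_triangle(poly[idx[j]], a, b, c)
                   for j in range(n) if j not in ear):
                continue                   # some other vertex blocks this ear
            tris.append((a, b, c))         # clip the ear
            del idx[k]
            break
    tris.append(tuple(poly[i] for i in idx))
    return tris
```

A simple polygon with n vertices always yields n − 2 triangles, and their areas sum to the polygon's area.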
A taxonomy of problems with fast parallel algorithms
Stephen A. Cook
Pub Date: 1985-01-01 (Epub: 2005-05-05). DOI: 10.1016/S0019-9958(85)80041-3
The class NC consists of problems solvable very fast (in time polynomial in log n) in parallel with a feasible (polynomial) number of processors. Many natural problems in NC are known; in this paper an attempt is made to identify important subclasses of NC and give interesting examples in each subclass. The notion of NC1-reducibility is introduced and used throughout (problem R is NC1-reducible to problem S if R can be solved with uniform log-depth circuits using oracles for S). Problems complete with respect to this reducibility are given for many of the subclasses of NC. A general technique, the “parallel greedy algorithm,” is identified and used to show that finding a minimum spanning forest of a graph is reducible to the graph accessibility problem and hence is in NC2 (solvable by uniform Boolean circuits of depth O(log2 n) and polynomial size). The class LOGCFL is given a new characterization in terms of circuit families. The class DET of problems reducible to integer determinants is defined and many examples are given. A new problem complete for deterministic polynomial time is given, namely, finding the lexicographically first maximal clique in a graph. This paper is a revised version of S. A. Cook (1983, in “Proceedings 1983 Intl. Found. Comput. Theory Conf.,” Lecture Notes in Computer Science Vol. 158, pp. 78–93, Springer-Verlag, Berlin/New York).
Information and Control 64(1), 1985, pp. 2–22.
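The "parallel greedy algorithm" for minimum spanning forests can be made concrete with Borůvka's rounds: every component simultaneously picks its lightest incident edge, so the number of components at least halves each round and O(log n) rounds suffice, which is the structure an NC2 bound exploits. Below is a sequential simulation under the assumption of distinct edge weights; it illustrates the round structure only and is not Cook's reduction to graph accessibility.

```python
# Sequential simulation of Boruvka's "parallel greedy" rounds for the
# minimum spanning forest. Each round, every component selects its
# lightest incident edge; components at least halve per round, so
# O(log n) rounds suffice. Assumes distinct edge weights.

def boruvka_msf(n, edges):
    """edges: list of (weight, u, v). Returns the list of forest edges."""
    parent = list(range(n))

    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    forest = []
    while True:
        cheapest = {}                  # component root -> lightest edge
        for e in edges:
            w, u, v = e
            ru, rv = find(u), find(v)
            if ru == rv:
                continue               # internal edge, ignore
            for r in (ru, rv):
                if r not in cheapest or e < cheapest[r]:
                    cheapest[r] = e
        if not cheapest:               # every edge is internal: done
            break
        for w, u, v in cheapest.values():
            ru, rv = find(u), find(v)
            if ru != rv:               # the same edge may be picked twice
                parent[ru] = rv
                forest.append((w, u, v))
    return forest
```

On a 4-cycle with one chord, the forest has the three lightest tree edges.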
Hash functions for priority queues
M. Ajtai, M. Fredman, J. Komlós
Pub Date: 1984-12-01 (Epub: 2005-05-02). DOI: 10.1016/S0019-9958(84)80015-7
The complexity of priority queue operations is analyzed with respect to the cell probe computational model of A. Yao (J. Assoc. Comput. Mach.28, No. 3 (1981), 615–628). A method utilizing families of hash functions is developed which permits priority queue operations to be implemented in constant worst-case time provided that a size constraint is satisfied. The minimum necessary size of a family of hash functions for computing the rank function is estimated and contrasted with the minimum size required for perfect hashing.
Information and Control 63(3), 1984, pp. 217–225.
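The paper's hash-family construction is not reproduced in the abstract. As a toy illustration of why a size constraint can buy constant-time priority queue operations, here is a queue over a small integer universe kept in one machine word, where insert, delete, and find-min are each a constant number of word operations; this is an assumption-laden stand-in, not the cell-probe construction of the paper.

```python
# Toy priority queue over a small integer universe [0, w) stored as one
# machine word: insert/delete/find_min are each a constant number of
# word operations. Illustrative only -- NOT the hash-function-family
# construction of Ajtai, Fredman, and Komlos.

class BitPQ:
    def __init__(self):
        self.bits = 0

    def insert(self, x):
        self.bits |= 1 << x

    def delete(self, x):
        self.bits &= ~(1 << x)

    def find_min(self):
        if self.bits == 0:
            return None
        lowest = self.bits & -self.bits   # isolate the lowest set bit
        return lowest.bit_length() - 1
```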
Author index for volume 63
Pub Date: 1984-12-01 (Epub: 2005-05-02). DOI: 10.1016/S0019-9958(84)80016-9
Information and Control 63(3), 1984, p. 226.
An easy proof of Greibach normal form
Andrzej Ehrenfeucht, Grzegorz Rozenberg
Pub Date: 1984-12-01 (Epub: 2005-05-02). DOI: 10.1016/S0019-9958(84)80013-3
We present an algorithm which, given an arbitrary Λ-free context-free grammar, produces an equivalent context-free grammar in 2 Greibach normal form. An upper bound on the size of the resulting grammar in terms of the size of the initially given grammar is provided. Our algorithm consists of an elementary construction, yet the upper bound on the size of the resulting grammar is no larger than the bounds known for other algorithms that convert context-free grammars into equivalent context-free grammars in Greibach normal form.
Information and Control 63(3), 1984, pp. 190–199.
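The authors' elementary construction is not spelled out in the abstract. The best-known ingredient of the classical route to Greibach normal form is removal of immediate left recursion, sketched below; this is the standard textbook step, not the paper's construction, and the grammar encoding and function name are ours.

```python
# Classical removal of immediate left recursion, the textbook first step
# toward Greibach normal form. NOT the elementary construction of
# Ehrenfeucht and Rozenberg; shown only to make the transformation
# concrete. Grammar encoding: dict nonterminal -> list of productions,
# each production a tuple of symbols; () is the empty word.

def eliminate_left_recursion(grammar, a):
    """Replace A -> A alpha | beta with A -> beta A', A' -> alpha A' | eps."""
    rec = [p[1:] for p in grammar[a] if p and p[0] == a]
    base = [p for p in grammar[a] if not p or p[0] != a]
    if not rec:
        return grammar
    ap = a + "'"                               # fresh nonterminal A'
    out = dict(grammar)
    out[a] = [b + (ap,) for b in base]
    out[ap] = [r + (ap,) for r in rec] + [()]  # () is the empty word
    return out
```

For example, the grammar A → A a | b becomes A → b A′, A′ → a A′ | ε, which generates the same language b a*.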
An algorithm for covering polygons with rectangles
D.S. Franzblau, D.J. Kleitman
Pub Date: 1984-12-01 (Epub: 2005-05-02). DOI: 10.1016/S0019-9958(84)80012-1
Decomposing a polygon into simple shapes is a basic problem in computational geometry, with applications in pattern recognition and integrated circuit manufacture. Here we examine the special case of covering a rectilinear polygon (or polyomino) with the minimum number of rectangles, with overlapping allowed. The problem is NP-hard. However, we give here an O(v2) algorithm for constructing a minimum rectangle cover when the polygon is vertically convex. (Here v is the number of vertices.) The problem is first reduced to a 1-dimensional interval “basis” problem. In showing that our algorithm produces an optimal cover, we give a new proof of a minimum-basis/maximum-independent-set duality theorem first proved by E. Györi (J. Combin. Theory Ser. B 37, No. 1, 1–9).
Information and Control 63(3), 1984, pp. 164–189.
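The reduction in the paper is to a 1-dimensional interval "basis" problem. For the flavour of such 1-D interval arguments, here is a hedged sketch of the classical greedy covering of a segment by the minimum number of given intervals; this is a standard greedy illustration, not the Györi duality used in the paper, and the function name is ours.

```python
# Greedy minimum cover of the segment [s, t] by intervals from a given
# set: repeatedly take an interval that starts at or before the current
# frontier and reaches furthest right. A classical 1-D interval greedy,
# shown for flavour -- NOT the interval "basis" duality of the paper.

def min_interval_cover(intervals, s, t):
    ivs = sorted(intervals)            # by left endpoint
    cover, cur, i = [], s, 0
    while cur < t:
        best = None
        while i < len(ivs) and ivs[i][0] <= cur:
            if best is None or ivs[i][1] > best[1]:
                best = ivs[i]          # furthest-reaching usable interval
            i += 1
        if best is None or best[1] <= cur:
            return None                # gap: nothing extends the cover
        cover.append(best)
        cur = best[1]
    return cover
```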
Dynamic C-oriented polygonal intersection searching
Ralf Hartmut Güting
Pub Date: 1984-12-01 (Epub: 2005-05-02). DOI: 10.1016/S0019-9958(84)80011-X
A set of polygons is called c-oriented if the edges of all polygons are oriented in a constant number of previously defined directions. The intersection searching problem is studied for such objects, namely: given a set of c-oriented polygons P and a c-oriented query polygon q, find all polygons in P that intersect q. It is shown that this problem can be solved in O(log2 n + t) time with O(n log n) space and O(n log2 n) preprocessing, where n is the cardinality of P and t the number of answers to a query. Furthermore, the solution is extended to the cases in which P is a semidynamic or dynamic set of polygons. Whereas planar intersection searching can be carried out more efficiently for orthogonal objects (e.g., rectangles), it is expensive for arbitrary polygons. This suggests using the c-oriented solution in appropriate areas of application, for instance in VLSI design.
Information and Control 63(3), 1984, pp. 143–163.