Transformation technique of algebraic specification
Liqun Jin, Jiahua Qian
Pub Date: 1990-10-31 | DOI: 10.1109/CMPSAC.1990.139463

A formal transformation process for implementing algebraic specifications is introduced which relaxes the linearity restriction on algebraic axioms. The process can handle nonlinear algebraic specifications, making the expression of algebraic specifications more flexible and richer. From this formal process, different implementation systems can be derived formally for different programming languages. A Pascal-based transformation system is described that transforms an EAS (embedded algebraic specification) into a Pascal program. This transformation system was developed on a MicroVAX II GPX workstation running Ultrix.
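The paper's own notation (EAS) and target language (Pascal) are not reproduced here, but the core idea of implementing an algebraic specification can be sketched: each axiom of an abstract data type becomes an executable rewrite rule. The Stack type and its axioms below are a standard textbook example, not taken from the paper. (A nonlinear axiom would be one like eq(x, x) = true, where a variable repeats on the left-hand side; the linearity restriction forbids such repetition.)

```python
# Illustrative sketch only: a classic Stack specification implemented by
# turning each axiom into a function clause.
#
#   pop(push(s, x))   = s
#   top(push(s, x))   = x
#   empty(new())      = true
#   empty(push(s, x)) = false

def new():
    """Constructor: the empty stack term."""
    return ("new",)

def push(s, x):
    """Constructor: build a larger stack term."""
    return ("push", s, x)

def pop(s):
    if s[0] == "push":           # axiom: pop(push(s, x)) = s
        return s[1]
    raise ValueError("pop(new()) is left unspecified by the axioms")

def top(s):
    if s[0] == "push":           # axiom: top(push(s, x)) = x
        return s[2]
    raise ValueError("top(new()) is left unspecified by the axioms")

def empty(s):
    # axioms: empty(new()) = true, empty(push(s, x)) = false
    return s[0] == "new"
```

A transformation system of the kind the paper describes would generate clauses like these mechanically from the axioms, in Pascal rather than Python.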
Application-level software self-balancing
C. King, T. Shiau, Chin-Piao Chan
Pub Date: 1990-10-31 | DOI: 10.1109/CMPSAC.1990.139352

The authors propose a hybrid static/dynamic scheduling scheme for distributed-memory multiple-processor systems, e.g., distributed systems and multicomputers. Under this self-balancing scheme, computations are first scheduled statically and then dynamically redistributed to adapt to the run-time environment. The rescheduling operations are directed by a number of program parameters, which can be accessed directly from within the program and serve as processor load indices. As a result, the self-balancing operations can be implemented entirely at the application level, requiring minimal system support. To illustrate the concept, the self-balancing technique is applied to asynchronous iterative methods. Various design tradeoffs are discussed, and preliminary performance results on an NCUBE multicomputer are presented.
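The hybrid scheme can be sketched in miniature. The partitioning and rebalancing heuristics below are our own illustration, not the authors' algorithm: work is first split statically into equal blocks, then redistributed in proportion to load indices that the application itself maintains.

```python
# Hypothetical sketch of hybrid static/dynamic scheduling at the
# application level. "load_index[p]" plays the role of the program
# parameters the abstract mentions: a processing rate each process can
# read directly, with no operating-system support.

def static_schedule(tasks, nprocs):
    """Initial static schedule: contiguous blocks of (near-)equal size."""
    size = -(-len(tasks) // nprocs)  # ceiling division
    return [tasks[i * size:(i + 1) * size] for i in range(nprocs)]

def rebalance(schedule, load_index):
    """Redistribute all tasks in proportion to observed processing rates."""
    tasks = [t for block in schedule for t in block]
    total = sum(load_index)
    shares = [round(len(tasks) * rate / total) for rate in load_index]
    shares[-1] = len(tasks) - sum(shares[:-1])  # absorb rounding error
    out, start = [], 0
    for s in shares:
        out.append(tasks[start:start + s])
        start += s
    return out
```

For instance, twelve iterations of an asynchronous iterative method over three processors start as blocks of four each; if one processor reports twice the processing rate of the others, rebalancing yields blocks of sizes 3, 3, and 6.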
The hyper-geometric distribution software reliability growth model (HGDM): precise formulation and applicability
R. Jacoby, Y. Tohma
Pub Date: 1990-10-31 | DOI: 10.1109/CMPSAC.1990.139307

The hypergeometric distribution is used to estimate the number of faults initially residual in software at the beginning of the test-and-debug phase. The hypergeometric distribution growth model (HGD model) is well suited to fitting the observed growth curves of the accumulated number of detected faults. The advantage of the proposed model is its applicability to all kinds of observed data: with a single model, exponential growth curves as well as S-shaped growth curves can be estimated. The precise formulation of the HGD model is presented, and its exact relationship to the NHPP Goel-Okumoto growth model and the delayed S-shaped growth model is shown. With the introduction of a variable fault detection rate, the goodness of fit of the estimated growth curve to the curve of real observed faults increases significantly. Several examples of applying the model to real observed data are presented.
A semi-adaptive DCT compression method that uses minimal space
Rosalee Nerheim
Pub Date: 1990-10-31 | DOI: 10.1109/CMPSAC.1990.139383

Adaptive DCT (discrete cosine transform) compression methods outperform fixed DCT methods in image quality, but they need a large amount of scratch space for the transformed image file. The author proposes a semi-adaptive DCT compression method that outperforms fixed DCT compression while using only a small amount of scratch space. The method was designed for use in an electronic still camera being developed by NASA. Simulation results show that at 2.25 bits per pixel, the SNR (signal-to-noise ratio) of the semi-adaptive method ranged from 35 dB to 42 dB, compared with a range of 34 dB to 42 dB for the fixed DCT method. At 3 bits per pixel, the semi-adaptive method has an SNR that ranges from 40 dB to 47 dB.
Hybrid relations for database schema evolution
Junichi Takahashi
Pub Date: 1990-10-31 | DOI: 10.1109/CMPSAC.1990.139410

The author describes hybrid relations in relational databases, which allow existing relations to be altered by the addition of new attributes without reorganization of the database schema. The values of new attributes with respect to an existing relation are stored separately from the relation as a set of triples of tuple identifier, attribute name, and value. At query time, a hybrid relation, which has only the attributes requested in a query, is derived virtually by combining the relation and this set of triples. A relation can be reorganized by upgrading its attribute values from these triples. The hybrid relation is defined as an algebraic expression, and equivalent expressions of a query on the hybrid relations are shown for efficient query processing.
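The mechanism described in the abstract can be illustrated directly. The relation, attribute names, and helper below are hypothetical examples, not the paper's algebra: new-attribute values live beside the base relation as (tuple_id, attribute, value) triples, and a virtual "hybrid relation" with exactly the requested attributes is assembled at query time.

```python
# Illustrative sketch of hybrid relations. The EMP relation, its tuples,
# and the attribute names are invented for the example.

base = {  # tuple_id -> stored attributes of the original relation
    1: {"name": "Sato", "dept": "sales"},
    2: {"name": "Mori", "dept": "design"},
}

triples = [  # values of attributes added after the schema was created
    (1, "email", "sato@example.com"),
    (2, "email", "mori@example.com"),
    (1, "phone", "555-0101"),
]

def hybrid(relation, triples, attrs):
    """Derive a virtual relation holding exactly the requested attributes."""
    added = {}
    for tid, attr, value in triples:
        added.setdefault(tid, {})[attr] = value
    return {
        tid: {a: {**row, **added.get(tid, {})}.get(a) for a in attrs}
        for tid, row in relation.items()
    }

print(hybrid(base, triples, ["name", "email"]))
```

Reorganization in this picture simply means merging the triples into the stored rows once and discarding them, which is why the scheme defers any physical restructuring until it pays off.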
A new 3D-border algorithm by neighbor finding
Shin-Nine Yang, T. Lin
Pub Date: 1990-10-31 | DOI: 10.1109/CMPSAC.1990.139382

The authors propose a new algorithm for finding the three-dimensional border of linear octrees stored in a one-dimensional array. A simple method is given for checking whether an octant is a border octant; the border-finding procedure can then be carried out node by node in location-code order. To improve the performance of the algorithm, a new and efficient neighbor-finding technique is proposed, whose time complexity is analyzed and shown to be O(1) on average. Compared with existing border algorithms, the proposed algorithm has the following advantages: (1) no preprocessing is required to arrange the input data according to grouping factors; (2) the border found is already a sorted sequence of border voxels, so no extra sorting is required; and (3) the average time complexity improves from O(N log N) to O(N), where N is the number of nodes in the linear octree.
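The paper's specific O(1) technique is not reproduced here, but the general shape of neighbor finding in a linear octree can be sketched. Nodes are addressed by interleaved location (Morton) codes, and an equal-size face neighbor is found by de-interleaving the code into (x, y, z), stepping one cell along an axis, and re-interleaving; the depth constant below is an assumption of the sketch.

```python
# Generic sketch of location-code neighbor finding (not the paper's exact
# method). DEPTH is an assumed resolution: 2**DEPTH cells per axis.

DEPTH = 3

def decode(code):
    """Split an interleaved location code into (x, y, z) cell coordinates."""
    x = y = z = 0
    for i in range(DEPTH):
        x |= ((code >> (3 * i)) & 1) << i
        y |= ((code >> (3 * i + 1)) & 1) << i
        z |= ((code >> (3 * i + 2)) & 1) << i
    return x, y, z

def encode(x, y, z):
    """Interleave (x, y, z) bits back into a location code."""
    code = 0
    for i in range(DEPTH):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

def neighbor(code, axis, step):
    """Equal-size face neighbor along one axis, or None at the universe border."""
    coords = list(decode(code))
    coords[axis] += step
    if not 0 <= coords[axis] < 2 ** DEPTH:
        return None
    return encode(*coords)
```

In a border test of the kind the abstract describes, a voxel whose neighbor along some axis is absent from the octree (or is None, i.e. outside the universe) exposes a border face, and visiting voxels in location-code order yields the border already sorted.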
Methods for distributed join processing using a voice-data protocol
Kirk Scott, W. Perrizo
Pub Date: 1990-10-31 | DOI: 10.1109/CMPSAC.1990.139462

The authors consider the problem of optimizing join query processing in a database distributed over a bus-type local area network that uses the carrier sense multiple access with collision detection (CSMA/CD) access protocol. New algorithms are proposed that use a compatible access protocol, movable slot time division multiplexing (MSTDM), to achieve improved performance over existing algorithms. Analysis of example cases shows the performance potential of MSTDM. It is concluded that the proposed algorithms explicitly account for packetization and other costs unaccounted for in existing algorithms, and that if the overhead of the CSMA/CD and MSTDM algorithms is comparable, MSTDM's performance characteristics translate directly into improved distributed join processing.
An implementation of software tools for replay and partial replay of Concurrent-C programs
Jason R. Lee, Kuo-Hua Wang, C. Chou
Pub Date: 1990-10-31 | DOI: 10.1109/CMPSAC.1990.139336

A set of debugging tools has been implemented for monitoring and controlling the execution sequences of Concurrent-C programs. Among these tools, REPLAY MONITOR monitors synchronization events, while REPLAY REPRODUCER reproduces the monitored synchronization events. When some processes of a concurrent program are irrelevant and time-consuming, PARTIAL REPLAY MONITOR and PARTIAL REPLAY REPRODUCER can monitor and reproduce synchronization events while skipping the actual computation of those irrelevant processes, saving debugging time.
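The monitor/reproducer pairing can be sketched in a few lines. This is our own illustration of the replay idea, not the actual Concurrent-C tools: the monitor records the global order of synchronization events during one run, and the reproducer then blocks each event until its recorded turn comes, forcing the same interleaving on a later run.

```python
# Hypothetical sketch of replay debugging for synchronization events.

import threading

class ReplayMonitor:
    """Records the global order of synchronization events during a run."""
    def __init__(self):
        self.log = []
        self._lock = threading.Lock()

    def event(self, pid, name):
        with self._lock:
            self.log.append((pid, name))

class ReplayReproducer:
    """Forces a later run to replay the recorded event order."""
    def __init__(self, log):
        self.log = log
        self.next = 0
        self._cv = threading.Condition()

    def event(self, pid, name):
        with self._cv:
            # Block this process until its event is next in the log.
            while self.log[self.next] != (pid, name):
                self._cv.wait()
            self.next += 1
            self._cv.notify_all()
```

Partial replay, in this picture, would replay only the logged events of the processes under study while substituting stubs for the computation of irrelevant processes.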
Methods of comparing test data adequacy criteria
S. N. Weiss
Pub Date: 1990-10-31 | DOI: 10.1109/CMPSAC.1990.139305

The comparative analysis of test data adequacy criteria in software testing is considered, investigating how criteria have been and should be compared to each other. It is argued that there are two fundamentally different goals in comparing criteria: (1) comparing the error-exposing ability of criteria, and (2) comparing the cost of using the criteria for selecting and/or evaluating test data. Relations such as the power relation and probable correctness fall clearly in the first category, and test case counting falls clearly in the second. Subsumption, in contrast, is not entirely in either category: it is shown that the subsumption relation primarily compares the difficulty of satisfying two criteria. If one assumes that the criteria being compared are applicable, their relative power and size complexities can be inferred from the subsumption relation. In addition, it is shown that, while the size complexity of a criterion gives some indication of the relative cost of using it, it is by no means a sufficient measure of the overall difficulty of using that criterion. That difficulty also includes checking whether the predicate defined by the criterion has been satisfied, which may be not only difficult but impossible.
Architecture and functionality of a specification environment for distributed software
B. Krämer, H. Schmidt
Pub Date: 1990-10-31 | DOI: 10.1109/CMPSAC.1990.139445

A description is given of Graspin, a workstation-based prototype environment that aids in the incremental construction, verification, and prototyping of specifications for concurrent and distributed software systems. It includes a Petri-net-based specification formalism, an editor generator with graphical capabilities, and tools for static semantics checking, automated verification of static and dynamic properties of specifications, and specification-based prototyping. The Graspin architecture and kernel environment have shown their flexibility in the development of a prototype environment supporting the formal specification language SEGRAS. It was necessary to extend the kernel with semantic tools such as a type checker and a net simulator, and to integrate separately developed tools such as the rewrite-rule subsystem into a coherent environment. New Graspin features include a graphical refinement method and a multi-level net animation technique.