Application development through object composition by means of events and object environment
Pub Date : 1992-05-04  DOI: 10.1109/CMPEUR.1992.218456
K. Rizman, I. Rozman
The topics discussed are: complexities in software development; object-oriented (OO) technology for managing complexity in software construction; the object environment; the benefits of introducing events and the object environment; an event-driven approach to OO software design; and the reuse of subsystems.
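The event-driven composition idea can be illustrated with a minimal sketch. All names here (`ObjectEnvironment`, `raise_event`, the order example) are invented for illustration and are not from the paper; the point is only that objects composed through an event-routing environment never hold direct references to each other.

```python
class ObjectEnvironment:
    """Registry that delivers named events to subscribed objects."""

    def __init__(self):
        self._subscribers = {}  # event name -> list of handler callables

    def subscribe(self, event, handler):
        self._subscribers.setdefault(event, []).append(handler)

    def raise_event(self, event, payload=None):
        # deliver to every subscriber; the sender knows none of them
        for handler in self._subscribers.get(event, []):
            handler(payload)


class Logger:
    """A component wired up purely through the environment."""

    def __init__(self, env):
        self.lines = []
        env.subscribe("order_placed", self.on_order)

    def on_order(self, payload):
        self.lines.append(f"order: {payload}")


env = ObjectEnvironment()
logger = Logger(env)
env.raise_event("order_placed", "42 widgets")
print(logger.lines)  # ['order: 42 widgets']
```

Because the raising object names only the event, subsystems built this way can be reused without editing their collaborators, which is the reuse benefit the abstract points to.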
Performance prototyping: a simulation methodology for software performance engineering
Pub Date : 1992-05-04  DOI: 10.1109/CMPEUR.1992.218462
M. Baker, W. Shih
The authors describe a modeling methodology that extends software performance engineering to handle large multitasking applications. The objective was to provide throughput projections and design evaluation for a development team building an image/fax server intended to function in a larger distributed image processing application. Software performance engineering is reviewed. The methodology described integrates analytic modeling with real-time simulation to construct a performance prototype. The application of the methodology to the performance engineering of the image/fax server is reported.
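The analytic-modeling half of such a methodology can be sketched with a textbook open queueing approximation. This is not the paper's model; it is a minimal M/M/1 stand-in with invented arrival and service rates, showing the kind of throughput/response projection a performance prototype starts from.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Return (utilization, mean response time) for an M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable: arrival rate must be below service rate")
    utilization = arrival_rate / service_rate
    response_time = 1.0 / (service_rate - arrival_rate)
    return utilization, response_time


# e.g. 8 fax jobs/s offered to a server that completes 10 jobs/s
util, resp = mm1_metrics(8.0, 10.0)
print(f"utilization={util:.0%}, mean response={resp:.2f}s")
```

An analytic model like this gives fast early projections; the paper's contribution is coupling such models with real-time simulation once the design has enough detail to prototype.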
An associative memory approach to parallel logic event-driven simulation
Pub Date : 1992-05-04  DOI: 10.1109/CMPEUR.1992.218410
D. Dalton
Presents a parallel processing approach to logic simulation, called APPLES, in which gate evaluations and signal updating are executed in parallel in associative memory, rather than in the processor. This approach does not require any event scheduling mechanism and can model various logic gate types and delay models. Two concepts are coupled together to form a parallel acceleration technique in the simulator. The first concept deals with representing the signals on a line over a period of time as a bit-sequence. This representation allows the output of any logic gate to be evaluated by comparing the bit-sequences of its inputs with a predetermined series of bit patterns. Numerous bit operations, such as shifting and comparing, must be performed in parallel on the input bit-sequences of various logic components. The second concept, an associative memory with word shift capabilities, is essentially a hardware implementation of these bit operations. The concepts are therefore presented as an abstract model followed by its physical realization. APPLES has been simulated at both behavioral and gate-level descriptions in System-Hilo.
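The first APPLES concept, signals-as-bit-sequences, can be shown with a toy zero-delay sketch. APPLES itself evaluates gates by matching sequences against predetermined bit patterns in associative memory; this software version only illustrates the representation, and the gate set and delay-free assumption are simplifications.

```python
def eval_gate(kind, a, b):
    """Evaluate a 2-input gate over two equal-length bit-sequences,
    where a[i] and b[i] are the input values at time step i."""
    ops = {
        "and": lambda x, y: x & y,
        "or":  lambda x, y: x | y,
        "xor": lambda x, y: x ^ y,
    }
    op = ops[kind]
    return [op(x, y) for x, y in zip(a, b)]


a = [0, 1, 1, 0, 1]   # signal history on line a
b = [1, 1, 0, 0, 1]   # signal history on line b
print(eval_gate("and", a, b))  # [0, 1, 0, 0, 1]
```

Because every time step is evaluated by the same positional bit operation, the whole sequence can be processed in one parallel sweep, which is what the associative memory realizes in hardware.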
Combining object-oriented representations of knowledge with proximity to conceptual prototypes
Pub Date : 1992-05-04  DOI: 10.1109/CMPEUR.1992.218443
K. Lano
A framework for knowledge representation that combines fuzzy reasoning systems and object-oriented databases is suggested. The use of objects to represent knowledge has become popular. However, this organization of knowledge, as a classification of entities by means of their attributes and their characteristic operations, returns to a traditional view of the formation of concepts (H. Gardner, 1985). This view, that conceptual categories can all be defined in the crisp way that mathematical concepts are defined, is not plausible for many real-world examples, and the idea of categories as formed from a clustering of data around a conceptual prototype, with an associated nearness measure, was substituted in its place (E. Rosch, 1978). A system that combines these two apparently distinct means of representation is described. Machine learning techniques are applied to the formation of suitable metrics for concepts.
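The prototype-with-nearness view can be sketched in a few lines. The feature vectors, concept names, and the inverse-distance membership function below are all invented for illustration; the paper's point is that the metric itself should be learned, which this sketch does not attempt.

```python
import math


def nearness(x, prototype):
    """Graded membership in (0, 1]: 1 at the prototype, falling with distance."""
    d = math.sqrt(sum((xi - pi) ** 2 for xi, pi in zip(x, prototype)))
    return 1.0 / (1.0 + d)


# hypothetical prototypes over features (has_feathers, flies)
prototypes = {
    "bird": (1.0, 1.0),
    "fish": (0.0, 0.0),
}


def classify(x):
    """Assign x to the concept whose prototype it is nearest to."""
    return max(prototypes, key=lambda c: nearness(x, prototypes[c]))


print(classify((0.9, 0.8)))  # 'bird'
```

Unlike a crisp attribute-based class definition, membership here is a matter of degree: a penguin-like (1.0, 0.1) instance is still a bird, just a less prototypical one.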
A strategy of understanding of multisignificant knowledge
Pub Date : 1992-05-04  DOI: 10.1109/CMPEUR.1992.218474
D. Shapiro
The author considers, from a new viewpoint, the understanding of multisignificant images by means of the estimation of interconnections of the image elements. Knowledge about the world has, as a rule, multisignificant and uncertain characteristics. The knowledge is presented by means of input patterns (a metatext). The problem of metatext understanding requires a method for the estimation of multisignificant images. Two aspects of the problem are studied: the understanding of gestalts and of metaphors.
VAMP: A tool for literate programming independent of programming language and formatter
Pub Date : 1992-05-04  DOI: 10.1109/CMPEUR.1992.218405
E. van Ammers, M. Kramer
The authors have developed a module extractor called VAMP, which cooperates with a standard formatter. They describe the VAMP approach to literate programming. Literate programming identifies a method of documentation which explains to people what a computer is supposed to do. Generally, this means that refinement steps are documented in such a way that modules can be extracted from the documentation files. The tool has been in use since 1982. Experiences are predominantly positive, in spite of the overhead implicit in the methodology. The fact that VAMP is independent of both programming language and formatter distinguishes it from WEB and its derivatives.
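The module-extraction idea can be sketched as follows. The `@module`/`@end` marker syntax is invented for this sketch and is not VAMP's notation; the point is that extraction works on the documentation text alone, so it is indifferent to both the programming language of the chunks and the formatter used on the prose.

```python
import re

# a hypothetical documentation file with two chunks of the same module
DOC = """\
We first greet the user.
@module hello
print("hello")
@end
Then we say goodbye.
@module hello
print("goodbye")
@end
"""


def extract(doc, module):
    """Concatenate, in order, all chunks tagged with the given module name."""
    pattern = rf"@module {re.escape(module)}\n(.*?)@end"
    return "".join(re.findall(pattern, doc, flags=re.S))


print(extract(DOC, "hello"))
```

Running the extractor reassembles the module from its refinement steps, so the compilable source is derived from, and stays consistent with, the explanatory document.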
Parallel-form adaptive state-space filtering and its implementation
Pub Date : 1992-05-04  DOI: 10.1109/CMPEUR.1992.218412
W. Steenaart, J.Y. Zhang
Introduces an adaptive recursive state-space algorithm and its implementation. The adaptive state-space filtering is realized by using the least mean square adaptation algorithm, and the gradients for the filter parameters are derived directly from the state equations. To reduce the computational complexity, a parallel form of second-order sections is used. The performance of adaptive state-space filters, in terms of stability monitoring, roundoff noise, and convergence rate, is given. A possible roundoff noise improvement from adaptive state-space filtering is shown using simulation examples. Stability monitoring is simple since each section is a second-order filter. VLSI array processors are suggested for real-time applications.
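The least-mean-square update rule the paper builds on can be shown in its simplest setting. The paper adapts a recursive state-space filter with gradients taken from the state equations; as a simpler stand-in, this sketch adapts a 2-tap FIR filter with the same update, w += mu * e * x, to identify a hypothetical unknown system.

```python
import random


def lms_identify(x, d, taps=2, mu=0.1):
    """Adapt FIR weights so the filter output tracks the desired signal d."""
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        window = x[n - taps + 1:n + 1][::-1]          # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, window))  # filter output
        e = d[n] - y                                   # instantaneous error
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]
    return w


# identify an unknown 2-tap system h = [0.5, -0.25] from its response
random.seed(0)
x = [random.uniform(-1, 1) for _ in range(2000)]
d = [0.5 * x[n] + (-0.25 * x[n - 1] if n else 0.0) for n in range(len(x))]
w = lms_identify(x, d)
print([round(wi, 3) for wi in w])  # ~ [0.5, -0.25]
```

The state-space version replaces the FIR regressor with gradients propagated through the state equations, which is what makes stability monitoring per second-order section necessary.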
A systolic neural network image processing architecture
Pub Date : 1992-05-04  DOI: 10.1109/CMPEUR.1992.218450
M. Cavaiuolo, A. Yakovleff, C. R. Watson, J. Kershaw
The architecture introduced, the neural accelerator board (NAB) system, has been designed to perform a range of machine vision functions, one of which is motion parallax range estimation. The approach used in the NAB system is to perform real-time calculations on the image data captured by a moving video camera, using a highly parallel neural network architecture. The NAB system architecture, its physical implementation, and the surrounding application system are described. The implementation of the motion parallax algorithm, as a demonstration of the capabilities of this architecture, is outlined.
FREA: a distributed program reliability analysis program
Pub Date : 1992-05-04  DOI: 10.1109/CMPEUR.1992.218432
Min-Sheng Lin, Deng-Jyi Chen
The authors present an algorithm for computing distributed program reliability in distributed computing systems (DCSs). The algorithm, called FREA (fast reliability evaluation algorithm), is based on the generalized factoring theorem with several incorporated reliability-preserving reductions to speed up reliability evaluation. The effect of file distributions, program distributions, and various topologies on the reliability of the DCS was studied by using the proposed algorithm. Compared with existing algorithms on various network topologies, file distributions, and program distributions, the proposed algorithm was much more economical in both time and space. The ARPA network is used as a case study to illustrate the feasibility of the proposed algorithm.
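The factoring theorem that FREA generalizes can be sketched on the simpler two-terminal reliability problem: condition on an edge being up (contract it) or down (delete it) and recurse, R(G) = p·R(G/e) + (1-p)·R(G-e). FREA's file/program distributions and its reliability-preserving reductions are omitted here; this is only the bare recursion.

```python
def reliability(edges, s, t):
    """P(s and t connected), with edges given as (u, v, p_up) triples."""
    if s == t:
        return 1.0          # terminals already merged: connected for sure
    if not edges:
        return 0.0          # no edges left and s != t: disconnected
    (u, v, p), rest = edges[0], edges[1:]
    # factor on the first edge: it is DOWN with probability 1 - p ...
    down = reliability(rest, s, t)
    # ... or UP with probability p, in which case we contract v into u
    merged = [(u if a == v else a, u if b == v else b, q) for a, b, q in rest]
    up = reliability(merged, s if s != v else u, t if t != v else u)
    return p * up + (1.0 - p) * down


# two parallel links, each up with probability 0.9: R = 1 - 0.1**2
print(reliability([("s", "t", 0.9), ("s", "t", 0.9)], "s", "t"))  # 0.99
```

The recursion is exponential in the worst case, which is exactly why FREA's reductions, pruning the factoring tree while preserving reliability, matter for practical networks.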
Time-optimal sorting and applications on n*n enhanced meshes
Pub Date : 1992-05-04  DOI: 10.1109/CMPEUR.1992.218501
S. Olariu, J. L. Schwing, J. Zhang
Time-optimal sorting and convex hull algorithms are proposed for two classes of enhanced meshes: the mesh with multiple broadcasting and the reconfigurable mesh. The authors show that the fundamental problem of sorting n items can be solved in O(log n) time on a mesh with multiple broadcasting of size n*n, which leads to an O(log n) time algorithm to compute the convex hull of an arbitrary set of n points in the plane. Based on the constant-time sorting algorithm on reconfigurable meshes, it is shown that the convex hull problem of size n can be solved in constant time on a reconfigurable mesh of size n*n. All of these algorithms achieve the time lower bounds for their respective models of computation.
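The reduction the abstract exploits runs through sorting: once the points are sorted by coordinate, the hull follows cheaply. As a sequential illustration (not the mesh algorithm), a standard monotone-chain construction computes the hull in linear time after the sort, so the mesh's fast sort dominates the overall time.

```python
def cross(o, a, b):
    """Cross product of vectors o->a and o->b; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])


def convex_hull(points):
    """Counterclockwise convex hull via Andrew's monotone chain."""
    pts = sorted(set(points))          # the sorting step
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, reversed(pts))):
        for p in seq:
            # pop points that would make a right turn (or be collinear)
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
    return lower[:-1] + upper[:-1]     # endpoints shared, so drop duplicates


pts = [(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 3)]
print(convex_hull(pts))  # [(0, 0), (2, 0), (2, 2), (1, 3), (0, 2)]
```

Each point is pushed and popped at most once after sorting, giving O(n) post-sort work; on the enhanced meshes, that post-sort phase parallelizes, which is why the hull inherits the sorting bound.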