"Low power and QoS," M. Potkonjak. IEEE Concurrency, Oct. 1999. doi:10.1109/4434.806973

Quality of service has recently emerged as one of the most important research topics and engineering problems in a number of fields, including the Internet, multimedia, and wireless communication. However, the interest in QoS in these fields has not been matched by similar interest among system designers. Therefore, a large and growing gap exists between theoretical discussions of QoS in the multimedia and networking literature and the practical application of QoS paradigms in system synthesis. At the same time, researchers have pursued many aspects of low-power design and have proposed a variety of modeling and optimization techniques; there are hundreds of research papers on, and dozens of tools for, power minimization at essentially all stages of the design process. Nevertheless, no one has addressed the relationship between power optimization and QoS. Therefore, in the very near future we will likely see a flurry of research and development to realize design methodologies and synthesis tools that incorporate QoS methodologies and optimize system power consumption. To build a basis and provide an impetus for this research, I briefly survey the state of the art in QoS research and practice.
"Distributed simulation of timed Petri nets: a modular approach using actors and Time Warp," R. Beraldi and L. Nigro. IEEE Concurrency, Oct. 1999. doi:10.1109/4434.806979

This article focuses on a modular, actor-based approach to the development of time-dependent distributed systems that separates functional aspects from timing. The Time Warp mechanism regulates the interaction policy among logical processes, shifting the overhead from communication to computation.
"Structured development of parallel programs" (book review), M. Paprzycki. IEEE Concurrency, Oct. 1999. doi:10.1109/MCC.1999.806989

Structured Development of Parallel Programs presents a structured programming methodology for parallel computations that ensures portability, programmability, and good performance. The book's ultimate goal is to develop a suitable programming language for parallel programming, together with its compiler. This language is meant to deliver typical parallel constructs (skeletons) and their realizations (templates) on various architectures.

The book's first half presents a critical analysis of the state of the art in parallel software development. It closely examines several existing approaches to parallel programming, concluding that template-based systems are the best compromise. In this approach, the programmer selects skeletons and their conversion rules, then uses them to build a program. Its performance might not match that of a low-level graph-based approach, but it is predictable and readily ensures programmability and portability.

The book's second half describes the P3L template-based methodology and its realization as the P3L language and compiler, offering application examples. The author maintains that the template-based system gives rise to accurate performance models for the skeleton library designer as well as for the programmer. The technical and mapping details are left to the skeleton library designer, who can fully exploit specific properties of particular skeletons. The P3L methodology incorporates a small set of basic skeletons and their combination rules. Skeleton selection is based on an analysis of existing approaches; the skeletons reflect typical constructs that parallel program designers use.

The P3L methodology might be a good starting point for developing efficient high-level languages for parallel programming. It suggests how to strike a compromise between performance on one hand and portability and programmability on the other. In any case, we should not treat it as closed and finally established; high-level parallel programming languages continue to develop and improve. Such languages would let the programmer concentrate less on the details of the machine's architecture and more on the algorithm's design. Their absence is one of the major obstacles hampering large, complex software projects and the development of computational algorithms, and their progress currently lags far behind raw parallel hardware performance. An efficient high-level language for parallel programming, available on computers with parallel processors and on clusters of machines used for distributed computation, would be an important tool for people developing general theoretical and application-oriented algorithms.

This book should interest people working on parallel algorithms but, more importantly, researchers and software engineers developing languages for parallel computations. It might also be of interest to both undergraduate and graduate computer science students.
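The skeleton/template split the review describes can be miniaturized. The sketch below uses Java rather than P3L (whose concrete syntax is not reproduced here): the programmer writes against a named parallel structure, a task farm, while a template, here a fixed thread pool, supplies the architecture-specific realization.

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Function;

// Sketch of the skeleton idea: the programmer names the parallel structure
// (a "farm" applying a worker function to every input), and a template (a
// thread pool) realizes it; swapping the template retargets the program.
public class FarmSkeleton {
    /** farm(f): apply f to every input, with workers running in parallel. */
    static <A, B> List<B> farm(Function<A, B> f, List<A> inputs, int workers)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        try {
            List<Future<B>> futures = new ArrayList<>();
            for (A x : inputs) futures.add(pool.submit(() -> f.apply(x)));
            List<B> out = new ArrayList<>();
            for (Future<B> fu : futures) out.add(fu.get()); // preserves order
            return out;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<Integer> squares = farm(x -> x * x, Arrays.asList(1, 2, 3, 4), 2);
        System.out.println(squares); // [1, 4, 9, 16]
    }
}
```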
"Strategies for parallel data mining," D. Skillicorn. IEEE Concurrency, Oct. 1999. doi:10.1109/4434.806976

This article presents a set of cost measures that can be applied to parallel algorithms to predict their computation, data-access, and communication performance. These measures make it possible to compare different parallel implementation strategies for data-mining techniques without benchmarking each one.
"Mobile multimedia considerations," A. Krikelis. IEEE Concurrency, Oct. 1999. doi:10.1109/4434.806984

The author considers two approaches that appear to meet the communication requirements of mobile multimedia applications. One is to add packet-switching capabilities (and higher bandwidths) to cellular systems, as the third-generation International Mobile Telecommunications systems do. The other is to build a wireless equivalent of the successful local area networks that carry the bulk of today's multimedia traffic. New approaches are needed to create the technological breakthrough required for the next generation of mobile multimedia systems. The author also discusses user interfaces, data compression, and protocols.
"CORBA naming-service evaluation," Sean Landis and William Shapiro. IEEE Concurrency, Oct. 1999. doi:10.1109/4434.806978

The authors compare four CORBA naming-service implementations, offering results from their study. They also examine conformance to the naming-service specification, interoperability issues, and nonstandard extensions.
"Practical parallel programming," G. V. Wilson. IEEE Concurrency, Oct. 1999. doi:10.1109/MCC.1999.806991

The book's contents:

Part 1, Fundamentals: basic architectural ideas; classifying architectures; some example applications; decomposition techniques; terms and measures.
Part 2, Data parallelism: basic operations; an inside-out syntax; other data-parallel operations; automatic parallelization; controlling and exploiting data placement; discussion.
Part 3, Shared variables: creating and coordinating processes; practical synchronization mechanisms; futures; caching; scheduling and mapping parallel programs; parallel I/O systems; discussion.
Part 4, Message passing: channels; the crystalline model; procedural message-passing systems; watching programs run; discussion.
Part 5, Generative communication: the generative model; managing data structures in tuple space; active data structures; message passing through tuple space; implementing generative communication; enhancing generative communication; some other high-level alternatives; discussion.
Appendices: the Fortran-K programming language; a short history lesson; recommended reading.
"Information power grid: the new frontier in parallel computing?" William Leinberger and Vipin Kumar. IEEE Concurrency, Oct. 1999. doi:10.1109/MCC.1999.806982

NASA's Information Power Grid is an example of an emerging, exciting concept that can potentially make high-performance computing power accessible to general users as easily and seamlessly as electricity from an electrical power grid. In the IPG system, high-performance computers located at geographically distributed sites will be connected via a high-speed interconnection network. Users will be able to submit computational jobs at any site, and the system will seek the best available computational resources, transfer the user's input data sets to that system, access other needed data sets from remote sites, perform the specified computations and analysis, and then return the resulting data sets to the user.

Systems such as the IPG will be able to support larger applications than ever before. New types of applications will also be enabled, such as multidisciplinary collaboration environments that couple geographically dispersed compute, data, scientific-instrument, and people resources using a suite of grid-wide services. IPG's fundamental technology comes from current research results in the area of large-scale computational grids. Figure 1 provides an intuitive view of a wide-area computational grid.
"JavaCard: from hype to reality," M. Baentsch, P. Buhler, T. Eirich, Frank Höring, and M. Oestreicher. IEEE Concurrency, Oct. 1999. doi:10.1109/4434.806977

In this final installment of three related articles about smart card technology, the authors discuss JavaCard, a much-hyped technology that is finally taking off as a multiapplication smart card. The main reason for the hype is JavaCard's potential: not only would it let all Java programmers develop smart card code, but such code could also be downloaded to cards that have already been issued to customers.