The authors demonstrate how their Minimal i386 Software Fault Isolation Tool (MiSFIT) protects applications from end-user extensions written in otherwise unsafe languages. They also compare the performance of unprotected code with MiSFIT-protected versions. MiSFIT can be used to fault-isolate dynamically linked extensions to Web browsers, operating system extensions, or client code linked to a database server. As the performance results show, by providing safety at reasonably small overhead, MiSFIT is part of an end-to-end solution to the problem of constructing extensible systems.
{"title":"MiSFIT: constructing safe extensible systems","authors":"Christopher Small, M. Seltzer","doi":"10.1109/4434.708254","DOIUrl":"https://doi.org/10.1109/4434.708254","url":null,"abstract":"The authors demonstrate how their Minimal i386 Software Fault Isolation Tool (MiSFIT) protects applications from end user extensions written in otherwise unsafe languages. They also compare the performance of unprotected code with MiSFIT-protected versions. MiSFIT can be used to fault isolate dynamically linked extensions to Web browsers, operating system extensions, or client code linked to a database server. As performance results show, by providing safety at a reasonably small overhead, MiSFIT is part of an end-to-end solution to the problem of constructing extensible systems.","PeriodicalId":282630,"journal":{"name":"IEEE Concurr.","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130374900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
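The core mechanism behind software fault isolation can be sketched compactly. The snippet below is an illustrative reconstruction, not MiSFIT's actual code or constants: it shows the classic address-sandboxing idea, in which every address an extension stores through is masked so it must fall inside the extension's own segment.

```python
# Illustrative sketch of the address-sandboxing idea behind software
# fault isolation (names and constants here are hypothetical, not MiSFIT's).

SEGMENT_BASE = 0x20000000   # start of the extension's data segment
SEGMENT_MASK = 0x000FFFFF   # segment size: 1 MiB, a power of two

def sandbox(addr):
    """Force an address into the extension's segment.

    A fault-isolation tool inserts the equivalent of this and/or mask
    before every unsafe store, so a wild pointer can corrupt only the
    extension's own memory, never the host application's.
    """
    return SEGMENT_BASE | (addr & SEGMENT_MASK)
```

A correct pointer already inside the segment passes through unchanged; a wild pointer is silently redirected into the segment rather than trapping, which is part of what keeps the per-store overhead small.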
Remote procedure call systems have been around since 1984, when they were first proposed (A.D. Birrell and B.J. Nelson, 1984). During the intervening 15 years, numerous evolutionary improvements have been made to the basic RPC system, leading to improved systems, such as NCS (T.H. Dineen et al., 1987), that offer programmers more functionality or greater simplicity. The Common Object Request Broker Architecture from the Object Management Group and Microsoft's Distributed Component Object Model are this evolutionary process's latest outgrowths. With the introduction of Java Developer's Kit release 1.1, a third alternative for creating distributed applications has emerged. The Java Remote Method Invocation system has many of the same features as other RPC systems, letting an object running in one Java virtual machine make a method call on an object running in another, perhaps on a different physical machine. On the surface, the RMI system is just another RPC mechanism, much like CORBA and DCOM. But on closer inspection, RMI represents a very different evolutionary progression, one that results in a system that differs not just in detail but in the very set of assumptions made about the distributed systems in which it operates. These differences lead to differences in the programming model, the capabilities, and the way the mechanisms interact with the code that implements and uses the distributed systems.
{"title":"Remote procedure calls and Java Remote Method Invocation","authors":"J. Waldo","doi":"10.1109/4434.708248","DOIUrl":"https://doi.org/10.1109/4434.708248","url":null,"abstract":"Remote procedure call systems have been around since around 1984 when they were first proposed (A.D. Birrell and B.J. Nelson, 1984). During the intervening 15 years, numerous evolutionary improvements have occurred in the basic RPC system, leading to improved systems-such as NCS (T.H. Dineen et al., 1987)-that offer programmers more functionality or greater simplicity. The Common Object Request Broker Architecture from the Object Management Group and Microsoft's Distributed Common Object Model are this evolutionary process's latest outgrowths. With the introduction of Java Developer's Kit release 1.1, a third alternative for creating distributed applications has emerged. The Java Remote Method Invocation system has many of the same features of other RPC systems, letting an object running in one Java virtual machine make a method call on an object running in another, perhaps on a different physical machine. On the surface, the RMI system is just another RPC mechanism, much like Corba and DCOM. But on closer inspection, RMI represents a very different evolutionary progression, one that results in a system that differs not just in detail but in the very set of assumptions made about the distributed systems in which it operates. These differences lead to differences in the programming model, capabilities, and the way the mechanisms interact with the code that implements and built the distributed systems.","PeriodicalId":282630,"journal":{"name":"IEEE Concurr.","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133115668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The authors describe how they extended a framework of active objects, named Actalk, into a generic multiagent platform, named DIMA. They discuss how they implemented this extension and report on one DIMA application that simulates economic models.
{"title":"From Active Objects to Autonomous Agents","authors":"Z. Guessoum, Jean-Pierre Briot","doi":"10.1109/4434.788781","DOIUrl":"https://doi.org/10.1109/4434.788781","url":null,"abstract":"The authors describe how they extended a framework of active objects, named Actalk, into a generic multiagent platform, named DIMA. They discuss how they implemented this extension and report on one DIMA application that simulates economic models.","PeriodicalId":282630,"journal":{"name":"IEEE Concurr.","volume":"37 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123507542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A distributed system comprising networked heterogeneous processors requires efficient task-to-processor assignment to achieve fast turnaround time. Although reasonable heuristics exist for optimal processor assignment in small problems, larger problems require better algorithms. The authors describe two new algorithms based on the A* technique that are considerably faster, more memory-efficient, and still yield optimal solutions. The first is a sequential algorithm that reduces the search space. The second lowers time complexity by running the assignment algorithm in parallel, achieving significant speedup. The authors test their results on a library of task graphs and processor topologies.
{"title":"Optimal task assignment in heterogeneous distributed computing systems","authors":"Muhammad Kafil, I. Ahmad","doi":"10.1109/4434.708255","DOIUrl":"https://doi.org/10.1109/4434.708255","url":null,"abstract":"A distributed system comprising networked heterogeneous processors requires efficient task-to-processor assignment to achieve fast turnaround time. Although reasonable heuristics exist to address optimal processor assignment for small problems, larger problems require better algorithms. The authors describe two new algorithms based on the A* technique which are considerably faster, are more memory-efficient, and give optimal solutions. The first is a sequential algorithm that reduces the search space. The second proposes to lower time complexity, by running the assignment algorithm in parallel, and achieves significant speedup. The authors test their results on a library of task graphs and processor topologies.","PeriodicalId":282630,"journal":{"name":"IEEE Concurr.","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126101042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
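The flavor of an A*-based assignment search can be conveyed with a short sketch. This is not the authors' algorithm: it ignores communication costs and uses the simplest admissible heuristic (each unassigned task's cheapest processor), but it shows how A* expands partial assignments and still returns an optimal answer.

```python
import heapq

def optimal_assignment(exec_cost):
    """Minimum-total-cost task-to-processor assignment via A* search.

    exec_cost[t][p] is the cost of running task t on processor p.
    The heuristic sums each unassigned task's cheapest processor cost;
    it never overestimates, so the first goal node popped is optimal.
    (Illustrative sketch only: communication costs are omitted.)
    """
    n = len(exec_cost)
    # h[t]: lower bound on the cost of assigning tasks t..n-1
    h = [0] * (n + 1)
    for t in range(n - 1, -1, -1):
        h[t] = h[t + 1] + min(exec_cost[t])
    # Search nodes: (f = g + h, g, partial assignment as a tuple).
    frontier = [(h[0], 0, ())]
    while frontier:
        f, g, partial = heapq.heappop(frontier)
        t = len(partial)
        if t == n:                      # all tasks assigned: optimal goal
            return g, list(partial)
        for p, c in enumerate(exec_cost[t]):
            heapq.heappush(frontier, (g + c + h[t + 1], g + c, partial + (p,)))
```

Pruning by `f = g + h` is what keeps the explored portion of the exponential assignment tree small, which is the property the two algorithms in the article exploit and improve on.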
The article discusses system-level issues and agent programming requirements that arise in the design of mobile agent systems. The authors describe several mobile agent systems to illustrate different approaches designers have taken in addressing these challenges. The following areas are discussed: agent mobility, naming, security issues, privacy and integrity, authentication, authorization and access control, metering and charging mechanisms, programming primitives, agent communication and synchronization primitives, agent monitoring and control primitives, and fault tolerance primitives.
{"title":"Design issues in mobile agent programming systems","authors":"Neeran M. Karnik, A. Tripathi","doi":"10.1109/4434.708256","DOIUrl":"https://doi.org/10.1109/4434.708256","url":null,"abstract":"The article discusses system-level issues and agent programming requirements that arise in the design of mobile agent systems. The authors describe several mobile agent systems to illustrate different approaches designers have taken in addressing these challenges. The following areas are discussed: agent mobility, naming, security issues, privacy and integrity, authentication, authorization and access control, metering and charging mechanisms, programming primitives, agent communication and synchronization primitives, agent monitoring and control primitives, and fault tolerance primitives.","PeriodicalId":282630,"journal":{"name":"IEEE Concurr.","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115720953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rocco Aversa, A. Mazzeo, N. Mazzocca, Umberto Villano
PS (PVM Simulator) is a simulator of PVM programs that lets users conduct performance prediction and analysis of distributed applications executed in heterogeneous and network computing environments. The article describes the tool and its development environment. As a prediction tool, PS lets developers obtain extrapolated performance data, estimating the behavior a parallel application would attain on different types of architectures from traces collected on a workstation or on a scaled-down distributed environment. As an analysis tool, it lets developers collect aggregate and analytical indexes of heterogeneous system performance (such as efficiency, throughput, response time, and individual processor utilization), or traces that can be processed offline by a variety of performance visualization and analysis tools (such as ParaGraph). It also lets users evaluate the effect of factors such as time spent in blocks of code, processor speed, network latency, and bandwidth on overall application performance.
{"title":"Heterogeneous system performance prediction and analysis using PS","authors":"Rocco Aversa, A. Mazzeo, N. Mazzocca, Umberto Villano","doi":"10.1109/4434.708252","DOIUrl":"https://doi.org/10.1109/4434.708252","url":null,"abstract":"PS (PVM simulator), is a simulator of PVM programs which lets users conduct performance prediction and analysis of distributed applications executed in heterogeneous and network computing environments. The article describes the tool and its development environment. As a prediction tool, the PS simulator lets developers obtain extrapolated performance data by estimating the behavior that a parallel application would attain on different types of architectures from traces collected on a workstation or on a scaled down distributed environment. As an analysis tool, it lets developers collect aggregate and analytical indexes related to heterogeneous system performance (such as efficiency, throughput, response time, and individual processor utilization) or traces that can be processed offline by a variety of tools for performance visualization and analysis (such as ParaGraph). It also lets users evaluate the effect of such factors as time spent in blocks of code, processor speed, network latency, and bandwidth on the overall application performance.","PeriodicalId":282630,"journal":{"name":"IEEE Concurr.","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128435522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
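Aggregate indexes of the kind listed above are simple functions of a trace's per-processor busy times. A minimal sketch follows; the function and argument names are ours for illustration, not PS's actual interface.

```python
def aggregate_indexes(busy_times, elapsed):
    """Compute per-processor utilization, speedup, and efficiency.

    busy_times -- seconds each processor spent doing useful work
    elapsed    -- wall-clock time of the whole run
    (Illustrative sketch; not the PS tool's API.)
    """
    p = len(busy_times)
    utilization = [b / elapsed for b in busy_times]
    speedup = sum(busy_times) / elapsed   # vs. serializing the same work
    efficiency = speedup / p              # fraction of ideal linear speedup
    return utilization, speedup, efficiency
```

For example, three processors busy for 8, 6, and 6 seconds of a 10-second run yield a speedup of 2.0 and an efficiency of about 0.67, exactly the kind of figure a developer would use to compare candidate target architectures.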
Languages that support task and data parallelism are highly general and can exploit both forms of parallelism in a single application. However, cleanly integrating the two forms of parallelism in a programming model is difficult. The authors describe four programming systems that attempt such an integration: Fx, Opus, data-parallel Orca, and Braid.
{"title":"Approaches for integrating task and data parallelism","authors":"H. Bal, M. Haines","doi":"10.1109/4434.708258","DOIUrl":"https://doi.org/10.1109/4434.708258","url":null,"abstract":"Languages that support task and data parallelism are highly general and can exploit both forms of parallelism in a single application. However, cleanly integrating the two forms of parallelism in a programming model is difficult. The authors describe four programming systems that attempt such an integration: Fx, Opus, data-parallel Orca, and Braid.","PeriodicalId":282630,"journal":{"name":"IEEE Concurr.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128891299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Although the development and use of Internet multimedia applications are increasing, the ability to manipulate and process multimedia information on the Internet is missing. The Internet's rich connectivity makes it an ideal blueprint for a high-performance computing system: millions and millions of heterogeneous computing nodes that can support open-standards protocols for communication and exchange of information. The implementation of high-performance multimedia computing on the Internet requires mapping the application computation onto a set of networked processing resources. Fortunately, multimedia processing exhibits a high degree of parallelism that can benefit from the Internet architecture's concurrent nature. Multimedia applications can exploit three types of parallelism: functional, temporal, and spatial.
{"title":"High-performance multimedia applications and the Internet","authors":"A. Krikelis","doi":"10.1109/4434.708251","DOIUrl":"https://doi.org/10.1109/4434.708251","url":null,"abstract":"Although the development and use of Internet multimedia applications are increasing, the ability to manipulate and process multimedia information on the Internet is missing. The Internet's rich connectivity makes it an ideal blueprint for a high-performance computing system: millions and millions of heterogeneous computing nodes that can support open-standards protocols for communication and exchange of information. The implementation of high-performance multimedia computing on the Internet requires mapping the application computation onto a set of networked processing resources. Fortunately, multimedia processing exhibits a high degree of parallelism that can benefit from the Internet architecture's concurrent nature. Multimedia applications can exploit three types of parallelism: functional, temporal and spatial.","PeriodicalId":282630,"journal":{"name":"IEEE Concurr.","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133675112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
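Of the three, temporal parallelism is the easiest to sketch: successive frames are independent, so different frames can be processed concurrently. A toy illustration follows; the per-frame `brighten` filter is a hypothetical stand-in for real video-processing work.

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(frame):
    """Per-frame filter: a stand-in for real multimedia processing."""
    return [min(255, pixel + 40) for pixel in frame]

def process_video(frames, workers=4):
    """Temporal parallelism: each worker handles a different frame.

    map() preserves frame order, so the output stream stays in sync.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(brighten, frames))
```

By contrast, functional parallelism would pipeline different stages (decode, filter, encode) across nodes, and spatial parallelism would split each individual frame into regions processed concurrently.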
Providing video-over-wireless capability to mobile computing platforms raises several interesting challenges. Wireless networks provide less transmission bandwidth than hard-wired networks; because today's wireless local area network technology can provide only around 2-Mbps transmission rates, video compression is essential for transmitting to mobile devices. Because of increased user sensitivity to cost and power consumption, mobile computing platforms favor a host-processor-only solution, as opposed to a host processor in conjunction with a digital signal processor. Most general-purpose microprocessor architectures have recently extended their instruction set architectures to include parallel instructions for improved performance on multimedia applications, including MPEG (Motion Pictures Expert Group) video. The article highlights the features of several of these extended ISAs for processing MPEG video. Each uses a modified single-instruction, multiple-data (SIMD) execution model to enable concurrent execution: in this modified micro SIMD execution model, a single instruction initiates parallel execution on data organized in parallel. The article illustrates the micro SIMD execution of an add instruction. Micro SIMD execution using packed data types (byte, halfword, or word quantities) makes more efficient use of the processor datapath on 64- or 128-bit architectures. We refer to this particular form of micro SIMD execution as subword execution.
{"title":"Subword extensions for video processing on mobile systems","authors":"Matthew D. Jennings, T. Conte","doi":"10.1109/4434.708250","DOIUrl":"https://doi.org/10.1109/4434.708250","url":null,"abstract":"Providing video-over-wireless capability to mobile computing platforms results in several interesting challenges. Wireless networks provide less transmission bandwidth than hard wired networks. Because today's wireless local area network technology can provide only around 2 Mbps transmission rates, video compression is essential for transmitting to mobile devices. Due to increased user sensitivity to cost and power consumption, mobile computing platforms prefer a host processor-only solution, opposed to a host processor in conjunction with a digital signal processor. Most general purpose microprocessor architectures have recently extended their instruction set architectures to include parallel instructions for improved performance on multimedia applications, including MPEG (Motion Pictures Expert Group) video. The article highlights the features of several of these extended ISAs for processing MPEG video. Each uses a modified single instruction, multiple data execution model as a technique to enable concurrent execution. In the modified micro SIMD execution model, a single instruction initiates parallel execution on data organized in parallel. The article illustrates the micro SIMD execution of an add instruction. Micro SIMD execution using packed data types (with byte, half word, or word quantities) makes more efficient use of the processor data path for 64 or 128 bit architectures. We refer to this particular form of micro SIMD execution as subword execution.","PeriodicalId":282630,"journal":{"name":"IEEE Concurr.","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129265842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
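The packed add that the article illustrates can be emulated in software, which shows why subword execution pays off: one 64-bit operation performs eight independent byte additions. The sketch below is ours, not from the article; it implements a wraparound packed-byte add using the standard carry-blocking trick.

```python
MASK64 = (1 << 64) - 1
H = 0x8080808080808080   # the high bit of each of the eight byte lanes
L = MASK64 ^ H           # the low seven bits of each lane

def paddb(x, y):
    """Packed add of eight 8-bit lanes held in 64-bit integers.

    Mimics a micro SIMD packed-add instruction: carries do not
    propagate across byte boundaries, so each lane wraps around
    modulo 256 independently of its neighbors.
    """
    low = (x & L) + (y & L)                # add the low 7 bits per lane;
                                           # no lane can carry into the next
    return (low ^ ((x ^ y) & H)) & MASK64  # fix up each lane's top bit
```

Masking off each lane's top bit before adding keeps a lane's carry from spilling into its neighbor, and the final xor restores the correct top bit per lane. Hardware subword instructions achieve the same partitioning inside the adder itself, at essentially no extra cost.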
Load sharing improves the performance of distributed systems by moving work from heavily loaded to lightly loaded nodes. The author compares the performance of two principal load sharing policies under different circumstances, providing a generalized description of their behavior that holds regardless of the specific system model, workload model, and parameter values.
{"title":"Sensitivity evaluation of dynamic load sharing in distributed systems","authors":"S. Dandamudi","doi":"10.1109/4434.708257","DOIUrl":"https://doi.org/10.1109/4434.708257","url":null,"abstract":"Load sharing improves the performance of distributed systems by moving work from heavily loaded to lightly loaded nodes. The author compares the performance of two principal load sharing policies under different circumstances, providing a generalized description of their behavior that holds true regardless of the specific system and workload models, and parameter values.","PeriodicalId":282630,"journal":{"name":"IEEE Concurr.","volume":"416 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124173000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}