Implementing scoped behavior for flexible distributed data sharing. P. Lu. IEEE Concurrency, July 2000. doi:10.1109/4434.865895
In the Aurora distributed shared data system, the programmer instantiates shared-data objects and uses scoped behavior to incrementally tune applications on a per-object and per-context basis. A class library implements shared-data objects as abstract data types, and scoped behavior implements the optimizations within standard C++. Using a network of workstations connected by an ATM switch, the author demonstrates that Aurora performs comparably to message passing.
Grids in the classroom. J. Weissman. IEEE Concurrency, July 2000. doi:10.1109/4434.865885
Computational grids are an exciting new technology that blends high-performance computing, distributed systems, and operating systems. Grids provide a coherent software infrastructure that permits the seamless integration of wide-area resources, such as computers, instruments, devices, and data archives, to solve large-scale scientific and engineering problems. In grids, the wide-area network appears to be a large virtual computer, and they have engendered a new paradigm for high-performance distributed computing. Teaching about grids is crucial for enabling the future grid workforce and increasing its productivity. Developing high-quality grid courseware is difficult; instructors at the University of Minnesota have provided a helpful Web site based on their experience with teaching a metacomputing class. Grid researchers and educators need to pool their talents and resources to develop effective shareable courseware. To aid this effort, university instructors are developing a repository for grid courseware.
Splitting the data cache: a survey. J. Sahuquillo, A. Pont. IEEE Concurrency, July 2000. doi:10.1109/4434.865890
Recent cache-memory research has focused on approaches that split the first-level data cache into two independent subcaches. The authors introduce a methodology for helping cache designers devise splitting schemes and survey a representative set of the published cache schemes.
Delta coherence protocols. Craig Williams, P. Reynolds, B. D. Supinski. IEEE Concurrency, July 2000. doi:10.1109/4434.865889
The authors describe a class of directory coherence protocols called delta coherence protocols. These protocols use network guarantees to support a new and highly concurrent approach to maintaining a consistent shared memory.
Achieving high performance in bus-based shared-memory multiprocessors. A. Milenković. IEEE Concurrency, July 2000. doi:10.1109/4434.865891
In bus-based shared-memory multiprocessors, several techniques reduce cache misses and bus traffic, which are the key obstacles to high performance.
The differences between distributed shared memory caching and proxy caching. Juan-Carlos Cano, A. Pont, J. Sahuquillo, J. A. Gil. IEEE Concurrency, July 2000. doi:10.1109/4434.865892
The authors discuss the similarities in caching between the extensively studied distributed shared memory systems and the emerging proxy systems. They believe that several of the techniques used in distributed shared memory systems can be adapted and applied to proxy systems.
Enhancing the fault tolerance of workflow management systems. G. Alonso, C. Hagen, D. Agrawal, A. E. Abbadi, C. Mohan. IEEE Concurrency, July 2000. doi:10.1109/4434.865896
Today's commercial workflow systems, although useful, do not scale well, have limited fault tolerance, and do not interoperate well with other workflow systems. The authors discuss current research directions and potential future extensions that might enable workflow services to meet the needs of mission-critical applications.
Hierarchical caching and prefetching for continuous media servers with smart disks. S. Harizopoulos, C. Harizakis, P. Triantafillou. IEEE Concurrency, July 2000. doi:10.1109/4434.865888
To avoid the throughput limitations of traditional data retrieval algorithms, the authors have developed a family of algorithms that exploit emerging smart-disk technologies and increase data throughput on high-performance continuous media servers.
Cache management in CORBA distributed object systems. Z. Tari, Herry Hamidjaja, Qitang Lin. IEEE Concurrency, July 2000. doi:10.1109/4434.865893
The authors address the design and implementation of a caching approach for CORBA-based systems, including a new removal algorithm that uses a doubly linked structure and a hash table for eviction. Their experiments demonstrate significant performance gains.
Caching in distributed systems. V. Milutinovic. IEEE Concurrency, July 2000. doi:10.1109/MCC.2000.865887
Modern computer systems, as well as the Internet, use caching to maximize their efficiency. Caching now occurs in many different system layers, and analyzing these layers leads to a deeper understanding of cache performance. ... comes from the uniprocessor environment. Spatial locality implies that the data item adjacent in the address space is likely to be used next, while temporal locality implies that the data item used most recently is likely to be used again. The implementation is typically based on a fast but expensive memory (the price is affordable because, by definition, a cache is small). Even if the same technology is used for main memory and cache memory, the cache will be faster because smaller memories have shorter access times. Recent research tries to split the CPU cache into two subcaches: one for spatial locality and one for temporal locality. On the SMP level, spatial and temporal ...