Interconnection Networks for High-Performance Parallel Computers [Book Reviews]
Pub Date: 1996-01-22; DOI: 10.1109/M-PDT.1996.494611
J. Zalewski
All three books are collections of articles on related subjects that were previously published, mostly in IEEE Computer Society publications. They appear in an unnamed, albeit known for about a decade and highly rated, series of IEEE tutorials. The books have very similar contents; therefore, their joint review seems appropriate. Interconnection Networks for Multiprocessors and Multicomputers has 10 chapters and over 50 articles, including chapter introductions. The first chapter, written by the editors, introduces the entire book and gives a proper perspective on its contents. Four subsequent chapters discuss interconnections from the point of view of their topologies. In particular, there are articles on Clos and Benes networks, multistage networks, buses, and crossbars. Although it is hard to distinguish among the articles in this collection and point to the one of particular value, I must confess that I read with great pleasure Leiserson's 10-year-old article on fat trees. Although the chapters just mentioned discuss individual properties of various topologies, the next three chapters specifically address general properties of interconnection networks. These properties include routing (to provide required functionality), reliability, and performance. I took a closer look at the chapter on "Fault-Tolerance and Reliability." As the editors point out, an interconnection network's ability to avoid failures is usually measured as its reliability or availability. A network achieves high reliability or availability normally through some form of fault tolerance. Thus, fault tolerance, in the form of various kinds of redundancy (in space or time), is the major subject of all the articles in this chapter, which provide a reasonably complete coverage of the most important issues. I have mixed feelings about the last chapters of the book: one on algorithms and applications, and one that includes case studies. The first attempts to cover subjects related to designing applications and algorithms for parallel machines. This area is broad enough to take at least another volume (such as Introduction to Parallel Algorithms and Architectures, by F.T. Leighton, Morgan Kaufmann, 1992), so providing approximate coverage in one chapter is, by definition, impossible. However, the chapter presenting case studies is reasonably complete and includes articles on several research machines, as well as on those once commercially available. In summary, this book is a good volume providing a wealth of valuable information on theoretical aspects of interconnecting multiple processors. I commend the editors for writing comprehensive introductions to chapters, a custom less and less common in this series of IEEE tutorials. On the negative side, the book doesn't even mention certain important topics such as cache coherence or newer solutions such as ATM, but such material is probably suited for other volumes; no one can cover everything important in a single book like this. The second book, Interconnection Networks for High-Performance Parallel Computers, is surprisingly similar, not only by title but by its entire contents. It contains about 70 papers divided into 11 chapters. The areas covered are essentially the same, but with a different emphasis.
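The measures the reviewer names have standard textbook definitions, which may help here (a brief aside, not quoted from the book). For a component with constant failure rate \lambda:

    R(t) = e^{-\lambda t}                      (reliability over a mission of length t)
    A = \frac{MTTF}{MTTF + MTTR}               (steady-state availability)
    R_{sys}(t) = 1 - (1 - R(t))^{N}            (N-way spatial redundancy)

The last line is why redundancy in space is the chapter's recurring theme: a path replicated N ways fails only if every replica fails, so even modest N drives the system failure probability down sharply.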
{"title":"Interconnection Networks for High-Performance Parallel Computer [Book Reviews]","authors":"J. Zalewski","doi":"10.1109/M-PDT.1996.494611","DOIUrl":"https://doi.org/10.1109/M-PDT.1996.494611","url":null,"abstract":"All three books are collections of articles on related subjects that were previously published, mostly in IEEE Computer Society publications. They appear in an unnamedalbeit known for about a decade and highly rated-series of IEEE tutorials. The books have very similar contents; therefore, their joint review seems appropriate. Interconnection Networksfor Multiprocessors and Multicomputers has 10 chapters and over 50 articles, including chapter introductions. The first chapter, written by the editors, introduces the entire book and gives a proper perspective on its contents. Four subsequent chapters discuss interconnections from the point of view of their topologies. In particular, there are articles on Clos and Benes networks, multistage networks, buses, and crossbars. Although it is hard to distinguish among the articles in this collection and point to the one of particular value, I must confess that I read with great pleasure Leiserson’s 10-yearold article on fat trees. Although the chapters just mentioned discuss individual properties of various topologies, the next three chapters specifically address general properties of interconnection networks. These properties include routing (to provide required functionality), reliability, and performance. I took a closer look at the chapter on “Fault-tolerance and Reliability.” As the editors point out, an interconnecoon network‘s ability to avoid failures’is usually measured as its reliability or availability. A network achieves high reliability or availability normally through some form of fault tolerance. Thus, fault tolerance, in the form of various kinds of redundancy (in space or time), is the major subject of all the articles ii chapter, which provide a reasonably com coverage of the most important issues. I have mixed feelings about the las chapters of the book: one on algorithm applications, and one that includes case ies. The first attempts to cover sut related to designing applications and rithms for parallel machines. This ai broad enough to take at least another volume (such as Introduction to Parallel rzthms and Architectures, by F.T. Leigl Morgan Kaufmann, 1992), so providing approximate coverage in one chapter 1 definition, impossible. However, the ch presenting case studies is reasonably plete and includes articles on several res1 machines, as well as on those once com cially available. In summary, this book is a good vol providing a wealth of valuable informatic theoretical aspects of interconnecting n ple processors. I commend the editor writing comprehensive introductions I chapters, a custom less and less commc this series of IEEE tutorials. On the nee side, the book doesn’t even mention ce important topics such as cache coherencl newer solutions such as ATM, but such r rial is probably suited for other volumes no one can cover everything importani single book like this. The second book, Interconnectaon Nefi fir High-Pe$omzance Parallel Computers, i! 
prisingly similar, not only by ","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131670936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interconnection Networks for Multiprocessors and Multicomputers: Theory and Practice [Book Reviews]
Pub Date: 1996-01-22; DOI: 10.1109/M-PDT.1996.494610
J. Zalewski
{"title":"Interconection Networks for Multiprocessors and Multicomputers: Theory and Practice [Book Reviews]","authors":"J. Zalewski","doi":"10.1109/M-PDT.1996.494610","DOIUrl":"https://doi.org/10.1109/M-PDT.1996.494610","url":null,"abstract":"All three books are collections of articles on related subjects that were previously published, mostly in IEEE Computer Society publications. They appear in an unnamedalbeit known for about a decade and highly rated-series of IEEE tutorials. The books have very similar contents; therefore, their joint review seems appropriate. Interconnection Networksfor Multiprocessors and Multicomputers has 10 chapters and over 50 articles, including chapter introductions. The first chapter, written by the editors, introduces the entire book and gives a proper perspective on its contents. Four subsequent chapters discuss interconnections from the point of view of their topologies. In particular, there are articles on Clos and Benes networks, multistage networks, buses, and crossbars. Although it is hard to distinguish among the articles in this collection and point to the one of particular value, I must confess that I read with great pleasure Leiserson’s 10-yearold article on fat trees. Although the chapters just mentioned discuss individual properties of various topologies, the next three chapters specifically address general properties of interconnection networks. These properties include routing (to provide required functionality), reliability, and performance. I took a closer look at the chapter on “Fault-tolerance and Reliability.” As the editors point out, an interconnecoon network‘s ability to avoid failures’is usually measured as its reliability or availability. A network achieves high reliability or availability normally through some form of fault tolerance. Thus, fault tolerance, in the form of various kinds of redundancy (in space or time), is the major subject of all the articles ii chapter, which provide a reasonably com coverage of the most important issues. I have mixed feelings about the las chapters of the book: one on algorithm applications, and one that includes case ies. The first attempts to cover sut related to designing applications and rithms for parallel machines. This ai broad enough to take at least another volume (such as Introduction to Parallel rzthms and Architectures, by F.T. Leigl Morgan Kaufmann, 1992), so providing approximate coverage in one chapter 1 definition, impossible. However, the ch presenting case studies is reasonably plete and includes articles on several res1 machines, as well as on those once com cially available. In summary, this book is a good vol providing a wealth of valuable informatic theoretical aspects of interconnecting n ple processors. I commend the editor writing comprehensive introductions I chapters, a custom less and less commc this series of IEEE tutorials. On the nee side, the book doesn’t even mention ce important topics such as cache coherencl newer solutions such as ATM, but such r rial is probably suited for other volumes no one can cover everything importani single book like this. The second book, Interconnectaon Nefi fir High-Pe$omzance Parallel Computers, i! 
prisingly similar, not only by ","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130595597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High Performance Computing Demystified [Book Reviews]
Pub Date: 1996-01-22; DOI: 10.1109/M-PDT.1996.494607
M. Paprzycki
…various computers have been considered high performance), he also presents a brief history of high-performance computing. Part II covers programming models, as well as hardware (including issues related to collections of workstations) and, finally, I/O (including FWD, internal parallel I/O, and external I/O systems). Later chapters cover operating systems (among others, Mach and NT) and message-passing systems (such as Active Messages, PVM, and Linda), and the book concludes with a short discussion of fault tolerance; modeling of physical systems; seismic and oil-industry applications; and applications in biology.
{"title":"High Performace Computing Demystified [Book Reviews]","authors":"M. Paprzycki","doi":"10.1109/M-PDT.1996.494607","DOIUrl":"https://doi.org/10.1109/M-PDT.1996.494607","url":null,"abstract":"ious computers have been considered high performance), he also p e s a brief history of high-performance computing. Part I1 is models, as well as hardware (including issues related to collections ofworbstauons). Finally, YO (including FWD, internal parallel L'O, and external U 0 systems). je a other Mach, and NT) and message-passing systems (such as Acuve Messages, PVM, and Linda) and concludes with a short discussion of fault-eling of physical systems; seismic and oil-industry applicanons; applications in biology","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"222 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125408041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scalable Shared-Memory Multiprocessing [Book Reviews]
Pub Date: 1996-01-22; DOI: 10.1109/M-PDT.1996.494608
J. Zalewski
This book is primarily devoted to Dash (Directory Architecture for Shared Memory), a multiprocessor system known from earlier publications (see "The Stanford Dash Multiprocessor," Computer, Mar. 1992, Vol. 25, No. 3, pp. 63-79). The book also provides readers with a comprehensive view of modern multiprocessing, as it describes where the technology is actually heading. The major issue in multiprocessor architectures is communication: how multiple processors communicate with each other. Not so long ago, buses were the major component tying various computational pieces together. Multiple processors used a bus to access common memory or to communicate with separate memories, which caused a communication bottleneck. Strictly speaking, the problems started when users wanted to extend existing systems with several processors to much larger aggregates of dozens or even hundreds of processing units. In such cases, even hierarchically organized buses began to saturate, and designers faced a scalability barrier. Moving from a bus to a point-to-point network was an immediate solution, but then old problems persisted and new ones arose, such as cache coherence. One approach was to maintain shared memory (common address space) along the bus or across the network, without cache coherence. Another relied on message passing, but in both cases the memory latency problem emerged. Technological developments soon made possible widespread use of caches, and then other problems started. Maintaining cache coherence across the bus (let alone the entire network) is not trivial, and most designers lost their hair before coming up with satisfactory solutions. This book is a concentrated effort to address such problems and provide a solution to maintain cache coherence across the point-to-point network of multiple processors. The authors call it scalable shared-memory multiprocessing (SSMP). The book's three parts are General Concepts, Experience with Dash, and Future Trends. The first is the most interesting. It is mainly a historical perspective on multiprocessor systems. The book first discusses scalability problems in detail, concluding that hardware cache coherence is a key to high performance. To ensure scalability, one must apply point-to-point interconnections (as opposed to a bus) and base cache coherence on directory schemes. Scalability has three dimensions: How does the performance scale? That is, what speedup (in terms of execution time) can we achieve by using N processors over a single processor for the same problem? How does the cost scale when more processors are added? What is the largest number of processors for which multiprocessing rather than uniprocessing is still advantageous? That is, what is the range of scalability?
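The first of these questions has a standard formalization (a brief aside, not quoted from the book). With T(N) the execution time on N processors, speedup is

    S(N) = \frac{T(1)}{T(N)}

and if only a fraction f of the work parallelizes perfectly, Amdahl's law bounds what any interconnect can deliver:

    S(N) = \frac{1}{(1 - f) + f/N} \le \frac{1}{1 - f}

The cost and range questions follow the same pattern: multiprocessing remains advantageous only while S(N) grows faster than the cost of adding processors and their interconnect.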
{"title":"Scalable Shared-Memory Multiprocessing [Book Reviews]","authors":"J. Zalewski","doi":"10.1109/M-PDT.1996.494608","DOIUrl":"https://doi.org/10.1109/M-PDT.1996.494608","url":null,"abstract":"~ This book is primarily devoted to Dash (Directorykchitecture for Shared Memory), a multiprocessor system known from earlier publications (see “The Stanford Dash Multiprocessor,” Computer, Mar. 1992, Vol. 3 5 , No. 3 , pp. 63-79). The book also provides readers with a comprehensive view of modem multiprocessing, as it describes where the technology is actually heading. The major issue in multiprocessor architectures is communication: how multiple processors communicate with each other. Not so long ago, buses were the major component tying various computational pieces together. Multiple processors used a bus to access common memory or to communicate with separate memories, which caused a communication bottleneck. Strictly speaking, the problems started when users wanted to extend existing systems with several processors to much larger aggregates of dozens or even hundreds of processing units. In such cases, even hierarchically organized buses began to saturate, and designers faced a scalability barrier. Moving from a bus to a point-to-point network was an immediate solution, but then old problems persisted and new ones arose, such as cache coherence. One approach was to maintain shared memory (common address space) along the bus or across the network, without cache coherence. Another relied on message passing, but in both cases the memory latency problem emerged. Technological developments soon made possible widespread use of caches, and then other problems started. Maintaining cache coherence across the bus (let alone the entire network) is not trivial, and most designers lost their hair before coming up with satisfactory solutions. This book is a concentrated effort to address such problems and provide a solution to maintain cache coherence across the pointto-point network of multiple processors. The authors call it scalable shared-memory multiprocessing (SSMP). The book’s three parts are General Concepts, Experience with Dash, and Future Trends. The first is the most interesting. It is mainly a histarical perspective on multiprocessor systems. The book first discusses scalability problems in detail, concluding that hardware cache coherence is a key to high performance. T o ensure scalability, one must apply point-topoint interconnections (as opposed to a bus) and base cache coherence on directory schemes. Scalability has three dimensions: How does the performance scale? That is, what speedup (in terms of execution time) can we achieve by using Nprocessors over a single processor for the same problem? How does the cost scale when more processors are added? What is the largest number of processors for which multiprocessing rather than uniprocessing is still advantageous? 
That is, what is the range of scalability?","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124940642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementing Production Quality Client/Server Systems [Book Reviews]
Pub Date: 1996-01-21; DOI: 10.1109/M-PDT.1996.481714
C. J. Hall
Client/server computing is a term that everyone seems to be using at the moment. In the first chapter, the author very appropriately quotes Humpty Dumpty addressing Alice in Through the Looking Glass: "When I use a word, . . . it means just what I choose it to mean. . . ." All of computing suffers to some extent from the confusion of terms, and anything involving the integration of computing and communication, such as client/server computing, doubly suffers from this confusion. This book is a very welcome attempt at shedding light on the subject and at trying to explain what is required for client/server systems to deliver the quality that information science professionals expect of traditional systems. The book contains a wealth of related material and largely succeeds in explaining and clarifying many of the terms and technologies that pervade the subject. The opening sections of the book clearly set the context, firmly relating the purpose of client/server approaches to the business environment with several quite useful and well developed case studies. These sections clearly identify the implications of following such an approach and flag some technical issues for later consideration. The book weighs the pros and cons concerning the move to client/server solutions and discusses accompanying organizational changes such as downsizing. The author considers the cost implications and identifies the pitfalls, but also points to areas where significant financial benefits can arise. The remaining and larger part of the book follows the general introductory discussion with a conventional topic-by-topic treatment, considering the technical issues raised earlier in greater depth. The topics considered are very comprehensive. Included are client/server development tools, networking concepts, graphical user interfaces, object-oriented design and programming, networking standards, and communication subsystems such as Open Systems Interconnection (OSI) and Internet stacks. Also covered are network operating systems and server operating systems, internetworking technologies such as routers and gateways, distributed system technologies such as Structured Query Language (SQL) and remote procedure call (RPC), distributed database systems, distributed systems management, electronic messaging and associated standards, implications for working practices and workgroups, security, and a detailed discussion of actual case studies including mission-critical examples. The book concludes with a very helpful glossary and a reasonable bibliography. In general, the list of topics is complete and handled thoroughly, with care taken to point out recent and likely future developments and to relate each topic to the most significant standards, development groups, or proprietary software systems. There are a few surprising omissions, however; for example, distributed object-system techniques such as Distributed Systems Object Model (DSOM) and Common Object Request Broker Architecture (Corba).
{"title":"Implementing Production Quality Client/Server Systenms [Book Reviews]","authors":"C. J. Hall","doi":"10.1109/M-PDT.1996.481714","DOIUrl":"https://doi.org/10.1109/M-PDT.1996.481714","url":null,"abstract":"Client/server computing is a term that everyone seems to be using at the moment. In the first chapter, the author very appropriately quotes Humpty Dumpty addressing Alice in Through the Looking Glass: “When I use a word, . . . i t means just what I choose it to mean. . . .” All of computing suffers to some extent from the confusion of terms, and anything involving the integration of computing and communication, such as client/server computing, doubly suffers from this confusion. This book is a very welcome attempt at shedding light on the subject and at trying to explain what is required for cliendserver systems to deliver the quality that information science professionals expect of traditional systems. The book contains a wealth of related material and largely succeeds in explaining and clarifying many of the terms and technologies that pervade the subject. T h e opening sections of the book clearly set the context, firmly relating the purpose of client/ server approaches to the business environment with several quite useful and well developed case studies. These sections clearly identify the implications of following such an approach and flag some technical issues for later consideration. The book weighs the pros and cons concerning the move to clienthemer solutions and discusses accompanying organizational changes such as downsizing. The author considers the cost implications and identifies the pitfalls, but also points to areas where significant financial benefits can arise. The remaining and larger part of the book follows the general introductory discussion with a conventional topic-by-topic treatment, considering the technical issues raised earlier in greater depth. T h e topics considered are very comprehensive. Included are client/ server development tools, networking concepts, graphical user interfaces, objectoriented design and programming, networking standards, and communication subsystems such as Open Systems Interconnection (03) and Internet stacks. Also covered are network operating systems and server operating systems, Inter-networking technologies such as routers and gateways, distributed system technologies such as Structured Query Language (SQL) and remote procedure call (RPC), distributed database systems, distributed systems management, electronic messaging and associated standards, implications for working practices and workgroups, security, and a detailed discussion of actual case studies including mission-critical examples. T h e book concludes with a very helpful glossary and a reasonable bibliography. In general, the list of topics is complete and handled thoroughly with care taken to point out recent and likely future developments and to relate each topic to the most significant stmdards, development groups, or proprietary software systems. 
There are a few surprising omissions, however-for example, distributed object-system techniques such as Distributed Systems Object Model @SOM) and Common Object Request Broker Archit","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115265071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parallel Architectures for Data/Knowledge-Based Systems [Book Reviews]
Pub Date: 1996-01-21; DOI: 10.1109/M-PDT.1996.481713
B. Mikolajczak
This tutorial is a collection of 35 previously published papers devoted to parallelization of data or knowledge-based systems. Papers are classified into several chapters: introduction, data models, database machines, text-retrieval machines, commercial database machines, knowledge-based machines, and new directions. Each chapter starts with one paper as a guideline to a major topic and to the papers of the chapter. The book addresses two main issues:
{"title":"Parallel Architectures for Data/Knowledge-Based Systems [Book Reviews]","authors":"B. Mikolajczak","doi":"10.1109/M-PDT.1996.481713","DOIUrl":"https://doi.org/10.1109/M-PDT.1996.481713","url":null,"abstract":"This tutorial is a collection of 3 5 previously published papers devoted to parallelization of data or knowledge-based systems. Papers are classified into several chapters: introduction, data models, database machines, textretrieval machines, commercial database machines, knowledge-based machines, and new directions. Each chapter starts with one paper as a guideline to a major topic and to papers of the chapter. T h e book addresses two main issues:","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123521121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parallel Evolution of Parallel Processors [Book Reviews]
Pub Date: 1996-01-21; DOI: 10.1109/M-PDT.1996.481716
A. Zomaya
ment of an architecture for client/server business applications over a "global LAN" that introduces an analysis and design method for distributed systems. Unfortunately, the text doesn't explain what this so-called global LAN is constructed of or how it works. The architecture described makes a virtue of separating data from applications, which is opposite the direction where most modern solutions are headed. The authors also describe distributed computing at Eastman Kodak, mainly based on experiences in beta testing of DECathena in an industrial setting. DECathena (described in more detail in Part 1) is the commercial version of Project Athena. Part 3 describes implementation and management strategies. The highlight of this part is the chapter that describes the products and strategies of vendors such as SunSoft, Hewlett-Packard, IBM, and Microsoft. There is an interesting discussion of Microsoft's strategy for object linking and embedding (OLE) and Cairo, an object-oriented operating system allegedly based on a distributed object model. SunSoft's strategy for the Distributed Object Environment (DOE) builds on the chapter on ONC+ in Part 1, and in many ways it would have been logical to include this material in that part. Other chapters in this part describe management of migration and organizational issues. Finally, the four appendixes detail important topics predicted to influence distributed computing in the future, such as the OSF Distributed Management Environment and the Object Management Group's (OMG) Common Object Request Broker Architecture (Corba). It is unfortunate that the authors postponed this overview of the emerging distributed object standard until the appendixes. There are many products available that conform to the Corba standard, and it would be very interesting to read of developers' and managers' experiences in using them. Details of what is proposed by the OMG for the Common Object Services Specification and Corba 2 would also be useful, as would be the recent standard for object-oriented databases (ODMG-93), which is omitted. The authors could have mentioned the International Organization for Standardization's Open Distributed Processing draft framework here, as it has the widest scope of all distributed environment standards. In conclusion, this book contains several technical details about the various distributed environments and the experiences of network managers who plan and operate them. However, although the work is a rich source of reference material, some of the material is slightly dated. Moreover, the reader might find it difficult at times to reach firm conclusions about the costs or benefits of the various options. In the case of proprietary solutions it is difficult to determine what to develop, and in the case of prebuilt solutions it is difficult to foresee where the major vendors' development strategies are headed. All of these problems reflect the rapidly evolving nature of the state of the art of distributed computing.
{"title":"Parallel Evolution of Parallel Processors [Book Reviews]","authors":"A. Zomaya","doi":"10.1109/M-PDT.1996.481716","DOIUrl":"https://doi.org/10.1109/M-PDT.1996.481716","url":null,"abstract":"ment of an architecture for cliendserver business applications over a “global LAN” that introduces an analysis and design method for distributed systems. Unfortunately, the text doesn’t explain what this so-called global LAN is constructed of or how it works. The architecture described makes a virtue of separating data from applications-which is opposite the direction where most modern solutions are headed. The authors also describe distributed computing at Eastman Kodak, mainly based on experiences in beta testing of DECathena in an industrial setting. DECathena (described in more detail in Part 1) is the commercial version of Project Athena. Part 3 describes implementation and management strategies. T h e highlight of this part is the chapter that describes the products and strategies of vendors such as SunSoft, Hewlett-Packard, IBM, and Microsoft. There is an interesting discussion of Microsoft’s strategy for object linking and embedding (OLE) and Cairo, an object-oriented operating system allegedly based on a distributed object model. SunSoft’s strategy for the Distributed Object Environment (DOE) builds on the chapter on ONC+ in Part 1 , and in many ways it would have been logical to include this material in that part. Other chapters in this part describe management of migration and organizational issues. Finally, the four appendixes detail important topics predicted to influence distributed computing in the future, such as the OSF Distributed Management Environment and Object Management Group’s (OMG) Common Object Request Broker Architecture (Corba). It is unfortunate that the authors postponed this overview of the emerging distributed object standard until the appendixes. There are many products available that conform to the Corba standard, and it would be very interesting to read of developers’ and managers’ experiences in using them. Details of what is proposed by the OMG for the Common Object Services Specification and Corba2 would also be useful, as would be the recent standard for object-oriented databases (ODMG-93), which is omitted. The authors could have mentioned the International Organization for Standardization’s Olpen Distributed Processing draft framework here, as it has the widest scope of all distributed environment standards. In conclusion, this book contains several technical details about the various distributed environments and the experiences of network managers who plan and operate them. However, although the work is a rich source of reference material, some of the material is slightly dated. Moreover, the reader might find it difficult at times to reach firm conclusions about the costs or benefits of the various options. In the case of proprietary solutions it is difficult to determine what to develop, and in the case of prebuilt solutions it is difficult to foresee where the major vendors’ development strategies are headed. 
All of these problems reflect the rapidly evolving nature of the state of the art of distribut","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116927498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed Computing: Implementation and Management Strategies [Book Reviews]
Pub Date: 1996-01-21; DOI: 10.1109/M-PDT.1996.481715
D. Newell
Distributed Computing: Implementation and Management Strategies, edited by Raman Khanna, 518 pp., $58, Prentice-Hall, Englewood Cliffs, N.J., 1994, ISBN 0-13-220138-0. A developer would benefit from the entire book, because it does raise important issues that many technical books in the field tend to ignore. Such books focus on the elegance of the technology rather than the implications of its application. However, a developer might become frustrated at the uneven level of treatment, at times being rather pedestrian and at others glossing over crucial topics. I cannot recommend its use in undergraduate computing courses, as it lacks satisfactory coherence and consistency; it develops the issues but contains too much extraneous detail. On the other hand, it probably does have merit at the postgraduate level for MBA programmers, particularly those focusing on information systems management. In summary, although this review has been somewhat critical, I believe the book is a very useful and valid contribution to the field, and I look forward to the second edition, which will make the material even more accessible to its intended audience. The author has indeed shed light on many of the fascinating issues involved in client/server computing but at the same time has showered the reader with a wealth of material and left many questions hanging.
{"title":"Distributed Computing-- Implementation and Management Strategies [Book Reviews]","authors":"D. Newell","doi":"10.1109/M-PDT.1996.481715","DOIUrl":"https://doi.org/10.1109/M-PDT.1996.481715","url":null,"abstract":"implementation and Managemen t Stra tegies edited by Raman Khanna 518 PP $58 Prentice-Hall Eaglewood Cliffs, N J 1994 ISBN 0-13-220138-0 developer would benefit from the entire book, because it does raise important issues that many technical books in the field tend to ignore. Such books focus on the elegance of the technology rather than the implications of its application. However, a developer might become frustrated a t the uneven level of treatment, at times being rather pedesman and at others glossing over crucial topics. I cannot recommend its use in undergraduate computing courses, as i t lacks satisfactory coherence and consistency; i t develops the issues but contains too much extraneous detail. On the other hand, i t probably does have merit at the postgraduate level for MBA programmers, particularly those focusing on information systems management. In summary, although this review has beeii somewhat critical, I believe the bookis a very useful and valid contribution to the field, and I look forward to the second edition, which will make the material even more accessible to its intended audience. The author has indeed shed light on many of the fascinaung issues involved in clienthemer computing but at the same time has showered the reader with a wealth of material and left many questions hanging.","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116018269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PVM: Parallel Virtual Machine - A User's Guide and Tutorial for Networked Parallel Computing [Book Review]
Pub Date: 1996-01-21; DOI: 10.1109/M-PDT.1996.481711
M. Pernice
The field of parallel and distributed computing, like many emerging disciplines, has both promoters and detractors. The debate between these groups gave rise to the Gordon Bell Prize, which annually recognizes significant achievements in the application of supercomputers to scientific and engineering problems. The 1990 Gordon Bell Prize for price/performance was won by a research group that calculated the electronic structure of a high-temperature superconductor on a 128-node Intel iPSC/860 computer at a cost of $1,250 per Mflop. This application was also run on various configurations of networked workstations. One configuration of 11 workstations completed the calculation for about $1,430 per Mflop; extrapolation of these figures showed that the cost would drop to $625 per Mflop if one waited longer for the results. This accomplishment was made possible by the availability of fast, cheap scientific workstations and an early version of PVM. Since then, PVM has become enormously popular. It provides a way to collectively manage several computers as one and to coordinate distributed applications that execute in this environment. With PVM, users can create applications that exploit the strengths of heterogeneous computing resources. With the message-passing capabilities of PVM, users can implement various parallel-programming paradigms on shared- and distributed-memory computers, including metacomputers composed of networked resources. Both academia and industry are exploiting the cost effectiveness of using workstation networks as virtual supercomputers, and PVM plays a prominent role in many of these projects. Several computer vendors support the PVM programming interface, some of whom provide optimized versions for their machines. This book is precisely what its name implies. It describes the PVM design; computing model and programming interface; and features such as support for process groups, use in commercial multicomputers, and performance in a heterogeneous networked environment. Despite the book's tutorial nature, readers will benefit most if they are already comfortable with programming in a Unix environment and understand the basic concepts of parallel programming. The book is quite useful as supplementary material in a course on parallel programming that requires use of the PVM system. Chapter 1 introduces network computing and the PVM environment. It discusses the motivation for working in a heterogeneous networked computing environment, enabling hardware trends, and other software packages.
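The programming interface the review refers to is small enough to show in a few lines. Below is a minimal sketch of a PVM 3 master task in C; it is not taken from the book, and the worker executable name "worker" and the message tags are illustrative assumptions.

#include <stdio.h>
#include "pvm3.h"   /* PVM 3 header */

int main(void)
{
    int child, value = 42;

    pvm_mytid();                        /* enroll this process in PVM */
    /* spawn one copy of the (hypothetical) "worker" executable anywhere */
    if (pvm_spawn("worker", NULL, PvmTaskDefault, "", 1, &child) == 1) {
        pvm_initsend(PvmDataDefault);   /* XDR encoding, safe on heterogeneous hosts */
        pvm_pkint(&value, 1, 1);        /* pack one int, stride 1 */
        pvm_send(child, 1);             /* send to the worker, message tag 1 */

        pvm_recv(child, 2);             /* block until the worker replies, tag 2 */
        pvm_upkint(&value, 1, 1);       /* unpack the result */
        printf("worker replied: %d\n", value);
    }
    pvm_exit();                         /* leave the virtual machine */
    return 0;
}

A matching worker would call pvm_parent(), pvm_recv(), and pvm_send() symmetrically; the same packing calls work unchanged across machines of different byte orders, which is the heterogeneity the review emphasizes.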
{"title":"PVM: Parallel Virtual Machine - A User's Guide and Tutorial for Networked Parallel Computing [Book Review]","authors":"M. Pernice","doi":"10.1109/M-PDT.1996.481711","DOIUrl":"https://doi.org/10.1109/M-PDT.1996.481711","url":null,"abstract":"The field of parallel and distributed computing , like many emerging disciplines, has both promoters and detractors. The debate between these groups gave rise to the Gor-don Bell Prize, which annually recognizes significant achievements in the application of supercomputers to scientific and engineering problems. The 1990 Gordon Bell Prize for price/performance was won by a research group that calculated the electronic structure of a high-temperature superconductor on a 128-node Intel iPSCh860 computer at a cost of $1,2 50 per Mflop. This application was also run on various configurations of net-worked workstations. One configuration of 11 workstations completed the calculation for about $1,430 per Mflop; extrapolation of these figures showed that the cost would drop to $625 per Mflop if one waited longer for the results. This accomplishment was made possible by the availability of fast, cheap scientific workstations and an early version of PVM. Since then, PVM has become enormously popular. It provides a way to collectively mm-age several computers as one and to coordinate distributed applications that execute in this environment. With PVM, users can create applications that exploit the strengths of heterogeneous computing resources. With the message-passing capabilities of PVM, users can implement various parallel-programming paradigms on shared-and distributed-memory computers, including metacomputers composed of networked resources. Both academia and industry are exploiung the cost effectlve-ness of using workstatlon networks as virtual supercomputers, and PVM plays a prominent role in many of these projects. Several computer vendors support the PVM programmng interface, some of whom provide optimized versions for their machmes. This book is precisely what its name implies. It describes the PVM design; com-putlng model and programming interface; and features such as support for process groups, use in commercial multicomputers, and performance in a heterogeneous networked enm-ronment. Despite the book's tutorial nature, readers will benefit most if they are already comfortable with programming in a Unix environment and understand the basic concepts of parallel programming. The book is quite useful as supplementary material in a course on parallel programming that requires use of the PVM system. Chapter 1 introduces network computing and the PVM environment. It discusses the motivation for worhng in a heterogeneous networked compuung enmronment, enabling hardware trends, and other software packages","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132249023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parallel I/O subsystems in massively parallel supercomputers
Pub Date: 1995-09-01; DOI: 10.1109/M-PDT.1995.414842
D. Feitelson, P. Corbett, S. J. Baylor, Yarsun Hsu
Applications on MPPs often require a high aggregate bandwidth of low-latency I/O to secondary storage. This requirement can be met by internal parallel I/O subsystems that comprise dedicated I/O nodes, each with processor, memory, and disks.

Massively parallel processors (MPPs), encompassing from tens to thousands of processors, are emerging as a major architecture for high-performance computers. Most major computer vendors offer computers with some degree of parallelism, and many smaller vendors specialize in producing MPPs. These machines are targeted for both grand-challenge problems and general-purpose computing.

Like any computer, an MPP's architectural design must balance computation, memory bandwidth and capacity, communication capabilities, and I/O. In the past, most design research focused on the basic compute and communications hardware and software. This led to unbalanced computers that had relatively poor I/O performance. Recently, researchers have focused on designing hardware and software for I/O subsystems in MPPs. Consequently, most current MPPs have an architecture based on an internal parallel I/O subsystem (the "Architectures with parallel I/O" sidebar describes some examples). In these computers, this subsystem encompasses a collection of I/O nodes, each managing and providing I/O access to a set of disks. The I/O nodes connect to other nodes in the system by the same switching network that connects the compute nodes.

In this article we'll examine why many MPPs use parallel I/O subsystems, what architecture is best for such a subsystem, and how to implement the subsystem. We'll also discuss how parallel file systems and their user interfaces can exploit the parallel I/O to provide enhanced services to applications.

The systems discussed in this article are mostly tightly coupled distributed-memory MIMD (multiple-instruction, multiple-data) MPPs. In some cases, we also discuss shared-memory and SIMD (single-instruction, multiple-data) machines. We'll discuss three node types. Compute nodes are optimized to perform floating-point and numeric calculations, and have no local disk except perhaps for paging, booting, and operating-system software. I/O nodes contain the system's secondary storage and provide the parallel file-system services. Gateway nodes provide connectivity to external data servers and mass-storage systems. In some cases, individual nodes can serve as more than one type. For example, the same nodes often handle I/O and gateway functions. The "Terminology" sidebar defines some other terms used in this article.
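To make the division of labor concrete, here is a small illustrative sketch in C of how a parallel file system might stripe a file's logical blocks round-robin across I/O nodes and their disks. It is not from the article; the node and disk counts and all names are hypothetical.

#include <stdio.h>

#define IO_NODES 4   /* dedicated I/O nodes (assumed) */
#define DISKS    8   /* disks per I/O node (assumed) */

typedef struct { int io_node; int disk; int local_block; } Placement;

/* Map a logical file block to an I/O node, a disk on that node,
   and a block offset on that disk, using round-robin striping. */
static Placement locate_block(long logical_block)
{
    Placement p;
    p.io_node     = (int)(logical_block % IO_NODES);           /* stripe across nodes first */
    p.disk        = (int)((logical_block / IO_NODES) % DISKS); /* then across a node's disks */
    p.local_block = (int)(logical_block / ((long)IO_NODES * DISKS));
    return p;
}

int main(void)
{
    for (long b = 0; b < 10; b++) {
        Placement p = locate_block(b);
        printf("block %ld -> node %d, disk %d, offset %d\n",
               b, p.io_node, p.disk, p.local_block);
    }
    return 0;
}

Striping of this kind is what lets N I/O nodes serve one file at roughly N times the bandwidth of a single node, which is the aggregate-bandwidth requirement the abstract opens with.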
{"title":"Parallel I/O subsystems in massively parallel supercomputers","authors":"D. Feitelson, P. Corbett, S. J. Baylor, Yarsun Hsu","doi":"10.1109/M-PDT.1995.414842","DOIUrl":"https://doi.org/10.1109/M-PDT.1995.414842","url":null,"abstract":"Applications on MPPs often require a high aggregate bandwidth of low-latency I/O to secondary storage. This requirement can met by internal parallel I/O subsystems that comprise dedicated I/O nodes, each with processor, memory, and disks.Massively parallel processors (MPPs), encompassing from tens to thousands of processors, are emerging as a major architecture for high-performance computers. Most major computer vendors offer computers with some degree of parallelism, and many smaller vendors specialize in producing MPPs. These machines are targeted for both grand-challenge problems and general-purpose computing.Like any computer, MPP architectural design must balance computation, memory bandwidth and capacity, communication capabilities, and I/O. In the past, most design research focused on the basic compute and communications hardware and software. This led to unbalanced computers that had relatively poor I/O performance. Recently, researchers have focused on designing hardware and software for I/O subsystems in MPPs. Consequently, most current MPPs have an architecture based on an internal parallel I/O subsystem (the \"Architectures with parallel I/O\" sidebar describes some examples). In these computers, this subsystem encompasses a collection of I/O nodes, each managing and providing I/O access to a set of disks. The I/O nodes connect to other nodes in the system by the same switching network that connects the compute nodes.In this article we'll examine why many MPPs use parallel I/O subsystems, what architecture is best for such a subsystem, and how to implement the subsystem. We'll also discuss how parallel file systems and their user interfaces can exploit the parallel I/O to provide enhanced services to applications.The systems discussed in this article are mostly tightly coupled distributed-memory MIMD (multiple-instruction, multiple-data) MPPs. In some cases, we also discuss shared-memory and SIMD (single-instruction, multiple-data) machines. We'll discuss three node types. Compute nodes are optimized to perform floating-point and numeric calculations, and have no local disk except perhaps for paging, booting, and operating-system software. I/O nodes contain the system's secondary storage, and provide the parallel file-system services. Gateway nodes provide connectivity to external data servers and mass-storage systems. In some cases, individual nodes can serve as more than one type. For example, the same nodes often handle I/O and gateway functions. The \"Terminology\" sidebar defines some other terms used in this article.","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129887052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}