Pub Date : 1996-01-23DOI: 10.1109/M-PDT.1996.532146
J. Zalewski
synchronous, and unbuffered and buffered, message passing. He upgrades algorithms studied in previous chapters so that mutual exclusion can be enforced on distributed systems. The chapter covers SR send and receive instructions, the powerful SR input (in) statement that implements extended rendezvous with two-way information flow, remote procedure calls, and client/server programming. Example programs show that the SR runtime system buffers dynamically allocated virtual-memory messages that are sent but not yet received, and that the SR runtime system's process (thread) table is dynamically allocated. This 50-page chapter demonstrates SR's power in the distributed environment, and brings together and greatly augments all that has been learned in Chapters 1 through 5. The chapter includes a useful summary of SR operations and their invocations, providing a good overview of the language. The chapter concludes with an Xtango color animation of the distributed dining philosophers program presented in The SR Language. The programs in Chapter 7 demonstrate SR's effectiveness as a language for writing parallel programs that perform numerically intensive computations and that have processes that must synchronize or communicate relatively frequently. The chapter presents coarse-grained parallel SR programs that solve the N Queens problem and the dining philosophers problem on multiple machines. Other programs implement different patterns of communication between collections of processes and provide examples of data parallelism and master-worker organization. The SR language environment contains SRWin, an interface to the X Window graphics system. SRWin is a lower-level interface than Xtango, and might be harder to use. To complete the book, Hartley has written an SR resource that serves as an interface to Xtango so that its drawing and moving procedures can be called directly from an SR program. 
He also presents an animation of Quicksort using SRWin, so that the reader can compare the two. Operating Systems Programming: The SR Programming Language is a carefully and concisely written introduction to concurrent and parallel programming and to the SR language. I have used it successfully in my undergraduate and graduate Operating Systems and Parallel Programming courses for the past year. This unique book works well as the concurrent programming supplement to a standard course text such as Operating System Concepts, 4th Ed., by Abraham Silberschatz and Peter Galvin (Addison-Wesley).
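The SR communication styles the review describes, buffered asynchronous send versus the rendezvous-style in statement, can be mimicked outside SR. The following Python sketch is purely illustrative; the names (mailbox, rendezvous_call, server) are mine, not SR's or the book's:

```python
import queue
import threading

# Asynchronous send: the runtime buffers messages that have been
# sent but not yet received, so the sender never blocks.
mailbox = queue.Queue()              # unbounded buffer, like SR's runtime buffering

def async_sender():
    for i in range(3):
        mailbox.put(i)               # returns immediately; message is buffered

# Extended rendezvous: the caller blocks until the server has
# processed the request and replied (two-way information flow).
def rendezvous_call(request_q, value):
    reply_q = queue.Queue(maxsize=1)
    request_q.put((value, reply_q))  # send the request plus a reply channel
    return reply_q.get()             # block until the server replies

def server(request_q):
    while True:
        value, reply_q = request_q.get()  # loosely analogous to SR's 'in' statement
        reply_q.put(value * 2)            # the reply completes the rendezvous

requests = queue.Queue()
threading.Thread(target=server, args=(requests,), daemon=True).start()
async_sender()
print(rendezvous_call(requests, 21))       # prints 42
print([mailbox.get() for _ in range(3)])   # buffered messages: [0, 1, 2]
```

The asynchronous sender finishes before anyone receives, while the rendezvous caller cannot proceed without an answer, which is the essential difference the chapter develops.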
Title: Solaris Multithreaded programming guide [Book Reviews]
Pub Date : 1996-01-23DOI: 10.1109/M-PDT.1996.532144
R. Tadeusiewicz
Neural networks have increased not only in the number of applications but also in complexity. This increase in complexity has created a tremendous need for computational power, perhaps more power than conventional scalar processors can deliver efficiently. Such processors are oriented toward numeric and data manipulation. Neurocomputing requirements (such as nonprogramming and learning) impose different constraints and demands on computer architectures and on the structure of multicomputer systems. We need new neurocomputers, dedicated to neural network applications. This is the scope of Parallel Digital Implementations of Neural Networks. The surge of interest in neural networks, which started in the mid-eighties, stemmed largely from advances in VLSI technology. But hardware implementations of neural networks are still not as popular as the software tools for neural network modeling, learning, and applications. Information on hardware neural network implementations is still too limited and exotic for many neural network users. This book fills an important gap for such users. Neural networks have recently become a subject of such great interest to so many scientists, engineers, and students that you can easily find many books and papers about implementations (for example, Analogue Neural VLSI, by A. Murray and L. Tarassenko, Chapman & Hall; Neurocomputers: An Overview of Neural Networks in VLSI, by M. Glesner and W. Poechmueller, Chapman & Hall; and VLSI for Neural Networks and Artificial Intelligence, by J.G. Delgado-Frias and W.R. Moore, Plenum Press). However, this book is different. It is well focused; it does not discuss all forms of VLSI neural network implementations, but presents only the most interesting and most important: parallel digital implementations. No analog circuits, no serial architectures, no computer models. 
Only digital devices (general-purpose processors, such as array processors and DSP chips, or dedicated systems such as neurocomputers or digital neurochips), and only parallel solutions. This narrow focus is good, because the digital implementations of neural networks provide advantages such as freedom from noise, programmability, higher precision, and reliable storage devices. The book has three main sections:
Title: Parallel digital implementations of neural networks [Book Reviews]
Pub Date : 1996-01-23DOI: 10.1109/M-PDT.1996.532141
J. Zalewski
reviewed by Janusz Zalewski, Embry-Riddle Aeronautical University. This book, part of the SunSoft Press series, is subtitled "A Technical Survey of Multiprocessor/Multithreaded Systems Using Sparc, Multilevel Bus Architectures and Solaris (SunOS)." So, it covers only computer systems from Sun Microsystems Computer Corporation. Its purpose is "to bring together in one volume a coherent description of the elements that provide for the design and development of multiprocessor systems architectures from Sun Microsystems." It assumes that the reader understands computer architecture. As the subtitle suggests, the book progresses smoothly from processor hardware and its implementations to bus architectures, to low-level programming that includes threads and lightweight processes, and to complete systems. The book starts with general material on multiprocessing and on using Sun implementations. Ben Catanzaro correctly observes that because of physical limitations in making chips faster, system performance will depend more and more on advances in computer architecture and in operating-systems technology. This clears the way to using multiple processors. He briefly explains symmetric multiprocessing (SMP), where each processor shares the kernel image in memory and can execute its code concurrently, and asymmetric multiprocessing (ASMP), based on a master/slave relationship between participating processors. The book also outlines the Sun solution for SMP: Sparc-CPU modules equipped with caches tied to an interconnect bus, to which the I/O subsystem and physical memory connect separately. Next, the book describes the Sparc architecture and its unique register window model, compares versions 7, 8, and 9 of the Sparc specifications, and outlines Sparc chip implementations, including a brief note on UltraSparc. 
It then outlines the Sparc memory model, explaining the differences between total-store ordering and partial-store ordering, and describes the memory management unit in detail. The next major subject is bus architectures. MBus (fully specified in the 58-page appendix) is a processor-to-memory bus, optimized for high-speed connection of the Sparc-CPU modules to physical memory and special I/O modules. Its Level 2 protocol provides for cache-coherent shared-memory multiprocessing and supports six transactions (ordinary read/write and four transactions supporting cache coherence: coherent read, coherent invalidate, coherent read & invalidate, and coherent write & invalidate). Its basic characteristics include multiplexed address/control with 64 bits of data and 36 bits of physical addressing, centralized arbitration, and up to 128-byte burst transfers. A chapter on designing shared-memory multiprocessor systems with MBus provides many useful details regarding cache-coherence protocols (mostly, MBus implementation of a …
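The coherence transactions listed above follow the general write-invalidate idea: before a processor writes a shared line, other caches' copies are invalidated. This deliberately simplified toy model (class names and structure are mine, not the MBus specification's) shows that pattern:

```python
# Toy write-invalidate model: each cache holds value copies per address.
# Before a write, the bus "snoops" and drops other caches' copies,
# loosely analogous to MBus's coherent write & invalidate transaction.

class Cache:
    def __init__(self):
        self.lines = {}                      # addr -> cached value

    def coherent_read(self, addr, memory):
        if addr not in self.lines:           # miss: fetch from memory
            self.lines[addr] = memory[addr]
        return self.lines[addr]

    def invalidate(self, addr):
        self.lines.pop(addr, None)           # drop a stale copy, if present

class Bus:
    def __init__(self, memory):
        self.memory = memory
        self.caches = []

    def coherent_write_invalidate(self, writer, addr, value):
        for cache in self.caches:            # all other caches lose the line
            if cache is not writer:
                cache.invalidate(addr)
        writer.lines[addr] = value
        self.memory[addr] = value

memory = {0x10: 1}
bus = Bus(memory)
c0, c1 = Cache(), Cache()
bus.caches = [c0, c1]

c0.coherent_read(0x10, memory)
c1.coherent_read(0x10, memory)               # both caches now hold the line
bus.coherent_write_invalidate(c0, 0x10, 99)  # c1's copy is invalidated
print(c1.coherent_read(0x10, memory))        # re-fetches the new value: prints 99
```

A real bus protocol also tracks line states and handles cache-to-cache transfers; the sketch only shows why stale copies never survive a coherent write.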
Title: Multiprocessor system architectures [Book Reviews]
Pub Date : 1996-01-23DOI: 10.1109/M-PDT.1996.532148
J. Zalewski
and Posix, and with a discussion of barriers, events, and spin locks. They also briefly present such problems as deadlocks, race conditions, priority inversion, and reentrancy. The discussion of race conditions, in this part and in a later section, is very interesting, although I spotted one error. A variable doubled in one thread and decremented in another gives two different results, depending on the threads' order of execution. Contrary to what the authors say, this is not a race condition but an ordinary design error. This is followed by a discussion of Posix calls not available in the Solaris thread library, that is, those related to thread attributes, thread cancellation, and scheduling policies. Next, Lewis and Berg describe several tools for multithreaded programming and offer some programming hints. The chapter of examples that follows is technically the most interesting part of the book, because of the level of detail covered. Two of the appendixes present a very valuable list of all calls in the Solaris threads library and in Posix. The authors discuss each call individually, unlike most books on Unix, which just provide manpage (manual page) descriptions. The authors wrote Threads Primer: A Guide to Multithreaded Programming as an introductory text to give experienced C/Unix programmers a solid understanding of multithreading fundamentals. The book achieves this goal, but less-experienced programmers can also benefit from it. However, be warned: the less "technical" you are, the less you will gain.
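The order-dependence described above is easy to reproduce. This illustrative Python sketch (mine, not from the book) forces each of the two possible serializations of an atomic doubling and an atomic decrement:

```python
import threading

def double(x):
    x['v'] *= 2       # assume each update is atomic (e.g., lock-protected)

def decrement(x):
    x['v'] -= 1

def run(order):
    """Run the two updates in the given serialization, each in its own thread."""
    x = {'v': 10}
    for f in order:
        t = threading.Thread(target=f, args=(x,))
        t.start()
        t.join()      # force this particular order of execution
    return x['v']

# Even when each update is atomic, the final value depends on which
# thread runs first -- the order-dependent result the reviewer mentions:
print(run([double, decrement]))  # (10 * 2) - 1 = 19
print(run([decrement, double]))  # (10 - 1) * 2 = 18
```

Whether one calls this a race condition or a design error, the program is only correct if both outcomes are acceptable.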
Title: Programming with threads [Book Reviews]
Pub Date : 1996-01-23DOI: 10.1109/M-PDT.1996.532142
F. Reynolds
Reliable Distributed Computing with the Isis Toolkit, edited by Kenneth P. Birman and Robbert Van Renesse, 398 pp., $50, IEEE Computer Society Press, Los Alamitos, Calif., 1994, ISBN 0-8186-5342-6.
features, barely mentioning the host-based and symmetric configurations and not mentioning direct virtual memory addressing, a feature unique among buses. The book also discusses SBus's operation in a hierarchy with MBus. An outline follows of two other buses in a hierarchy, XBus and XDbus, developed jointly by Sun and Xerox. Both are packet-switched buses, which enable data routing during transfer rather than before, unlike all other circuit-switched buses. XBus is primarily a chip interconnect; XDbus can be used at the chip, board, or backplane level. To maintain multiprocessor cache coherence, XDbus provides a hardware protocol that is a generalization of the multicopy write-broadcast protocol. Other interesting features include the use of Gunning Transceiver Logic (GTL) transceiver technology, a separate transaction (rather than dedicated lines) to transport interrupts, and full support for the SWAP synchronization primitive. Two chapters on software complement the material on Sun's approach to symmetric multiprocessing. One discusses a general model of the multithreaded architecture used in Solaris for threads, lightweight processes, and kernels. Another covers programming facilities and their use at the application level: mutexes, condition variables, semaphores, reader/writer locks, and signals. The book ends with a chapter on three Sun multiprocessor implementations (SparcServer 600MP, SparcCenter 2000, and SparcServer 1000), and with a chapter on future trends, the weakest in the whole book, because it is very nontechnical and superficial. Multiprocessor System Architectures can serve as an overview of the Sun technology as well as a reference handbook for designers of multiprocessor systems based on Sun machines. 
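The application-level facilities listed here (mutexes, condition variables, semaphores) follow the same pattern in most thread libraries. A minimal sketch using Python's threading module, an analogue rather than the Solaris or Posix C API, shows the standard mutex-plus-condition-variable handoff:

```python
import threading

# A one-slot mailbox guarded by a mutex and a condition variable,
# the same mutex/condvar pairing the Solaris and Posix APIs expose.
lock = threading.Lock()
nonempty = threading.Condition(lock)
slot = []

def producer():
    with nonempty:                   # acquire the mutex
        slot.append('data')
        nonempty.notify()            # wake a waiting consumer

def consumer(result):
    with nonempty:
        while not slot:              # re-check the predicate: wakeups can be spurious
            nonempty.wait()          # atomically release the mutex and sleep
        result.append(slot.pop())

result = []
c = threading.Thread(target=consumer, args=(result,))
c.start()
producer()
c.join()
print(result)                        # prints ['data']
```

The while loop around wait() is the detail most introductory texts stress: waiting must always re-test its condition after waking.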
However, those who need details about particular subjects should refer to other publications, such as The Sparc Architecture Manual, edited by David L. Weaver and Tom Germond (Prentice Hall); SBus Handbook, by Susan A. Mason (Prentice Hall); Solaris 2.X Internals and Architecture, by John R. Graham (McGraw-Hill); and Threads Primer: A Guide to Multithreaded Programming, by Bil Lewis and Daniel J. Berg (Prentice Hall) (see the review on page 76 of this issue). My only other complaint is that this book unnecessarily uses sales language; it is too often hard to distinguish commercial propaganda from valuable technical information.
Title: Reliable distributed computing with the Isis toolkit [Book Reviews]
Pub Date : 1996-01-23DOI: 10.1109/M-PDT.1996.532145
G. Lippman
Operating Systems Programming is a self-contained guide to classic operating-system problems, concurrent programming, and the Synchronizing Resources (SR) language. SR, based on C and Pascal, is very understandable to readers with programming knowledge. So, I recommend this book to students studying operating systems and to programmers interested in learning concurrent programming and studying these problems and their solutions in a readily accessible working language. (SR, developed at the University of Arizona, is fully described in The SR Programming Language: Concurrency in Practice, by Gregory R. Andrews and Ronald A. Olsson [Benjamin/Cummings]. For more information on SR, access http://www.cs.arizona.edu/sr. The compiler and utilities, available by anonymous ftp at ftp://ftp.cs.arizona.edu/sr, are readily installed on computer systems running Unix, such as a networked Sun system, or on PCs running Linux. Linux is also available by anonymous ftp, at ftp://sunsite.unc.edu/pub/Linux, or on CD-ROM. For more information on Linux, access http://www.linux.org.) Stephen Hartley has skillfully woven together a description of the SR language and SR solutions of several classic OS problems, with emphasis on the mutual exclusion of concurrent processes, race conditions, critical sections, process synchronization, interprocess communication, and parallel computing. These solutions use semaphores, monitors, and message-passing techniques on single- and multiple-CPU computer systems. (The solutions are also available by anonymous ftp, to be compiled and run by the reader.) The book has seven chapters, followed by a list of the example programs and a bibliography. Each chapter contains descriptive information, SR programs for solving the OS problems, and laboratory exercises designed to extend these solutions. Chapter 1 reviews OS programming, hardware and software interrupts, hardware protection, and CPU scheduling. 
Chapter 2 presents SRs sequential features first, so that readers who have not previously written concurrent or parallel programs can see how closely SR resembles the languages they already know. Elementary programs for computing factorial, sorting, and string manipulation make the presentation very concrete. Hartley demonstrates how to use Unix command-line arguments in an SR program, and describes and uses the SR resource, which is effectively equivalent to the object or module in other languages. He then shows how to animate SR programs with the Xtango software system developed by John T. Stasko and Doug Hayes. Xtango has been implemented effcctively on Unixand Linux-based computers. (Xtango is available by anonymous ftp from Georgia Tech University at ftp.cc.gatech.edu/pub/people/stasko.) Chapter 3 introduces concurrent programming in which multiple processes manipulate shared data. T o preserve data integrity, solution of the critical section problem enforces mutual exclusion of the processes relative to this data. Hartley shows how several processes can
be started from a single SR resource, or from multiple resources running on different virtual or physical CPUs. The chapter gives a concise SR solution to the producer-consumer problem with a bounded shared buffer. Readers first attempt to solve the two-process critical section problem themselves, before the solutions of T. Dekker and Gary Peterson, and a solution to the multiple-process critical section problem, are presented. Chapter 4 presents the SR implementation of the semaphore, originally devised by Edsger Dijkstra to solve the multiple-process critical section problem while eliminating busy waiting. In SR, semaphores are implemented fully as objects, accessible through the traditional P and V operations; this implementation removes the need for Unix system calls in algorithms that use semaphores. The chapter presents semaphore-based solutions to the producer-consumer, sleeping barber, and readers-writers problems. Hartley uses binary semaphores, which can be constructed with test-and-set or equivalent uninterruptible assembly-language instructions, to build general semaphores. He elaborates this principle in detail with four carefully classified examples, which greatly aids understanding. The chapter also includes an Xtango animation of a dining philosophers solution. Chapter 5 describes the monitor, an application of the class found in object-oriented languages. The SR monitor is a more structured tool for protecting shared data or shared hardware such as disk drives or printers. The chapter gives monitor-based SR solutions to several problems, including the dining philosophers problem, among others. Chapter 6 presents the use of different forms of message passing in SR.
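Hartley builds general semaphores from binary ones. The same construction can be sketched outside SR; the following Python analogue is illustrative only (the class name and structure are my own, not the book's), using `threading.Lock` objects as the binary semaphores in a standard two-lock construction:

```python
import threading

class GeneralSemaphore:
    """A counting semaphore built from two binary semaphores (locks),
    in the spirit of the construction the review describes."""

    def __init__(self, value=1):
        self.count = value
        self.mutex = threading.Lock()   # binary semaphore protecting count
        self.gate = threading.Lock()    # binary semaphore blocking waiters
        if value == 0:
            self.gate.acquire()         # no permits: waiters must block

    def P(self):
        self.gate.acquire()             # pass only if a permit exists
        with self.mutex:
            self.count -= 1
            if self.count > 0:
                self.gate.release()     # let the next waiter through

    def V(self):
        with self.mutex:
            self.count += 1
            if self.count == 1:
                self.gate.release()     # reopen the gate after 0 -> 1
```

A quick sanity check: `GeneralSemaphore(2)` allows two `P` operations without blocking; a third blocks until some thread performs `V`.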
Operating systems programming: the SR programming language [Book Reviews]
G. Lippman
DOI: 10.1109/M-PDT.1996.532145
IEEE Parallel & Distributed Technology: Systems & Applications, 1996-01-23
Pub Date : 1996-01-23  DOI: 10.1109/M-PDT.1996.532147
J. Zalewski
Chapter 6 introduces SR's asynchronous and synchronous, and unbuffered and buffered, message passing. He upgrades algorithms studied in previous chapters so that mutual exclusion can be enforced on distributed systems. The chapter covers SR send and receive statements, the powerful SR input (in) statement that implements extended rendezvous with two-way information flow, remote procedure calls, and client/server programming. Example programs show that the SR runtime system buffers dynamically allocated virtual memory messages that are sent but not yet received, and that the SR runtime system's process (thread) table is dynamically allocated. This 50-page chapter demonstrates SR's power in the distributed environment, and brings together and greatly augments all that has been learned in Chapters 1 through 5. The chapter includes a useful summary of SR operations and their invocations, providing a good overview of the language. The chapter concludes with an Xtango color animation of the distributed dining philosophers program presented in The SR Language. The programs in Chapter 7 demonstrate SR's effectiveness as a language for writing parallel programs that perform numerically intensive computations and that have processes that must synchronize or communicate relatively frequently. The chapter presents coarse-grained parallel SR programs that solve the N Queens problem and the dining philosophers problem on multiple machines. Other programs implement different patterns of communication between collections of processes and provide examples of data parallelism and master-worker organization. The SR language environment contains SRWin, an interface to the X Window System graphics. SRWin is a lower-level interface than Xtango is, and might be harder to use. To complete the book, Hartley has written an SR resource that serves as an interface to Xtango so that its drawing and moving procedures can be called directly from an SR program.
He also presents an animation of Quicksort using SRWin, so that the reader can compare the two interfaces. Operating Systems Programming: The SR Programming Language is a carefully and concisely written introduction to concurrent and parallel programming and to the SR language. I have used it successfully in my undergraduate and graduate Operating Systems and Parallel Programming courses for the past year. This unique book works well as the concurrent programming supplement to a standard course text such as Operating System Concepts, 4th Ed., by Abraham Silberschatz and Peter Galvin (Addison-Wesley).
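The extended rendezvous the review attributes to SR's in statement means the caller blocks while the callee receives the request, computes, and sends a result back. A rough analogue of that two-way flow can be sketched in Python (illustrative only; the queue-based channel and the function names are my own, not SR's):

```python
import queue
import threading

def server(requests):
    """Receive (args, reply_channel) pairs, compute, and reply.
    Loosely analogous to a loop around SR's 'in' statement."""
    while True:
        args, reply = requests.get()
        if args is None:            # sentinel: shut the server down
            break
        reply.put(args * args)      # second leg of the two-way flow

def call(requests, x):
    """Client side: send a request, then block for the reply,
    like an SR 'call' invocation of an input operation."""
    reply = queue.Queue(maxsize=1)
    requests.put((x, reply))
    return reply.get()

requests = queue.Queue()
worker = threading.Thread(target=server, args=(requests,), daemon=True)
worker.start()
print(call(requests, 7))            # → 49
requests.put((None, None))
worker.join()
```

Unlike SR's asynchronous send, `call` here does not return until the server has produced the result, which is what makes the rendezvous "extended."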
Threads primer [Book Reviews]
Pub Date : 1996-01-23  DOI: 10.1109/M-PDT.1996.532143
E. Sorton
years these members have written about their research's technical details and its problem domain or context. Consequently, Birman and Van Renesse were able to select from a rich body of work. The book has 21 chapters, which are divided into four sections. The "Fundamentals" section introduces the problems Isis is intended to deal with and the general nature of the Isis approach. This section defines and discusses at length the virtual synchrony programming model of distributed systems. Two chapters deal with controversies. One argues RPC's inadequacy as a tool for constructing reliable distributed systems; the other defends the utility of causally ordered group communication. (Readers interested in the honorable opposition's side of the second controversy should read "Understanding the Limitations of Causally and Totally Ordered Communication," by David Cheriton and Dale Skeen, in the 1991 Proceedings of the Symposium on Operating Systems Principles, ACM Press.) "Redesign," the second section, describes the motivation, design, and new research initiatives of Horus. When the book was being written, Horus was very much a work in progress. Nevertheless, this section's chapters capture the spirit of Horus's design, the direction of the ongoing research, and many of the lessons learned during the development of the original Isis Toolkit. The "Protocol" section contains chapters detailing the key group-communication and fault-detection protocols on which Isis and Horus are built. These are among the most technically challenging chapters. Readers who are "notation averse" might be inclined to skip Chapters 12, 13, and 14. I would encourage those who are interested in more than a superficial understanding of how the system works to persevere. As is often the case when dealing with problems associated with distributed consensus, the Isis protocols are not unduly complex, but are in some ways quite subtle.
These chapters present the material carefully and, for the most part, straightforwardly. The final section, "Tools and Applications," describes a fairly broad range of applications that have been built with Isis. Meta is a toolkit for constructing distributed reactive systems, which include process-control systems. The Paralex programming environment is intended to simplify designing and building parallel, distributed programs. The MIS query and reporting system was built for the World Bank's Planning and Budgeting Department. Distributed ML provides distributed computing extensions to the standard ML (metalanguage) programming language. Each chapter explains …
DCE: A guide to developing portable applications [Book Reviews]
Pub Date : 1996-01-22  DOI: 10.1109/M-PDT.1996.494612
J. Zalewski
All three books are collections of articles on related subjects that were previously published, mostly in IEEE Computer Society publications. They appear in an unnamed, albeit known for about a decade and highly rated, series of IEEE tutorials. The books have very similar contents; therefore, their joint review seems appropriate. Interconnection Networks for Multiprocessors and Multicomputers has 10 chapters and over 50 articles, including chapter introductions. The first chapter, written by the editors, introduces the entire book and gives a proper perspective on its contents. Four subsequent chapters discuss interconnections from the point of view of their topologies. In particular, there are articles on Clos and Benes networks, multistage networks, buses, and crossbars. Although it is hard to distinguish among the articles in this collection and point to one of particular value, I must confess that I read with great pleasure Leiserson's 10-year-old article on fat trees. Although the chapters just mentioned discuss individual properties of various topologies, the next three chapters specifically address general properties of interconnection networks. These properties include routing (to provide required functionality), reliability, and performance. I took a closer look at the chapter on "Fault-Tolerance and Reliability." As the editors point out, an interconnection network's ability to avoid failures is usually measured as its reliability or availability. A network achieves high reliability or availability normally through some form of fault tolerance. Thus, fault tolerance, in the form of various kinds of redundancy (in space or time), is the major subject of all the articles in this chapter, which provide reasonably complete coverage of the most important issues. I have mixed feelings about the last chapters of the book: one on algorithm applications, and one that includes case studies.
The first attempts to cover subjects related to designing applications and algorithms for parallel machines. This area is broad enough to take at least another volume (such as Introduction to Parallel Algorithms and Architectures, by F.T. Leighton, Morgan Kaufmann, 1992), so providing adequate coverage in one chapter is, by definition, impossible. However, the chapter presenting case studies is reasonably complete and includes articles on several research machines, as well as on those once commercially available. In summary, this book is a good volume providing a wealth of valuable information on the theoretical aspects of interconnecting multiple processors. I commend the editors for writing comprehensive introductions to the chapters, a custom less and less common in this series of IEEE tutorials. On the negative side, the book doesn't even mention certain important topics such as cache coherence or newer solutions such as ATM, but such material is probably suited for other volumes; no one can cover everything important in a single book like this. The second book, Interconnection Networks for High-Performance Parallel Computers, is surprisingly similar, not only by
title but in all its contents. It contains about 70 papers, divided into 11 chapters. The areas covered are basically the same, but with a different emphasis. I …
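Routing is one of the general network properties the reviewer singles out. As a concrete illustration of the kind of algorithm such chapters analyze (my own example, not taken from the books), dimension-order or "e-cube" routing in a hypercube corrects one differing address bit per hop:

```python
def ecube_route(src, dst, dim):
    """Dimension-order (e-cube) routing in a dim-dimensional hypercube:
    flip one differing address bit per hop, lowest dimension first.
    The hop count equals the Hamming distance between src and dst."""
    path = [src]
    node = src
    for d in range(dim):
        if (src ^ dst) & (1 << d):   # bit d differs: traverse that link
            node ^= (1 << d)
            path.append(node)
    return path

print(ecube_route(0b000, 0b101, 3))   # → [0, 1, 5]
```

Because the dimensions are always corrected in the same order, e-cube routing is deterministic and deadlock-free on a hypercube, which is one reason topology-specific routing gets so much attention in these collections.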
Multiprocessor Performance Measurement and Evaluation [Book Reviews]
Pub Date : 1996-01-22  DOI: 10.1109/M-PDT.1996.494609
J. Madey
1, performance. An outline of the design and a prediction of future trends follow. In conclusion, the authors make the very valid statement that "the future of general-purpose, high-performance multiprocessing belongs to SSMPs. . . . Their obvious advantages in ease of use, performance, and cost-performance will make them the clear winner over other alternatives." I do have one minor criticism: it seems that the list of references is erroneous. For example, the reference [LLG] appears before [Bi], and unreachable entries are included, such as "T.
Distributed Systems: Concepts and Design [Book Reviews]