Distributed graphics: where to draw the lines?

R. Phillips, M. Pique, C. Moler, J. Torborg, D. Greenberg
{"title":"分布式图形:界限在哪里?","authors":"R. Phillips, M. Pique, C. Moler, J. Torborg, D. Greenberg","doi":"10.1145/77276.77291","DOIUrl":null,"url":null,"abstract":"Good morning, ladies and gentlemen. Welcome to the panel entitled Distributed Graphics: Where to Draw the Lines? My name is Dick Phillips. I'm from Los Alamos National Laboratory and I'll be your chair this session. I'll be joined by a great group of panelists --- friends and colleagues all. Our second speaker following me will be Michael Pique from Scripps Clinic. Following him will be Cleve Moler from Ardent Computer. After Cleve we'll hear from Jay Torborg who is associated with Alliant Computer. And batting in the clean-up position is going to be Don Greenberg from Cornell University. I have to give you one administrative announcement. You probably know this by now if you've been attending panel sessions all week. But once again, these proceedings are being audio taped for subsequent transcription and publication. That means that when we open up the session for question and answer, which will be in another 30 or 40 minutes, if you would like to ask a question, you must come to one of the microphones that's situated in the aisles. They are just about in every aisle, part way back and close to the front. And to be recognized, please state your name and affiliation, and I'll remind you of that when we get into the question and answer session. The title of our panel begs a question --- where to draw the lines. Well, the trivial answer to that question is obviously on the display that you have available. The real implication of that title was where to draw the lines of demarcation for graphics processing. You're going to hear from me and from my other panelists several different points of view. Just when you thought everything was settling down and it was clear that all graphics processing was moving out to workstations or graphic supercomputers, you're going to hear at least two different points of view that may sound a bit nostalgic. Let me take you back in time just a bit, and this is a greatly oversimplified graphics time line --- where we have been and where we are and where we're going in the evolution of visualization capability. I'm not going to dwell too much on the part of this time line to the left. We're really interested in what's up at the right hand side. But I can't resist pointing out that back in the days which I have labeled pre-history here, a lot of us can remember getting excited about seeing output in the form of a printer plot, thinking that we were doing visualization and that that was really computer graphics. And I for one can remember the first time I had 300 band available to me on a storage tube terminal and I thought this is blazing speed. I cannot believe what kind of graphics capability I have got now. Where things really get interesting though, if you move along that time line to the right, up into the mid 1980s, I have put some I think seminal events on there --- Silicon Graphics introducing the geometry engine in the workstation. Well, workstations in general. That was a real watershed event that has changed the way that we do graphics and where we do graphics considerably. Then as we move into the later part of the 1980s, I have noted the appearance of graphics accelerators for workstations. These are specialized plug-in boards that have all of the graphics features like Phong shading and high speed transformations built into them. 
Graphic supercomputers like Ardent and Stellar and HP/Apollo have appeared in that time frame. Then we look a little bit further into the '90s and I have indicated the occurrence of very high speed networks is going to have a profound effect on the way we do graphics display and how we distribute the activities that are associated with it. Let me give a very oversimplified couple of statements on what gave rise to the need for specialized graphics hardware --- the accelerators that I talked about and indeed the graphic supercomputers. As I've said, to terribly oversimplify, it was certainly the need for real time transformations and rendering. All of the advances in computer graphics over the last 10 or 15 years, many of them we can now find built into the hardware of the workstations and graphic supercomputers that we have available to us. One of the other reasons for wanting to bring all of that high speed computational capability right to the desktop, as it is, was to compensate for the lamentably low communication bandwidths which we had then --- which we have now, as a matter of fact. And I'm even including Ethernet and I'll be bold enough to say that the FDDI, which is not really upon us, is also in that lamentably slow category for many of the kinds of things we'd like to do. It turns out --- in my view, at least --- that that specialized hardware, wonderful as it is for many, many applications, and make no mistake, it has revolutionized the way that we can do interactive graphics --- it's not useful for all applications. One application that I've listed as a first bullet is one where we're doing specialized rendering --- research rendering let's call it. Not everything we wanted --- not all the research in rendering has been done --- right? So Gouraud shading and Phong shading and so on is not the be-all end-all necessarily. There's a lot of interesting work being done. It has been reported at this conference, as a matter of fact. That is really a small reason for wanting to do the graphics computing on yet another system. But the next one that I've listed is a very compelling reason in many installations, particularly where large scale heavy-duty simulations are being done. I've mentioned that I'm from Los Alamos and that's certainly one center where there are computations that are done on supercomputers and that need to be visualized, and because of the nature of the computations all of the specialized hardware in accelerator boards and in graphic supercomputers is not necessarily useful. Indeed, I'll argue that in many cases it's of no value whatsoever. The last point I want to make here --- before I show you a couple of specific slides of these simulations that I'm referring to --- is that what will happen is that the emergence of very high speed networks --- both local networks and international and national networks --- is going to provide a way for these large scale simulations to take advantage of graphics hardware that does not necessarily have the specialized capabilities we just talked about. At Los Alamos a group of folks in our network engineering department have taken the lead in defining what is called the High Speed Channel specification. Before I get to that, let me just give you an idea of the kinds of computations that are being done at Los Alamos --- and I know at many other places --- that simply can't take advantage of the specialized hardware that I've just been referring to. 
This happens to be the mesh that's associated with a finite difference computation for some simulation. It doesn't really matter what it is, but I just wanted to show you that we're talking typically tens of thousands of individual mesh points, and I can guarantee you this is a fairly sparse mesh compared to the kinds of things that most of our users encounter. The point in showing you this is that as the simulation evolves in time, there is a different version of this mesh for every single time step. The scientists who are doing the simulation would like to be able --- either after the fact or perhaps if the timing is appropriate --- to steer the computations that are going on by being able to visualize the evolution in time of meshes like this. And they need to be sent to some display device. And ideally you'd like to do that at the rate of 24 frames per second, but we can go through some computations and find that's simply not feasible with the kind of network bandwidths that are available today. The specialized hardware that I've just been talking about gives us no help at all here, because what I need to be able to do is to send one instance of this mesh to the display device for every time step, as I mentioned a moment ago. In addition, the scientists at Los Alamos and other places would like to be able to have the counterpart of a numerical laboratory. This is completely synthesized, but you can --- and many of you may have had experience in the past with visualization techniques and fluid flow, where you can actually see shock waves by various lighting techniques. The intent here is to be able to simulate that situation and be able to show the flow evolving --- not necessarily as it's being computed, but perhaps after the fact --- but be able to pick out important points by seeing a temporal evolution of that particular simulation. So those are just a couple of examples that have given rise to the development of a high speed channel specification and an accompanying network at Los Alamos, and I wanted to say right now --- just so you don't think oh, great, a special purpose solution for a national laboratory that no one else will ever be able to use --- not so. Many of you out there I am sure know --- and I know several of our panelists are either aware of or working on high speed channel hardware for their particular products. There are about 30 vendors that have signed on to the high speed channel specification. In addition, Digital Equipment Corporation is building the corresponding network, which is called CP*. I'm not going to go into network details here because that's not my point. I really wanted to describe what is now a new highway for data transmission that facilitates my job, which is to help the scientists do the visualization that they need to do. So what we're seeing here is a very simplified view of how this high speed network, which is spec'ed at 800 megabits and a corresponding cross bar switch-style network that is going to allow effective point-to-point connections between the various components of the computing environment --- the supercomputers, the data storage devices, and the display devices. 
And each --- unlike","PeriodicalId":405574,"journal":{"name":"ACM SIGGRAPH 89 Panel Proceedings","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1989-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Distributed graphics: where to draw the lines?\",\"authors\":\"R. Phillips, M. Pique, C. Moler, J. Torborg, D. Greenberg\",\"doi\":\"10.1145/77276.77291\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Good morning, ladies and gentlemen. Welcome to the panel entitled Distributed Graphics: Where to Draw the Lines? My name is Dick Phillips. I'm from Los Alamos National Laboratory and I'll be your chair this session. I'll be joined by a great group of panelists --- friends and colleagues all. Our second speaker following me will be Michael Pique from Scripps Clinic. Following him will be Cleve Moler from Ardent Computer. After Cleve we'll hear from Jay Torborg who is associated with Alliant Computer. And batting in the clean-up position is going to be Don Greenberg from Cornell University. I have to give you one administrative announcement. You probably know this by now if you've been attending panel sessions all week. But once again, these proceedings are being audio taped for subsequent transcription and publication. That means that when we open up the session for question and answer, which will be in another 30 or 40 minutes, if you would like to ask a question, you must come to one of the microphones that's situated in the aisles. They are just about in every aisle, part way back and close to the front. And to be recognized, please state your name and affiliation, and I'll remind you of that when we get into the question and answer session. The title of our panel begs a question --- where to draw the lines. Well, the trivial answer to that question is obviously on the display that you have available. The real implication of that title was where to draw the lines of demarcation for graphics processing. You're going to hear from me and from my other panelists several different points of view. Just when you thought everything was settling down and it was clear that all graphics processing was moving out to workstations or graphic supercomputers, you're going to hear at least two different points of view that may sound a bit nostalgic. Let me take you back in time just a bit, and this is a greatly oversimplified graphics time line --- where we have been and where we are and where we're going in the evolution of visualization capability. I'm not going to dwell too much on the part of this time line to the left. We're really interested in what's up at the right hand side. But I can't resist pointing out that back in the days which I have labeled pre-history here, a lot of us can remember getting excited about seeing output in the form of a printer plot, thinking that we were doing visualization and that that was really computer graphics. And I for one can remember the first time I had 300 band available to me on a storage tube terminal and I thought this is blazing speed. I cannot believe what kind of graphics capability I have got now. Where things really get interesting though, if you move along that time line to the right, up into the mid 1980s, I have put some I think seminal events on there --- Silicon Graphics introducing the geometry engine in the workstation. Well, workstations in general. 
That was a real watershed event that has changed the way that we do graphics and where we do graphics considerably. Then as we move into the later part of the 1980s, I have noted the appearance of graphics accelerators for workstations. These are specialized plug-in boards that have all of the graphics features like Phong shading and high speed transformations built into them. Graphic supercomputers like Ardent and Stellar and HP/Apollo have appeared in that time frame. Then we look a little bit further into the '90s and I have indicated the occurrence of very high speed networks is going to have a profound effect on the way we do graphics display and how we distribute the activities that are associated with it. Let me give a very oversimplified couple of statements on what gave rise to the need for specialized graphics hardware --- the accelerators that I talked about and indeed the graphic supercomputers. As I've said, to terribly oversimplify, it was certainly the need for real time transformations and rendering. All of the advances in computer graphics over the last 10 or 15 years, many of them we can now find built into the hardware of the workstations and graphic supercomputers that we have available to us. One of the other reasons for wanting to bring all of that high speed computational capability right to the desktop, as it is, was to compensate for the lamentably low communication bandwidths which we had then --- which we have now, as a matter of fact. And I'm even including Ethernet and I'll be bold enough to say that the FDDI, which is not really upon us, is also in that lamentably slow category for many of the kinds of things we'd like to do. It turns out --- in my view, at least --- that that specialized hardware, wonderful as it is for many, many applications, and make no mistake, it has revolutionized the way that we can do interactive graphics --- it's not useful for all applications. One application that I've listed as a first bullet is one where we're doing specialized rendering --- research rendering let's call it. Not everything we wanted --- not all the research in rendering has been done --- right? So Gouraud shading and Phong shading and so on is not the be-all end-all necessarily. There's a lot of interesting work being done. It has been reported at this conference, as a matter of fact. That is really a small reason for wanting to do the graphics computing on yet another system. But the next one that I've listed is a very compelling reason in many installations, particularly where large scale heavy-duty simulations are being done. I've mentioned that I'm from Los Alamos and that's certainly one center where there are computations that are done on supercomputers and that need to be visualized, and because of the nature of the computations all of the specialized hardware in accelerator boards and in graphic supercomputers is not necessarily useful. Indeed, I'll argue that in many cases it's of no value whatsoever. The last point I want to make here --- before I show you a couple of specific slides of these simulations that I'm referring to --- is that what will happen is that the emergence of very high speed networks --- both local networks and international and national networks --- is going to provide a way for these large scale simulations to take advantage of graphics hardware that does not necessarily have the specialized capabilities we just talked about. 
At Los Alamos a group of folks in our network engineering department have taken the lead in defining what is called the High Speed Channel specification. Before I get to that, let me just give you an idea of the kinds of computations that are being done at Los Alamos --- and I know at many other places --- that simply can't take advantage of the specialized hardware that I've just been referring to. This happens to be the mesh that's associated with a finite difference computation for some simulation. It doesn't really matter what it is, but I just wanted to show you that we're talking typically tens of thousands of individual mesh points, and I can guarantee you this is a fairly sparse mesh compared to the kinds of things that most of our users encounter. The point in showing you this is that as the simulation evolves in time, there is a different version of this mesh for every single time step. The scientists who are doing the simulation would like to be able --- either after the fact or perhaps if the timing is appropriate --- to steer the computations that are going on by being able to visualize the evolution in time of meshes like this. And they need to be sent to some display device. And ideally you'd like to do that at the rate of 24 frames per second, but we can go through some computations and find that's simply not feasible with the kind of network bandwidths that are available today. The specialized hardware that I've just been talking about gives us no help at all here, because what I need to be able to do is to send one instance of this mesh to the display device for every time step, as I mentioned a moment ago. In addition, the scientists at Los Alamos and other places would like to be able to have the counterpart of a numerical laboratory. This is completely synthesized, but you can --- and many of you may have had experience in the past with visualization techniques and fluid flow, where you can actually see shock waves by various lighting techniques. The intent here is to be able to simulate that situation and be able to show the flow evolving --- not necessarily as it's being computed, but perhaps after the fact --- but be able to pick out important points by seeing a temporal evolution of that particular simulation. So those are just a couple of examples that have given rise to the development of a high speed channel specification and an accompanying network at Los Alamos, and I wanted to say right now --- just so you don't think oh, great, a special purpose solution for a national laboratory that no one else will ever be able to use --- not so. Many of you out there I am sure know --- and I know several of our panelists are either aware of or working on high speed channel hardware for their particular products. There are about 30 vendors that have signed on to the high speed channel specification. In addition, Digital Equipment Corporation is building the corresponding network, which is called CP*. I'm not going to go into network details here because that's not my point. I really wanted to describe what is now a new highway for data transmission that facilitates my job, which is to help the scientists do the visualization that they need to do. 
So what we're seeing here is a very simplified view of how this high speed network, which is spec'ed at 800 megabits and a corresponding cross bar switch-style network that is going to allow effective point-to-point connections between the various components of the computing environment --- the supercomputers, the data storage devices, and the display devices. And each --- unlike\",\"PeriodicalId\":405574,\"journal\":{\"name\":\"ACM SIGGRAPH 89 Panel Proceedings\",\"volume\":\"3 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1989-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM SIGGRAPH 89 Panel Proceedings\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/77276.77291\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM SIGGRAPH 89 Panel Proceedings","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/77276.77291","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5

Abstract

Good morning, ladies and gentlemen. Welcome to the panel entitled Distributed Graphics: Where to Draw the Lines? My name is Dick Phillips. I'm from Los Alamos National Laboratory and I'll be your chair this session. I'll be joined by a great group of panelists --- friends and colleagues all. Our second speaker, following me, will be Michael Pique from Scripps Clinic. Following him will be Cleve Moler from Ardent Computer. After Cleve we'll hear from Jay Torborg, who is associated with Alliant Computer. And batting in the clean-up position is going to be Don Greenberg from Cornell University.

I have to give you one administrative announcement. You probably know this by now if you've been attending panel sessions all week, but once again, these proceedings are being audio taped for subsequent transcription and publication. That means that when we open up the session for question and answer, which will be in another 30 or 40 minutes, if you would like to ask a question, you must come to one of the microphones situated in the aisles. They are in just about every aisle, part way back and close to the front. To be recognized, please state your name and affiliation; I'll remind you of that when we get into the question and answer session.

The title of our panel begs a question --- where to draw the lines. Well, the trivial answer to that question is obviously on the display that you have available. The real implication of that title is where to draw the lines of demarcation for graphics processing. You're going to hear from me and from my fellow panelists several different points of view. Just when you thought everything was settling down and it was clear that all graphics processing was moving out to workstations or graphic supercomputers, you're going to hear at least two different points of view that may sound a bit nostalgic.

Let me take you back in time just a bit. This is a greatly oversimplified graphics time line --- where we have been, where we are, and where we're going in the evolution of visualization capability. I'm not going to dwell too much on the part of this time line to the left; we're really interested in what's up at the right hand side. But I can't resist pointing out that back in the days I have labeled pre-history here, a lot of us can remember getting excited about seeing output in the form of a printer plot, thinking that we were doing visualization and that that was really computer graphics. And I for one can remember the first time I had 300 baud available to me on a storage tube terminal, and I thought, this is blazing speed; I cannot believe what kind of graphics capability I have now.

Where things really get interesting, though, is if you move along that time line to the right, up into the mid 1980s. I have put some, I think, seminal events on there --- Silicon Graphics introducing the Geometry Engine in the workstation; well, workstations in general. That was a real watershed event that has changed the way we do graphics, and where we do graphics, considerably. Then as we move into the later part of the 1980s, I have noted the appearance of graphics accelerators for workstations. These are specialized plug-in boards that have graphics features like Phong shading and high speed transformations built into them. Graphic supercomputers like Ardent and Stellar and HP/Apollo have appeared in that time frame.

Then we look a little bit further, into the '90s, and I have indicated that the arrival of very high speed networks is going to have a profound effect on the way we do graphics display and how we distribute the activities associated with it. Let me give a very oversimplified couple of statements on what gave rise to the need for specialized graphics hardware --- the accelerators that I talked about and indeed the graphic supercomputers. As I've said, to terribly oversimplify, it was certainly the need for real time transformations and rendering. Many of the advances in computer graphics over the last 10 or 15 years can now be found built into the hardware of the workstations and graphic supercomputers available to us. One of the other reasons for wanting to bring all of that high speed computational capability right to the desktop, as it were, was to compensate for the lamentably low communication bandwidths which we had then --- which we have now, as a matter of fact. And I'm even including Ethernet, and I'll be bold enough to say that FDDI, which is not really upon us yet, is also in that lamentably slow category for many of the kinds of things we'd like to do.

It turns out --- in my view, at least --- that that specialized hardware, wonderful as it is for many, many applications (and make no mistake, it has revolutionized the way we can do interactive graphics), is not useful for all applications. One application that I've listed as a first bullet is one where we're doing specialized rendering --- research rendering, let's call it. Not all the research in rendering has been done, right? So Gouraud shading and Phong shading and so on are not necessarily the be-all and end-all. There's a lot of interesting work being done; it has been reported at this conference, as a matter of fact. That is really a small reason for wanting to do the graphics computing on yet another system. But the next one that I've listed is a very compelling reason in many installations, particularly where large scale, heavy-duty simulations are being done. I've mentioned that I'm from Los Alamos, and that's certainly one center where there are computations that are done on supercomputers and that need to be visualized; because of the nature of the computations, all of the specialized hardware in accelerator boards and in graphic supercomputers is not necessarily useful. Indeed, I'll argue that in many cases it's of no value whatsoever.

The last point I want to make here --- before I show you a couple of specific slides of the simulations I'm referring to --- is that the emergence of very high speed networks --- both local networks and national and international networks --- is going to provide a way for these large scale simulations to take advantage of graphics hardware that does not necessarily have the specialized capabilities we just talked about. At Los Alamos, a group of folks in our network engineering department have taken the lead in defining what is called the High Speed Channel specification. Before I get to that, let me just give you an idea of the kinds of computations being done at Los Alamos --- and, I know, at many other places --- that simply can't take advantage of the specialized hardware I've just been referring to.

This happens to be the mesh associated with a finite difference computation for some simulation. It doesn't really matter what it is, but I just wanted to show you that we're talking typically tens of thousands of individual mesh points, and I can guarantee you this is a fairly sparse mesh compared to the kinds of things most of our users encounter. The point in showing you this is that as the simulation evolves in time, there is a different version of this mesh for every single time step. The scientists who are doing the simulation would like to be able --- either after the fact or, perhaps, if the timing is appropriate --- to steer the computations that are going on by visualizing the evolution in time of meshes like this. And those meshes need to be sent to some display device. Ideally you'd like to do that at the rate of 24 frames per second, but we can go through some computations (a back-of-envelope sketch appears below) and find that's simply not feasible with the kind of network bandwidths available today. The specialized hardware I've just been talking about gives us no help at all here, because what I need to be able to do is send one instance of this mesh to the display device for every time step, as I mentioned a moment ago.

In addition, the scientists at Los Alamos and other places would like to have the counterpart of a numerical laboratory. This image is completely synthesized, but --- and many of you may have had experience in the past with visualization techniques and fluid flow --- you can actually see shock waves by various lighting techniques. The intent here is to simulate that situation and show the flow evolving --- not necessarily as it's being computed, but perhaps after the fact --- and to be able to pick out important points by seeing a temporal evolution of that particular simulation.

So those are just a couple of examples that have given rise to the development of a High Speed Channel specification and an accompanying network at Los Alamos. And I want to say right now --- just so you don't think, oh great, a special purpose solution for a national laboratory that no one else will ever be able to use --- not so. Many of you out there, I am sure, know of it --- and I know several of our panelists are either aware of it or working on High Speed Channel hardware for their particular products. There are about 30 vendors that have signed on to the High Speed Channel specification. In addition, Digital Equipment Corporation is building the corresponding network, which is called CP*. I'm not going to go into network details here, because that's not my point. I really wanted to describe what is now a new highway for data transmission that facilitates my job, which is to help the scientists do the visualization they need to do.

So what we're seeing here is a very simplified view of this high speed network, which is spec'ed at 800 megabits per second, and a corresponding crossbar switch-style network that is going to allow effective point-to-point connections between the various components of the computing environment --- the supercomputers, the data storage devices, and the display devices. And each --- unlike
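The feasibility argument about streaming one mesh per time step can be made concrete with a rough back-of-envelope calculation. The sketch below is illustrative only: the mesh size, bytes per point, and link rates are assumptions consistent with the talk ("tens of thousands" of points, 24 frames per second, 10 Mbit/s Ethernet, 100 Mbit/s FDDI, the 800 Mbit/s High Speed Channel), not figures given by the speaker.

    # Back-of-envelope check: can a per-time-step mesh be streamed to a
    # display at 24 frames per second over the networks of the day?
    # All numbers are illustrative assumptions, not figures from the talk.

    MESH_POINTS = 50_000        # "tens of thousands of individual mesh points"
    BYTES_PER_POINT = 3 * 4     # assume x, y, z coordinates as 4-byte floats
    FRAME_RATE = 24             # target frames (time steps) per second

    required_bps = MESH_POINTS * BYTES_PER_POINT * 8 * FRAME_RATE

    # Nominal link rates (bits per second), ignoring protocol overhead.
    links = {
        "Ethernet (10 Mbit/s)": 10e6,
        "FDDI (100 Mbit/s)": 100e6,
        "High Speed Channel (800 Mbit/s)": 800e6,
    }

    print(f"Required: {required_bps / 1e6:.1f} Mbit/s "
          f"({MESH_POINTS} points x {BYTES_PER_POINT} B x {FRAME_RATE} fps)")
    for name, capacity_bps in links.items():
        verdict = "feasible" if required_bps <= capacity_bps else "not feasible"
        print(f"  {name}: {verdict} ({required_bps / capacity_bps:.0%} of capacity)")

Under these assumptions the stream needs roughly 115 Mbit/s --- an order of magnitude beyond Ethernet, just past FDDI, and comfortably within the 800 Mbit/s channel, which is the speaker's point about why the High Speed Channel changes what is feasible.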