"Bulk core in a 360/67 time-sharing system," H. Lauer. AFIPS '67 (Fall), November 14, 1967. https://doi.org/10.1145/1465611.1465693

In the fall of 1965, Carnegie Institute of Technology decided to install Large Capacity Core Storage (LCS) as the auxiliary storage device on its IBM 360/67 time-sharing computer system. The bulk core will be used as a swapping device, replacing the drums of conventional configurations, and as an extension of main core memory. The decision was motivated by an analysis which yielded the following results:

• The effective rate at which the system can deliver pages to user tasks is increased to its theoretical limit with LCS, representing a significant improvement over drum performance.
• The potential response time to users is decreased because LCS has no rotational delay.
• Less main core is needed for effective system operation.
"A distributed processing system for general purpose computing," G. Burnett, L. Koczela, R. A. Hokom. AFIPS '67 (Fall), November 14, 1967. https://doi.org/10.1145/1465611.1465710

This paper presents a conceptual design of a distributed processing system aimed at providing general-purpose computing and very high tolerance of failures while taking advantage of future large-scale integration techniques. In addition, it was desirable for the system to be easily expandable or contractable so that it could be flexibly applied to a variety of applications. The system was designed for a space-borne application; however, the structure should have general applicability to a variety of systems. A method of analyzing the parallelism inherent in the computations to be carried out on the system is also presented. The distributed processor can thus be efficiently organized for the composition of general-purpose (or special-purpose) computations for a given application. Thought has also been directed toward the design of a system executive program and failure detection and reconfiguration methods; however, these features are still in development, and only a brief discussion of the system executive is given in this paper.
"Observations on high-performance machines," D. Senzig. AFIPS '67 (Fall), November 14, 1967. https://doi.org/10.1145/1465611.1465714

The high-speed computer area seems to be dominated by a continued reduction in the price of computer switching circuits and the approach of these circuits to speeds at which the velocity of light becomes an important factor. Barring some unforeseen dramatic change in technology, the outlook for increased computational speed in the classical sequential machine organization becomes increasingly grim. Simultaneity, or parallelism, therefore, becomes more and more essential if computer performance is to continue to increase.
"If you look at Automata which have been built by men or exist in nature, you will very frequently notice that their structure is controlled only partly by rigorous requirements and is controlled to a much larger extent by the manner in which they might fail and by the (more or less effective) precautionary measures which have been taken against their failure. There can be no question of eliminating failures or of completely paralyzing the effects of failures. All we can try to do is to arrange an automaton so that in the vast majority of failures, it can continue to operate."
{"title":"Systems recovery from main frame errors","authors":"R. Armstrong, H. Conrad, P. Ferraiolo, P. Webb","doi":"10.1145/1465611.1465664","DOIUrl":"https://doi.org/10.1145/1465611.1465664","url":null,"abstract":"\"If you look at Automata which have been built by men or exist in nature, you will very frequently notice that their structure is controlled only partly by rigorous requirements and is controlled to a much larger extent by the manner in which they might fail and by the (more or less effective) precautionary measures which have been taken against their failure. There can be no question of eliminating failures or of completely paralyzing the effects of failures. All we can try to do is to arrange an automaton so that in the vast majority of failures, it can continue to operate.\"","PeriodicalId":265740,"journal":{"name":"AFIPS '67 (Fall)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1967-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134282759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Dataless programming," R. Balzer. AFIPS '67 (Fall), November 14, 1967. https://doi.org/10.1145/1465611.1465683

A programmer using existing programming languages typically codes a problem by (1) defining it, then (2) analyzing the processing requirements, (3) choosing a data representation on the basis of these requirements, and finally (4) coding the problem. Almost always, difficulties arise because necessary processing not envisioned in the analysis phase makes the chosen data representation inappropriate through lack of space, efficiency, ease of use, or some combination of these. The decision is then made either to live with these difficulties or to change the data representation. Unfortunately, changing the data representation usually involves making extensive changes to the code already written. Furthermore, there is no assurance that this dilemma will not recur with the new data representation.
"The impact of new technology on the analog hybrid art-I," G. Bekey. AFIPS '67 (Fall), November 14, 1967. https://doi.org/10.1145/1465611.1465678

Present-day hybrid systems are characterized by increasingly sophisticated software requirements. The first attempts at creation of useful analog-digital computer systems were faced with a multitude of hardware problems associated with communication between discrete and sequential machines on the one hand and continuous and parallel machines on the other. Now, however, since the hardware marriage has been successfully consummated, a multitude of software problems remain. This panel will concentrate on the most important of these problems. Position papers by each of the four panelists are presented below.
"Another look at data," G. Mealy. AFIPS '67 (Fall), November 14, 1967. https://doi.org/10.1145/1465611.1465682

We do not, it seems, have a very clear and commonly agreed upon set of notions about data---either what they are, how they should be fed and cared for, or their relation to the design of programming languages and operating systems. This paper sketches a theory of data which may serve to clarify these questions. It is based on a number of old ideas and may, as a result, seem obvious. Be that as it may, some of these old ideas are not common currency in our field, either separately or in combination; it is hoped that rehashing them in a somewhat new form may prove to be at least suggestive.
"Intercommunication of processors and memory," M. Pirtle. AFIPS '67 (Fall), November 14, 1967. https://doi.org/10.1145/1465611.1465695

Many computer systems include one or more high transfer rate secondary storage devices in addition to numerous input-output (I/O) devices. When the processors which manage these devices (frequently referred to as I/O controllers or channels), together with the central processing unit (CPU), communicate almost exclusively with a single primary memory, as in the configuration illustrated in Figure 1, the problem of providing these processors with adequate data transfer capability becomes formidable. Ideally, each processor should be able to transfer a datum to or from primary memory at its convenience, without regard to the ability of the memory to accept or supply the datum at that particular moment, or the ability of the processor-to-memory transfer path (memory bus) to effect the transfer. Unfortunately, economic and technical considerations dictate that memory systems of the capability implied must be relegated to the role of standards with which more practical systems may be compared. With practical memory systems, the rate at which data can be transferred between processors and primary memory is limited by the transfer capabilities, or bandwidths, of the memory itself and of the memory buses over which the transfers are made. Furthermore, since the memory system is shared by several processors, care must be taken to keep performance from being degraded excessively by interference caused by simultaneous attempts on the part of several processors to utilize a facility, such as a memory bus, which is capable of handling only a single data transfer at any given moment.
"How do we stand on the big board?," Murray L. Kesselman. AFIPS '67 (Fall), November 14, 1967. https://doi.org/10.1145/1465611.1465632

When is a display a large scale display? There are no hard and fast rules for answering this question. An arbitrary, but convenient, starting point is to say that anything larger than 30 inches is considered large scale, because 30 inches is the practical limit on cathode ray tube (CRT) size. Why should we want a large scale display? The most obvious reason is that many people have a need to view the same display surface, and as the audience grows larger, so must the size of the display.
"How to write software specifications," Philip H. Hartman, D. H. Owens. AFIPS '67 (Fall), November 14, 1967. https://doi.org/10.1145/1465611.1465713

In general, computer software is getting more complicated at an increasing rate. Unfortunately, the people who have to develop this software have been at it for only a relatively short time. The result is predictable: ever-larger software troubles. This, in turn, leads many people to feel insecure about software development, to think of it as a modern "black art" for which even the most able practitioners lose the recipe every few months.