Hardware/software trade-offs: reasons and directions
Richard L. Mandell. AFIPS '72 (Fall, part I), December 5, 1972. doi:10.1145/1479992.1480056
A hardware/software trade-off is the establishment of the division of responsibility for performing system functions among software, firmware, and hardware. This is part and parcel of the fundamental process of defining computer architecture. It begins the day a computer is conceived and may be carried on by an ever-widening group of individuals until the last computer of a given model is retired. Some areas of the trade-off are the sole preserve of the manufacturer and its hardware/software team; others are the responsibility of the user or of independent equipment manufacturers.

Horizontal domain partitioning of the Navy atmospheric primitive equation prediction model
Edward Morenoff, P. G. Kesel, L. Clarke. AFIPS '72 (Fall, part I), December 5, 1972. doi:10.1145/1479992.1480046
Development of the Kesel-Winninghoff multi-layer baroclinic primitive equation atmospheric prediction model began at the Fleet Numerical Weather Central (FNWC), Monterey, California, in late 1968. The model, herein referred to as the Primitive Equation Model (PEM), was initially written as a single-processor version to be executed on one processor of one of the two FNWC dual-processor CDC 6500 computer systems. This version, however, required slightly over six and one-half hours to compute a set of 72-hour predictions.

The page fault frequency replacement algorithm
W. Chu, H. Opderbeck. AFIPS '72 (Fall, part I), December 5, 1972. doi:10.1145/1479992.1480077
Dynamic memory management is an important advance in memory allocation, especially in virtual memory and multiprogramming systems. In this paper we consider the case of paged memory systems: that is, systems whose physical and logical address spaces are partitioned into equal-size blocks of contiguous addresses. Paged memory has been used by many computer systems. However, the basic memory management problem of deciding which pages should be kept in main memory to allow efficient operation without wasting space is still not sufficiently understood and has been of considerable interest. Obviously, pages should be removed from main memory only if there is a very low probability that they will be used in the near future. The difficulty lies in determining which pages to remove without incurring difficult implementation problems at the same time.

"Associativity" is a highly desirable property of memory devices. Unfortunately, it does not seem to fit very well into the structure of contemporary randomaccess memories. A realization of associativity on such memories is always involved with high density of logic, and in today's technology is bound to be very expensive. Virtually all existing implementations of associative memories are accordingly on a very small scale and are typically used for special purposes such as the support of "virtual memory" schemes. From this situation one can get the impression that large scale associative memories are impractical. Fortunately, however, it turns out that rotating memories, unlike random access memories, are very natural hosts for at least a limited degree of associative addressing.
{"title":"Rotating storage devices as partially associative memories","authors":"N. Minsky","doi":"10.1145/1479992.1480075","DOIUrl":"https://doi.org/10.1145/1479992.1480075","url":null,"abstract":"\"Associativity\" is a highly desirable property of memory devices. Unfortunately, it does not seem to fit very well into the structure of contemporary randomaccess memories. A realization of associativity on such memories is always involved with high density of logic, and in today's technology is bound to be very expensive. Virtually all existing implementations of associative memories are accordingly on a very small scale and are typically used for special purposes such as the support of \"virtual memory\" schemes. From this situation one can get the impression that large scale associative memories are impractical. Fortunately, however, it turns out that rotating memories, unlike random access memories, are very natural hosts for at least a limited degree of associative addressing.","PeriodicalId":262093,"journal":{"name":"AFIPS '72 (Fall, part I)","volume":"166 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1972-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131542534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A framework for hardware-software tradeoffs in the design of fault-tolerant computers
K. Chandy, C. Ramamoorthy, A. Cowan. AFIPS '72 (Fall, part I), December 5, 1972. doi:10.1145/1479992.1480000
The theory of fault-tolerant computer design has developed rapidly. Several techniques using hardware or software have been suggested. A student is often faced with the problem of developing a common perspective for a variety of methods. In this paper we attempt to develop a simple framework within which different methods can be compared. We use a set of very elementary indices to construct the framework. The indices are quite crude and our framework is somewhat ad hoc. Though a unified theory would be extremely useful we have not attempted to develop one here. Our discussion is a first pass at identifying some goals of reliable design and an attempt at quantifying some parameters. We discuss only a very small set of the techniques that have been proposed for fault-tolerant computers. Methods for constructing relevant indices for these techniques are presented. We feel that these indices are relevant for most reliability techniques.

Computer simulations of the metropolis
B. Harris. AFIPS '72 (Fall, part I), December 5, 1972. doi:10.1145/1479992.1480048
The history of modern computer simulation of urban affairs represents the confluence of a number of trends which came to maturity in the middle of this century. Probably the oldest of these tendencies is the emphasis on planned urban development, which has existed for millennia and which in the last century has demonstrated considerable vitality as a reaction to the excesses of the industrial revolution and the poverty and squalor of nineteenth-century cities. A second strand is the development of economic and sociological theory which goes a considerable distance in explaining some aspects of the organization and form of metropolitan settlement and its growth. These theories have a long history, but have matured principally during the 1920's and 1930's. Finally, as a methodological catalyst, the development of the automobile, of a Federal Bureau of Public Roads dedicated to providing facilities for it, and of the large-scale metropolitan study based on the origin-and-destination survey have together made possible the crystallization and further growth of simulation methods. These methods are thus proximately based on the engineering attitude and computer technology of the large-scale transportation study, but they are in a position to draw on a number of other important streams of intellectual development.

Minimum cost-reliable computer communication networks
J. DeMercado. AFIPS '72 (Fall, part I), December 5, 1972. doi:10.1145/1479992.1480069
A designer of a computer-communications network must consider the reliability of a given network design as a function of its realization costs. Although there is an abundance of graph-theoretic and queueing tools that have generated algorithms for the topological synthesis and analysis of large networks, it is unfortunate that the reliability and cost dimensions of the problem have not been satisfactorily related.

The application of program-proving techniques to the verification of synchronization processes
K. Levitt. AFIPS '72 (Fall, part I), December 5, 1972. doi:10.1145/1479992.1479997
The purpose of this paper is to establish the applicability of program-proving techniques to the verification of operating systems, control programs, and synchronization programs. All the illustrative examples to be presented use Dijkstra's P and V operations for controlling the synchronization of competing processes. However, the techniques discussed are applicable to any set of such control primitives. A major portion of the paper is devoted to the proof of correctness of two programs devised by Courtois et al. that control the sequencing of "readers" and "writers" requesting the use of a common device.

Security of information processing: implications from social research
R. Boruch. AFIPS '72 (Fall, part I), December 5, 1972. doi:10.1145/1479992.1480051
Many social research programs are characterized by a stringent requirement that identifiable data collected on the subjects of research be kept confidential. This requirement, coupled with the increasing number of sensitive, sometimes controversial research efforts, has stimulated social scientists' interest in legal, administrative, and technical methods for assuring that confidentiality is maintained. We concern ourselves primarily with the technical methods in this paper, treating "security" as a partial operationalization of the notion of confidentiality.

The interaction of multi-programming job scheduling and CPU scheduling
J. Browne, J. Lan, F. Baskett. AFIPS '72 (Fall, part I), December 5, 1972. doi:10.1145/1479992.1479995
There have been very few systematic studies of the effect on system performance of strategies for scheduling jobs for execution in a multi-programming system. Most of this work has been concerned with empirical efforts to obtain job mixes which effectively utilize the central processor. These efforts are frequently carried out in commercial or production-oriented installations where the job load consists of relatively few jobs whose internal characteristics can be well determined. This approach is not feasible in an environment where internal job characteristics are not known before run time, or where they may vary rapidly; such circumstances are often the case in an industrial or research laboratory or in a university computer center. This study bases its job scheduling strategies on quantities that are frequently known or can be accurately estimated, such as the amount of core memory required and the processor service time required. The specific job scheduling strategies used include first-come-first-served (FCFS), shortest processor service time first (STF), smallest cost (cost = core size × processor service time) first (SCF), and smallest memory requirement first (SMF). We evaluated both preemptive-resume and non-preemptive job scheduling. It is typical of virtually all of the previous work that the emphasis has been on improving CPU utilization. There are often other goals which are more useful measures of performance, such as throughput (job completion rate per unit time), the expected wait time before completion of a given class of job, and the utilization of I/O resources. We collected several measures of system performance, including all of those listed previously, to assess the effects of job scheduling. There has been very little previous study of the interaction between job scheduling and CPU scheduling. We systematically vary CPU scheduling algorithms in conjunction with alteration of job scheduling strategies. Those job scheduling strategies which give high throughput are characteristically observed to be more sensitive to CPU scheduling methods than those which yield relatively low throughput. We do not, however, attempt to correlate job scheduling methods with internal job characteristics such as CPU burst time. We did, however, consider the effect of a skewed CPU burst time distribution on performance under different pairs of strategies.
