It is the objective of this paper to briefly trace the history of the idea and the difficulties involved with defining or implementing it. In doing this, we first consider the general control problem and instruction formats. Next, storage implementations of the control function are considered and a restricted definition of microprogramming is proposed. This is then evaluated from a technological, architectural and programming point of view. We hope to show that our (demanding) definition of microprogramming is now technologically feasible and attractive from systems considerations.
{"title":"Microprogramming revisited","authors":"M. J. Flynn, M. McLaren","doi":"10.1145/800196.806013","DOIUrl":"https://doi.org/10.1145/800196.806013","url":null,"abstract":"It is the objective of this paper to briefly trace the history of the idea and the difficulties involved with defining or implementing it. In doing this, we first consider the general control problem and instruction formats. Next, storage implementations of the control function are considered and a restricted definition of microprogramming is proposed. This is then evaluated from a technological, architectural and programming point of view. We hope to show that our (demanding) definition of microprogramming is now technologically feasible and attractive from systems considerations.","PeriodicalId":257203,"journal":{"name":"Proceedings of the 1967 22nd national conference","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1967-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126443810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The purpose of this paper is to describe a language processor which is presently being developed at the United Aircraft Research Laboratories. The processor is called the Semantic Plex Processor* and is a bootstrap processor in the sense that it starts with a basic vocabulary, but one which is capable of generating new language with relative ease. Thus the set of statements which are ultimately acceptable to the processor is open-ended. Furthermore, the set of statements which are meaningful to the processor is a function not only of the set of statements which occurred in the past, but also of those which may occur in the future. (In this paper the words “time,” “past,” “future,” etc., are used to denote relative positions of statements as they appear from left to right in the input string.)
{"title":"A semantic model for a language processor","authors":"Robert V. Zara","doi":"10.1145/800196.806003","DOIUrl":"https://doi.org/10.1145/800196.806003","url":null,"abstract":"The purpose of this paper is to describe a language processor which is presently being developed at the United Aircraft Research Laboratories. The processor is called the Semantic Plex Processor* and is a bootstrap processor in the sense that it starts with a basic vocabulary, but one which is capable of generating new language with relative ease. Thus the set of statements which are ultimately acceptable to the processor is open-ended. Furthermore, the set of statements which are meaningful to the processor is a function not only of the set of statements which occurred in the past, but also of those which may occur in the future. (In this paper the words “time,” “past,” “future,” etc., are used to denote relative positions of statements as they appear from left to right in the input string.)","PeriodicalId":257203,"journal":{"name":"Proceedings of the 1967 22nd national conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1967-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130065329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To the author's knowledge three other operational compiler-compiler systems, whose strategy is similar to the scheme documented here, have been developed. The Feldman system [6, 7, 8] is a bounded context, syntax directed system. Syntax specifications are expressed in Floyd production language (FPL) and semantics are defined in Feldman semantic language (FSL). FSL seems to be a good model on which to base semantic languages. A lexical analysis (subscan to distinguish identifiers, operators, delimiters and punctuation marks) precedes the full analysis. In the COGENT system [5], on the other hand, each character is interpreted separately, hence allowing greater flexibility (e.g. its use in a symbolic differentiation program) and enforcing more detailed attention to syntax. Syntax for a source program is virtually in Backus Normal Form (BNF) [3] and this system is syntax directed along the lines suggested by Irons [1]. Much has been written on the compiler-compiler system of Brooker and Morris [2, 22, 23] which has superficial similarities with the COGENT scheme and the system described below.
{"title":"A compiler—compiler system","authors":"R. Trout","doi":"10.1145/800196.806002","DOIUrl":"https://doi.org/10.1145/800196.806002","url":null,"abstract":"To the author's knowledge three other operational compiler-compiler systems, whose strategy is similar to the scheme documented here, have been developed. The Feldman system [6, 7, 8] is a bounded context, syntax directed system. Syntax specifications are expressed in Floyd production language (FPL) and semantics are defined in Feldman semantic language (FSL). FSL seems to be a good model on which to base semantic languages. A lexical analysis (subscan to distinguish identifiers, operators, delimiters and punctuation marks) precedes the full analysis. In the COGENT system [5], on the other hand, each character is interpreted separately, hence allowing greater flexibility (e.g. its use in a symbolic differentiation program) and enforcing more detailed attention to syntax. Syntax for a source program is virtually in Backus Normal Form (BNF) [3] and this system is syntax directed along the lines suggested by Irons [1]. Much has been written on the compiler-compiler system of Brooker and Morris [2, 22, 23] which has superficial similarities with the COGENT scheme and the system described below.","PeriodicalId":257203,"journal":{"name":"Proceedings of the 1967 22nd national conference","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1967-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127665072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
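The lexical "subscan" mentioned in the abstract above — a pass that distinguishes identifiers, operators, delimiters and punctuation marks before the full analysis — can be sketched as follows. This is an illustrative sketch only, not code from the Feldman or Trout systems; the token names and patterns are assumptions chosen for the example.

```python
import re

# Token classes the abstract names: identifiers, operators, delimiters,
# punctuation. Order matters: IDENTIFIER is tried before NUMBER, etc.
TOKEN_SPEC = [
    ("IDENTIFIER", r"[A-Za-z][A-Za-z0-9]*"),
    ("NUMBER", r"\d+"),
    ("OPERATOR", r"[+\-*/=<>]"),
    ("DELIMITER", r"[()\[\]]"),
    ("PUNCTUATION", r"[,;.]"),
    ("SKIP", r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def subscan(source: str):
    """Classify each lexeme into (kind, text) pairs; the full
    syntactic analysis would then consume this token stream."""
    tokens = []
    for match in MASTER.finditer(source):
        kind = match.lastgroup
        if kind != "SKIP":  # whitespace is discarded by the subscan
            tokens.append((kind, match.group()))
    return tokens
```

For example, `subscan("x = y + 1;")` yields the identifier `x`, the operators `=` and `+`, the identifier `y`, the number `1`, and the punctuation mark `;` as separate classified tokens.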
The most widespread use of computers by hospitals to date is for administrative purposes. In contrast to these essentially nonmedical uses, the Instrumentation Field Station of the United States Public Health Service and other medical groups have developed programs for automatic analysis of medical signals. In these systems the electrocardiogram served as the model. Typically there are two program parts, pattern recognition and diagnosis. The IFS systems are primarily for clinical use now rather than research. The analysis is performed without intervention or supervision by a physician but provides in English language form a validated diagnostic aid, frees him from routine tasks, provides more time for patient care and expedites return of medical information. These advantages make it likely that electrocardiograms, and eventually most other medical signal waveforms, will be analyzed by computers.
{"title":"The computerized electrocardiogram: A model for medical signal analysis","authors":"A. Weihrer, J. Whiteman, C. Cáceres","doi":"10.1145/800196.805998","DOIUrl":"https://doi.org/10.1145/800196.805998","url":null,"abstract":"The most widespread use of computers by hospitals to date is for administrative purposes. In contrast to these essentially nonmedical uses, the Instrumentation Field Station of the United States Public Health Service and other medical groups have developed programs for automatic analysis of medical signals. In these systems the electrocardiogram served as the model. Typically there are two program parts, pattern recognition and diagnosis. The IFS systems are primarily for clinical use now rather than research. The analysis is performed without intervention or supervision by a physician but provides in English language form a validated diagnostic aid, frees him from routine tasks, provides more time for patient care and expedites return of medical information. These advantages make it likely that electrocardiograms, and eventually most other medical signal waveforms, will be analyzed by computers.","PeriodicalId":257203,"journal":{"name":"Proceedings of the 1967 22nd national conference","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1967-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129087240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The aim of Computer Aided Design is to create in the computer a model of the design problem. For example, an electronic circuit may be being designed; the engineer will use an environment consisting of standard circuit parts, with the laws that govern the operation, and will use this environment, together with the constraints on performance, to build a model which is his proposed solution to the design problem. This model may now be tested against the specification and will generally be modified iteratively until the design goal is achieved, in this case a layout with the required characteristics. It may also be that the design problem is being tackled by a team, in which case several users of the design system may wish to access and transform the model, for instance to display views and projections of it, or check on how it interfaces with a parallel project. The rest of this paper is concerned with the basic requirements of the data structure package, and with a survey of those packages which have been implemented and about which information is available.
{"title":"Compound data structure for computer aided design; a survey","authors":"J. Gray","doi":"10.1145/800196.806005","DOIUrl":"https://doi.org/10.1145/800196.806005","url":null,"abstract":"The aim of Computer Aided Design is to create in the computer a model of the design problem. For example, an electronic circuit may be being designed; the engineer will use an environment consisting of standard circuit parts, with the laws that govern the operation, and will use this environment, together with the constraints on performance, to build a model which is his proposed solution to the design problem. This model may now be tested against the specification and will generally be modified iteratively until the design goal is achieved, in this case a layout with the required characteristics. It may also be that the design problem is being tackled by a team, in which case several users of the design system may wish to access and transform the model, for instance to display views and projections of it, or check on how it interfaces with a parallel project. The rest of this paper is concerned with the basic requirements of the data structure package, and with a survey of those packages which have been implemented and about which information is available.","PeriodicalId":257203,"journal":{"name":"Proceedings of the 1967 22nd national conference","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1967-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126875359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, considerable attention has been given to finding reliable methods capable of producing, within a digital computer, pseudo-random numbers obeying the uniform distribution on the unit interval. Apparently, the most popular method has been the congruence algorithm, whose basic form X(i+1) = a·X(i) + b (mod 2^m) (1) can be easily implemented on a binary computer with a word size of m bits. Since its introduction, a number of papers [1-3] have been written in which techniques, such as formulae [1] to compute optimal values for a and b, have been presented to improve the statistical properties of the method. As a consequence, several versions with values for a and b to suit everybody's needs are now in existence. One must be aware that an analysis based on statistical testing cannot be entirely conclusive, especially if the power of some of the tests used is not known. Nevertheless, the comparative analysis of this study does indicate that a generator based on Tausworthe's concept exhibits statistical behavior that is as good as, if not superior to, that of the congruence algorithm. The following advantages in its use are therefore apparent: (1) Its functional form and statistical behavior are entirely machine independent. (2) It has been shown analytically that it generates values of a random variable uniformly distributed on the unit interval. (3) It can be easily programmed in FORTRAN without sacrificing any of its characteristics. (To the author's knowledge, none of these advantages can be claimed by any of the existing congruence algorithms.)
{"title":"A comparative analysis of two concepts in the generation of uniform pseudo-random numbers","authors":"G. Canavos","doi":"10.1145/800196.806017","DOIUrl":"https://doi.org/10.1145/800196.806017","url":null,"abstract":"In recent years, considerable attention has been given to finding reliable methods capable of producing, within a digital computer, pseudo-random numbers obeying the uniform distribution on the unit interval. Apparently, the most popular method has been the congruence algorithm, whose basic form X(i+1) = a·X(i) + b (mod 2^m) (1) can be easily implemented on a binary computer with a word size of m bits. Since its introduction, a number of papers [1-3] have been written in which techniques, such as formulae [1] to compute optimal values for a and b, have been presented to improve the statistical properties of the method. As a consequence, several versions with values for a and b to suit everybody's needs are now in existence. One must be aware that an analysis based on statistical testing cannot be entirely conclusive, especially if the power of some of the tests used is not known. Nevertheless, the comparative analysis of this study does indicate that a generator based on Tausworthe's concept exhibits statistical behavior that is as good as, if not superior to, that of the congruence algorithm. The following advantages in its use are therefore apparent: (1) Its functional form and statistical behavior are entirely machine independent. (2) It has been shown analytically that it generates values of a random variable uniformly distributed on the unit interval. (3) It can be easily programmed in FORTRAN without sacrificing any of its characteristics. (To the author's knowledge, none of these advantages can be claimed by any of the existing congruence algorithms.)","PeriodicalId":257203,"journal":{"name":"Proceedings of the 1967 22nd national conference","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1967-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124881961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
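The two generator concepts compared in the abstract above can be sketched in a few lines. This is an illustration of the general forms only, not the paper's code: the congruential parameters and the shift-register feedback taps below are assumptions picked for the example, not the values Canavos evaluates.

```python
def lcg(seed, a=1103515245, b=12345, m=2**31):
    """Congruence form (1): X(i+1) = a*X(i) + b (mod 2^m),
    scaled to yield uniform variates on [0, 1)."""
    x = seed
    while True:
        x = (a * x + b) % m
        yield x / m

def tausworthe(seed, taps=(0, 13), nbits=31):
    """Tausworthe-style generator: successive bits come from a linear
    feedback shift register (machine-independent by construction);
    nbits consecutive bits form one uniform variate on [0, 1)."""
    state = seed & ((1 << nbits) - 1)
    while True:
        value = 0
        for _ in range(nbits):
            bit = 0
            for t in taps:          # XOR of the tapped state bits
                bit ^= (state >> t) & 1
            state = (state >> 1) | (bit << (nbits - 1))
            value = (value << 1) | bit
        yield value / (1 << nbits)
```

Both generators yield a stream of floats in [0, 1); the Tausworthe form uses only shifts and exclusive-ors, which is what makes its behavior independent of the machine's word size and arithmetic.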
As computing systems become more and more powerful, the tasks for which these systems are used become more and more sophisticated. In analyzing a complete task, one finds that there are certain portions which are best allocated to the computer and certain portions which are best allocated—at least at present—to man. Calculations and repetitive operations are done most efficiently by computer; intuitive functions and certain types of heuristic information are best handled by man. If man and the computer are to function together efficiently a device is needed to facilitate the communication of complex information between them. The sophisticated CRT display console is by far the best device available today for accomplishing this communication. To use the full capability of the graphic display console, both the computer and the man at the console must be able to create and manipulate drawings as well as alphanumeric information. If the computer is to do this, it needs an internal representation of the drawing information which provides not only the lines and points but also the information on how these are related to each other. The internal or computer representation of a drawing has come to be called the “model.”
{"title":"GRASP—a graphic service program","authors":"E. Thomas","doi":"10.1145/800196.806008","DOIUrl":"https://doi.org/10.1145/800196.806008","url":null,"abstract":"As computing systems become more and more powerful, the tasks for which these systems are used become more and more sophisticated. In analyzing a complete task, one finds that there are certain portions which are best allocated to the computer and certain portions which are best allocated—at least at present—to man. Calculations and repetitive operations are done most efficiently by computer; intuitive functions and certain types of heuristic information are best handled by man. If man and the computer are to function together efficiently a device is needed to facilitate the communication of complex information between them. The sophisticated CRT display console is by far the best device available today for accomplishing this communication. To use the full capability of the graphic display console, both the computer and the man at the console must be able to create and manipulate drawings as well as alphanumeric information. If the computer is to do this, it needs an internal representation of the drawing information which provides not only the lines and points but also the information on how these are related to each other. The internal or computer representation of a drawing has come to be called the “model.”","PeriodicalId":257203,"journal":{"name":"Proceedings of the 1967 22nd national conference","volume":"52 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1967-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130273604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
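The "model" idea in the GRASP abstract — storing not just lines and points but how they relate — can be sketched as a toy data structure. This is an assumed illustration, not GRASP's actual internal format: because lines reference points by name rather than by coordinates, moving one point implicitly updates every line attached to it.

```python
class DrawingModel:
    """Toy internal representation of a drawing: points, lines that
    refer to points by name, and therefore the relations between them."""

    def __init__(self):
        self.points = {}   # name -> (x, y)
        self.lines = []    # (point_name, point_name) pairs

    def add_point(self, name, x, y):
        self.points[name] = (x, y)

    def add_line(self, p, q):
        self.lines.append((p, q))

    def move_point(self, name, x, y):
        # Lines store point names, not coordinates, so every line
        # attached to this point follows the move automatically.
        self.points[name] = (x, y)

    def segments(self):
        """Resolve each line to concrete endpoint coordinates,
        as a display routine would before drawing on the CRT."""
        return [(self.points[p], self.points[q]) for p, q in self.lines]
```

A raw display list of endpoint coordinates, by contrast, would leave the computer unable to tell which lines share a corner; the relational structure is what lets either the program or the console user manipulate the drawing coherently.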
Classification systems in the sciences usually provide an unambiguous structure of mutually exclusive, collectively exhaustive categories. The same formal structuralization, when strictly applied to the classification of technical literature for retrieval purposes, has proved inadequate. At another extreme, approaches to indexing which preclude any hierarchical association are similarly disappointing.
{"title":"DIALOG: An operational on-line reference retrieval system","authors":"R. Summit","doi":"10.1145/800196.805974","DOIUrl":"https://doi.org/10.1145/800196.805974","url":null,"abstract":"Classification systems in the sciences usually provide an unambiguous structure of mutually exclusive, collectively exhaustive categories. The same formal structuralization, when strictly applied to the classification of technical literature for retrieval purposes, has proved inadequate. At another extreme, approaches to indexing which preclude any hierarchical association are similarly disappointing.","PeriodicalId":257203,"journal":{"name":"Proceedings of the 1967 22nd national conference","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1967-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116634961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Among the limiting factors in computer performance are the rates at which information can be transferred into and out of the memory. Obviously the effective data transfer rate depends on the characteristics of the peripheral unit (speed of data collection or data posting) and the characteristics of the central processor (speed of filing of data). As peripherals are becoming faster and more numerous, the requests by the peripherals for data filing by the processor can become so dense that they can not be accommodated. This point will occur sometime before the total number of requests multiplied by the service times for those requests (namely the peripheral load factor) becomes one.
{"title":"Analysis of computer peripheral interference","authors":"J. Staudhammer, C. Combs, G. Wilkinson","doi":"10.1145/800196.805979","DOIUrl":"https://doi.org/10.1145/800196.805979","url":null,"abstract":"Among the limiting factors in computer performance are the rates at which information can be transferred into and out of the memory. Obviously the effective data transfer rate depends on the characteristics of the peripheral unit (speed of data collection or data posting) and the characteristics of the central processor (speed of filing of data). As peripherals are becoming faster and more numerous, the requests by the peripherals for data filing by the processor can become so dense that they can not be accommodated. This point will occur sometime before the total number of requests multiplied by the service times for those requests (namely the peripheral load factor) becomes one.","PeriodicalId":257203,"journal":{"name":"Proceedings of the 1967 22nd national conference","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1967-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124855168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
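The saturation criterion in the abstract above — requests multiplied by their service times, i.e. the peripheral load factor, approaching one — amounts to a utilization calculation. A minimal sketch, with example rates and service times that are assumptions for illustration rather than figures from the paper:

```python
def peripheral_load_factor(devices):
    """devices: list of (requests_per_second, service_seconds_per_request).
    Each peripheral contributes rate * service_time to the fraction of
    the processor's time consumed by data-filing requests."""
    return sum(rate * service for rate, service in devices)

def saturated(devices):
    """Requests can no longer all be accommodated once the total
    load factor reaches one; in practice trouble begins earlier,
    since requests arrive irregularly rather than evenly spaced."""
    return peripheral_load_factor(devices) >= 1.0
```

For example, peripherals filing 100, 50 and 10 requests per second with service times of 2 ms, 4 ms and 10 ms contribute 0.2 + 0.2 + 0.1 = 0.5, leaving headroom; a single device at 1000 requests per second with 2 ms service would demand a load factor of 2.0 and cannot be served.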
The transportation planning process is a set of analytical techniques used to forecast future transportation requirements and to evaluate proposed systems. While some of the techniques described in this paper can be used in the solution of current problems, the primary concern is with the problems of long-range planning. The following sections of this paper describe a transportation planning process. The process described is one that is widely (but not exclusively) used for urban transportation studies in the United States. This collection of survey techniques, analysis methods, and computer programs which makes up the process, has been developed over the past two decades by hundreds of researchers from diverse fields. This research and planning activity has been supported by the communities for which studies have been made, state highway departments, the Bureau of Public Roads, and the Department of Housing and Urban Development. While improvements in the methodology continue to be made, and are necessary if better techniques are to be developed, the process seems to have been somewhat standardized in the form presented.
{"title":"Electronic computer applications in urban transportation planning","authors":"R. Schofer, F. F. Goodyear","doi":"10.1145/800196.805994","DOIUrl":"https://doi.org/10.1145/800196.805994","url":null,"abstract":"The transportation planning process is a set of analytical techniques used to forecast future transportation requirements and to evaluate proposed systems. While some of the techniques described in this paper can be used in the solution of current problems, the primary concern is with the problems of long-range planning. The following sections of this paper describe a transportation planning process. The process described is one that is widely (but not exclusively) used for urban transportation studies in the United States. This collection of survey techniques, analysis methods, and computer programs which makes up the process, has been developed over the past two decades by hundreds of researchers from diverse fields. This research and planning activity has been supported by the communities for which studies have been made, state highway departments, the Bureau of Public Roads, and the Department of Housing and Urban Development. While improvements in the methodology continue to be made, and are necessary if better techniques are to be developed, the process seems to have been somewhat standardized in the form presented.","PeriodicalId":257203,"journal":{"name":"Proceedings of the 1967 22nd national conference","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1967-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129968629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}