Integrated automation in newspaper and book production
J. H. Perry. AFIPS '66 (Fall). doi:10.1145/1464291.1464305
Abstract: For your background information---and so that you may better understand the scope of our automation effort---I should like to explain that our operations include the publishing of 27 newspapers and two magazines.

A unified approach to deterministic and random errors in hybrid loops
J. Vidal. AFIPS '66 (Fall). doi:10.1145/1464291.1464311
Abstract: Hybrid computation in general owes its growing acceptance to the increasing number of sophisticated problems that neither digital nor analog computers alone can handle adequately. In most cases this apparently cumbersome approach is justified by the speed of analog computers, a speed impossible to attain in all-digital systems, even where the problem requires the memory and logic capabilities that only digital computers can provide.

Communications needs of the user for management information systems
D. J. Dantine. AFIPS '66 (Fall). doi:10.1145/1464291.1464334
Abstract: The need for data communications is not restricted to the very large and very widely dispersed business or service organizations. Small businesses tend to have all the problems of their larger counterparts, especially the problem of conveying key financial and operating data to the personnel involved in the daily activities of the business. Nor is the extent of a business's geographical dispersion the measure of its communications need, although widely dispersed organizations usually reap the greater benefits because data communications help them overcome their problems of geography. But have you ever considered that a manager could have his desk backed right up to a computer and yet face a severe communications void?

Trajectory optimization using fast-time repetitive computation
R. Wingrove, J. S. Raby. AFIPS '66 (Fall). doi:10.1145/1464291.1464377
Abstract: Space vehicle trajectories must be near optimum in the sense that some parameter is either a maximum or a minimum; for example, in reentry the trajectory to desired terminal conditions is near optimum when the total aerodynamic heating is a minimum. Several perturbation methods, such as the calculus of variations, applications of the maximum principle, and direct steepest descent, have been considered for determining the time histories of nonlinear controls that correspond to optimum trajectories.

Computer assisted interrogation
C. T. Meadow, Douglas W. Waugh. AFIPS '66 (Fall). doi:10.1145/1464291.1464331
Abstract: Computer Assisted Interrogation (CAINT) is a system of computer programs for use in man-machine communications. Its principal function is to enable a computer to elicit information from a man by interrogating him---asking him a program of questions where the program follows a logical course depending both on information available before the interrogation started and on that gained during the interrogation. The information acquired is intended to be put to immediate practical use, in updating a data base, generating reports, or driving other interrogations.

Effects of large arrays on machine organization and hardware/software tradeoffs
L. C. Hobbs. AFIPS '66 (Fall). doi:10.1145/1464291.1464299
Abstract: From the early days of electronic computers until the present, a period of over 20 years, the electronic and magnetic hardware that mechanizes logical functions and storage in the central processor portion of a computer system has been extremely expensive. Although these costs have been dropping steadily in terms of cost per component, increases in the complexity and capacity of central processors have tended to keep pace with the decreases in hardware costs. Hence, reductions in hardware costs to date have been reflected primarily in increased performance and capability rather than in reduced cost. However, developments presently underway in batch-fabricated technologies will lower central-processor hardware costs so significantly that it will not be possible to maintain the present system balance of cost and reliability. If properly used, large-scale integrated-circuit arrays in particular will provide digital logic and control functions at such sharply reduced cost and increased reliability that the central processor will tend to become an almost negligible part of the system on both counts. The dominant factors in system cost will be software and electromechanical mass storage and input/output devices.

Automatic off-line multivariate data analysis
G. Sebestyen. AFIPS '66 (Fall). doi:10.1145/1464291.1464365
Abstract: Many research problems in the social and physical sciences require the collection of large amounts of data on the simultaneously measured attributes of a phenomenon or process under investigation. Pattern recognition problems, in particular, yield data on multiple variables for each manifestation of the different data sources. The automatic off-line multivariate analysis techniques described in this paper deal with the quantitative description of data of this type.

A 200-nanosecond thin film main memory system
S. Meddaugh, K. Pearson. AFIPS '66 (Fall). doi:10.1145/1464291.1464322
Abstract: Several papers have appeared in the last few years which propose a design for large, high-speed memories using planar thin films. Included in this category are memories with greater than 250,000 bits and cycle times of less than 250 nanoseconds. Some authors have set rather high goals of 10^6 bits and a 100-nanosecond cycle time, and, after performing a number of calculations, have concluded that it is indeed possible for such a memory to operate, provided the problems of building it can be solved. Others have presented the results of early, partially implemented models with less ambitious goals, and of course have concluded that a full-sized memory is indeed feasible. These are necessary steps preceding the building of a fully populated, reliable, manufacturable memory. This paper describes the design of such a memory.
