"Extended LL(k) grammars and parsers," Yen-Jen Oyang and Ching-Chi Hsu (doi:10.1145/503896.503938)
A class of context-free grammars, called "Extended LL(k)" or ELL(k), is defined. This class has been shown to include the LL(k) grammars as a proper subset, and there are grammars which are ELL(k) but not LALR(k). An algorithm to construct parsers for ELL(k) grammars is proposed in this paper. Before the paper was completed, the PL/0 language was taken as a sample, and a parser was constructed for it using the ELL(k) technique.
"A commenting system to improve program readability," Michele Fletcher, Bobby Morrison, and Robert Riser (doi:10.1145/503896.503943)
A Commenting System has been developed that facilitates commenting in student programs. The need for such a system arose from the departmental emphasis placed on well-documented programs in all languages taught at East Tennessee State University. Because of the inadequate number of terminals and keypunches available to Computer Science students, they are apt to minimize their comments or to insert them as an afterthought once the program is completed. The Commenting System was developed as a team project in a Software Design course; the team was responsible for designing, coding, and implementing the system as part of their class assignment. The system eases the task of documenting a program source listing when employed by the programmer. It recognizes certain predetermined keywords, such as PURPOSE, VARIABLE DICTIONARY, or INPUT, emphasizes them appropriately within the margins, and borders them according to user specifications. System capabilities include producing a variable dictionary with user-specified tab values, blocking comments in varying widths, or completely ignoring a block of comments that the programmer has previously formatted. The Commenting System itself was written in FORTRAN, COBOL, PL/I, and IBM 360/370 ASSEMBLER languages. It is presently being used on a trial basis in the Advanced Programming Techniques class taught at East Tennessee State University.
"Drawing labelled directed binary graphs on a grid," R. I. Becker and S. Schach (doi:10.1145/503896.503903)
A directed graph in which there is a bound on the degrees of the vertices is called labelled if each edge is assigned a label from a finite set of labels, the edges emerging from a given vertex all having distinct labels. Knuth [2] gives a well-known transformation to represent an arbitrary rooted tree by means of a labelled binary tree. (He calls each edge either a "brother" or a "son", so the label set is {brother, son}.) The binary tree contains all information of the original tree, and the latter can be reconstructed from the former.
"Several remarks on computations of labelled Markov algorithms," D. Simovici (doi:10.1145/503896.503901)
is denoted by L(G). Let A_k = {ℓ_0, ℓ_1, ..., ℓ_k, ℓ} be a finite alphabet which contains k + 2 elements;
"An experiment in the design of a BASIC interpreter," D. Miles (doi:10.1145/503896.503953)
This paper describes the advantages and disadvantages of developing a BASIC interpreter written in PASCAL on the Cromemco Cromix, a microcomputer, as opposed to developing it on the Xerox Sigma 9, a mainframe computer. The Cromemco's nonstandard PASCAL features necessitate a comparison between the two systems' PASCALs and the effect of their differences on the design of the interpreter.
"The development and application of an image analyzing and processing system," R. Rutz and D. E. Fields (doi:10.1145/503896.503916)
This paper summarizes the development of an image analyzing and processing system designed to determine the particle size and particle size distribution of materials that have been transported atmospherically and hydrologically. The system, consisting of a redesigned microscope, a standard television camera, a Micro Works DS-65 DIGISECTOR video digitizer, and a TRS-80 computer, was constructed. The modified DS-65 was built as a peripheral device which converts the analog signal of the television camera to a digital signal and transfers the data under software control to the TRS-80. A machine language program was written to control the digitization and transfer of data. A BASIC language program, SCAN, automatically determines the particle size and particle size distribution. A new portion of the sample can be analyzed by adjusting two micrometers attached to the stage of the microscope; these micrometer adjusting screws allow accurate movement of the sample within a horizontal plane. The results of consecutive scans are accumulated and the distribution is plotted as a bar graph. Representative particle diameters, limited by microscope resolution, are on the order of 10 microns or greater. Samples of particulates were tested, and the results will be presented at the ACM Regional Meeting in Knoxville, Tennessee on April 1-3, 1982.
"4×4 Tac-Tix is a second person game," J. Navlakha (doi:10.1145/503896.503904)
4×4 Tac-Tix is a two-person game in which the last player to move loses. It had long been conjectured that this is a second-person game. Using AND-OR trees and the Grundy function technique extensively, we prove in this paper that both 4×4 Tac-Tix and its modified version are second-person games.
"A hierarchical method for synthesizing relations," Raymond Fadous (doi:10.1145/503896.503922)
There are two basic approaches in the normalization theory of relational databases. One approach is the decomposition of large relation schemes into smaller relation schemes; a required criterion for a satisfactory decomposition is the lossless join property. The other approach is to synthesize a set of relation schemes from a given set of functional dependencies that are assumed to hold for a universal relation scheme. The synthesized relation schemes are easily identified once a minimal cover of the given set of functional dependencies is obtained. This paper presents another method for synthesizing relation schemes without finding a minimal cover. Starting with a given set of functional dependencies, a partial order graph can be defined. Using the partial order graph and any method for finding keys of relation schemes, a systematic method for synthesizing relation schemes is outlined. The method is easy to implement; however, no programming technique is suggested in this paper.
"Speech compression: a functional approximation approach," Kevan L. Miller (doi:10.1145/503896.503950)
This paper presents research undertaken in the field of speech compression with a low-cost speech processing system developed around an APPLE II microcomputer. Unlike some of the more popular speech compression techniques based on statistical analysis of speech waveforms in the frequency domain, the authors approached speech compression from the perspective of functional approximations of speech waveforms in the time domain, with these approximations evaluated in a parallel processing mode. Research activities discussed in this paper include: the design and implementation of a parallel interface between the APPLE II and an analog-to-digital converter; the development of machine language programs to accurately sample sound waveforms; the development of software for the analysis of converted data, including a real-time scrolling graphics routine and a graphics hardcopy routine which controls a dot-matrix printer; the development of the compression technique utilizing an intelligent sampling routine; and the design of a pipelined parallel processing network of microprocessors to evaluate, in real time, a polynomial function representing a speech waveform. The compression technique developed currently allows four minutes of good-quality speech to be stored on one floppy disk.
"A foreign language test construction system," J. S. Craig (doi:10.1145/503896.503942)
The Foreign Language Test Construction System aids the foreign language instructor in preparing examinations. The system interprets a test outline created by the instructor and produces one of several possible examinations. Any particular outline generates a variety of syntactically equivalent examinations, allowing the instructor to administer multiple versions of the same test. The concept of creating a test outline reduces the amount of memory or disk space needed to store a test. The menu-driven system allows an instructor not familiar with computers to create examinations with minimal effort. The instructor simply follows the directions on the computer monitor and types the questions into the computer according to a previously specified format. The system stores a list of possible verbs and several classifications of nouns (countries, names, seasons). The system chooses words from these lists according to the outline created by the instructor, producing an almost limitless variety of equivalent examinations.