The CASE statement evaluates an expression, selects an action according to the value of the expression, and then executes the action. The most efficient runtime behavior is exhibited when the action can be selected via a jump table, which provides an entry for every possible value in the range of the expression but whose execution time remains constant as the number of cases increases. If the range of the expression is too large, the jump table becomes impractical because of excessive space requirements. Implementations of the CASE statement that limit table size to grow linearly with the number of cases either require linear execution time or capitalize on the subrange structure of the expression to reduce the time requirement. Hash methods also limit space requirements and, in the case of hashing with chaining to resolve collisions, can provide log n time performance. Open addressing methods provide constant time performance as the number of cases increases and, since the hash table is static and can be closely packed in an optimal fashion, the execution time can be limited to an average of less than two probes per selection even for closely packed tables. Open addressing in optimally packed tables leads to selection of the default case in fewer than eight probes. One can choose a hash function that facilitates extension of the allowed data types from the usual byte and integer types to strings and double-precision integers with minimal penalty in execution time.
J. Gait. "Hash table methods for case statements." ACM-SE 20, April 1982. doi:10.1145/503896.503932
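The open-addressing dispatch described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the table size, hash function, and packing are assumptions, and a real compiler would emit this as a static jump structure rather than interpreted code.

```python
# Sketch: a CASE dispatcher backed by a static open-addressed hash
# table. Case labels and their actions are packed at build time;
# lookup probes linearly and falls through to the default action as
# soon as an empty slot is met, bounding the probe count.

TABLE_SIZE = 11  # small prime; an assumption, not from the paper

def build_table(cases):
    """cases: dict mapping case label -> action callable."""
    table = [None] * TABLE_SIZE
    for label, action in cases.items():
        i = hash(label) % TABLE_SIZE
        while table[i] is not None:        # linear probing on collision
            i = (i + 1) % TABLE_SIZE
        table[i] = (label, action)
    return table

def select(table, label, default):
    """Probe the static table; an empty slot means the label is absent."""
    i = hash(label) % TABLE_SIZE
    for _ in range(TABLE_SIZE):            # bounded probe sequence
        slot = table[i]
        if slot is None:
            return default
        if slot[0] == label:
            return slot[1]
        i = (i + 1) % TABLE_SIZE
    return default

cases = {1: lambda: "one", 5: lambda: "five", 42: lambda: "many"}
table = build_table(cases)
assert select(table, 5, lambda: "default")() == "five"
assert select(table, 7, lambda: "default")() == "default"
```

Because the table is built once and never mutated, the build step is free to try several table sizes or hash parameters offline and keep the packing with the shortest average probe sequence, which is the property the abstract exploits.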
Section 1: Introduction

Application software for use in the low-cost microcomputer environment generally needs to require minimal training and to offer a maximal degree of user-friendliness. Reasons for this vary, but generally revolve around: a) a low budget for computing needs; b) a single operator with a high turnover rate, entailing frequent need for training. In this environment, the user's view of the system is largely confined to the keyboard and the video display screen, the keyboard providing the physical means of interacting with the system and the screen the visual. For off-the-shelf hardware, an application program can do very little to make the keyboard more user-friendly, but can do a great deal with the screen. In this case, the hardware is generally equipped with a screen whose display is refreshed from a user-accessible memory area. This kind of hardware arrangement is called a memory-mapped screen display. When the screen memory is changed, the display changes. Since the screen memory is user-accessible, an application program can determine what is on the screen by examining the screen memory. Consequently, attractive and user-friendly screen interaction can be conducted via an application program working directly against the screen memory. The screen memory organization and programmer tools for
Y. S. Chua, C. Clinton. "A generalized screen management utility: automatic programming approach." ACM-SE 20, April 1982. doi:10.1145/503896.503931
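The memory-mapped display idea above can be sketched with a flat buffer standing in for the screen memory. The dimensions and helper names here are illustrative, not taken from the paper:

```python
# Sketch: the "screen" is a flat byte buffer indexed as row*WIDTH+col.
# Writing into the buffer changes what is displayed; reading it back
# tells the program what is currently on screen, which is exactly the
# property a screen-management utility exploits.

WIDTH, HEIGHT = 40, 4           # a tiny 40x4 character display (assumed)

screen = bytearray(b" " * (WIDTH * HEIGHT))

def put_string(row, col, text):
    """Write text directly into screen memory."""
    offset = row * WIDTH + col
    screen[offset:offset + len(text)] = text.encode("ascii")

def read_cell(row, col):
    """Examine screen memory to see what character is displayed."""
    return chr(screen[row * WIDTH + col])

put_string(1, 5, "HELLO")
assert read_cell(1, 5) == "H"
assert read_cell(1, 9) == "O"
```

On the hardware the paper targets, `screen` would be the fixed video RAM region rather than an allocated buffer, but the addressing arithmetic is the same.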
A class of context-free grammars, called "Extended LL(k)" or ELL(k), is defined. This class has been shown to include the LL(k) grammars as a proper subset, and there are some grammars which are ELL(k) grammars but not LALR(k) grammars. An algorithm to construct parsers for ELL(k) grammars is proposed in this paper. Before this paper was completed, the PL/0 language was taken as a sample, and a parser was constructed for it by the ELL(k) technique.
Yen-Jen Oyang, Ching-Chi Hsu. "Extended LL(k) grammars and parsers." ACM-SE 20, April 1982. doi:10.1145/503896.503938
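The ELL(k) construction itself is the paper's contribution and is not reproduced here. As background, this is a plain table-driven LL(1) parser for the toy grammar S -> a S b | c, i.e. the kind of predictive parser the ELL(k) method generalizes; the grammar and parse table are illustrative assumptions:

```python
# Sketch: table-driven LL(1) parsing. The table maps a (nonterminal,
# lookahead) pair to the production to expand; terminals on the stack
# must match the input directly.

TABLE = {
    ("S", "a"): ["a", "S", "b"],
    ("S", "c"): ["c"],
}

def parse(tokens):
    stack = ["S"]
    pos = 0
    while stack:
        top = stack.pop()
        look = tokens[pos] if pos < len(tokens) else "$"
        if top.isupper():                  # nonterminal: expand via table
            prod = TABLE.get((top, look))
            if prod is None:
                return False
            stack.extend(reversed(prod))   # push RHS, leftmost symbol on top
        else:                              # terminal: must match the input
            if top != look:
                return False
            pos += 1
    return pos == len(tokens)

assert parse(list("aacbb"))
assert not parse(list("aacb"))
```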
A Commenting System has been developed that will facilitate commenting in student programs. The need for a system such as this arose as a result of the departmental emphasis that is placed on well documented programs in all languages taught at East Tennessee State University. Due to the inadequate number of terminals and keypunches available to Computer Science students, they are more apt to minimize their comments or to insert them as an afterthought once the program is completed. The commenting system was developed as a team project in a Software Design course. The team was responsible for designing, coding, and implementing the system as part of their class assignment. The Commenting System is capable of easing the task of documenting a program source listing when implemented by the programmer. The system will recognize certain predetermined keywords such as PURPOSE, VARIABLE DICTIONARY, or INPUT, and it will emphasize them appropriately within the margins and border them according to user specifications. System capabilities include producing a variable dictionary with user specified tab values, blocking comments in varying widths, or even completely ignoring a block of comments that the programmer has previously formatted. The Commenting System itself was written in FORTRAN, COBOL, PL/I, and IBM 360/370 ASSEMBLER languages. The Commenting System is presently being used on a trial basis in the Advanced Programming Techniques class being taught at East Tennessee State University.
Michele Fletcher, Bobby Morrison, Robert Riser. "A commenting system to improve program readability." ACM-SE 20, April 1982. doi:10.1145/503896.503943
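The keyword-bordering behavior described above can be sketched as follows. The width, border character, and function name are assumptions for illustration; the actual system was written in FORTRAN, COBOL, PL/I, and assembler:

```python
# Sketch: recognize a keyword such as PURPOSE and border its comment
# block within a fixed margin, emphasizing the keyword in the header.

WIDTH = 40  # assumed listing width

def border_block(keyword, lines, fill="*"):
    """Return a bordered comment block with the keyword emphasized."""
    out = [fill * WIDTH]
    out.append(fill + f" {keyword} ".center(WIDTH - 2, fill) + fill)
    for line in lines:
        out.append(fill + " " + line.ljust(WIDTH - 4) + " " + fill)
    out.append(fill * WIDTH)
    return out

block = border_block("PURPOSE", ["Compute the payroll totals."])
assert all(len(row) == WIDTH for row in block)
assert "PURPOSE" in block[1]
```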
This paper describes the advantages and disadvantages of developing a BASIC interpreter written in PASCAL on the Cromemco Cromix, a microcomputer, as opposed to development on the Xerox Sigma9, a mainframe computer. The Cromemco's nonstandard PASCAL features necessitate a comparison between the two systems' PASCALs and the effect of their differences on the design of the interpreter.
D. Miles. "An experiment in the design of a BASIC interpreter." ACM-SE 20, April 1982. doi:10.1145/503896.503953
This paper summarizes the development of an image analyzing and processing system designed to determine the particle size and particle size distribution of materials that have been transported atmospherically and hydrologically. The system, consisting of a redesigned microscope, a standard television camera, a Micro Works DS-65 DIGISECTOR video digitizer, and a TRS-80 computer, was constructed. The modified DS-65 was built as a peripheral device which converts the analog signal of the television camera to a digital signal and transfers the data under software control to the TRS-80. A machine language program was written to control the digitization and transfer of data. A BASIC language program, SCAN, automatically determines the particle size and particle size distribution. A new portion of the sample can be analyzed by adjusting two micrometers attached to the stage of the microscope. These micrometer adjusting screws allow accurate movement of the sample within a horizontal plane. The results of consecutive scans are accumulated and the distribution is plotted as a bar graph. Representative particle diameters, limited by microscope resolution, are on the order of 10 microns or greater in magnitude. Samples of particulates were tested and the results will be presented at the ACM Regional Meeting in Knoxville, Tennessee on April 1-3, 1982.
R. Rutz, D. E. Fields. "The development and application of an image analyzing and processing system." ACM-SE 20, April 1982. doi:10.1145/503896.503916
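The authors' SCAN program (BASIC, on the TRS-80) is not reproduced in the abstract; the sketch below shows the underlying idea under stated assumptions: threshold a digitized image and measure particle sizes as connected pixel regions, accumulating a size distribution across scans.

```python
# Sketch: flood-fill connected components of above-threshold pixels;
# each component is one particle, its pixel count a proxy for size.

from collections import Counter

def particle_sizes(image, threshold):
    """image: 2D list of pixel intensities; returns list of region sizes."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                stack, size = [(r, c)], 0      # flood fill one particle
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                sizes.append(size)
    return sizes

image = [[0, 9, 9, 0],
         [0, 9, 0, 0],
         [0, 0, 0, 9]]
sizes = particle_sizes(image, threshold=5)
distribution = Counter(sizes)   # size -> count, as the bar graph would show
assert sorted(sizes) == [1, 3]
```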
4×4 Tac-Tix is a two-person game in which the player making the last move loses. It had long been conjectured that this is a second-person game. Using AND-OR trees and the Grundy function technique extensively, we prove in this paper that both 4×4 Tac-Tix and its modified version are second-person games.
J. Navlakha. "4×4 Tac-Tix is a second person game." ACM-SE 20, April 1982. doi:10.1145/503896.503904
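A full misère Tac-Tix search is far too large to sketch here. As an illustration of the Grundy-function technique itself, the sketch below applies it to a single Nim heap under normal play (last player wins), where the Grundy value of a heap of n counters is known to be n; the example game is an assumption, not the paper's analysis:

```python
# Sketch: the Grundy value of a position is the mex (minimum
# excludant) of the Grundy values of its successor positions; a
# position is a first-player win iff its Grundy value is nonzero.

from functools import lru_cache

def mex(values):
    """Smallest non-negative integer not in values."""
    g = 0
    while g in values:
        g += 1
    return g

@lru_cache(maxsize=None)
def grundy(heap):
    # Moves: remove any positive number of counters from the heap.
    return mex({grundy(heap - take) for take in range(1, heap + 1)})

assert [grundy(n) for n in range(5)] == [0, 1, 2, 3, 4]
```

The paper's proof works over the AND-OR tree of the full 4×4 board with the misère (last player loses) convention, which changes the terminal values but not the mex-based recursion.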
There are two basic approaches in the normalization theory of relational databases. One approach is the decomposition of large relation schemes into smaller relation schemes. A required criterion for a satisfactory decomposition is the lossless join property. The other approach is to synthesize a set of relation schemes from a given set of functional dependencies that are assumed to hold for a universal relation scheme. The synthesized relation schemes are easily identified once a minimal cover of the given set of functional dependencies is obtained. This paper presents another method for synthesizing relation schemes without finding a minimal cover. Starting with a given set of functional dependencies, a partial order graph can be defined. Using the partial order graph and any method for finding keys of relation schemes, a systematic method for synthesizing relation schemes is outlined. The method is easy to implement. However, no programming technique is suggested in this paper.
Raymond Fadous. "A hierarchical method for synthesizing relations." ACM-SE 20, April 1982. doi:10.1145/503896.503922
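The paper's partial-order-graph construction is not given in the abstract. The sketch below shows the attribute-closure computation that any "method for finding keys" relies on: X+ is the set of attributes functionally determined by X under the given FDs. The example schema and FDs are assumptions:

```python
# Sketch: compute the closure X+ of an attribute set X by repeatedly
# firing any FD whose left side is already contained in the closure.

def closure(attrs, fds):
    """fds: list of (lhs, rhs) pairs of attribute sets."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# Example: R(A, B, C, D) with A -> B and A, C -> D.
fds = [({"A"}, {"B"}), ({"A", "C"}, {"D"})]

assert closure({"A"}, fds) == {"A", "B"}
# {A, C}+ covers every attribute of R, so {A, C} is a key of R.
assert closure({"A", "C"}, fds) == {"A", "B", "C", "D"}
```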
This paper presents research undertaken in the field of speech compression with a low cost speech processing system developed around an APPLE II microcomputer. Unlike some of the more popular techniques of speech compression based on statistical analysis of the speech waveforms in the frequency domain, speech compression was approached by the authors from the perspective of functional approximations of speech waveforms in the time domain. These functional approximations would be evaluated in a parallel processing mode. Research activities discussed in this paper include: the design and implementation of a parallel interface between the APPLE II and an analog to digital converter; the development of machine language programs to accurately sample sound waveforms; the development of software for the analysis of converted data, including a real time scrolling graphics routine and also a graphics hardcopy routine which controls a dot-matrix printer; the development of the compression technique utilizing an intelligent sampling routine; and the design of a pipelined parallel processing network of microprocessors to evaluate in real time a polynomial function representing a speech waveform. The compression technique developed currently allows four minutes of good quality speech to be stored on one floppy disk.
Kevan L. Miller. "Speech compression: a functional approximation approach." ACM-SE 20, April 1982. doi:10.1145/503896.503950
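The authors' "intelligent sampling" routine is not specified in the abstract. The sketch below shows one time-domain idea in that spirit: keep only the samples where the waveform moves more than a tolerance from the last kept sample, so smooth stretches compress well. The tolerance, waveform, and reconstruction rule are all assumptions:

```python
# Sketch: threshold-based sample reduction plus a simple sample-and-
# hold reconstruction. A functional-approximation scheme would fit a
# polynomial between kept points instead of holding the last value.

def compress(samples, tol):
    """Keep a sample only when it moves more than tol from the last kept one."""
    kept = [(0, samples[0])]
    for i, v in enumerate(samples[1:], start=1):
        if abs(v - kept[-1][1]) > tol:
            kept.append((i, v))
    return kept

def expand(kept, length):
    """Reconstruct by holding each kept value until the next kept index."""
    out = []
    for (i, v), (j, _) in zip(kept, kept[1:] + [(length, None)]):
        out.extend([v] * (j - i))
    return out

wave = [0, 1, 1, 2, 10, 10, 9, 0]
kept = compress(wave, tol=3)
assert kept == [(0, 0), (4, 10), (7, 0)]
assert len(expand(kept, len(wave))) == len(wave)
```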
The Foreign Language Test Construction System aids the foreign language instructor in preparing examinations. The system interprets a test outline created by the instructor and produces one of several possible examinations. Any particular outline generates a variety of syntactically equivalent examinations, allowing the instructor to administer multiple versions of the same test. The concept of creating a test outline reduces the amount of memory or disk space needed to store a test. The menu-driven system allows an instructor not familiar with computers to create examinations with minimal effort. The instructor simply follows the directions on the computer monitor and types the questions into the computer according to a previously specified format. The system stores a list of possible verbs and several classifications of nouns (countries, names, seasons). The system chooses words from this list according to the outline created by the instructor, producing an almost limitless variety of equivalent examinations.
J. S. Craig. "A foreign language test construction system." ACM-SE 20, April 1982. doi:10.1145/503896.503942
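The outline-expansion idea can be sketched as follows: a test outline contains slots naming word classes, and each expansion draws different words, yielding syntactically equivalent versions of the same test. The word lists, slot syntax, and function names are illustrative assumptions:

```python
# Sketch: expand {CLASS} slots in a test outline from stored word
# lists; a seedable generator makes each version reproducible.

import random

WORDS = {
    "NAME":    ["Marie", "Jean", "Carlos"],
    "COUNTRY": ["France", "Spain", "Canada"],
    "VERB":    ["visits", "leaves", "studies in"],
}

def expand(outline, rng):
    """Replace each {CLASS} slot in the outline with a random word."""
    question = outline
    for word_class, choices in WORDS.items():
        while "{" + word_class + "}" in question:
            question = question.replace("{" + word_class + "}",
                                        rng.choice(choices), 1)
    return question

rng = random.Random(0)
q = expand("Translate: {NAME} {VERB} {COUNTRY}.", rng)
assert all(ch not in q for ch in "{}")
```

Storing only the outline and word lists, rather than every finished examination, is what gives the space saving the abstract mentions.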