Prototypes from standard user interface management systems
Pub Date: 1989-01-03, DOI: 10.1109/HICSS.1989.48017
T. Lewis, F. Handloser, S. Bose, S. Yang
The theory of prototyping is presented. A description is then given of Oregon Speedcode Universe (OSU), a software development system that combines on-screen editing of standard graphical user interface objects, prototyping, program generation, and software accelerators to speed the production of running applications. A programmer uses OSU to design and implement all user interface objects, such as menus, windows, dialogs, and icons. These objects are then incorporated into an application-specific sequence that mimics the application during program development and performs the desired operations of the application during program execution. Experimental results suggest that the techniques used by OSU can produce 50-90% of an application without explicit programming, yielding productivity improvements of 2 to 10 times.
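To make the program-generation idea concrete, here is a minimal sketch assuming a declarative description of interface objects far simpler than OSU's: a generator turns the description into a running application shell whose menu handlers merely echo events, as a prototype would. All names and the tkinter target are illustrative; OSU itself is not reproduced.

```python
# Hypothetical sketch of OSU-style program generation: interface objects
# are described declaratively and a generator builds a runnable shell.
import tkinter as tk

# Stand-in for OSU's edited-on-screen interface-object descriptions.
SPEC = {
    "window": {"title": "Sketch App", "width": 320, "height": 200},
    "menus": {
        "File": ["New", "Open...", "Quit"],
        "Edit": ["Cut", "Copy", "Paste"],
    },
}

def generate_app(spec):
    """Build a runnable application shell from the interface description."""
    root = tk.Tk()
    root.title(spec["window"]["title"])
    root.geometry(f'{spec["window"]["width"]}x{spec["window"]["height"]}')
    menubar = tk.Menu(root)
    for menu_name, items in spec["menus"].items():
        menu = tk.Menu(menubar, tearoff=False)
        for item in items:
            # During prototyping the handler merely echoes the event;
            # real application code would be attached here later.
            menu.add_command(label=item,
                             command=lambda i=item: print(f"{i} selected"))
        menubar.add_cascade(label=menu_name, menu=menu)
    root.config(menu=menubar)
    return root

if __name__ == "__main__":
    generate_app(SPEC).mainloop()
```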
{"title":"Prototypes from standard user interface management systems","authors":"T. Lewis, F. Handloser, S. Bose, S. Yang","doi":"10.1109/HICSS.1989.48017","DOIUrl":"https://doi.org/10.1109/HICSS.1989.48017","url":null,"abstract":"The theory of prototyping is presented. A description is then given of Oregon speedcode universe (OSU), a software development system using on-screen editing of standard graphical user interface objects, prototyping, program generation, and software accelerators, which are typically used to accelerate the production of running applications. A programmer uses OSU to design and implement all user interface objects such as menus, windows, dialogs, and icons. These objects are then incorporated into an application-specific sequence that mimics the application during program development and performs the desired operations of the application during program execution. Experimental results suggest that the techniques used by OSU can be used to develop 50-90% of an application without explicit programming, yielding productivity improvements of 2 to 10 times.<<ETX>>","PeriodicalId":325958,"journal":{"name":"[1989] Proceedings of the Twenty-Second Annual Hawaii International Conference on System Sciences. Volume II: Software Track","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115199005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A task migration algorithm for load balancing in a distributed system
Pub Date: 1989-01-03, DOI: 10.1109/HICSS.1989.48117
Jea-Cheoul Ryou, Johnny S. K. Wong
Task migration from heavily loaded processors to lightly loaded or idle processors is one way to balance the load across all processors and thus reduce average response time. A description is given of a dynamic task migration protocol, based on proposed set strategies, that minimizes communication cost and reduces processing overhead at each processor. The decision to migrate is based on information exchanged between processors. The performance of the algorithm is examined by simulation.
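As a rough illustration of the decision logic such a protocol might use (the paper's actual set strategies are not specified in the abstract), here is a sketch of one exchange-and-migrate round with hypothetical thresholds and a token communication cost:

```python
# Minimal sketch of threshold-based task migration: processors exchange
# queue lengths and work moves from the most to the least loaded node
# only when the imbalance outweighs the migration cost. All thresholds
# are illustrative, not the paper's.
from dataclasses import dataclass, field

@dataclass
class Processor:
    pid: int
    queue: list = field(default_factory=list)  # pending task sizes

    @property
    def load(self):
        return len(self.queue)

def balance_step(processors, high=4, low=2, migration_cost=1):
    """One round of the exchange-and-migrate protocol."""
    sender = max(processors, key=lambda p: p.load)
    receiver = min(processors, key=lambda p: p.load)
    # Migrate only if the imbalance outweighs the communication cost.
    if (sender.load >= high and receiver.load <= low
            and sender.load - receiver.load > migration_cost):
        task = sender.queue.pop()
        receiver.queue.append(task)
        return (sender.pid, receiver.pid, task)
    return None

procs = [Processor(0, [5, 3, 2, 7, 1]), Processor(1, [4]), Processor(2, [])]
print(balance_step(procs))   # (0, 2, 1): a task moves from P0 to idle P2
```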
{"title":"A task migration algorithm for load balancing in a distributed system","authors":"Jea-Cheoul Ryou, Johnny S. K. Wong","doi":"10.1109/HICSS.1989.48117","DOIUrl":"https://doi.org/10.1109/HICSS.1989.48117","url":null,"abstract":"Task migration from heavily loaded processors to lightly loaded or idle processors is one way to balance the load across all processors and thus reduce average response time. A description is given of a dynamic task migration protocol based on proposed set strategies which is used to minimize the communication cost and reduce the processing overhead at each processor. The decision to migrate is based on the information exchange between processors. The performance of the algorithm is examined by simulation.<<ETX>>","PeriodicalId":325958,"journal":{"name":"[1989] Proceedings of the Twenty-Second Annual Hawaii International Conference on System Sciences. Volume II: Software Track","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123569076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A method for data re-engineering in structured programs
Pub Date: 1989-01-03, DOI: 10.1109/HICSS.1989.48115
A. Hevner, R. Linger
While the problem of restructuring control flow in software is fairly well understood, few methods exist for understanding and restructuring the data flow of software. A method of data re-engineering is proposed that applies the theory of data-usage abstractions to system redesign. The principal results of this re-engineering process are the elimination of data-flow anomalies, the reduction of data scope, and the construction of reusable data objects as common services.
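For flavor, here is a deliberately naive sketch of detecting one classic data-flow anomaly, a dead store, in straight-line Python code; the paper's data-usage abstractions are far more general, and everything below is illustrative:

```python
# Toy dead-store detector: flag a variable assigned and then reassigned
# before any use. Straight-line code only, no aliasing or branches.
import ast

SOURCE = """
x = compute()
x = compute() + 1   # previous value of x was never used: dead store
print(x)
"""

def dead_stores(body):
    pending = {}            # variable -> line number of unused assignment
    anomalies = []
    for stmt in body:       # statements in program order
        # Any name *read* in this statement discharges a pending store.
        for node in ast.walk(stmt):
            if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load):
                pending.pop(node.id, None)
        # A new assignment to a still-pending name is a dead store.
        if isinstance(stmt, ast.Assign):
            for target in stmt.targets:
                if isinstance(target, ast.Name):
                    if target.id in pending:
                        anomalies.append((target.id, pending[target.id]))
                    pending[target.id] = stmt.lineno
    return anomalies

print(dead_stores(ast.parse(SOURCE).body))   # [('x', 2)]
```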
{"title":"A method for data re-engineering in structured programs","authors":"A. Hevner, R. Linger","doi":"10.1109/HICSS.1989.48115","DOIUrl":"https://doi.org/10.1109/HICSS.1989.48115","url":null,"abstract":"While the problem of restructuring control flow in software is fairly well understood, few methods exist for understanding and restructuring the data flow of software. A method of data re-engineering is proposed that combines the theories of data-usage abstractions for system redesign. The principal results of this re-engineering process are the elimination of data-flow anomalies, the reduction of data scope, and the construction of reusable data objects as common services.<<ETX>>","PeriodicalId":325958,"journal":{"name":"[1989] Proceedings of the Twenty-Second Annual Hawaii International Conference on System Sciences. Volume II: Software Track","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126146706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software reliability prediction for large and complex telecommunication systems
Pub Date: 1989-01-03, DOI: 10.1109/HICSS.1989.48006
L. Rydstrom, O. Viktorsson
The problem of predicting the number of remaining faults in a software system is studied. Seven software projects are analyzed using a number of software structure metrics and reliability growth models. The following conclusions are drawn: there is no single model that can always be used, irrespective of the project conditions; software structure metrics (mainly size) do correlate with the number of faults; the assumptions of reliability growth models do not apply when the testing is structured and well organized; and sufficient data has to be collected from different projects to create a basis for predictions.
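As a worked example of the kind of prediction being evaluated, the sketch below fits the Goel-Okumoto growth model m(t) = a(1 - e^(-bt)) to invented cumulative fault counts by crude grid search and estimates the faults that remain; the data, model choice, and fitting method are illustrative, not the paper's:

```python
# Fit a Goel-Okumoto reliability growth model to cumulative fault data
# and predict remaining faults. Data and grid-search fit are invented.
import math

weeks  = [1, 2, 3, 4, 5, 6, 7, 8]
faults = [12, 21, 28, 33, 37, 40, 42, 43]   # cumulative faults found

def sse(a, b):
    """Squared error of m(t) = a * (1 - exp(-b * t)) against the data."""
    return sum((f - a * (1 - math.exp(-b * t))) ** 2
               for t, f in zip(weeks, faults))

# Grid search over total-fault estimate a and detection rate b.
a, b = min(((a, b) for a in range(40, 70)
                   for b in (i / 100 for i in range(5, 80))),
           key=lambda ab: sse(*ab))
found = a * (1 - math.exp(-b * weeks[-1]))
print(f"estimated total faults a = {a}, detection rate b = {b:.2f}")
print(f"predicted remaining faults: {a - found:.1f}")
```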
{"title":"Software reliability prediction for large and complex telecommunication systems","authors":"L. Rydstrom, O. Viktorsson","doi":"10.1109/HICSS.1989.48006","DOIUrl":"https://doi.org/10.1109/HICSS.1989.48006","url":null,"abstract":"The problem of predicting the number of remaining faults in a software system are studied. Seven software projects are analyzed using a number of software structure metrices and reliability growth models. The following conclusions are drawn: there is no single model that can always be used, irrespective of the project conditions; software structure metrics (mainly size) do correlate with the number of faults; the assumptions of reliability growth models do not apply when the testing is structured and well organized; and sufficient data has to be collected from different projects to create a basis for predictions.<<ETX>>","PeriodicalId":325958,"journal":{"name":"[1989] Proceedings of the Twenty-Second Annual Hawaii International Conference on System Sciences. Volume II: Software Track","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122002359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance analysis of MARS logging, checkpointing, and recovery
Pub Date: 1989-01-03, DOI: 10.1109/HICSS.1989.48043
C. Fan, M. Eich
The results of a performance analysis comparing MARS (a main-memory recoverable database with stable log) and MM-DBMS (a different main-memory database management system) are reported. With equal numbers and sizes of log records, MARS supports a higher transaction throughput rate than MM-DBMS does. Even with larger numbers of log records, MARS logging can sustain a rate of 1500 transactions per second. The MARS checkpointing rates are comparable to those of MM-DBMS. In all situations, MARS recovery outperforms that of MM-DBMS.
{"title":"Performance analysis of MARS logging, checkpointing, and recovery","authors":"C. Fan, M. Eich","doi":"10.1109/HICSS.1989.48043","DOIUrl":"https://doi.org/10.1109/HICSS.1989.48043","url":null,"abstract":"The results of a performance analysis used to compare MARS (a main-memory recoverable database with stable log) and MM-DBMS (different main-memory database management system) are reported. With equal numbers and sizes of log records, MARS supports a higher transaction throughput rate than does MM-DBMS. Even with larger numbers of log records, a rate of 1500 transactions per second can be supported by MARS logging. The MARS checkpointing rates are comparable to those of MM-DBMS. In all situations, MARS recovery outperforms that of MM-DBMS.<<ETX>>","PeriodicalId":325958,"journal":{"name":"[1989] Proceedings of the Twenty-Second Annual Hawaii International Conference on System Sciences. Volume II: Software Track","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129155951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lisp extensions for multiprocessing
Pub Date: 1989-01-03, DOI: 10.1109/HICSS.1989.48084
B. Zorn, K. Ho, J. Larus, L. Semenzato, P. Hilfinger
Extensions to Common Lisp for concurrent computation on multiprocessors are discussed. Functions for process creation, communication, and synchronization are described. Process objects create multiple threads of control. Processes are lightweight, so programmers can use them to exploit fine-grained parallelism. Communication and synchronization are managed with mailboxes, and signals allow processes to communicate using asynchronous interrupts. These constructs are used to implement several higher-level multiprocessing abstractions, including structured processes, a parallel tree search, and dataflow computation.
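The programming model can be mimicked in other languages; the following sketch uses Python threads and thread-safe queues to stand in for the paper's lightweight processes and mailboxes (the actual Lisp function names and semantics are not reproduced):

```python
# Process-and-mailbox sketch: lightweight threads of control that
# communicate through mailboxes, here modeled as thread-safe queues.
import threading
import queue

def make_process(fn, *args):
    """Create and start a lightweight process (cf. process creation)."""
    t = threading.Thread(target=fn, args=args, daemon=True)
    t.start()
    return t

def producer(mailbox):
    for i in range(3):
        mailbox.put(i * i)          # send a message to the mailbox
    mailbox.put(None)               # sentinel: no more messages

def consumer(mailbox, results):
    while (msg := mailbox.get()) is not None:   # blocks until mail arrives
        results.append(msg)

mailbox, results = queue.Queue(), []
p = make_process(producer, mailbox)
c = make_process(consumer, mailbox, results)
p.join(); c.join()
print(results)   # [0, 1, 4]
```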
{"title":"Lisp extensions for multiprocessing","authors":"B. Zorn, K. Ho, J. Larus, L. Semenzato, P. Hilfinger","doi":"10.1109/HICSS.1989.48084","DOIUrl":"https://doi.org/10.1109/HICSS.1989.48084","url":null,"abstract":"Extensions to Common Lisp for concurrent computation on multiprocessors are discussed. Functions for process creation, communication, and synchronization are described. Process objects create multiple threads of control. Processes are lightweight so that programmers can use them to take advantage of fine-grained parallelism. Communication and synchronization are managed with mailboxes. Signals allow processes to communicate using asynchronous interrupts. These constructs are used to implement several higher-level multiprocessing abstractions. These include structured processes, a parallel tree search, and dataflow computation.<<ETX>>","PeriodicalId":325958,"journal":{"name":"[1989] Proceedings of the Twenty-Second Annual Hawaii International Conference on System Sciences. Volume II: Software Track","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127632233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A program anti-compiler
Pub Date: 1989-01-03, DOI: 10.1109/HICSS.1989.48032
S. Letovsky
A description is given of CPU, a program analysis tool that converts programs into formal specifications. CPU takes as input a program plus a knowledge base of programming plans and finds instances of plans in the code. A technique called transformational analysis is used in which plans that are recognized in the code are replaced by descriptions of their goals. Both procedural plans and data-structuring plans can be recognized. The result of a transformational analysis is a hierarchical derivation of the program, where the topmost layer constitutes a formal specification for the input program, the bottommost layer is the original code, and the intermediate layers denote plans that were recognized in the program. This derivation can be used to generate summaries of the code and to answer questions about it.
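A toy sketch of plan recognition in this spirit: scan a syntax tree for one known plan (accumulating a sum over a sequence) and report the goal it achieves. CPU's plan library and formalism are far richer; the code below is an illustrative stand-in:

```python
# Recognize the "accumulate a sum over a sequence" plan in a syntax tree
# and replace it with a description of its goal.
import ast

SOURCE = """
total = 0
for x in values:
    total = total + x
"""

def recognize_sum_plan(tree):
    """Return a goal description if the summation plan occurs in tree."""
    for node in ast.walk(tree):
        if (isinstance(node, ast.For)
                and len(node.body) == 1
                and isinstance(node.body[0], ast.Assign)):
            assign = node.body[0]
            target = assign.targets[0]
            if (isinstance(assign.value, ast.BinOp)
                    and isinstance(assign.value.op, ast.Add)
                    and isinstance(target, ast.Name)
                    and isinstance(node.iter, ast.Name)):
                return f"goal: {target.id} = sum of {node.iter.id}"
    return None

print(recognize_sum_plan(ast.parse(SOURCE)))
# -> goal: total = sum of values
```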
{"title":"A program anti-compiler","authors":"S. Letovsky","doi":"10.1109/HICSS.1989.48032","DOIUrl":"https://doi.org/10.1109/HICSS.1989.48032","url":null,"abstract":"A description is given of CPU, a program analysis tool that converts programs into formal specifications. CPU takes as input a program plus a knowledge base of programming plans and finds instances of plans in the code. A technique called transformational analysis is used in which plans that are recognized in the code are replaced by descriptions of their goals. Both procedural plans and data-structuring plans can be recognized. The result of a transformational analysis is a hierarchical derivation of the program, where the topmost layer constitutes a formal specification for the input program, the bottommost layer is the original code, and the intermediate layers denote plans that were recognized in the program. This derivation can be used to generate summaries of the code and to answer questions about it.<<ETX>>","PeriodicalId":325958,"journal":{"name":"[1989] Proceedings of the Twenty-Second Annual Hawaii International Conference on System Sciences. Volume II: Software Track","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127988087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approaches to user modeling
Pub Date: 1989-01-03, DOI: 10.1109/HICSS.1989.48018
M. LaLomia, M. Coovert
The three existing approaches to user modeling are examined: GOMS (goals, operators, methods, and selection rules), production simulation, and mental models. Each approach is described, relevant experimental research is reviewed, and each approach is summarized in terms of its advantages, limitations, and applicability to the system design process. It is suggested that these approaches are inadequate to account fully for the interplay between human information processing, user characteristics, computer systems, and the demands of the various tasks. An alternative approach to user modeling that utilizes structural covariance analysis is presented. A theoretical causal model of human-computer interaction, incorporating user, system, and task characteristics, is described and discussed in terms of applying structural analysis to the theorized pattern of causation among the user, system, and task. How this approach can provide useful information for guiding the design process is discussed.
{"title":"Approaches to user modeling","authors":"M. LaLomia, M. Coovert","doi":"10.1109/HICSS.1989.48018","DOIUrl":"https://doi.org/10.1109/HICSS.1989.48018","url":null,"abstract":"The three existing approaches to user modeling-GOMS (goals, operators, methods, and selection rules), production simulation, and mental models-are examined. Each approach is described, relevant experimental research is reviewed, and each approach is summarized in terms of its advantages, limitations, and applicability to the system design process. It is suggested that these approaches are inadequate to account fully for the interplay between human information processing, user characteristics, computer systems, and the demands of the various tasks. An alternative approach to user modeling that utilizes structural covariance analysis is presented. A theoretical causal model of the human-computer interaction, which incorporates the user, system, and task characteristics, is described and discussed in terms of applying structural analysis to the theorized pattern of causation among the user, system, and task. How this approach can provide useful information for guiding the design process is discussed.<<ETX>>","PeriodicalId":325958,"journal":{"name":"[1989] Proceedings of the Twenty-Second Annual Hawaii International Conference on System Sciences. Volume II: Software Track","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129443770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unix system mental models and Unix system expertise
Pub Date: 1989-01-03, DOI: 10.1109/HICSS.1989.48027
S. Doane, J. Pellegrino, R. Klatzky
A study is described whose purpose was to develop a model of users' knowledge of the Unix operating system and thus to depict the relationship between user expertise and mental models of the Unix system. Thirty computer science and engineering majors with varying levels of expertise participated in the experiment. Expertise was measured by experience with the Unix system and computing, as well as by self-descriptions. Mental models were examined by asking subjects to sort Unix system terms according to their similarity and to construct a graph using Unix system terms. The models of experts contain more abstract and semantically bound information than the models of those less expert in the Unix system. Experts best represent the higher levels of the Unix system; novices more fully represent the lower, more concrete levels of the system. The potential utility of the experts' representation is discussed with respect to performing tasks within the Unix system.
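One common way to score such a sorting task (the paper's exact analysis is not given in the abstract) is to treat terms that subjects pile together as similar; the sketch below derives co-occurrence similarities from invented sorts:

```python
# Score a term-sorting task: terms placed in the same pile count as
# similar, yielding a co-occurrence matrix over subjects. Data invented.
from itertools import combinations
from collections import Counter

# Each subject's sort: a list of piles of Unix terms.
sorts = [
    [["ls", "cat", "grep"], ["kill", "ps"], ["chmod"]],
    [["ls", "cat"], ["grep", "chmod"], ["kill", "ps"]],
    [["ls", "cat", "grep", "chmod"], ["kill", "ps"]],
]

cooccur = Counter()
for piles in sorts:
    for pile in piles:
        for a, b in combinations(sorted(pile), 2):
            cooccur[(a, b)] += 1

# Pairs most often sorted together approximate perceived similarity.
for pair, n in cooccur.most_common(3):
    print(pair, n / len(sorts))
# ('cat', 'ls') and ('kill', 'ps') score 1.0: grouped by every subject
```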
{"title":"Unix system mental models and Unix system expertise","authors":"S. Doane, J. Pellegrino, R. Klatzky","doi":"10.1109/HICSS.1989.48027","DOIUrl":"https://doi.org/10.1109/HICSS.1989.48027","url":null,"abstract":"A study is described, whose purpose was to develop a model of users' knowledge of the Unix operating system and thus to depict the relationship between user expertise and mental models of the Unix system. Thirty computer science and engineering majors with varying levels of expertise participated in the experiment. Expertise was measured by experience with the Unix system and computing, as well as by self-descriptions. Mental models were examined by asking subjects to: sort Unix system terms according to their similarity and construct a graph using Unix system terms. Models of experts possess more abstract and semantically bound information than models of those less expert in the Unix system. Experts best represent the higher levels of the Unix system; novices more fully represent the lower, more concrete levels of the system. The potential utility of the experts' representation is discussed with respect to performing tasks within the Unix system.<<ETX>>","PeriodicalId":325958,"journal":{"name":"[1989] Proceedings of the Twenty-Second Annual Hawaii International Conference on System Sciences. Volume II: Software Track","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130977252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conceptual views of data structures as a model of output in programming languages
Pub Date: 1989-01-03, DOI: 10.1109/HICSS.1989.48050
T. Graham, J. Cordy
Current programming languages provide sophisticated facilities for the structuring and manipulation of data within a program. Their high-level constructs, however, stop short of being able to communicate the value and structure of data to external display devices. If a programmer wishes to print out a binary tree, or to maintain a display of an editor's line database, complicated hand coding is necessary. This paper shows the ways in which the traditional model of input/output is inadequate, and a new model based on conceptual views of data structures is introduced. The conceptual view model is intended to be supported by a programming environment that allows convenient specification and application of views. A prototype of this environment, called the Weasel environment, has been implemented and is described.
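A hedged sketch of the effect the conceptual-view model aims for: the program declares what a structure is, and a reusable view renders it, so printing a binary tree needs no hand-written traversal at each call site. Weasel attaches views through the programming environment; this stand-alone function only imitates the result:

```python
# A textual "view" of a binary tree, applied instead of ad hoc output
# code at every call site.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def tree_view(node, indent=0):
    """Render any binary tree as indented text, reusable for all trees."""
    if node is None:
        return
    tree_view(node.right, indent + 1)      # right subtree printed above...
    print("    " * indent + str(node.value))
    tree_view(node.left, indent + 1)       # ...left subtree printed below

t = Node(4, Node(2, Node(1), Node(3)), Node(6, Node(5), Node(7)))
tree_view(t)   # prints the tree rotated 90 degrees counterclockwise
```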
{"title":"Conceptual views of data structures as a model of output in programming languages","authors":"T. Graham, J. Cordy","doi":"10.1109/HICSS.1989.48050","DOIUrl":"https://doi.org/10.1109/HICSS.1989.48050","url":null,"abstract":"Current programming languages provide sophisticated facilities for the structuring and manipulation of data within a program. Its high-level constructs, however, stop short of being able to communicate the value and structure of data to external display devices. If a programmer wishes to print out a binary tree, or maintain a display of an editor line database, complicated hand coding is necessary. This paper shows the ways in which the traditional model of input/output is inadequate, and a new model based on conceptual views of data structures is introduced. It is intended that the conceptual view model be supported by a programming environment to allow convenient specification and application of views. A prototype of this environment, called the Weasel environment, has been implemented, and is described.<<ETX>>","PeriodicalId":325958,"journal":{"name":"[1989] Proceedings of the Twenty-Second Annual Hawaii International Conference on System Sciences. Volume II: Software Track","volume":"230 14","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133848376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}