Tests, measurements, and automatic speech recognition
D. S. Pallett, J. Baker
ACM Stand., September 1997. DOI: 10.1145/266231.266238

Figure One shows a representative test cycle for tests implemented by the NIST group. A test cycle is initiated with an analysis and planning phase, typically coordinated by a group of researchers, research sponsors, and NIST staff. During this phase, test protocols and implementation schedules are defined. A data-collection phase leads to the creation or identification of standardized speech and natural language corpora, distributed to a community of core technology developers. In most cases, a portion of the corpora is held in reserve by NIST as performance assessment test material. At agreed-upon times, NIST defines and releases development and evaluation test sets to the core technology developers, and they, in turn, provide NIST with the results of their locally implemented tests. NIST then produces a detailed set of uniformly scored, tabulated results, including the results of numerous paired-comparison statistical significance tests and other analyses. These test results and their scientific implications then become an important matter for discussion at technical meetings.

The extent of NIST's work is illustrated by a look at some 60 technical papers on speech recognition submitted to the 1996 IEEE International Conference on Acoustics, Speech and Signal Processing. Twenty-eight of the 60 papers reported results based on the use of NIST-defined test data, test methodologies, and NIST-implemented benchmark tests. Of these 28 papers, 16 were by researchers in the United States and 12 were from other nations.

From Dragon Systems' perspective, the NIST reference speech databases and measurement and testing methodologies are important for research and necessary to advance the technology. While ideas are plentiful, testing is expensive; researchers and research resources are costly, so sharing data makes sense. Large common databases are statistically more meaningful than smaller proprietary ones, and using these large databases minimizes dead-end approaches. At the speech-recognition workshops where the results of NIST's benchmark tests are presented, there are opportunities to compare results and the different approaches pursued at different laboratories. In this way the entire community benefits.
{"title":"Tests, measurements, and automatic speech recognition","authors":"D. S. Pallett, J. Baker","doi":"10.1145/266231.266238","DOIUrl":"https://doi.org/10.1145/266231.266238","url":null,"abstract":"igure One shows a representative test cycle for tests implemented by the NIST group. A test cycle is initiated with an analysis and planning phase, typically coordinated by a group of researchers, research sponsors, and NIST staff. During this phase, test protocols and implementation schedules are defined. A data-collection phase leads to the creation or identification of standardized speech and natural language corpora, distributed to a community of core technology developers. In most cases, a portion of the corpora is held in reserve by NIST as performance assessment test material. At agreed-upon times, NIST defines and releases development and evaluation test sets to the core technology developers, and they, in turn, provide NIST with the results of their locally-implemented tests. NIST then produces a detailed set of uniformly-scored tabulated results, including the results of numerous paired-comparison statistical significance tests and other analyses. These test results and their scientific implications then become an important matter for discussion at technical meetings. The extent of NIST's work is illustrated by a look at some 60 technical papers on speech recognition submitted to the 1996 IEEE International Conference on Acoustics, Speech and Signal Processing. Twenty-eight of the 60 papers reported results based on the use of NIST-defined test data, test methodologies, and NIST-implemented benchmark tests. Of these 28 papers, 16 were by researchers in the United States and 12 were from other nations. From Dragon Systems' perspective, the NIST reference speech database measurement and testing methodologies are important for research and necessary to advance the technology. While ideas are plentiful , testing is expensive; researchers and research resources are costly. So sharing data makes sense. Large common databases are statistically more meaningful than smaller proprietary ones, and using these large databases minimizes dead-end approaches. At the speech-recognition workshops where results of the NIST's benchmark tests are presented, there are opportunities to compare results and the different approaches pursued at different laboratories. In this way the entire community benefits.","PeriodicalId":270594,"journal":{"name":"ACM Stand.","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128270328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software reuse standards
J. Baldo, J. Moore, D. Rine
ACM Stand., June 1997. DOI: 10.1145/260558.260559

To explore the impact of current software standards on software reuse, we describe the analysis, findings, and recommendations of the IEEE Software Engineering Standards Committee (SESC) Reuse Planning Group. The object of the Reuse Planning Group was to define, for the SESC, a statement of direction for IEEE standards related to the analysis, design, implementation, validation, verification, documentation, and maintenance of reusable software assets, as well as supporting infrastructure in the creation of new applications. We also examine the current state of software reuse standards through the following: (1) an analysis of the needs of various users of standards and a classification of those needs with respect to the type of reuse standards that might be written; (2) a set of normative documents on the subject of software reuse, identified and evaluated for the role they might play in the standardization process; (3) a program-element view of the IEEE Software Engineering Standards Committee collection, into which reuse standards must fit; and (4) recommendations for standardization projects.

Insertion of any new innovation by an organization requires adoption, utilization, and management of the new technology. The motive for an organization to adopt a new technology is based on expectations of achieving a set of goals. The insertion of software reuse technologies is no different from other innovations, in that they must be adopted, utilized, and managed in software product development or maintenance lifecycles. The following goals are usually stated for software reuse: (1) the organization expects product development or maintenance efforts to decrease; (2) the organization expects an increase in product quality; and (3) the organization expects a decrease in product time-to-market. Although the benefits of software reuse have been discussed in the literature for several decades, it remains an elusive goal. Successful insertion of new technology depends on both technical and nontechnical factors, and it is important that both be adequately addressed. Clearly, software standards are an important technical issue, and while explicit software reuse standards do not exist, a number of current de facto and official government standards are making an impact. To explore the impact of current software standards on software reuse, we describe the analysis, findings, and recommendations of the SESC Reuse Planning Group. The goal of the Group was to define, for the SESC, a statement of direction for IEEE standards on the analysis, design, implementation, validation, verification, documentation, and maintenance of reusable software assets, as well as their supporting infrastructure in the creation of new applications. We examine the current state of software reuse standards by addressing the following topics: (1) the needs of various users of standards and a classification of those needs with respect to kinds of reuse …
{"title":"Software reuse standards","authors":"J. Baldo, J. Moore, D. Rine","doi":"10.1145/260558.260559","DOIUrl":"https://doi.org/10.1145/260558.260559","url":null,"abstract":"m To explore the impact of current software standards on software reuse, we describe the analysis, findings, and recommendations of the IEEE Software Engineering Standards Committee (SESC) Reuse Planning Group. The object of the Reuse Planning Group was to define, for the SESC, a statement of direction for IEEE standards related to the analysis, design, implementation, validation, verification, documentation, and maintenance of reusable software assets as well as supporting infrastructure in the creation of new applications. We also examine the current state of software reuse standards by the following: (1) an analysis of the needs of various users of standards and a classification of the needs with respect to the type of reuse standards that might be written; (2) a set of normative documents on the subject of software reuse, identified and evaluated for the role they might play in the standardization process; (3) a program element view of the IEEE Software Engineering Standards Committee collection, into which reuse standards must fit; and (4) recommendations for standardization projects. nsertion of any new innovation by an organization requires adoption, utilization, and management of the new technology. The motive for an organization to adopt a new technology is based on expectations for achieving a set of goals. The insertion of software reuse technologies is no different from other innovations, in that they must be adopted, utilized, and managed in software product development or maintenance lifecycles. The following goals are usually stated for software reuse: (1) the organization expects product development or maintenance efforts to decrease; (2) the organization expects an increase in product quality; and (3) the organization expects a decrease in product time-to-market. Although the benefits of software reuse have been discussed in the literature for several decades, it remains an elusive goal. Successful insertion of new technology depends on both technical and nontechnical factors. It is important that both be adequately addressed. Clearly, software standards are an important technical issue, and while explicit software reuse standards do not exist, a number of current de facto and official government standards are making an impact. To explore the impact of current software standards on software reuse, we describe the analysis, findings, and recommendations of the IEEE Software Engineering Standards Committee (SESC) Reuse Planning Group. The goal of the Group was to define, for the SESC, a statement of direction for IEEE standards on the analysis, design, implementation, validation, verification, documentation, and maintenance of reusable software assets, as well as their supporting infrastructure in the creation of new applications. 
We examine the current state of software reuse standards by addressing the following topics: (1) the needs of various users of standards and a classification of those needs with respect to kinds of reuse ","PeriodicalId":270594,"journal":{"name":"ACM Stand.","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116546212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The cost of standardizing components for software reuse
G. Succi, Francesco Baruchelli
ACM Stand., June 1997. DOI: 10.1145/260558.260561

Software reuse can be an important step towards increasing productivity and quality. A necessary condition for its success is standardization of reusable components at each level of the software lifecycle. Standardization can be looked at in two different ways: externally (the interface) and internally (functionality). Both of these are fundamental, and imply extra costs in the development of components. The external perspective is the usual one—it considers the appearance of the components and the ways they are related to the rest of the world. The internal perspective is strongly related to reuse: here a component is considered standard when its functionality is common among all systems belonging to a particular domain; such components are usually discovered following domain analysis. A qualitative analysis of these two approaches to standards and reuse led us to a simple model showing the extra costs of standardizing reusable software components.

The reuse of existing software in the development of new systems is widely studied. Despite its benefits, software reuse is not a guaranteed success, and is generally a cost-intensive investment. Among the many factors that can affect the success of a reuse program are the design and realization of the components likely to be reused, and particularly their adequate standardization. When dealing with standards and reusable software, we must first see a component not only as a code module, but as all the other products of the software lifecycle, such as the design and requirements. The higher the level of the component, the greater the benefits of its reuse. Given a software component in a reuse context, we can choose more than one perspective from which to determine whether or not it is standard: we can look at its interface or at its functionality. Both are equally important for the success of a reuse program. In fact, a component without an interface that is immediately understandable and easy to integrate and adapt, i.e., a component not designed with a "plug and play" philosophy, implies adaptation and integration costs which can easily overrun the value of the component. At the same time, a perfect "plug and play" interface can be nearly useless if it is used for a component that is almost unique and thus has practically no chance of being reused. In the following we will define more precisely a standard reusable software component from the two perspectives, and perform a qualitative analysis of the cost of its standardization.
{"title":"The cost of standardizing components for software reuse","authors":"G. Succi, Francesco Baruchelli","doi":"10.1145/260558.260561","DOIUrl":"https://doi.org/10.1145/260558.260561","url":null,"abstract":"m Software reuse can be an important step towards increasing productivity and quality. A necessary condition for its success is standardization of reusable components at each level of the software lifecycle. Standardization can be looked at in two different ways: externally (the interface), and internally (functionality). Both of these are fundamental, and imply extra costs in the development of components. The external perspective is the usual one—it considers the appearance of the components and the ways they are related to the rest of the world. The internal perspective is strongly related to reuse: here a component is considered standard when its functionality is common among all systems belonging to a particular domain; such components are usually discovered following domain analysis. A qualitative analysis of these two approaches to standards and reuse led us to a simple model showing the extra costs of standardizing reusable software components. he reuse of existing software in the development of new systems is widely studied. Despite its benefits, software reuse is not a guaranteed success, and is generally a cost-intensive investment. Among the many factors that can affect the success of a reuse program is the design and realization of the components likely to be reused, and particularly their adequate standardization. When dealing with standards and resuable software, we must first see a component as not only a code module, but as all the other products of the software lifecycle, as for instance the design and requirements. The higher the level of the component, the greater the benefits of its reuse. Given a software component in a reuse context, we can choose more than one perspective from which to determine whether or not it is standard. We can look at the interface or at its functionality. Both are equally important for the success of a reuse program. In fact, a component without an interface that is immediately understandable and easy to integrate and adapt, i.e., a component not designed with a “plug and play” philosophy, implies adaptation and integration costs which can easily overrun the value of the component. At the same time, a perfect “plug and play” interface can be nearly useless if it is used for a component that is almost unique and thus has practically no chance of being reused. In the following we will define more precisely a standard reusable software component from the two perspectives, and perform a qualitative analysis of the cost of its standardization.","PeriodicalId":270594,"journal":{"name":"ACM Stand.","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127701759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Standardizing production of domain components
J. Favaro
ACM Stand., June 1997. DOI: 10.1145/260558.260562

Domain analysis is a promising path for identifying standard software architectures. Recent advances in the technology and documentation of object-oriented frameworks have made it possible to link the products of domain analysis to concrete software artifacts. The result is a coherent lifecycle process for the domain engineering of reusable components.

For nearly two decades, software engineers attempted to create a software component industry based on a model of a repository of "components" or "parts" that could be accessed by many different kinds of (possibly totally unrelated) applications. It took several years of spectacular failures to make it clear that this approach could not succeed. It became increasingly clear that components could only be produced in the context of a domain. Such a domain might be telecommunications, banking, etc. Most current efforts are aimed at designing what are known as domain architectures—that is, the standard architectures of systems created in particular domains. In the computer hardware area, this has been accomplished with great success: a personal computer has a motherboard, expansion slots, keyboard, monitor, etc., conformant to a standard architecture. But in the software area, much less is known. With the identification of a domain architecture, it becomes possible to develop systematically reusable domain components that fit within that domain architecture (via suitable interconnection mechanisms). The discipline that has arisen around standardizing production of domain components is known as domain analysis. The companion discipline of domain engineering—the systematic creation of domain architectures based upon the results of domain analysis—has flourished in recent years with the rise of object-oriented framework technologies and patterns.
{"title":"Standardizing production of domain components","authors":"J. Favaro","doi":"10.1145/260558.260562","DOIUrl":"https://doi.org/10.1145/260558.260562","url":null,"abstract":"m Domain analysis is a promising path for identifying standard software architectures. Recent advances in the technology and documentation of objectoriented frameworks have made it possible to link the products of domain analysis to concrete software artifacts. The result is a coherent lifecycle process for the domain engineering of reusable components. or nearly two decades, software engineers attempted to create a software component industry based on a model of a repository of “components” or “parts” that could be accessed by many different kinds of (possibly totally unrelated) applications. It took several years of spectacular failures to make it clear that this approach could not succeed. It became increasingly clear that components could only be produced in the context of a domain. Such a domain might be telecommunications, banking, etc. Most current efforts are aimed at designing what are known as domain architectures—that is, the standard architectures of systems created in particular domains. In the computer hardware area, this has been accomplished with great success: A personal computer has a motherboard, expander slots, keyboard, monitor, etc., conformant to a standard architecture. But in the software area, much less is known. With the identification of a domain architecture, it becomes possible to develop systematically reusable domain components that fit within that domain architecture (via suitable interconnection mechanisms). The discipline that has arisen around standardizing production of domain components is known as domain analysis. The companion discipline of domain engineering—the systematic creation of domain architectures based upon the results of domain analysis—has flourished in recent years with the rise of object-oriented framework technologies and patterns.","PeriodicalId":270594,"journal":{"name":"ACM Stand.","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122655336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Standardizing domain-specific components: a case study
Massimo Fenaroli, A. Valerio
ACM Stand., June 1997. DOI: 10.1145/260558.260563

Thera S.p.A. is a software house that produces finished and semi-finished software and provides smaller software organizations in Northern Italy with base application components on which they can build and specialize new products. Thera senior management is strongly committed to improving software development in pursuit of business objectives. A key factor for improving software production is the introduction of standard domain analysis methods. Its success will be a cornerstone of the global development process in which all software production will be redesigned and standardized on the basis of standard domain analysis and on the software reuse experience gained from it.

Thera S.p.A. is a software house, one of whose main business goals is to develop software products for the rational management of firms and their resources by evolving software systems that introduce well-defined, manageable, and flexible solutions for business, management, and production problems. Thera's target customers are mainly manufacturers (of production management systems), insurance companies (actuarial management systems), and commercial organizations (accounting and commercial systems). In addition, Thera develops "ad hoc" products for clients with specific needs. It also produces semi-finished software products, acting as a supplier to smaller software organizations in Northern Italy by providing them with application base components. Even though Thera's products already enjoy commercial success, senior management is strongly committed to improving the development process in order to pursue the following business objectives: …
{"title":"Standardizing domain-specific components: a case study","authors":"Massimo Fenaroli, A. Valerio","doi":"10.1145/260558.260563","DOIUrl":"https://doi.org/10.1145/260558.260563","url":null,"abstract":"m Thera S.p.A. is a software house that produces finished and semi-finished software and provides smaller software organizations in Northern Italy with base application components on which they can build and specialize new products. Thera senior management is strongly committed to improving software development, in pursuit of business objectives. A key factor for improving software production is the introduction of standard domain analysis methods. Its success will be a cornerstone in the global development process in which all the software production will be redesigned and standardized on the basis of standard domain analysis and on the software reuse experience gained from it. hera S.p.A. is a software house, one of whose main business goals is to develop software products for the rational management of firms and their resources by evolving software systems that introduce well-defined, manageable, and flexible solutions for business, management, and production problems. Thera’s target customers are mainly manufacturers (of production management systems), insurance companies (actuarial management systems) and commercial organizations (accounting and commercial systems). In addition, Thera develops “ad hoc” products for clients with specific needs. It also produces semi-finished software products, acting as a supplier to smaller software organizations in Northern Italy by providing them with application base components. Even though Thera’s products already enjoy commercial success, senior management is strongly committed to improving the development process in order to pursue the following business objectives:","PeriodicalId":270594,"journal":{"name":"ACM Stand.","volume":"184 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124663375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Standard reuse practices: many myths vs. a reality
S. Doublait
ACM Stand., June 1997. DOI: 10.1145/260558.260565

Several myths about standard software reuse practices are reviewed here. We examine how each myth has been addressed at Sodalia, a company with practical experience with reuse over the past few years. Sodalia has embraced standard reuse as a key strategic imperative to reach its objectives of high-quality, rapidly deployed telecommunications software applications. Sodalia, which is ISO 9001 qualified, has been assessed at level 2 of the SEI's Capability Maturity Model (currently seeking level 3) and is deeply invested in the definition and deployment of a corporate-wide standard reuse program that makes it among the leading reuse practitioners in Europe.

Object technologies have reached a level of maturity at which companies in Europe and the United States have adopted and applied them to industrial applications for a sufficiently long period of time to assess benefits such as quality and adaptability. However, benefits reaped from software reuse seem more difficult to attain, and have not been demonstrated on a significant scale. Despite this, reuse is widely recognized as one of the major factors in enhancing software development, in terms of both reduced time-to-market and quality improvement. Other advantages include productivity improvement (through shared maintenance), interoperability/compatibility (ensured by uniform behavior of a family of applications), standardization (standards are embedded inside reusable components), and capture of domain knowledge (during domain analysis). The complexity of software reuse is not due to the inherent complexity of individual reuse activities, which are often relatively simple and well understood—the difficulty lies in the large number of technical and managerial issues that must be tackled simultaneously, and in their interdependencies. Moreover, the impact of reuse on organization, management, strategy, marketing, business processes, software development, technologies, corporate culture, and communication is often either underestimated or excessively emphasized. As a result, software reuse is seen as a holy grail, unreachable to many. We want to review several myths about software reuse, which often act as roadblocks to widespread adoption of reuse as standard software engineering practice. We examine how each myth has been addressed at Sodalia, a three-year software development joint venture between Bell Atlantic (U.S.A.) and Telecom Italia. The company has embraced systematic reuse as a key strategic imperative to reach its objectives of high-quality, rapidly deployed telecommunications software applications.
{"title":"Standard reuse practices: many myths vs. a reality","authors":"S. Doublait","doi":"10.1145/260558.260565","DOIUrl":"https://doi.org/10.1145/260558.260565","url":null,"abstract":"m Several myths about standard software reuse practices are reviewed here. We examine how each myth has been addressed at Sodalia, a company with practical experience with reuse over the past few years. Sodalia has embraced standard reuse as a key strategic imperative to reach its objectives of highquality, rapidly deployed telecommunications software applications. Sodalia, which is qualified at ISO-9001, has been assessed at level 2 of the SEI’s Capability Maturity Model (currently seeking level 3) and is deeply invested into the definition and deployment of a corporate-wide standard reuse program that makes it among the leading reuse practitioners in Europe. bject technologies have reached a level of maturity at which companies in Europe and the United States have adopted and applied them to industrial applications for a sufficiently long period of time to assess benefits such as quality and adaptability. However, benefits reaped from software reuse seem more difficult to attain, and have not been demonstrated on a significant scale. Despite this, reuse is widely recognized as one of the major factors in enhancing software development, in terms of both reduced time-to-market and quality improvement. Other advantages include productivity improvement (through shared maintenance), interoperability/compatibility (ensured by uniform behavior of a family of applications), standardization (standards are embedded inside reusable components), and capture of domain knowledge (during domain analysis). The complexity of software reuse is not due to the inherent complexity of individual reuse activities, which are often relatively simple and well understood—the difficulty lies in the large number of technical and managerial issues, which must be tackled simultaneously, and their interdependencies. Moreover, the impact of reuse on organization, management, strategy, marketing, business processes, software development, technologies, corporate culture, communication is often either underestimated or excessively emphasized. As a result software reuse is seen as a holy grail, unreachable to many. We want to review several myths about software reuse, which often act as roadblocks to widespread adoption of reuse as standard software engineering practice. We examine how each myth has been addressed at Sodalia, a three-year software development joint venture between Bell Atlantic (U.S.A.) and Telecom Italia. The company has embraced systematic reuse as a key strategic imperative to reach its objectives of high-quality, rapidly deployed telecommunications software applications.","PeriodicalId":270594,"journal":{"name":"ACM Stand.","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116345190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Standardizing the reuse of software processes
G. Succi, L. Benedicenti, P. Predonzani, T. Vernazza
ACM Stand., June 1997. DOI: 10.1145/260558.260564
We describe a model to define a set of standard reusable processes. To standardize and reuse a software process, we first need to describe it. We adopt Ivar Jacobson's use cases as a starting point and then generate scenarios and identify people and their roles. The data collected are significant enough to start mapping the enterprise—we use an OMT-like technique. By adopting activity-based management, it is possible to validate the "off-line" model directly "on-line." After the necessary corrections, the model is a good representation of the firm's real production process. This forms the basis for the reengineering process.

A process is a set of activities organized to reach a goal [Feiler and Humphrey 1992]. A process may follow predefined prescriptions, and it usually has one or more descriptions. We can reuse the prescriptions of an old process for a new one. We can define a new process that fits the descriptions of an old one. In all these cases, we speak of process reuse. We define process reuse as the replication of a set of actions of an already performed process in a new environment. Process reuse is useful in almost any field: all of industrialization has been viewed as the result of defining, standardizing, and replicating processes [Rullani 1988]. It is especially useful where there is a lack of consolidated practice, as in the software industry. The CMM and ISO 9000 share this view: they require some reuse of predefined software processes. ISO 9000 is almost entirely about defining a process schema to ensure that a company satisfies its own goals and monitoring how effectively the company follows the schema. CMM level 2 elicits a firm's underlying and hidden practices; further levels try to define them (level 3), handle and evaluate them properly (level 4), and make them work efficiently (level 5). The CMM-derived PSP focuses on teaching programmers how to describe, improve, and reuse their processes. Process reuse enables firms to create a set of corporate processes. Corporate processes may define the "essence" of a firm, the know-how that remains regardless of employee turnover. Well-structured corporate processes help new employees get acquainted with the firm. Business process reengineering is applicable only when the process is defined, i.e., only if a set of corporate processes is in place. Corporate processes need standardization: it is possible to define a corporate process only through a systematic definition of the key processes that are already in place. Therefore, process reuse and process standardization are two faces of the same coin. We explore this issue by describing Gertrude, a model to define a set of standard reusable processes.
{"title":"Standardizing the reuse of software processes","authors":"G. Succi, L. Benedicenti, P. Predonzani, T. Vernazza","doi":"10.1145/260558.260564","DOIUrl":"https://doi.org/10.1145/260558.260564","url":null,"abstract":"m We describe a model to define a set of standard reusable processes. To standardize and reuse a software process, we first need to describe it. We adopt Ivar Jacobson’s use cases as a starting point and then generate scenarios and identify people and their roles. The data collected are significant enough to start mapping the enterprise—we use an OMT-like technique. By adopting activity-based management, it is possible to validate the “off-line” model directly “on-line.” After the necessary corrections, the model is a good representation of the firm’s real production process. This forms the basis for the reengineering process. process is a set of activities organized to reach a goal [Feiler and Humphrey 1992]. A process may follow predefined prescriptions, and it usually has one or more descriptions. We can reuse the prescriptions of an old process for a new one. We can define a new process that fits the descriptions of an old one. In all these cases, we speak of process reuse. We define process reuse as the replica of a set of actions of an already performed process in a new environment. Process reuse is useful in almost any field: All of industrialization has been viewed as the result of defining, standardizing, and replicating processes [Rullani 1988]. It is especially useful where there is a lack of consolidated practice, as in the software industry. The CMM and the ISO 9000 share this view: They require some reuse of predefined software processes. ISO 9000 is almost entirely about defining a process schema to ensure that a company satisfies its own goals and monitoring how effectively the company follows the schema. CMM level 2 elicits a firm’s underlying and hidden practices; further levels try to define them (level 3), handle and evaluate them properly (level 4), and make them work efficiently (level 5). The CMM-derived PSP focuses on teaching programmers how to describe, to improve, and to reuse their processes. Process reuse enables firms to create a set of corporate processes. Corporate processes may define the “essence” of a firm, the know-how that remains regardless of employee turnover. Well-structured corporate processes help new employees to get acquainted with the firm. Business process reengineering is applicable only when the process is defined, i.e., only if a set of corporate processes is in place. Corporate processes need standardization: It is possible to define a corporate process only through a systematic definition of the key processes that are already in place. Therefore, process reuse and process standardization are two faces of the same coin. We explore this issue by describing Gertrude, a model to define a set of standard reusable processes. 
Standardizing the Reuse of Software Processes S U P P O R T I N G A R T I C L E ★","PeriodicalId":270594,"journal":{"name":"ACM Stand.","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122065197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
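Gertrude is only named, not specified, in this excerpt. The following sketch is a hypothetical, minimal data model for recording a described process (activities, the roles that perform them, and the scenarios they were elicited from) so that the description can be stored and replicated in a new context; all names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    role: str       # who performs the activity
    scenario: str   # the use-case scenario it was elicited from

@dataclass
class ProcessDescription:
    goal: str
    activities: list = field(default_factory=list)

    def roles(self):
        """Roles involved in the process, useful when mapping the enterprise."""
        return sorted({a.role for a in self.activities})

    def reuse_in(self, new_goal: str) -> "ProcessDescription":
        """Replicate the recorded activities under a new goal (process reuse)."""
        return ProcessDescription(new_goal, list(self.activities))

release = ProcessDescription("Release a validated build")
release.activities += [
    Activity("inspect code", role="reviewer", scenario="peer review"),
    Activity("run regression suite", role="tester", scenario="pre-release check"),
]
print(release.roles())                         # ['reviewer', 'tester']
print(release.reuse_in("Release a patch").goal)
```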
Software reuse and standardization for SMEs: the CIM-EXP perspective
G. Kovács
ACM Stand., June 1997. DOI: 10.1145/260558.260560

A short analysis of software reuse and standardization possibilities for SMEs (versus large firms) is followed by a discussion of the practical application of the SALMS software repository in a small Hungarian software consulting firm, CIM-EXP Ltd. Standardization is used to produce reusable assets (design for reuse), always based on the same rules, so as to make application of the assets (design with reuse) easier. The first experiences with both the reuse and the standardization have been positive.

Reuse of software elements is becoming more and more important in the lifecycle of software products. There are different views on the scope of reuse during the software lifecycle. One view is that reuse efforts should focus on code, as this work is more likely to have practical results [Frakes et al. 1990]. Another opinion is that all the results and resources used in a project, including human expertise, should be reused [Basili et al. 1988]. We note that all documents created during the perception, design, implementation, and testing of a product, such as ideas, methodologies, requirement specifications, design results, code, executable code, test procedures and results, and documentation, could be reused in later projects.
{"title":"Software reuse and standardization for SMEs: the CIM-EXP perspective","authors":"G. Kovács","doi":"10.1145/260558.260560","DOIUrl":"https://doi.org/10.1145/260558.260560","url":null,"abstract":"m A short analysis of software reuse and standardization possibilities for SMEs (versus large firms) is followed by the discussion of the practical application of the SALMS software repository in a small Hungarian software consulting firm, CIM-EXP Ltd. Standardization is used to produce reusable assets (design for reuse), always based on the same rules, to make application of the assets (design with reuse) easier. The first experiences are rather good in the positive effects of both the reuse and the standardization. euse of software elements is becoming more and more important in the lifecycle of software products. There are different views on the scope of reuse during the software lifecycle. One view is that reuse efforts should focus on code, as this work is more likely to have practical results [Frakes et al. 1990]. Another opinion is that all the results and resources used in a project, including human expertise, should be reused [Basili et al. 1988]. We note that all documents created during the perception-design-implementation-testing of a product, such as ideas, methodologies, requirement specifications, design results, code, executable code, test procedures and results, documentation, could be reused in later projects.","PeriodicalId":270594,"journal":{"name":"ACM Stand.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128555044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Metrology for information technology
L. Carnahan, G. Carver, M. Gray, Michael D. Hogan, T. Hopp, J. Horlick, G. Lyon, E. Messina
ACM Stand., May 1997. DOI: 10.1145/266231.266236
In May 1996, NIST management requested a white paper on metrology for information technology (IT). A task group was formed to develop this white paper with representatives from the Manufacturing Engineering Laboratory (MEL), the Information Technology Laboratory (ITL), and Technology Services (TS). The task group members had a wide spectrum of experiences and perspectives on testing and measuring physical and IT quantities. The task group believed that its collective experience and knowledge were probably sufficient to investigate the underlying question of the nature of IT metrology. During the course of its work, the task group did not find any previous work addressing the overall subject of metrology for IT. The task group found it to be both exciting and challenging to possibly be first in what should be a continuing area of study. After some spirited deliberations, the task group was able to reach consensus on its white paper. Also, as a result of its deliberations, the task group decided that this white paper should suggest possible answers rather than assert definitive conclusions. In this spirit, the white paper suggests a scope and a conceptual basis for IT metrology; a taxonomy for IT methods of testing; the status of IT testing and measurement; opportunities to advance IT metrology; and overall roles for NIST; and it recapitulates the importance of IT metrology to the U.S. The task group is very appreciative of having had the opportunity to produce this white paper. The task group hopes that this white paper will provide food for thought for our intended audience: NIST management and technical staff and our colleagues elsewhere who are involved in various aspects of testing and measuring IT.
{"title":"Metrology for information technology","authors":"L. Carnahan, G. Carver, M. Gray, Michael D. Hogan, T. Hopp, J. Horlick, G. Lyon, E. Messina","doi":"10.1145/266231.266236","DOIUrl":"https://doi.org/10.1145/266231.266236","url":null,"abstract":"Abstract : In May 1996, NIST management requested a white paper on metrology for information technology (IT). A task group was formed to develop this white paper with representatives from the Manufacturing Engineering Laboratory (MEL), the Information Technology Laboratory (ITL), and Technology Services (TS). The task group members had a wide spectrum of experiences and perspectives on testing and measuring physical and IT quantities. The task group believed that its collective experience and knowledge were probably sufficient to investigate the underlying question of the nature of IT metrology. During the course of its work, the task group did not find any previous work addressing the overall subject of metrology for IT. The task group found it to be both exciting and challenging to possibly be first in what should be a continuing area of study. After some spirited deliberations, the task group was able to reach consensus on its white paper. Also, as a result of its deliberations, the task group decided that this white paper should suggest possible answers rather than assert definitive conclusions. In this spirit, the white paper suggests: a scope and a conceptual basis for IT metrology; a taxonomy for IT methods of testing; status of IT testing and measurement; opportunities to advance IT metrology; overall roles for NIST; and recapitulates the importance of IT metrology to the U.S. The task group is very appreciative of having had the opportunity to produce this white paper. The task group hopes that this white paper will provide food for thought for our intended audience: NIST management and technical staff and our colleagues elsewhere who are involved in various aspects of testing and measuring IT.","PeriodicalId":270594,"journal":{"name":"ACM Stand.","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134243501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
World Wide Web distributed authoring and versioning (WebDAV): an introduction
E. J. Whitehead
ACM Stand., March 1997. DOI: 10.1145/253452.253458

Today, the typical use of the World Wide Web is to browse information in a largely read-only manner. But this was not the original idea—as early as 1990, a prototype Web editor and browser was operational on the NeXT platform, demonstrating how Web content could be read and written. Unfortunately, most of the world never saw this editor/browser, but instead developed their view of the Web from the widely distributed text-based line mode browser. When NCSA Mosaic was developed, it improved on the line mode browser by adding a graphical user interface and inline images, but had no provision for editing. As Mosaic 2.4 reached critical mass in 1993–4, "publish/browse" became the dominant model for the Web. But the original view of the Web as a readable and writable collaborative medium was not lost.

In 1995, two browser/editor products were released: NaviPress by NaviSoft and FrontPage by Vermeer. These products began developing a market for authoring tools that allow a user to edit HyperText Markup Language (HTML) pages remotely [Raggett 1997], taking advantage of the ability to work at a distance over the Internet. In early 1996, NaviSoft and Vermeer were purchased by America Online and Microsoft, respectively, presaging major corporate interest in Web distributed authoring technology. In 1995–96, Netscape released Navigator Gold, a Web browser/editor tool able to publish pages to a remote Web server. 1996–97 also saw the release of Web-integrated word processors, with Microsoft Word 97, Lotus WordPro 97, and Corel WordPerfect 7 all offering HTML editing and remote publishing capabilities.

In this setting, an ad hoc collection of people interested in remote authoring (now known as the WebDAV working group) met at the WWW4 conference in December 1995, and then at America Online in June 1996. Composed of developers working on remote authoring tools and people generally interested in extending the Web for authoring, this group identified key issues in writing these authoring tools, and also found a pressing need to develop standard extensions to the HyperText Transfer Protocol (HTTP) [Fielding et al. 1997] for the following capabilities:
— Metadata, to create, remove, and query information about Web pages, such as author and creation date, and also to link pages of any media type to related pages.
— Name space management, to copy and move Web pages, and to receive a listing of pages at a particular hierarchy level (like a directory listing in a file …
{"title":"World Wide Web distributed authoring and versioning (WebDAV): an introduction","authors":"E. J. Whitehead","doi":"10.1145/253452.253458","DOIUrl":"https://doi.org/10.1145/253452.253458","url":null,"abstract":"Ⅵ Today, the typical use of the World Wide Web is to browse information in a largely read-only manner. But this was not the original idea—as early as 1990, a prototype Web editor and browser was operational on the Next platform, demonstrating how Web content could be read and written. Unfortunately, most of the world never saw this editor/brows-er, but instead developed their view of the Web from the widely distributed text-based line mode browser. When NCSA Mosaic was developed, it improved the line mode browser by adding a graph-ical user interface and inline images, but had no provision for editing. As Mosaic 2.4 reached critical mass in 1993–4, \" publish/browse \" became the dominant model for the Web. But the original view of the Web as a readable and writable collaborative medium was not lost. n 1995, two browser/editor products were released: NaviPress by NaviSoft and Front-Page by Vermeer. These products began developing a market for authoring tools that allow a user to edit HyperText Markup Language (HTML) pages remotely [Raggett 1997], taking advantage of the ability to work at a distance over the In-ternet. In early 1996, NaviSoft and Ver-meer were purchased by America Online and Microsoft, respectively, presaging major corporate interest in Web distributed authoring technology. In 1995–96, Netscape released Navigator Gold, a Web browser/editor tool, able to publish pages to a remote Web server. 1996–7 also saw the release of Web-integrated word processors, with Microsoft Word 97, Lotus WordPro 97, and Corel WordPerfect 7, all with HTML editing and remote publishing capacities. In this setting, an ad hoc collection of people interested in remote authoring (now known as the WebDAV working group) met at the WWW4 conference in December 1995, and then at America Online in June 1996. Comprised of developers working on remote authoring tools, and people generally interested in extending the Web for authoring, this group identified key issues in writing these authoring tools, and also found a pressing need to develop standard extensions to the HyperText Transfer Protocol (HTTP) [Fielding et al. 1997] for the following capabilities: —Metadata, to create, remove, and query information about Web pages, such as its author, creation date, etc., also to link pages of any media type to related pages. —Name space management, to copy and move Web pages, and to receive a listing of pages at a particular hierarchy level (like a directory listing in a file …","PeriodicalId":270594,"journal":{"name":"ACM Stand.","volume":"135 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124004352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}