The goal of this research is to express domain knowledge in software applications explicitly, and as separate as possible from the implementation strategy. Although some (domain) knowledge is notoriously hard to elicit and capture, as was discovered in building expert systems, the domain knowledge we intend to make explicit is quite tangible, as the examples illustrate. In fact, this domain knowledge is currently "implemented" using an (object-oriented) programming language. When expressed in a suitable medium, domain knowledge consists of concepts and relations between the concepts, constraints on the concepts and the relations, and rules that state how to infer new concepts and relations.
"Making software knowledgeable," M. D'Hondt. In Proceedings of the 24th International Conference on Software Engineering (ICSE 2002), 2002-05-19. doi:10.1145/581339.581477
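The concepts-relations-rules representation described above can be sketched as explicit data plus a tiny inference loop. This is a hypothetical illustration (the names `facts` and `transitive_is_a` are invented here), not the representation medium the paper proposes:

```python
# Domain knowledge held as explicit facts (concept, relation, concept),
# separate from any application code, plus one inference rule.
facts = {
    ("employee", "is_a", "person"),
    ("manager", "is_a", "employee"),
}

def transitive_is_a(facts):
    """One rule: is_a is transitive. Apply it until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(derived):
            for (c, r2, d) in list(derived):
                if r1 == r2 == "is_a" and b == c and (a, "is_a", d) not in derived:
                    derived.add((a, "is_a", d))
                    changed = True
    return derived

knowledge = transitive_is_a(facts)
print(("manager", "is_a", "person") in knowledge)  # True
```

The point of the sketch is the separation: the facts and the rule live in data and a generic interpreter, not scattered through application classes.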
This tutorial provides an overview of patterns and principles that we have found useful in designing business information systems.
"Information systems architecture," M. Fowler. In Proceedings of the 24th International Conference on Software Engineering (ICSE 2002), 2002-05-19. doi:10.1145/581441.581454
Summary form only given. In this time of economic turmoil, IT is emerging as a foundation for the transition to the new economy. This presentation provides a high-level perspective on the key business and technology megatrends shaping the future of IT, as well as the key management initiatives required to harness and exploit IT effectively. Key issues include: i) What are the key trends and events that will drive new IT investments during the next five years? ii) How will technology advances and changes impact IT deployment decisions? iii) How can organizations harness and exploit IT despite ever-increasing complexity and volatility?
"Transforming and extending the enterprise through IT," D. Feinberg. In Proceedings of the 24th International Conference on Software Engineering (ICSE 2002), 2002-05-19. doi:10.1145/581339.581341
Jörg Niere, Wilhelm Schäfer, J. Wadsack, Lothar Wendehals, J. Welsh
A method and a corresponding tool are described that assist design recovery and program understanding by recognising instances of design patterns semi-automatically. The approach is specifically designed to overcome the scalability problems caused by the many design and implementation variants of design pattern instances. It is based on a new recognition algorithm that works incrementally, rather than trying to analyse a possibly large software system in a single pass without any human intervention. The new algorithm exploits domain and context knowledge supplied by a reverse engineer, together with a special underlying data structure, namely a particular form of annotated abstract syntax graph. A comparative, quantitative evaluation of applying the approach to the Java AWT and JGL libraries is also given.
"Towards pattern-based design recovery," Jörg Niere, Wilhelm Schäfer, J. Wadsack, Lothar Wendehals, J. Welsh. In Proceedings of the 24th International Conference on Software Engineering (ICSE 2002), 2002-05-19. doi:10.1145/581380.581382
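As a rough, hypothetical illustration of incremental, clue-based recognition (not the authors' algorithm, annotations, or graph structure), one structural clue for the Composite pattern might be scored like this, with a reverse engineer then confirming or rejecting each candidate:

```python
# Toy stand-in for an annotated syntax graph: which classes each class
# holds references to, and each class's superclass. Names are invented.
delegates = {"Panel": {"Component"}, "Button": set(), "Component": set()}
subclasses = {"Panel": "Component", "Button": "Component"}

def composite_candidates(delegates, subclasses):
    """Score classes that both subclass X and hold children of type X --
    one structural clue suggesting a Composite pattern instance."""
    out = []
    for cls, parent in subclasses.items():
        score = 0.5 if parent in delegates.get(cls, ()) else 0.0
        if score > 0:
            out.append((cls, parent, score))
    return out

cands = composite_candidates(delegates, subclasses)
print(cands)  # [('Panel', 'Component', 0.5)]
# A reverse engineer would now confirm or reject each candidate; confirmed
# instances become context that narrows the next, incremental search round.
```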
Summary form only given. The capabilities of modern mobile devices enable new classes of applications that exploit the ability to form ad-hoc workgroups and exchange data in a very dynamic fashion. They also present new challenges to application developers, however, related to the scarcity of resources, which must be used efficiently. Moreover, network connectivity may be interrupted at any moment, and network bandwidth remains orders of magnitude lower than in wired networks. To address these issues, we have designed and implemented XMIDDLE, which advances mobile computing middleware by choosing a more powerful underlying data structure (XML) and by supporting offline data manipulation.
"XMIDDLE: information sharing middleware for a mobile environment," S. Zachariadis, L. Capra, C. Mascolo, W. Emmerich. In Proceedings of the 24th International Conference on Software Engineering (ICSE 2002), 2002-05-19. doi:10.1145/581339.581463
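The offline-manipulation idea might be pictured as follows. This is a hypothetical sketch using Python's standard XML library, not XMIDDLE's actual API or reconciliation protocol:

```python
# Devices replicate a shared XML tree, edit it while disconnected, and
# reconcile the branches on reconnection. The merge policy is invented.
import xml.etree.ElementTree as ET

shared = ET.fromstring("<list><item id='1'>milk</item></list>")

# A device goes offline and keeps working on its local replica.
replica = ET.fromstring(ET.tostring(shared))
ET.SubElement(replica, "item", id="2").text = "bread"

def reconcile(base, branch):
    """Naive merge policy: adopt items the branch added (keyed by id)."""
    known = {i.get("id") for i in base.findall("item")}
    for item in branch.findall("item"):
        if item.get("id") not in known:
            base.append(item)
    return base

reconcile(shared, replica)
print([i.text for i in shared.findall("item")])  # ['milk', 'bread']
```

A real system would also have to handle conflicting edits to the same node, which this last-writer-adds policy sidesteps.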
There is a constant need for practical, efficient, and cost-effective software evolution techniques. We propose a novel evolution methodology that integrates the concepts of features, regression tests, and component-based software engineering (CBSE). Regression test cases are an untapped resource, full of information about system features. By exercising each feature with its associated test cases under a code profiler or similar tool, the corresponding code can be located and refactored into components. These components are then inserted back into the legacy system, ensuring a working system structure. The methodology is divided into three parts: part one identifies the source code associated with the features that need to evolve, part two creates the components, and part three measures the results. By applying this methodology, AFS has successfully restructured its enterprise legacy system and reduced the costs of future maintenance. Additionally, the components refactored from the legacy system are currently being used within a web-enabled application.
"Evolving legacy system features into fine-grained components," A. Mehta, G. Heineman. In Proceedings of the 24th International Conference on Software Engineering (ICSE 2002), 2002-05-19. doi:10.1145/581388.581391
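The profiling step in part one can be sketched with a simple call tracer. This is a hypothetical illustration (toy functions, a naive tracer), not the profilers or the AFS system described above:

```python
# Run a feature's "regression test" under a tracer and record which
# functions execute, so the feature's code can be located for refactoring.
import sys

def trace_calls(fn, *args):
    called = set()
    def tracer(frame, event, arg):
        if event == "call":
            called.add(frame.f_code.co_name)
        return tracer
    sys.settrace(tracer)
    try:
        fn(*args)
    finally:
        sys.settrace(None)
    return called

# Toy "legacy system" functions, invented for the example.
def compute_tax(x): return x * 0.2
def print_invoice(x): return f"total {x + compute_tax(x)}"

# Exercising the invoicing feature maps it to the functions it touches.
feature_map = {"invoicing": trace_calls(print_invoice, 100)}
print("compute_tax" in feature_map["invoicing"])  # True
```

Functions that appear only under one feature's tests are the natural seeds for extracting that feature into a component.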
Web applications are the legacy software of the future. Developed under tight schedules, with high employee turnover, and in a rapidly evolving environment, these systems are often poorly structured and poorly documented. Maintaining such systems is problematic. This paper presents an approach to recover the architecture of such systems, in order to make maintenance more manageable. Our lightweight approach is flexible and retargetable to the various technologies that are used in developing Web applications. The approach extracts the structure of dynamic Web applications and shows the interaction between their various components such as databases, distributed objects, and Web pages. The recovery process uses a set of specialized extractors to analyze the source code and binaries of Web applications. The extracted data is manipulated to reduce the complexity of the architectural diagrams. Developers can use the extracted architecture to gain a better understanding of Web applications and to assist in their maintenance.
"Architecture recovery of Web applications," A. Hassan, R. Holt. In Proceedings of the 24th International Conference on Software Engineering (ICSE 2002), 2002-05-19. doi:10.1145/581339.581383
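A toy extractor in the spirit of the approach might look like this; the patterns and page names are invented for illustration and are not the paper's specialized extractors:

```python
# Scan page sources for clues about which components a page talks to,
# yielding edges of an architecture graph (page -> database, page -> page).
import re

pages = {
    "checkout.php": 'mysql_query("SELECT * FROM orders"); include "cart.php";',
    "cart.php": 'header("Location: checkout.php");',
}

def extract_edges(pages):
    edges = set()
    for page, src in pages.items():
        if re.search(r"mysql_query", src):
            edges.add((page, "database"))
        for dep in re.findall(r'include\s+"([^"]+)"', src):
            edges.add((page, dep))
    return edges

arch = extract_edges(pages)
print(sorted(arch))  # [('checkout.php', 'cart.php'), ('checkout.php', 'database')]
```

Real extractors would of course handle many more technologies (distributed objects, templates, redirects) and feed a graph tool that collapses the edges into readable diagrams.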
S. Kusumoto, M. Imagawa, Katsuro Inoue, S. Morimoto, K. Matsusita, Michio Tsuda
Function point analysis (FPA) was proposed to help measure the functionality of software systems, and it is used to estimate the effort required for software development. However, it has been reported that, since function point measurement involves judgment on the part of the measurer, different values may be obtained for the same product even within the same organization. Also, when an organization introduces FPA, function points must be measured for the software it has developed in the past, and this measurement is costly. We examine the possibility of measuring FP from source code automatically. First, we propose measurement rules for counting the data and transactional functions of an object-oriented program, based on the IFPUG method, and develop a function point measurement tool. Then, we apply the tool to practical Java programs in a computer company and examine the difference between the FP values obtained by the tool and those of an FP measurement specialist. The results show that the numbers of data and transactional functions extracted by the tool are similar to those found by the specialist, although the classification of individual functions differs somewhat.
"Function point measurement from Java programs," S. Kusumoto, M. Imagawa, Katsuro Inoue, S. Morimoto, K. Matsusita, Michio Tsuda. In Proceedings of the 24th International Conference on Software Engineering (ICSE 2002), 2002-05-19. doi:10.1145/581410.581412
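An intentionally naive illustration of extracting candidate functions from source follows; this is not the IFPUG counting rules or the authors' tool, and the regular expressions would miss many real Java constructs:

```python
# Treat public methods as candidate transactional functions and private
# fields as candidate data functions, using crude regexes over Java source.
import re

java_src = """
public class Account {
    private double balance;
    private String owner;
    public void deposit(double amt) { balance += amt; }
    public double getBalance() { return balance; }
}
"""

transactional = re.findall(r"public\s+\w+\s+(\w+)\s*\(", java_src)
data = re.findall(r"private\s+\w+\s+(\w+)\s*;", java_src)
print(transactional, data)  # ['deposit', 'getBalance'] ['balance', 'owner']
```

The hard part the paper addresses is not this extraction but classifying each candidate (e.g. as an external input, output, or inquiry), which is where the tool and the specialist diverge.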
Pub Date: 2002-05-19. DOI: 10.1109/ICSE.2002.1007971
S. Butler
Conducting cost-benefit analyses of architectural attributes such as security has always been difficult, because the benefits are difficult to assess. Specialists usually make security decisions, but program managers are left wondering whether their investment in security is well spent. The paper summarizes the results of using a cost-benefit analysis method called SAEM to compare alternative security designs in a financial and accounting information system. The case study presented starts with a multi-attribute risk assessment that results in a prioritized list of risks. Security specialists estimate countermeasure benefits and how the organization's risks are reduced. Using SAEM, security design alternatives are compared with the organization's current selection of security technologies to see if a more cost-effective solution is possible. The goal of using SAEM is to help information-system stakeholders decide whether their security investment is consistent with the expected risks.
"Security attribute evaluation method: a cost-benefit approach," S. Butler. In Proceedings of the 24th International Conference on Software Engineering (ICSE 2002), 2002-05-19. doi:10.1109/ICSE.2002.1007971
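The core comparison can be sketched as ranking countermeasures by estimated risk reduction per unit cost; the numbers and the simplified model below are invented for illustration and are not SAEM itself:

```python
# Risks carry an expected annual loss; each countermeasure has a cost and
# an estimated fraction of each risk it removes (specialist estimates).
risks = {"data_theft": 500_000, "downtime": 120_000}

countermeasures = {
    # name: (cost, {risk: fraction of that risk it removes})
    "encryption": (40_000, {"data_theft": 0.6}),
    "failover":   (90_000, {"downtime": 0.8}),
}

def benefit_per_cost(cm):
    cost, reductions = countermeasures[cm]
    benefit = sum(risks[r] * f for r, f in reductions.items())
    return benefit / cost

ranked = sorted(countermeasures, key=benefit_per_cost, reverse=True)
print(ranked)  # encryption: 300000/40000 = 7.5; failover: 96000/90000 ~ 1.07
```

SAEM's actual model is multi-attribute (it weighs several outcome attributes per risk, not a single loss figure), but the ranking-by-cost-effectiveness shape is the same.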
The emergence of networked lightweight portable computing devices can potentially enable access to a vast array of remote applications and data. To cope with the shortage of local resources such as memory, CPU and bandwidth, such applications are typically designed as thin-client, thick-server applications. However, another highly desirable yet conflicting requirement is to support disconnected operation, due to the low quality and high cost of online connectivity. We present a novel programming model and a runtime infrastructure that address these requirements by automatically reconfiguring the application to operate in disconnected mode when voluntary disconnection is requested, and automatically restoring normal distributed operation upon reconnection. The programming model enables developers to design disconnection-aware applications by providing a set of component reference annotations with special disconnection and reconnection semantics. Using these annotations, designers can identify critical components, priorities, dependencies, local component alternatives with reduced functionality, and state-merging policies. The runtime infrastructure carries out disconnection and reconnection semantics using component mobility and dynamic application layout. The disconnected operation framework, FarGo-DA, is an extension of FarGo, a mobile component framework for distributed applications.
"A programming model and system support for disconnected-aware applications on resource-constrained devices," Y. Weinsberg, I. Ben-Shaul. In Proceedings of the 24th International Conference on Software Engineering (ICSE 2002), 2002-05-19. doi:10.1145/581384.581386
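The annotation idea might be rendered as follows in Python; the decorator, registry, and policy names are hypothetical and do not reflect FarGo-DA's actual API:

```python
# Component declarations carry disconnection semantics that a runtime
# would consult when the device goes offline. All names are invented.
REGISTRY = {}

def component(name, on_disconnect="keep"):
    """on_disconnect: 'keep' (pull the component to the device), 'drop',
    or the name of a reduced-functionality local alternative."""
    def wrap(cls):
        REGISTRY[name] = {"cls": cls, "on_disconnect": on_disconnect}
        return cls
    return wrap

@component("catalog", on_disconnect="cached_catalog")
class Catalog: ...

@component("payments", on_disconnect="drop")
class Payments: ...

def plan_disconnection(registry):
    """What the runtime would do for each component when going offline."""
    return {n: meta["on_disconnect"] for n, meta in registry.items()}

print(plan_disconnection(REGISTRY))
# {'catalog': 'cached_catalog', 'payments': 'drop'}
```

In the paper's framework the annotations sit on component *references* and the runtime also handles relocation and state merging on reconnection, which this sketch omits.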