Multi-agent systems (MAS) that link business and physical domains such as in production automation need to be reconfigured correctly and efficiently to adapt to new requirements. The standard UML-based approach only partly supports the reconfiguration process to capture agent classes and their instances, while ontologies allow modeling all aspects of MAS design in one common continuous model. In this paper we introduce a MAS development life cycle and focus on the product-specific reconfiguration of a system built mostly from reusable agents. We investigate the process variants based on (a) UML and (b) ontologies. We evaluate both process variants in a feasibility study using fundamental illustrative scenarios from an industrial production automation environment and derive lessons learned for process improvement in building sustainable MAS in the scope of production automation.
{"title":"Investigating UML- and Ontology-Based Approaches for Process Improvement in Developing Agile Multi-Agent Systems","authors":"T. Moser, K. Kunz, K. Matousek, D. Wahyudin","doi":"10.1109/SEAA.2008.37","DOIUrl":"https://doi.org/10.1109/SEAA.2008.37","url":null,"abstract":"Multi-agent systems (MAS) that link business and physical domains such as in production automation need to be reconfigured correctly and efficiently to adapt to new requirements. The standard UML-based approach only partly supports the reconfiguration process to capture agent classes and their instances, while ontologies allow modeling all aspects of MAS design in one common continuous model. In this paper we introduce a MAS development life cycle and focus on the product-specific reconfiguration of a system built mostly from reusable agents. We investigate the process variants based on (a) UML and (b) ontologies. We evaluate both process variants in a feasibility study using fundamental illustrative scenarios from an industrial production automation environment and derive lessons learned for process improvement in building sustainable MAS in the scope of production automation.","PeriodicalId":127633,"journal":{"name":"2008 34th Euromicro Conference Software Engineering and Advanced Applications","volume":"201 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116064571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
B. Hamid, A. Radermacher, Patrick Vanuxeem, A. Lanusse, S. Gérard
The requirement for higher reliability and availability of systems is continuously increasing, even in domains not traditionally strongly concerned with such issues. The required solutions are expected to be efficient, flexible, reusable on rapidly evolving hardware and, of course, low-cost. Combining models and components seems to be a very promising approach to this problem. Hence, in this paper we present an approach that uses a model as a first-class citizen all along the development process. Our proposal is illustrated with an application modeled in UML (extended with some of its dedicated profiles). Our approach includes an underlying execution infrastructure/middleware providing fault-tolerance services. For the component aspect, our framework first promotes an infrastructure based on the Component/Container/Connector paradigm to provide run-time facilities enabling transparent management of fault tolerance (mainly fault-detection and redundancy mechanisms). From the model-driven point of view, our framework provides tool support for assisting users in modeling their applications and in deploying and configuring them on computing platforms. In this paper we focus on the run-time support offered by the component framework, especially the replication-aware interaction mechanism enabling transparent replication management, and on some additional system components dedicated to fault detection and replica management.
{"title":"A Fault-tolerance Framework for Distributed Component Systems","authors":"B. Hamid, A. Radermacher, Patrick Vanuxeem, A. Lanusse, S. Gérard","doi":"10.1109/SEAA.2008.50","DOIUrl":"https://doi.org/10.1109/SEAA.2008.50","url":null,"abstract":"The requirement for higher reliability and availability of systems is continuously increasing, even in domains not traditionally strongly concerned with such issues. The required solutions are expected to be efficient, flexible, reusable on rapidly evolving hardware and, of course, low-cost. Combining models and components seems to be a very promising approach to this problem. Hence, in this paper we present an approach that uses a model as a first-class citizen all along the development process. Our proposal is illustrated with an application modeled in UML (extended with some of its dedicated profiles). Our approach includes an underlying execution infrastructure/middleware providing fault-tolerance services. For the component aspect, our framework first promotes an infrastructure based on the Component/Container/Connector paradigm to provide run-time facilities enabling transparent management of fault tolerance (mainly fault-detection and redundancy mechanisms). From the model-driven point of view, our framework provides tool support for assisting users in modeling their applications and in deploying and configuring them on computing platforms. In this paper we focus on the run-time support offered by the component framework, especially the replication-aware interaction mechanism enabling transparent replication management, and on some additional system components dedicated to fault detection and replica management.","PeriodicalId":127633,"journal":{"name":"2008 34th Euromicro Conference Software Engineering and Advanced Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129527401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Belda, I. D. Fez, F. Fraile, V. Murcia, P. Arce, J. C. Guerri
This paper describes the main challenges of implementing a multimedia on demand system (TetraMoD) for emergency scenarios. The objective of this system is to offer a complete solution for providing multimedia services (video on demand and reliable file transfer) to a rescue team in an emergency situation. This multimedia system improves traditional emergency communications. We propose to integrate the use of different networks, TETRA and DVBT, characterized by wide geographical coverage and broadband broadcast, respectively. The system consists of the following elements: the TetraMoD gateway - the unit deployed as an interface between the TETRA and DVBT networks - and the TetraMoD client middleware - the software for the client terminal. A real scenario and an example of communication using standard protocols (SDP, RTP/RTCP, RTSP, FLUTE) are shown.
{"title":"Multimedia System for Emergency Services over TETRA-DVBT Networks","authors":"R. Belda, I. D. Fez, F. Fraile, V. Murcia, P. Arce, J. C. Guerri","doi":"10.1109/SEAA.2008.71","DOIUrl":"https://doi.org/10.1109/SEAA.2008.71","url":null,"abstract":"This paper describes the main challenges of implementing a multimedia on demand system (TetraMoD) for emergency scenarios. The objective of this system is to offer a complete solution for providing multimedia services (video on demand and reliable file transfer) to a rescue team in an emergency situation. This multimedia system improves traditional emergency communications. We propose to integrate the use of different networks, TETRA and DVBT, characterized by wide geographical coverage and broadband broadcast, respectively. The system consists of the following elements: the TetraMoD gateway - the unit deployed as an interface between the TETRA and DVBT networks - and the TetraMoD client middleware - the software for the client terminal. A real scenario and an example of communication using standard protocols (SDP, RTP/RTCP, RTSP, FLUTE) are shown.","PeriodicalId":127633,"journal":{"name":"2008 34th Euromicro Conference Software Engineering and Advanced Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114181707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The OSGi Service Platform provides a framework for the dynamic deployment of Java-based applications. It allows application modules to be installed, activated, updated, and uninstalled without restarting the host Java Virtual Machine. However, mishandling these OSGi dynamics may result in a problem described in the OSGi specification as stale references, which arise when services from uninstalled modules are still referenced by active code. This may lead to inconsistencies in an application's behavior, state, and memory. Currently, there are no tools available to address this issue. This paper presents a diagnostic tool named ServiceCoroner that detects such problems. It helps developers and administrators diagnose OSGi applications running either in production or in test environments. We have validated the tool on two open source applications that run on OSGi: a JavaEE application server and a multi-protocol instant messenger application. The results of the experiments show stale references in both applications.
{"title":"Service Coroner: A Diagnostic Tool for Locating OSGi Stale References","authors":"Kiev Gama, D. Donsez","doi":"10.1109/SEAA.2008.32","DOIUrl":"https://doi.org/10.1109/SEAA.2008.32","url":null,"abstract":"The OSGi Service Platform provides a framework for the dynamic deployment of Java-based applications. It allows application modules to be installed, activated, updated, and uninstalled without restarting the host Java Virtual Machine. However, mishandling these OSGi dynamics may result in a problem described in the OSGi specification as stale references, which arise when services from uninstalled modules are still referenced by active code. This may lead to inconsistencies in an application's behavior, state, and memory. Currently, there are no tools available to address this issue. This paper presents a diagnostic tool named ServiceCoroner that detects such problems. It helps developers and administrators diagnose OSGi applications running either in production or in test environments. We have validated the tool on two open source applications that run on OSGi: a JavaEE application server and a multi-protocol instant messenger application. The results of the experiments show stale references in both applications.","PeriodicalId":127633,"journal":{"name":"2008 34th Euromicro Conference Software Engineering and Advanced Applications","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115068967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
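The stale-reference problem described in this abstract can be sketched with a toy registry: a module's service is removed from the registry, but a client still holds the service object it obtained earlier. This is a minimal illustrative sketch only; the class and method names (`ToyRegistry`, `staleCount`) are invented and are not part of the OSGi or ServiceCoroner APIs.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of how a stale reference arises: a service is uninstalled
// while a client still holds the service object handed out earlier.
public class ToyRegistry {
    interface Service { String call(); }

    private final Map<String, Service> active = new HashMap<>();
    private final List<Service> handedOut = new ArrayList<>();

    public void register(String id, Service s) { active.put(id, s); }

    // Clients obtain services through the registry; remember what was handed out.
    public Service get(String id) {
        Service s = active.get(id);
        if (s != null) handedOut.add(s);
        return s;
    }

    // Uninstalling removes the service from the registry, but references
    // already handed to clients keep the object reachable: a stale reference.
    public void uninstall(String id) { active.remove(id); }

    // Diagnostic pass in the spirit of ServiceCoroner: count handed-out
    // references that no longer correspond to an active service.
    public long staleCount() {
        return handedOut.stream().filter(s -> !active.containsValue(s)).count();
    }

    public static void main(String[] args) {
        ToyRegistry reg = new ToyRegistry();
        reg.register("logger", () -> "log entry");
        Service held = reg.get("logger"); // client keeps this reference
        reg.uninstall("logger");          // module goes away
        System.out.println("stale references: " + reg.staleCount()); // 1
        held.call(); // still callable -> the inconsistency the paper targets
    }
}
```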
When software systems incorporate existing software components, there is a need to evaluate these components. According to the literature, component evaluation is of two kinds: component certification is performed by an independent actor to provide a trustworthy assessment of the component's properties in general, and component selection is performed by a system development organization. While this principle is generally understood, in practice the certification process is neither established nor well defined. This paper outlines the relationship between the evaluations performed during certification and selection. Starting from the current state of practice and research, we (a) propose a component-based life cycle for COTS-based development and software product line development, (b) identify a number of differences in process characteristics between the two types of evaluation, and (c) classify concrete quality properties based on their suitability to be evaluated during certification (when there is no system context) and/or during system development.
{"title":"Towards Efficient Software Component Evaluation: An Examination of Component Selection and Certification","authors":"R. Land, Alexandre Alvaro, I. Crnkovic","doi":"10.1109/SEAA.2008.76","DOIUrl":"https://doi.org/10.1109/SEAA.2008.76","url":null,"abstract":"When software systems incorporate existing software components, there is a need to evaluate these components. According to the literature, component evaluation is of two kinds: component certification is performed by an independent actor to provide a trustworthy assessment of the component's properties in general, and component selection is performed by a system development organization. While this principle is generally understood, in practice the certification process is neither established nor well defined. This paper outlines the relationship between the evaluations performed during certification and selection. Starting from the current state of practice and research, we (a) propose a component-based life cycle for COTS-based development and software product line development, (b) identify a number of differences in process characteristics between the two types of evaluation, and (c) classify concrete quality properties based on their suitability to be evaluated during certification (when there is no system context) and/or during system development.","PeriodicalId":127633,"journal":{"name":"2008 34th Euromicro Conference Software Engineering and Advanced Applications","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128954660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Previous studies show that agile development models implement very few risk management practices. In this paper, we present and evaluate a model integrating risk management and agile processes. The results show that the model provides a valid solution to the lack of risk management, but only in certain types of agile projects.
{"title":"Outlining a Model Integrating Risk Management and Agile Software Development","authors":"Jaana Nyfjord, M. Kajko-Mattsson","doi":"10.1109/SEAA.2008.77","DOIUrl":"https://doi.org/10.1109/SEAA.2008.77","url":null,"abstract":"Previous studies show that agile development models implement very few risk management practices. In this paper, we present and evaluate a model integrating risk management and agile processes. The results show that the model provides a valid solution to the lack of risk management, but only in certain types of agile projects.","PeriodicalId":127633,"journal":{"name":"2008 34th Euromicro Conference Software Engineering and Advanced Applications","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121807193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The paper presents a framework for distributed embedded applications that can be used to engineer open and, at the same time, predictable embedded systems. Applications are composed from components (actors), which communicate transparently by exchanging labeled messages (signals) over a real-time network. The signals are exchanged at precisely specified time instants, in accordance with the Distributed Timed Multitasking (DTM) model of computation, resulting in the elimination of task and transaction I/O jitter. DTM is supported by an operational environment, which has been integrated with application components in an implementation model that explicitly specifies the composition of software nodes allocated to physical network nodes. The framework is characterized by a complete separation of computation and communication, whereby communication is delegated to the timed-multitasking operational environment. This has resulted in a simplified application model in which actors are reduced to actor tasks composed of prefabricated components, such as state machine and action function blocks.
{"title":"A Software Framework for Hard Real-Time Distributed Embedded Systems","authors":"C. Angelov, K. Sierszecki, Feng Zhou","doi":"10.1109/SEAA.2008.29","DOIUrl":"https://doi.org/10.1109/SEAA.2008.29","url":null,"abstract":"The paper presents a framework for distributed embedded applications that can be used to engineer open and, at the same time, predictable embedded systems. Applications are composed from components (actors), which communicate transparently by exchanging labeled messages (signals) over a real-time network. The signals are exchanged at precisely specified time instants, in accordance with the Distributed Timed Multitasking (DTM) model of computation, resulting in the elimination of task and transaction I/O jitter. DTM is supported by an operational environment, which has been integrated with application components in an implementation model that explicitly specifies the composition of software nodes allocated to physical network nodes. The framework is characterized by a complete separation of computation and communication, whereby communication is delegated to the timed-multitasking operational environment. This has resulted in a simplified application model in which actors are reduced to actor tasks composed of prefabricated components, such as state machine and action function blocks.","PeriodicalId":127633,"journal":{"name":"2008 34th Euromicro Conference Software Engineering and Advanced Applications","volume":"395 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122184209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
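The jitter-elimination idea behind timed multitasking can be sketched in logical time: an actor's computation may take a variable amount of time within its period, but its outputs become visible only at fixed period boundaries. This is an illustrative simulation under invented names (`DtmActorSketch`, `PERIOD_MS`), not the framework's actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Logical-time sketch of timed multitasking: outputs are published only at
// fixed instants, so variable computation time never appears as I/O jitter.
public class DtmActorSketch {
    static final int PERIOD_MS = 10;

    // The computation may take a variable amount of (logical) time...
    static int compute(int input, int variableDelayMs) {
        return input * 2; // the delay does not influence when the output is visible
    }

    // ...but each job's output is released exactly at its period boundary.
    public static List<String> run(int[] inputs, int[] delays) {
        List<String> published = new ArrayList<>();
        for (int k = 0; k < inputs.length; k++) {
            int result = compute(inputs[k], delays[k]);
            int releaseAt = (k + 1) * PERIOD_MS; // fixed instant, independent of delay
            published.add("t=" + releaseAt + " -> " + result);
        }
        return published;
    }

    public static void main(String[] args) {
        // Two jobs with very different execution times still publish at t=10 and t=20.
        System.out.println(run(new int[]{1, 2}, new int[]{3, 9}));
    }
}
```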
A large number of software organizations are adopting the software product line approach in their reuse programs. One fundamental factor in evaluating the cost-benefit of this approach is the practical use of cost models to estimate whether an investment is worthwhile for a family of products. This paper analyzes the most significant cost models for product line engineering and highlights the set of features that makes a model effective. This work also presents an integrated cost model for product line engineering, with its foundations and elements. Finally, we discuss the results of a case study in which the model was applied.
{"title":"InCoME: Integrated Cost Model for Product Line Engineering","authors":"Jarley Palmeira Nóbrega, E. Almeida, S. Meira","doi":"10.1109/SEAA.2008.41","DOIUrl":"https://doi.org/10.1109/SEAA.2008.41","url":null,"abstract":"A large number of software organizations are adopting the software product line approach in their reuse programs. One fundamental factor in evaluating the cost-benefit of this approach is the practical use of cost models to estimate whether an investment is worthwhile for a family of products. This paper analyzes the most significant cost models for product line engineering and highlights the set of features that makes a model effective. This work also presents an integrated cost model for product line engineering, with its foundations and elements. Finally, we discuss the results of a case study in which the model was applied.","PeriodicalId":127633,"journal":{"name":"2008 34th Euromicro Conference Software Engineering and Advanced Applications","volume":"2020 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116306353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
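The kind of estimate such cost models support can be sketched with a generic break-even calculation: compare the cumulative cost of n products built on a reusable platform against building each from scratch. The formula and numbers below are a common textbook simplification, not the InCoME model itself; all names are invented.

```java
// Generic product-line break-even sketch: a platform investment pays off
// once enough products share the reduced per-product cost.
public class SplBreakEven {
    // Smallest number of products for which the product line is cheaper than
    // building every product from scratch.
    public static int breakEven(double platformCost, double costWithReuse,
                                double costFromScratch) {
        if (costWithReuse >= costFromScratch) {
            throw new IllegalArgumentException("reuse must reduce per-product cost");
        }
        int n = 1;
        while (platformCost + n * costWithReuse >= n * costFromScratch) {
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        // Platform costs 100 units; each product then costs 20 instead of 60.
        System.out.println(breakEven(100, 20, 60)); // 3
    }
}
```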
This paper proposes a method that integrates use case-based requirements specification into the release planning process. Release planning addresses decisions related to the implementation of a selected collection of requirements in incremental software development. Its aim is to determine an optimal schedule for development within the constraints of defined deadlines and available resources. Scheduling the development of requirements for the upcoming version is a complex process and requires significant manual effort. The presented method is evaluated with a case study that demonstrates how it can significantly accelerate release plan production (> 50%), provide more informed and better-founded decisions, and supply precise requirements tracing at the use case level. Finally, the paper analyzes benefits and issues arising from the use of this method by project managers.
{"title":"A Proposed Method for Release Planning from Use Case-based Requirements Specification","authors":"Ákos Szoke","doi":"10.1109/SEAA.2008.18","DOIUrl":"https://doi.org/10.1109/SEAA.2008.18","url":null,"abstract":"This paper proposes a method that integrates use case-based requirements specification into the release planning process. Release planning addresses decisions related to the implementation of a selected collection of requirements in incremental software development. Its aim is to determine an optimal schedule for development within the constraints of defined deadlines and available resources. Scheduling the development of requirements for the upcoming version is a complex process and requires significant manual effort. The presented method is evaluated with a case study that demonstrates how it can significantly accelerate release plan production (> 50%), provide more informed and better-founded decisions, and supply precise requirements tracing at the use case level. Finally, the paper analyzes benefits and issues arising from the use of this method by project managers.","PeriodicalId":127633,"journal":{"name":"2008 34th Euromicro Conference Software Engineering and Advanced Applications","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125884207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
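The scheduling problem at the heart of release planning can be sketched as a knapsack-style selection: pick the most valuable requirements that fit within the release capacity. The greedy value-per-effort heuristic below is a common simplification for illustration, not the specific optimization proposed in the paper; all names (`ReleasePlanner`, `Requirement`) are invented.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Greedy sketch of release planning: select requirements by value density
// until the release capacity (deadline/resource budget) is exhausted.
public class ReleasePlanner {
    record Requirement(String id, int value, int effort) {}

    public static List<String> plan(List<Requirement> reqs, int capacity) {
        List<Requirement> sorted = new ArrayList<>(reqs);
        // Order by value per unit of effort, best first.
        sorted.sort(Comparator.comparingDouble(
                (Requirement r) -> (double) r.value / r.effort).reversed());
        List<String> release = new ArrayList<>();
        int used = 0;
        for (Requirement r : sorted) {
            if (used + r.effort <= capacity) {
                release.add(r.id);
                used += r.effort;
            }
        }
        return release;
    }

    public static void main(String[] args) {
        List<Requirement> backlog = List.of(
                new Requirement("UC-1", 8, 5),
                new Requirement("UC-2", 6, 2),
                new Requirement("UC-3", 3, 4));
        System.out.println(plan(backlog, 7)); // [UC-2, UC-1]
    }
}
```

A real planner would add precedence constraints between use cases and treat the selection as an exact optimization rather than a greedy pass.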
Although Web services are generally envisioned as stateless, some of them are implicitly stateful. The reason is that Web services often work as front-ends to enterprise systems and are used in a session-oriented way by clients. In contrast to the case of stateless services, for a stateful Web service there exist constraints on the order in which the operations of the service may be invoked. However, the specification of such constraints is not a standard part of a Web service interface, and compliance with such constraints is not checked by standard Web service development tools. Therefore, we propose in this paper to extend a Web service interface with a constraint definition based on behavior protocols. We also implemented a tool that checks whether a given BPEL code complies with the constraints of all stateful Web services it communicates with. The key idea behind the tool is to translate the BPEL code into Java and then check the Java program using Java PathFinder with a behavior protocol extension.
{"title":"Checking Session-Oriented Interactions between Web Services","authors":"P. Parízek, Jirí Adámek","doi":"10.1109/SEAA.2008.11","DOIUrl":"https://doi.org/10.1109/SEAA.2008.11","url":null,"abstract":"Although Web services are generally envisioned as stateless, some of them are implicitly stateful. The reason is that Web services often work as front-ends to enterprise systems and are used in a session-oriented way by clients. In contrast to the case of stateless services, for a stateful Web service there exist constraints on the order in which the operations of the service may be invoked. However, the specification of such constraints is not a standard part of a Web service interface, and compliance with such constraints is not checked by standard Web service development tools. Therefore, we propose in this paper to extend a Web service interface with a constraint definition based on behavior protocols. We also implemented a tool that checks whether a given BPEL code complies with the constraints of all stateful Web services it communicates with. The key idea behind the tool is to translate the BPEL code into Java and then check the Java program using Java PathFinder with a behavior protocol extension.","PeriodicalId":127633,"journal":{"name":"2008 34th Euromicro Conference Software Engineering and Advanced Applications","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128265564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
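The ordering constraints this abstract describes can be illustrated as a finite-state machine over a service's operations, with a check that a client's invocation sequence stays within the allowed transitions. This toy checker merely stands in for the behavior-protocol tooling the paper builds; the states and operation names are invented.

```java
import java.util.List;
import java.util.Map;

// Toy session-protocol checker: the allowed call orderings of a stateful
// service are a transition table, and a client trace complies iff every
// call is permitted in the state reached so far.
public class SessionChecker {
    // Transition table: state -> (operation -> next state).
    private final Map<String, Map<String, String>> transitions;
    private final String initial;

    public SessionChecker(Map<String, Map<String, String>> transitions, String initial) {
        this.transitions = transitions;
        this.initial = initial;
    }

    // True iff the whole trace follows the protocol from the initial state.
    public boolean complies(List<String> trace) {
        String state = initial;
        for (String op : trace) {
            Map<String, String> out = transitions.get(state);
            if (out == null || !out.containsKey(op)) return false;
            state = out.get(op);
        }
        return true;
    }

    public static void main(String[] args) {
        // A session must log in before querying, and may query repeatedly.
        SessionChecker checker = new SessionChecker(Map.of(
                "start",   Map.of("login", "session"),
                "session", Map.of("query", "session", "logout", "start")),
                "start");
        System.out.println(checker.complies(List.of("login", "query", "logout"))); // true
        System.out.println(checker.complies(List.of("query")));                    // false
    }
}
```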