Tim Dörnemann, Matthew Smith, Ernst Juhnke, Bernd Freisleben
In this paper, an approach to creating virtual cluster environments is presented that enables fine-grained service-oriented applications to be executed alongside traditional batch-job-oriented Grid applications. Secure execution environments that can be staged into an existing batch job environment are created. A Grid-enabled workflow engine for building complex application workflows, which are executed in the virtual environment, is provided. A security concept is introduced that allows cluster worker nodes to expose services to the BPEL engine outside of the private cluster network, thus enabling multi-site workflows in a secure fashion. A prototypical implementation based on Globus Toolkit 4, Virtual Workspaces, ActiveBPEL and Xen is presented.
"Secure Grid Micro-Workflows Using Virtual Workspaces". In: 2008 34th Euromicro Conference on Software Engineering and Advanced Applications (SEAA 2008). DOI: 10.1109/SEAA.2008.70.
When designing a Grid workflow, it might be necessary to integrate different kinds of services. In an ideal scenario, all services are grid-enabled, but real workflows often consist of both grid-enabled and non-grid-enabled services. One reason is that grid-enabling services can be costly, so it is preferable to grid-enable only the compute-intensive and time-consuming applications. Additionally, workflows should be allowed to include Grid jobs that execute legacy applications. Another reason is that third parties very often charge fees for accessing their services; hence, it may be impossible to convert such a third-party service into one that can be integrated into a Grid environment at all. This paper discusses the problems of designing a workflow that consists of all these different kinds of services. The geospatial domain is used as an example to demonstrate the difficulties that workflow designers have to overcome, i.e., constructing a geospatial workflow from combinations of conventional Web services (XML-based), standard OGC Web services and grid-enabled OGC Web services (WSRF-based). The concept of a workflow engine capable of enacting these workflows is presented and an implementation based on the ActiveBPEL engine is proposed.
T. Fleuren, P. Müller. "BPEL Workflows Combining Standard OGC Web Services and Grid-enabled OGC Web Services". In: 2008 34th Euromicro Conference on Software Engineering and Advanced Applications (SEAA 2008). DOI: 10.1109/SEAA.2008.34.
K. Dörnemann, Dennis Meier, M. Mathes, Bernd Freisleben
Virtual organizations in Grid computing environments are arrangements of Grid participants into groups, where each participant may belong to a different physical organization. The combination of Grid computing and peer-to-peer technology raises problems in forming, organizing, and managing virtual organizations in such networks. In this paper, a novel approach to mapping a Grid to a peer-to-peer network and each virtual organization to a group of peers, independent of any particular virtual organization software and peer-to-peer network, is presented. A prototypical implementation based on the Globus Toolkit 4 Grid middleware, the peer-to-peer framework FreePastry, and the virtual organization solutions GridShib/Shibboleth and the Virtual Organization Membership Service is presented.
"Mapping Virtual Organizations in Grids to Peer-to-Peer Networks". In: 2008 34th Euromicro Conference on Software Engineering and Advanced Applications (SEAA 2008). DOI: 10.1109/SEAA.2008.72.
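The core mapping idea can be sketched independently of the Globus/FreePastry implementation: hash each virtual organization's name into the overlay's identifier space, so that any peer can resolve a VO's group without a central registry. The `Overlay` class, the VO names and the 160-bit SHA-1 ID space below are illustrative assumptions, not the paper's actual interfaces.

```python
import hashlib

def group_id(vo_name: str) -> int:
    """Map a virtual organization name to a stable identifier in a
    Pastry-style 160-bit ID space (illustrative choice of hash)."""
    return int.from_bytes(hashlib.sha1(vo_name.encode()).digest(), "big")

class Overlay:
    """Toy stand-in for a structured P2P overlay: peers join the
    group whose identifier their VO name hashes to."""
    def __init__(self):
        self.groups = {}  # group ID -> set of peer names

    def join(self, peer: str, vo_name: str) -> int:
        gid = group_id(vo_name)
        self.groups.setdefault(gid, set()).add(peer)
        return gid

    def members(self, vo_name: str):
        # Any peer can recompute the group ID and look up members.
        return self.groups.get(group_id(vo_name), set())

overlay = Overlay()
overlay.join("peer-a", "VO-Physics")
overlay.join("peer-b", "VO-Physics")
overlay.join("peer-c", "VO-Biology")
print(sorted(overlay.members("VO-Physics")))  # ['peer-a', 'peer-b']
```

Because the group identifier is derived deterministically from the VO name, membership resolution needs no coordination beyond the overlay's normal routing.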
Hong Linh Truong, Lukasz Juszczyk, S. Bashir, A. Manzoor, S. Dustdar
Mobile devices are considered very useful in ad-hoc and team collaborations, for example in disaster response, where dedicated infrastructures are not available. Such collaborations normally require flexible and interoperable services running on mobile devices and integrated with various other services. Therefore, middleware and toolkits for developing mobile services that can be accessed through standard interfaces and protocols are in demand. Due to the lack of tools, support for the development of Web services and collaboration tools on mobile devices is still limited. This paper presents the Vimoware toolkit, which allows both developers and users to develop Web services for mobile devices, to conduct ad-hoc team collaborations by executing pre-defined or in-situ flows of tasks, and to test collaboration scenarios.
"Vimoware - A Toolkit for Mobile Web Services and Collaborative Computing". In: 2008 34th Euromicro Conference on Software Engineering and Advanced Applications (SEAA 2008). DOI: 10.1109/SEAA.2008.42.
Recent research on static code attribute (SCA) based defect prediction suggests that a performance ceiling has been reached and that this barrier can be exceeded by increasing the information content of the data. In this research, we propose the static call graph based ranking (CGBR) framework, which can be applied to any defect prediction model based on SCAs. In this framework, we model both intra-module properties and inter-module relations. Our results show that defect predictors using the CGBR framework detect the same number of defective modules while yielding significantly lower false alarm rates. On public industrial data, we also show that using the CGBR framework can improve testing effort by 23%.
Burak Turhan, Gözde Koçak, A. Bener. "Software Defect Prediction Using Call Graph Based Ranking (CGBR) Framework". In: 2008 34th Euromicro Conference on Software Engineering and Advanced Applications (SEAA 2008). DOI: 10.1109/SEAA.2008.52.
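The paper's exact weighting scheme is not reproduced here, but the general idea of call graph based ranking can be sketched as follows: rank modules by their centrality in the static call graph (a PageRank-style power iteration) and use the rank to weight per-module defect scores derived from static code attributes. The call graph and the SCA scores below are invented for illustration.

```python
def call_graph_rank(graph, damping=0.85, iters=50):
    """PageRank-style rank over a static call graph given as
    {caller: [callees]}; returns a module -> rank mapping."""
    nodes = set(graph) | {m for callees in graph.values() for m in callees}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for caller, callees in graph.items():
            if callees:
                share = damping * rank[caller] / len(callees)
                for callee in callees:
                    new[callee] += share
        rank = new
    return rank

# Hypothetical call graph: 'util' is called from everywhere,
# so it receives the highest rank.
graph = {"main": ["util", "db"], "db": ["util"], "ui": ["util", "main"]}
ranks = call_graph_rank(graph)

# Weight equal static-code-attribute defect scores by call-graph
# rank, so heavily used modules are prioritized for inspection.
sca_score = {"main": 0.4, "db": 0.4, "util": 0.4, "ui": 0.4}
weighted = {m: sca_score[m] * ranks[m] for m in sca_score}
print(max(weighted, key=weighted.get))  # 'util'
```

The intuition matches the abstract: intra-module properties (the SCA score) are kept, while inter-module relations (the call graph) re-rank modules so that false alarms on rarely used code are down-weighted.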
Explicit declaration of provided and required features facilitates easier updates of components within an application. A necessary precondition is that sufficient and correct meta-data about the component and its features is available. In this paper, we describe a method that ensures safe OSGi bundle updates and package bindings despite potentially erroneous meta-data. It uses subtype checks on feature types, implemented as user-space enhancements of the standard bundle update process. The method was successfully applied in the Knopflerfish and Apache Felix frameworks, and the paper discusses the general experiences with the OSGi framework gained during the implementation.
Přemek Brada. "Enhanced OSGi Bundle Updates to Prevent Runtime Exceptions". In: 2008 34th Euromicro Conference on Software Engineering and Advanced Applications (SEAA 2008). DOI: 10.1109/SEAA.2008.51.
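The subtype checks on feature types can be illustrated with a deliberately simplified model (the actual method operates on OSGi bundle meta-data and Java types): before an update is accepted, every feature the old bundle version exported must still be provided, in a compatible shape, by the new version. The package names and the `(name, arity)` feature model below are hypothetical.

```python
# A feature is modeled as (name, arity); a bundle's exports are a
# package -> set-of-features map.  This is a deliberate
# simplification of the Java subtype checks described in the paper.
OldExports = {"org.example.log": {("log", 1), ("flush", 0)}}
NewExportsOK = {"org.example.log": {("log", 1), ("flush", 0), ("rotate", 1)}}
NewExportsBad = {"org.example.log": {("log", 2)}}  # changed signature

def update_is_safe(old, new):
    """The new bundle must still provide every package and every
    feature the old one exported; adding new features is fine."""
    for pkg, features in old.items():
        if not features <= new.get(pkg, set()):
            return False
    return True

print(update_is_safe(OldExports, NewExportsOK))   # True
print(update_is_safe(OldExports, NewExportsBad))  # False
```

Rejecting the second update before it is wired into client bundles is what prevents the runtime exceptions (e.g. `NoSuchMethodError`-style failures) that motivate the paper.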
Practitioners report that experience plays an important role in effective software testing. We investigate the role of experience in a multiple-case study of three successful projects conducted at Siemens Austria and document the state of practice in testing software systems. The studied projects were drawn from the domains of telecommunications, insurance and banking, as well as safety-critical railway systems. The study shows that in all three projects test design is based to a considerable extent on experience, and that experience-based testing is an important supplement to requirements-based testing. The study further analyzes the different sources of experience, the perceived value of experience for testing, and the measures taken to manage and evolve this experience.
Armin Beer, R. Ramler. "The Role of Experience in Software Testing Practice". In: 2008 34th Euromicro Conference on Software Engineering and Advanced Applications (SEAA 2008). DOI: 10.1109/SEAA.2008.28.
D. Wahyudin, Alexander Schatten, D. Winkler, A. Tjoa, S. Biffl
The quality evaluation of open source software (OSS) products, e.g., defect estimation and prediction for individual releases, gains importance with the increasing adoption of OSS in industry applications. Most empirical studies on the accuracy of defect prediction and software maintenance focus on product metrics as predictors, which are available only when the product is finished. Only a few prediction models consider information on the development process (project metrics) that seems relevant to quality improvement of the software product. In this paper, we investigate defect prediction for a family of widely used OSS projects based on product metrics, project metrics, and combinations of both. The main results of the data analysis are (a) a set of project metrics available prior to product release that correlate strongly with potential defect growth between releases, and (b) the finding that a combination of product and project metrics enables more accurate defect prediction than either type of measurement alone. Thus, the combined application of project and product metrics can (a) improve the accuracy of defect prediction, (b) provide better guidance of the release process from a project management point of view, and (c) help identify areas for product and process improvement.
"Defect Prediction using Combined Product and Project Metrics - A Case Study from the Open Source "Apache" MyFaces Project Family". In: 2008 34th Euromicro Conference on Software Engineering and Advanced Applications (SEAA 2008). DOI: 10.1109/SEAA.2008.36.
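A toy illustration of combining the two metric families into a single predictor (the data, the normalization, and the perceptron learner below are invented for illustration and are not the study's models or the MyFaces data): product metrics such as lines of code and complexity are concatenated with project metrics such as pre-release commits and distinct authors, and one classifier is trained on the combined feature vector.

```python
# Each module: product metrics (LOC, cyclomatic complexity) plus
# project metrics (commits before release, distinct authors) and a
# defect label.  All numbers are invented for illustration.
modules = [
    # (loc, cc, commits, authors, defective)
    (1200, 35, 40, 5, 1),
    (300, 4, 3, 1, 0),
    (900, 28, 35, 4, 1),
    (150, 2, 2, 1, 0),
    (700, 20, 25, 3, 1),
    (250, 5, 4, 2, 0),
]

def features(row, combined=True):
    """Combined = product + project metrics; otherwise product only."""
    loc, cc, commits, authors, _ = row
    return (loc / 1000, cc / 10, commits / 10, authors) if combined \
        else (loc / 1000, cc / 10)

def train_perceptron(rows, combined, epochs=100, lr=0.1):
    """Tiny perceptron: just enough to show both metric families
    feeding one predictor, not a serious learner."""
    w = [0.0] * len(features(rows[0], combined))
    b = 0.0
    for _ in range(epochs):
        for row in rows:
            x, y = features(row, combined), row[-1]
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

w, b = train_perceptron(modules, combined=True)
correct = sum(
    (1 if sum(wi * xi for wi, xi in zip(w, features(r, True))) + b > 0 else 0)
    == r[-1]
    for r in modules
)
print(f"{correct}/{len(modules)} modules classified correctly")
```

The point of the sketch is structural: project metrics exist before the release is finished, so a combined feature vector lets prediction start earlier than product-metrics-only models.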
In early phases of the software development process, requirements prioritization necessarily relies on the specified requirements and on predictions of benefit and cost of individual requirements. This paper induces a conceptual model of requirements prioritization based on benefit and cost. For this purpose, it uses Grounded Theory. We provide a detailed account of the procedures and rationale of (i) how we obtained our results and (ii) how we used them to form the basis for a framework for classifying requirements prioritization methods.
M. Daneva, A. Herrmann. "Requirements Prioritization Based on Benefit and Cost Prediction: A Method Classification Framework". In: 2008 34th Euromicro Conference on Software Engineering and Advanced Applications (SEAA 2008). DOI: 10.1109/SEAA.2008.46.
Software product line (SPL) processes are gradually being adopted by many companies in several domains. A domain where the adoption of such processes may bring relevant benefits is mobile applications, given the great diversity of handsets. However, the characteristics of this domain usually create barriers to applying these processes in practice, such as restrictions on memory size and processing power and different API implementations by different manufacturers. In this context, this work briefly presents a practical approach to implementing core assets in an SPL for the mobile game domain, combining good practices from already published processes, and describes in detail a case study applying this approach, based on three different adventure mobile games. The results of the case study show that the approach is suitable for the domain in question.
L. Nascimento, E. Almeida, S. Meira. "A Case Study in Software Product Lines - The Case of the Mobile Game Domain". In: 2008 34th Euromicro Conference on Software Engineering and Advanced Applications (SEAA 2008). DOI: 10.1109/SEAA.2008.14.