Indira Nurdiani, J. Börstler, Samuel Fricker, K. Petersen
To assess the benefits of introducing Agile practices, it is important to have a clear understanding of the baseline situation, i.e., the situation before their introduction. Without a clear baseline, we cannot properly assess the extent of the impacts, both positive and negative, of introducing Agile practices. This paper provides a preliminary guideline to help researchers capture and report baseline situations. The guideline was developed through a literature study and interviews with industry practitioners, and validated by experts in academia.
{"title":"A Preliminary Checklist for Capturing Baseline Situations in Studying the Impacts of Agile Practices Introduction","authors":"Indira Nurdiani, J. Börstler, Samuel Fricker, K. Petersen","doi":"10.1145/3193965.3193969","DOIUrl":"https://doi.org/10.1145/3193965.3193969","url":null,"abstract":"To assess the benefits of introducing Agile practices, it is important to get a clear understanding of the baseline situation, i.e. the situation before their introduction. Without a clear baseline, we cannot properly assess the extent of impacts, both positive and negative, of introducing Agile practices. This paper provides a preliminary guideline to help researchers in capturing and reporting baseline situations. The guideline has been developed through the study of literature and interviews with industry practitioners, and validated by experts in academia.","PeriodicalId":237556,"journal":{"name":"2018 IEEE/ACM 6th International Workshop on Conducting Empirical Studies in Industry (CESI)","volume":"149 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115420147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An intuitive method is needed to gauge release-over-release reliability change across a given product's sequence of releases and to achieve buy-in for that method from all sectors of Engineering. Customers also need to know whether there are existing releases that are more reliable than the ones they already run in their networks. A new Release-over-Release (RoR) metric can enable customers to clearly understand the reliability risk of migrating to other available releases, and enable Engineering to understand whether their software engineering efforts are actually improving release reliability.
{"title":"Comparing Reliability Levels of Software Releases","authors":"Pete Rotella, S. Chulani","doi":"10.1145/3193965.3193968","DOIUrl":"https://doi.org/10.1145/3193965.3193968","url":null,"abstract":"An intuitive method is needed to achieve buy-in from all sectors of Engineering for a way to gauge release-over-release change for a given product's sequence of releases. Also, customers need to know if there are extant releases that are more reliable than the ones they already rely on in their networks. A new Release-Over-Release (RoR) metric can both enable customers to clearly understand the reliability risk of migrating to other available releases, and also enable Engineering to understand if their software engineering efforts are actually improving release reliability.","PeriodicalId":237556,"journal":{"name":"2018 IEEE/ACM 6th International Workshop on Conducting Empirical Studies in Industry (CESI)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117146902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software Inspection is an important approach for finding defects in Software Engineering (SE) artifacts. While there has been extensive research on traditional software inspection with pen-and-paper materials, modern SE brings new environments, methods, and tools for the cooperation of software engineers. Technologies such as Human Computation (HC) provide tool support for distributed and tool-mediated work processes. However, there is little empirical experience on how to leverage HC for software inspection. In this vision paper, we present the context for a research program on this topic and introduce the preliminary concept of a theory-based experiment line to facilitate designing families of experiments that fit together to answer larger questions than individual experiments can. We present an example feature model for an experiment line for Software Inspection with Human Computation and discuss its expected benefits for the research program, including the coordination of research, design and material reuse, and aggregation facilities.
{"title":"Towards an Experiment Line on Software Inspection with Human Computation","authors":"S. Biffl, Marcos Kalinowski, D. Winkler","doi":"10.1145/3193965.3193971","DOIUrl":"https://doi.org/10.1145/3193965.3193971","url":null,"abstract":"Software Inspection is an important approach to find defects in Software Engineering (SE) artifacts. While there has been extensive research on traditional software inspection with pen-and-paper materials, modern SE poses new environments, methods, and tools for the cooperation of software engineers. Technologies, such as Human Computation (HC), provide tool support for distributed and tool-mediated work processes. However, there is little empirical experience on how to leverage HC for software inspection. In this vision paper, we present the context for a research program on this topic and introduce the preliminary concept of a theory-based ex-periment line to facilitate designing experiment families that fit together to answer larger questions than individual experiments. We present an example feature model for an experiment line for Soft-ware Inspection with Human Computation and discuss its expected benefits for the research program, including the coordination of research, design and material reuse, and aggregation facilities.","PeriodicalId":237556,"journal":{"name":"2018 IEEE/ACM 6th International Workshop on Conducting Empirical Studies in Industry (CESI)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127428815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Context: Case studies are a useful approach for conducting empirical studies of software engineering, in part because they allow a phenomenon to be studied in its real-world context. However, given that there are several kinds of case studies, each with its own strengths and weaknesses, researchers need to know how to choose which kind to employ for a specific research study. Aim: The objective of this research is to compare two case study approaches: embedded, longitudinal case studies and multi-case studies. Approach: We compared two actual software engineering case studies: a multi-case study involving interviews with 46 practitioners at 9 international companies engaged in offshoring and outsourcing, and a single-case, participant-observation embedded case study lasting 13 months in a mid-sized Irish software company. Both case studies explored similar problems of understanding the activities performed by members of Scrum development teams. Results: We found that both multi-case and embedded case studies are suitable for exploratory research (hypothesis development), but that embedded research may also be more suitable for explanatory research (hypothesis testing). We also found that longitudinal case studies offer better confirmability, while multi-case studies offer better transferability. Conclusion: We propose a set of illustrative research questions to assist with the selection of the appropriate case study method.
{"title":"Experience of Industry Case Studies: A Comparison of Multi-Case and Embedded Case Study Methods","authors":"J. Bass, Sarah Beecham, J. Noll","doi":"10.1145/3193965.3193967","DOIUrl":"https://doi.org/10.1145/3193965.3193967","url":null,"abstract":"Context: Case studies are a useful approach for conducting empirical studies of software engineering, in part because they allow a phenomenon to be studied in its real-world context. However, given that there are several kinds of case studies, each with its own strengths and weaknesses, researchers need to know how to choose which kind to employ for a specific research study. Aim: The objective of this research is to compare two case study approaches: embedded, longitudinal case studies, and multi-case studies. Approach: We compared two actual software engineering case studies: a multi-case study involving interviews with 46 practitioners at 9 international companies engaged in offshoring and outsourcing, and a single case, participant observation embedded case study lasting 13 months in a mid-sized Irish software company. Both case studies were exploring similar problems of understanding the activities performed by members of scrum development teams. Results: We found that both multi-case and embedded case studies are suitable for exploratory research (hypothesis development) but that embedded research may also be more suitable for explanatory research (hypothesis testing). We also found that longitudinal case studies offer better confirmability, while multi-case studies offer better transferability. Conclusion: We propose a set of illustrative research questions to assist with the selection of the appropriate case study method.","PeriodicalId":237556,"journal":{"name":"2018 IEEE/ACM 6th International Workshop on Conducting Empirical Studies in Industry (CESI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125877958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Daneva, K. Sikkel, Nelly Condori-Fernández, A. Herrmann
Background: A grand challenge for Requirements Engineering (RE) research is to help practitioners understand which RE methods work in what contexts and why. RE researchers recognize that for an RE method to be adopted in industry, RE practitioners should be able to evaluate the relevance of empirical studies to their practice. One possible approach to relevance evaluation is the set of perspective-based checklists proposed by Kitchenham et al. Specifically, the checklist from the practitioner's perspective seems to be a good candidate for evaluating the relevance of RE studies to RE practice. However, little is known about the applicability of this checklist to the RE field. Moreover, this checklist also requires a deeper analysis of its reliability. Aim: We propose a perspective-based checklist to the RE community that allows evaluating the relevance of experimental studies in RE from the practitioner's/consultant's viewpoint. Method: We followed an iterative, design-science-based approach in which we first analyzed the problems with a previously published checklist and then developed an operationalized proposal for a new checklist to counter these problems. We evaluated the reliability of this new checklist by having two practitioners apply it to 24 papers that report experimental results on the comprehensibility of software requirements specifications. Results: We report first-hand experiences of practitioners in evaluating the relevance of primary studies in RE using a perspective-based checklist. With respect to the reliability of the adjusted checklist, 9 out of 19 questions show an acceptable proportion of agreement between the two practitioners. Conclusions: Based on our experience, the contextualization and operationalization of a perspective-based checklist helps make it more useful for practitioners. However, to increase the reliability of the checklist, more reviewers and more discussion cycles are necessary.
{"title":"Experiences in Using Practitioner's Checklists to Evaluate the Industrial Relevance of Requirements Engineering Experiments","authors":"M. Daneva, K. Sikkel, Nelly Condori-Fernández, A. Herrmann","doi":"10.1145/3193965.3193966","DOIUrl":"https://doi.org/10.1145/3193965.3193966","url":null,"abstract":"Background: A grand challenge for Requirement Engineering (RE) research is to help practitioners understand which RE methods work in what contexts and why. RE researchers recognize that for an RE method to be adopted in industry, RE practitioners should be able to evaluate the relevance of empirical studies to their practice. One possible approach to relevance evaluation is the set of perspective-based checklists proposed by Kitchenham et al. Specifically, the checklist from the practitioner's perspective seems to be a good candidate for evaluating the relevance of RE studies to RE practice. However, little is known about the applicability of this checklist to the RE field. Moreover, this checklist also requires a deeper analysis of its reliability. Aim: We propose a perspective-based checklist to the RE community that allows evaluating the relevance of experimental studies in RE from the practitioner's/consultant's viewpoint. Method: We followed an iterative design-science based approach in which we first analyzed the problems with a previously published checklist and then developed an operationalized proposal for a new checklist to counter these problems. We performed a reliability evaluation of this new checklist by having two practitioners apply the checklist on 24 papers that report experimental results on software requirements specifications' comprehensibility. Results: We report first-hand experiences of practitioners in evaluating the relevance of primary studies in RE, by using a perspective-based checklist. With respect to the reliability of the adjusted checklist, 9 of out 19 questions show an acceptable proportion of agreement (between two practitioners). Conclusions: Based on our experience, the contextualization and operationalization of a perspective-based checklist helps to make it more useful for the practitioners. However, to increase the reliability of the checklist, more reviewers and more discussion cycles are necessary.","PeriodicalId":237556,"journal":{"name":"2018 IEEE/ACM 6th International Workshop on Conducting Empirical Studies in Industry (CESI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126867194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Katarzyna Biesialska, Xavier Franch, V. Muntés-Mulero
Conducting empirical research in the software engineering industry is a process and, as such, should be generalizable. The aim of this paper is to discuss how academic researchers may address, by means of a systematic and structured approach, some of the challenges they encounter when conducting empirical research in the software industry. The protocol developed in this paper should serve as a practical guide for researchers and help them conduct empirical research in this complex environment.
{"title":"Protocol and Tools for Conducting Agile Software Engineering Research in an Industrial-Academic Setting: A Preliminary Study","authors":"Katarzyna Biesialska, Xavier Franch, V. Muntés-Mulero","doi":"10.1145/3193965.3193970","DOIUrl":"https://doi.org/10.1145/3193965.3193970","url":null,"abstract":"Conducting empirical research in software engineering industry is a process, and as such, it should be generalizable. The aim of this paper is to discuss how academic researchers may address some of the challenges they encounter during conducting empirical research in the software industry by means of a systematic and structured approach. The protocol developed in this paper should serve as a practical guide for researchers and help them with conducting empirical research in this complex environment.","PeriodicalId":237556,"journal":{"name":"2018 IEEE/ACM 6th International Workshop on Conducting Empirical Studies in Industry (CESI)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121451680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}