Context. Empirical research consistently demonstrates that scholarly peer review is ineffective, unreliable, and prejudiced. In principle, the solution is to move from contemporary, unstructured, essay-like reviewing to more structured, checklist-like reviewing. The Task Force created models—called “empirical standards”—of the software engineering community’s expectations for different popular methodologies. Objective. This paper presents a tool for facilitating more structured reviewing by generating review checklists from the empirical standards. Design. A tool that generates pre-submission and review forms using the empirical standards for software engineering research was designed and implemented. The pre-submission and review forms can be used by authors and reviewers, respectively, to determine whether a manuscript meets the software engineering community’s expectations for the particular kind of research conducted. Evaluation. The proposed tool can be empirically evaluated using lab or field randomized experiments as well as qualitative research. Huge, impractical studies involving splitting a conference program committee are not necessary to establish the effectiveness of the standards, checklists, and structured review. Conclusions. The checklist generator enables more structured peer reviews, which in turn should improve review quality, reliability, thoroughness, and readability. Empirical research is needed to assess the effectiveness of the tool and the standards.
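The core idea of a checklist generator can be sketched in a few lines. This is a hypothetical illustration only: the standards below are an assumed in-memory representation, whereas the actual tool works from the community's empirical standards documents; the methodology names and checklist items are invented for the example.

```python
# Hypothetical, simplified representation of empirical standards:
# each methodology maps to attributes a compliant manuscript should have.
STANDARDS = {
    "Questionnaire Survey": [
        "explains how the questionnaire was constructed",
        "reports the response rate",
        "describes the sampling strategy",
    ],
    "Case Study": [
        "justifies the selection of the case",
        "describes the role of the researcher(s)",
    ],
}

def generate_checklist(methodology, standards=STANDARDS):
    """Return a yes/no review checklist for the given methodology."""
    attributes = standards.get(methodology)
    if attributes is None:
        raise KeyError(f"No empirical standard for: {methodology}")
    return [f"[ ] The manuscript {a}." for a in attributes]
```

The same list can serve as a pre-submission form for authors and as a structured review form for reviewers, which is the dual use the paper describes.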
{"title":"Towards a More Structured Peer Review Process with Empirical Standards","authors":"Arham Arshad, Taher Ahmed Ghaleb, P. Ralph","doi":"10.1145/3463274.3463359","DOIUrl":"https://doi.org/10.1145/3463274.3463359","url":null,"abstract":"Context. Empirical research consistently demonstrates that that scholarly peer review is ineffective, unreliable, and prejudiced. In principle, the solution is to move from contemporary, unstructured, essay-like reviewing to more structured, checklist-like reviewing. The Task Force created models—called “empirical standards”—of the software engineering community’s expectations for different popular methodologies. Objective. This paper presents a tool for facilitating more structured reviewing by generating review checklists from the empirical standards. Design. A tool that generates pre-submission and review forms using the empirical standards for software engineering research was designed and implemented. The pre-submission and review forms can be used by authors and reviewers, respectively, to determine whether a manuscript meets the software engineering community’s expectations for the particular kind of research conducted. Evaluation. The proposed tool can be empirically evaluated using lab or field randomized experiments as well as qualitative research. Huge, impractical studies involving splitting a conference program committee are not necessary to establish the effectiveness of the standards, checklists and structured review. Conclusions. The checklist generator enables more structured peer reviews, which in turn should improve review quality, reliability, thoroughness, and readability. 
Empirical research is needed to assess the effectiveness of the tool and the standards.","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130488054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","authors":"","doi":"10.1145/3463274","DOIUrl":"https://doi.org/10.1145/3463274","url":null,"abstract":"","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128983452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Context: It is impossible to imagine our everyday and professional lives without software. Consequently, software products, especially socio-technical systems, have more or less obvious impacts on almost all areas of our society. To examine such impacts, a group of scientists worldwide has developed the Sustainability Awareness Framework (SusAF), which covers five interrelated dimensions: social, individual, environmental, economic, and technical. According to this framework, we should design software to maintain or improve its sustainability impacts. Designing for sustainability is a major challenge that can profoundly change the field of activity, particularly for Software Engineers. Objectives: The aim of the thesis work is to analyze the current role of Software Engineers and relate it to the sustainability impacts of software products in order to contribute to this paradigm shift. This should provide a basis for follow-up work. The question of in which direction the Software Engineer should develop, and how exactly this path can be followed, remains open for the scientific community. Perhaps universities will have to adapt the curriculum for training Software Engineers, politics could initiate support programs in the field of sustainability for software companies, or software sustainability certifications could emerge. In any case, Software Engineers must adapt to the times and acquire the necessary knowledge, skills, and competencies. Results: The results of the dissertation are a better understanding of the needed paradigm shift for Software Engineers and a complement to the SusAF that better supports sustainability design. The extended SusAF is intended for both training and corporate use.
{"title":"The Connection between the Sustainability Impacts of Software Products and the Role of Software Engineers","authors":"Dominic Lammert","doi":"10.1145/3463274.3463346","DOIUrl":"https://doi.org/10.1145/3463274.3463346","url":null,"abstract":"Context: It is impossible to imagine our everyday and professional lives without software. Consequently, software products, especially socio-technical systems, have more or less obvious impacts on almost all areas of our society. For this purpose, a group of scientists worldwide has developed the Sustainability Awareness Framework (SusAF) which examines the impacts on five interrelated dimensions: social, individual, environmental, economic, and technical. According to this framework, we should design software to maintain or improve the Sustainability Impacts. Designing for sustainability is a major challenge that can profoundly change the field of activity – particular for Software Engineers. Objectives: The aim of the thesis work is to analyze the current role of Software Engineers and relate it to Sustainability Impacts of Software Products in order to contribute to this paradigm shift. This should provide a basis for follow-up works. The question in which direction exactly the Software Engineer should develop and how exactly this path can be followed is still owed by the scientific community. Perhaps universities will have to adapt the curriculum in the training of Software Engineers, politics could possibly initiate support programs in the field of sustainability for software companies, or maybe software sustainability certifications could emerge. In any case, Software Engineers must adapt to the times and acquire the necessary knowledge, the skills and the competencies. Results: The results of the dissertation are a better understanding of the needed paradigm shift of Software Engineers and complement the SusAF that to better support sustainability design. 
The extended SusAF is intended for both training and corporate use.","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122664012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Due to limited time, budget, or resources, a team is prone to introducing code that does not follow best software development practices. Such code, which introduces instability into software projects, is known as Technical Debt (TD). Often, TD is intentionally admitted in source code, which is known as Self-Admitted Technical Debt (SATD). This paper presents DebtHunter, a natural language processing (NLP)- and machine learning (ML)-based approach for identifying and classifying SATD in source code comments. The proposed approach combines two classification phases to differentiate between the multiple debt types. Evaluations on 10 open source systems, containing more than 259k comments, showed that the approach outperformed others in the literature. The approach is supported by a tool that can help developers effectively manage SATD. The tool complements the analysis of Java source code by allowing developers to also examine the associated issue tracker. DebtHunter can be used in a continuous evolution environment to monitor the development process and make developers aware of how and where SATD is introduced, thus helping them to manage and resolve it.
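The two-phase structure can be illustrated with a deliberately simple, rule-based sketch: phase 1 decides whether a comment admits technical debt at all, and phase 2 assigns a debt type to the positives. The keyword lists and type labels below are assumptions for illustration; DebtHunter itself trains NLP/ML classifiers rather than matching keywords.

```python
# Phase-1 markers: does the comment self-admit technical debt at all?
SATD_MARKERS = ("todo", "fixme", "hack", "workaround", "temporary")

# Phase-2 markers: which kind of debt (labels are illustrative assumptions).
TYPE_MARKERS = {
    "design": ("hack", "workaround", "refactor"),
    "test": ("test", "coverage"),
}

def classify_comment(comment: str) -> str:
    text = comment.lower()
    # Phase 1: binary SATD detection.
    if not any(m in text for m in SATD_MARKERS):
        return "not SATD"
    # Phase 2: debt-type classification, applied only to detected SATD.
    for debt_type, markers in TYPE_MARKERS.items():
        if any(m in text for m in markers):
            return debt_type
    return "unclassified SATD"
```

Splitting detection from typing lets each phase be tuned (or trained) on the distribution it actually sees, which is the design rationale behind a two-phase classifier.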
{"title":"DebtHunter: A Machine Learning-based Approach for Detecting Self-Admitted Technical Debt","authors":"Irene Sala, Antonela Tommasel, F. Fontana","doi":"10.1145/3463274.3464455","DOIUrl":"https://doi.org/10.1145/3463274.3464455","url":null,"abstract":"Due to limited time, budget or resources, a team is prone to introduce code that does not follow the best software development practices. This code that introduces instability in the software projects is known as Technical Debt (TD). Often, TD intentionally manifests in source code, which is known as Self-Admitted Technical Debt (SATD). This paper presents DebtHunter, a natural language processing (NLP)- and machine learning (ML)- based approach for identifying and classifying SATD in source code comments. The proposed classification approach combines two classification phases for differentiating between the multiple debt types. Evaluations over 10 open source systems, containing more than 259k comments, showed that the approach was able to improve the performance of others in the literature. The presented approach is supported by a tool that can help developers to effectively manage SATD. The tool complements the analysis over Java source code by allowing developers to also examine the associated issue tracker. 
DebtHunter can be used in a continuous evolution environment to monitor the development process and make developers aware of how and where SATD is introduced, thus helping them to manage and resolve it.","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124451878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
There is ongoing interest in the Software Engineering field in multivocal literature reviews that include grey literature. At the same time, however, the role of grey literature is still controversial, and the benefits of its inclusion in systematic reviews are the object of discussion. Some of these arguments concern the quality assessment methods for grey literature entries, which is often considered a challenging and critical task. On the one hand, apart from a few proposals, there is a lack of acknowledged methodological support for the inclusion of Software Engineering grey literature in systematic surveys. On the other hand, the unstructured shape of grey literature contents could introduce bias into the evaluation process, impacting the quality of the surveys. This work leverages an approach based on fuzzy Likert scales and proposes a methodology for managing the explicit uncertainties emerging during the assessment of entries from the grey literature. The methodology also strengthens the adoption of consensus policies that take into account the individual confidence level expressed for each of the collected scores.
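A confidence-aware consensus can be sketched as a weighted aggregation of Likert scores, where each rater's score is weighted by the confidence they expressed in it. This weighting scheme is an assumption for illustration, not the authors' exact fuzzy aggregation operator.

```python
def consensus(scores):
    """Confidence-weighted consensus over Likert assessments.

    scores: list of (likert_value in 1..5, confidence in 0..1) pairs,
    one pair per rater. Returns None if no rater expressed any confidence.
    """
    total_weight = sum(conf for _, conf in scores)
    if total_weight == 0:
        return None
    return sum(value * conf for value, conf in scores) / total_weight
```

A rater who scores an entry but flags low confidence thus pulls the consensus less than an equally scored, fully confident rater, which is the behaviour the consensus policies above aim for.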
{"title":"About the Assessment of Grey Literature in Software Engineering","authors":"G. D. Angelis, F. Lonetti","doi":"10.1145/3463274.3463362","DOIUrl":"https://doi.org/10.1145/3463274.3463362","url":null,"abstract":"There is an ongoing interest in the Software Engineering field for multivocal literature reviews including grey literature. However, at the same time, the role of the grey literature is still controversial, and the benefits of its inclusion in systematic reviews are object of discussion. Some of these arguments concern the quality assessment methods for grey literature entries, which is often considered a challenging and critical task. On the one hand, apart from a few proposals, there is a lack of an acknowledged methodological support for the inclusion of Software Engineering grey literature in systematic surveys. On the other hand, the unstructured shape of the grey literature contents could lead to bias in the evaluation process impacting on the quality of the surveys. This work leverages an approach on fuzzy Likert scales, and it proposes a methodology for managing the explicit uncertainties emerging during the assessment of entries from the grey literature. 
The methodology also strengthens the adoption of consensus policies that take into account the individual confidence level expressed for each of the collected scores.","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114416693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
K. A. Tecimer, Eray Tüzün, Hamdi Dibeklioğlu, H. Erdogmus
Reviewer selection in modern code review is crucial for effective code reviews. Several techniques exist for recommending reviewers appropriate for a given pull request (PR). Most code reviewer recommendation techniques in the literature build and evaluate their models based on datasets collected from real projects using open-source or industrial practices. The techniques invariably presume that these datasets reliably represent the “ground truth.” In the context of a classification problem, ground truth refers to the objectively correct labels of a class used to build models from a dataset or evaluate a model’s performance. In a project dataset used to build a code reviewer recommendation system, the recommended code reviewer picked for a PR is usually assumed to be the best code reviewer for that PR. However, in practice, the recommended code reviewer may not be the best possible code reviewer, or even a qualified one. Recent code reviewer recommendation studies suggest that the datasets used tend to suffer from systematic labeling bias, making the ground truth unreliable. Therefore, models and recommendation systems built on such datasets may perform poorly in real practice. In this study, we introduce a novel approach to automatically detect and eliminate systematic labeling bias in code reviewer recommendation systems. The bias that we remove results from selecting reviewers that do not ensure a permanently successful fix for a bug-related PR. To demonstrate the effectiveness of our approach, we evaluated it on two open-source project datasets —HIVE and QT Creator— and with five code reviewer recommendation techniques —Profile-Based, RSTrace, Naive Bayes, k-NN, and Decision Tree. Our debiasing approach appears promising since it improved the Mean Reciprocal Rank (MRR) of the evaluated techniques up to 26% in the datasets used.
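The evaluation metric and the debiasing step can both be sketched compactly. The `fix_was_permanent` flag below is an assumed stand-in for the paper's notion of a permanently successful fix, which the authors derive from project history rather than from a precomputed field.

```python
def mean_reciprocal_rank(recommendations, ground_truth):
    """MRR over PRs: recommendations[pr] is a ranked list of reviewers,
    ground_truth[pr] is the reviewer treated as the correct label."""
    total = 0.0
    for pr, ranked in recommendations.items():
        truth = ground_truth[pr]
        if truth in ranked:
            total += 1.0 / (ranked.index(truth) + 1)
    return total / len(recommendations)

def drop_biased_labels(dataset):
    """Debiasing filter: keep only PRs whose assigned reviewer's
    fix proved permanent, discarding systematically mislabeled rows."""
    return [pr for pr in dataset if pr["fix_was_permanent"]]
```

Training and evaluating a recommender on `drop_biased_labels(dataset)` instead of the raw dataset is the kind of intervention whose effect the paper measures via MRR.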
{"title":"Detection and Elimination of Systematic Labeling Bias in Code Reviewer Recommendation Systems","authors":"K. A. Tecimer, Eray Tüzün, Hamdi Dibeklioğlu, H. Erdogmus","doi":"10.1145/3463274.3463336","DOIUrl":"https://doi.org/10.1145/3463274.3463336","url":null,"abstract":"Reviewer selection in modern code review is crucial for effective code reviews. Several techniques exist for recommending reviewers appropriate for a given pull request (PR). Most code reviewer recommendation techniques in the literature build and evaluate their models based on datasets collected from real projects using open-source or industrial practices. The techniques invariably presume that these datasets reliably represent the “ground truth.” In the context of a classification problem, ground truth refers to the objectively correct labels of a class used to build models from a dataset or evaluate a model’s performance. In a project dataset used to build a code reviewer recommendation system, the recommended code reviewer picked for a PR is usually assumed to be the best code reviewer for that PR. However, in practice, the recommended code reviewer may not be the best possible code reviewer, or even a qualified one. Recent code reviewer recommendation studies suggest that the datasets used tend to suffer from systematic labeling bias, making the ground truth unreliable. Therefore, models and recommendation systems built on such datasets may perform poorly in real practice. In this study, we introduce a novel approach to automatically detect and eliminate systematic labeling bias in code reviewer recommendation systems. The bias that we remove results from selecting reviewers that do not ensure a permanently successful fix for a bug-related PR. To demonstrate the effectiveness of our approach, we evaluated it on two open-source project datasets —HIVE and QT Creator— and with five code reviewer recommendation techniques —Profile-Based, RSTrace, Naive Bayes, k-NN, and Decision Tree. 
Our debiasing approach appears promising since it improved the Mean Reciprocal Rank (MRR) of the evaluated techniques up to 26% in the datasets used.","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117175825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software development processes play a key role in the software and system development life cycle. Processes are becoming complex and evolve rapidly due to modern-day continuous software engineering (CSE) concepts, which are mainly based on continuous integration, continuous delivery, infrastructure-as-code, automation, and more. The fast-growing Chinese software development industry adopts various processes to achieve the potential benefits offered in the international market. This study was conducted to investigate the trends of processes in practice in the Chinese industry. Survey questionnaire data were collected from 34 practitioners working in software development firms across China, and the results highlight that iterative and agile processes are extensively used in industrial settings. Furthermore, agile and traditional approaches are combined to develop hybrid processes. Most of the participants are satisfied with their current development processes; however, they show interest in continuously improving the existing process models and methods. Finally, we noticed that the majority of the software development organizations used the ISO 9001 standard for process assessment and improvement activities. These results provide a preliminary overview of the processes deployed in the Chinese industry.
{"title":"System and Software Processes in Practice: Insights from Chinese Industry","authors":"Peng Zhou, A. Khan, Peng Liang, Sher Badshah","doi":"10.1145/3463274.3463786","DOIUrl":"https://doi.org/10.1145/3463274.3463786","url":null,"abstract":"Software development processes play a key role in the software and system development life cycle. Processes are becoming complex and evolve rapidly due to the modern-day continuous software engineering (CSE) concepts, which are mainly based on continuous integration, continuous delivery, infrastructure-as-code, automation and more. The fast growing Chinese software development industry adopts various processes to achieve potential benefits offered in the international market. This study is conducted with the aim to investigate the trends of processes in practice in the Chinese industry. The survey questionnaire data is collected from 34 practitioners working in software development firms across the China and the results highlight that iterative and agile processes are extensively used in industrial setting. Furthermore, agile and traditional approaches are combined to develop the hybrid processes. Most of the participants are satisfied using the current development processes, however, they show interest to continuously improve the existing process models and methods. Finally, we noticed that majority of the software development organizations used the ISO 9001 standard for process assessment and improvement activities. 
The given results provide preliminary overview of processes deployed in the Chinese industry.","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130107779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliability is one of the most important quality attributes of a software system, addressing the system’s ability to perform the required functionalities under stated conditions for a stated period of time. Nowadays, a system failure can threaten the safety of human life. Thus, assessing reliability has become one of software engineering’s holy grails. Our approach aims to establish which project characteristics yield the best bug-oriented reliability prediction model. The pillars on which we base our approach are a metric introduced to estimate one aspect of reliability using bugs, and the Chidamber and Kemerer (CK) metrics to assess reliability in the early stages of development. The methodology used for prediction is a feed-forward neural network with back-propagation learning. Five different projects are used to validate the proposed approach for reliability prediction. The results indicate that CK metrics are promising for predicting reliability using a neural network model. The experiments also analyze whether the type of project used in the development of the prediction model influences the quality of the prediction. In experiments using both within-project and cross-project validation, the best prediction model was obtained using PDE (PlugIn characteristic) for the MY project (Task characteristic).
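A minimal feed-forward network with back-propagation, of the kind named above, can be sketched in plain Python. The network sizes, random toy data, and learning rate are illustrative assumptions only; the paper's actual inputs are CK metric vectors (e.g. WMC, DIT, CBO) per class and a bug-based reliability estimate as the target.

```python
import math
import random

random.seed(0)
N_IN, N_HID = 6, 4            # 6 CK metrics -> 4 hidden units -> 1 output

# Toy training set: normalised CK metric vectors and reliability targets.
X = [[random.random() for _ in range(N_IN)] for _ in range(8)]
y = [random.random() for _ in range(8)]

w1 = [[random.gauss(0, 0.5) for _ in range(N_IN)] for _ in range(N_HID)]
b1 = [0.0] * N_HID
w2 = [random.gauss(0, 0.5) for _ in range(N_HID)]
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    out = sigmoid(sum(w * hi for w, hi in zip(w2, h)) + b2)
    return h, out

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in zip(X, y)) / len(X)

mse_before = mse()
lr = 0.5
for _ in range(2000):                  # stochastic back-propagation
    for x, t in zip(X, y):
        h, out = forward(x)
        d_out = (out - t) * out * (1 - out)            # output-layer delta
        d_h = [d_out * w * hi * (1 - hi) for w, hi in zip(w2, h)]
        for j in range(N_HID):
            w2[j] -= lr * d_out * h[j]
            b1[j] -= lr * d_h[j]
            for i in range(N_IN):
                w1[j][i] -= lr * d_h[j] * x[i]
        b2 -= lr * d_out
mse_after = mse()                      # training error after back-propagation
```

Within-project validation corresponds to splitting X/y from one project into train and test sets; cross-project validation trains on one project's metric vectors and tests on another's.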
{"title":"Towards a Reliability Prediction Model based on Internal Structure and Post-Release Defects Using Neural Networks","authors":"A. Vescan, C. Serban, Alisa-Daniela Budur","doi":"10.1145/3463274.3463363","DOIUrl":"https://doi.org/10.1145/3463274.3463363","url":null,"abstract":"Reliability is one of the most important quality attributes of a software system, addressing the system’s ability to perform the required functionalities under stated conditions, for a stated period of time. Nowadays, a system failure could threaten the safety of human life. Thus, assessing reliability became one of the software engineering‘s holy grails. Our approach wants to establish based on what project’s characteristics we obtain the best bug-oriented reliability prediction model. The pillars on which we base our approach are the metric introduced to estimate one aspect of reliability using bugs, and the Chidamber and Kemerer (CK) metrics to assess reliability in the early stages of development. The methodology used for prediction is a feed-forward neural network with back-propagation learning. Five different projects are used to validate the proposed approach for reliability prediction. The results indicate that CK metrics are promising in predicting reliability using a neural network model. The experiments also analyze if the type of project used in the development of the prediction model influences the quality of the prediction. 
As a result of the operated experiments using both within-project and cross-project validation, the best prediction model was obtained using PDE (PlugIn characteristic) for MY project (Task characteristic).","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125722893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Context: Software development is moving towards a place where data about development is gathered in a systematic fashion in order to improve the practice, for example, in the tuning of static code analysis. However, this kind of data gathering has so far primarily happened within organizations, which is unfortunate as it tends to favor larger organizations with more resources for the maintenance of developer tools. Objective: Over the years, we have seen many benefits from open source, and recently there has been much development in open data. We see this as an opportunity for cross-organisation community building, and we wonder to what extent views on using and sharing open source software developer tools carry over to open data-driven tuning of software development tools. Method: An exploratory study with 11 participants divided into 3 focus groups discussing the use and sharing of static code analyzers and data about these analyzers. Results: While using and sharing open-source code (analyzers in this case) is perceived in a positive light as part of the practice of modern software development, sharing data is met with skepticism and uncertainty. Developers are concerned about threats to the company brand, exposure of intellectual property, legal liabilities, and the extent to which data is context-specific to a certain organisation. Conclusions: Sharing data in software development is different from sharing data about software development. We need to better understand how we can provide solutions for sharing software development data in a fashion that reduces risk and enables openness.
{"title":"Open Data-driven Usability Improvements of Static Code Analysis and its Challenges","authors":"Emma Söderberg, Luke Church, Martin Höst","doi":"10.1145/3463274.3463808","DOIUrl":"https://doi.org/10.1145/3463274.3463808","url":null,"abstract":"Context: Software development is moving towards a place where data about development is gathered in a systematic fashion in order to improve the practice, for example, in tuning of static code analysis. However, this kind of data gathering has so far primarily happened within organizations, which is unfortunate as it tends to favor larger organizations with more resources for maintenance of developer tools. Objective: Over the years, we have seen a lot of benefits from open source and recently there has been a lot of development in open data. We see this as an opportunity for cross-organisation community building and wonder to what extent the views on using and sharing open source software developer tools carry across to open data-driven tuning of software development tools. Method: An exploratory study with 11 participants divided into 3 focus groups discussing using and sharing of static code analyzers and data about these analyzers. Results: While using and sharing open-source code (analyzers in this case) is perceived in a positive light as part of the practice of modern software development, sharing data is met with skepticism and uncertainty. Developers are concerned about threats to the company brand, exposure of intellectual property, legal liabilities, and to what extent data is context-specific to a certain organisation. Conclusions: Sharing data in software development is different from sharing data about software development. 
We need to better understand how we can provide solutions for sharing of software development data in a fashion that reduces risk and enables openness.","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115468378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bridging the gap between academic research and industrial application is an important issue in promoting Jackson's Problem Frames approach (PF) to the software engineering community. Various attempts have been made to tackle this problem, such as defining formal semantics of PF for software development and providing a semi-formal approach to model transformations of problem diagrams with automated tool support. In this paper, we propose to focus exclusively on exploring and evaluating the effectiveness of Jackson's problem diagrams for modeling the context of cyber-physical systems, by developing a suite of support tools enhanced with adaptive user interfaces and empirically and comprehensively assessing their usability. This paper introduces the state of the art, the corresponding research questions, the research methodologies, and the current progress of our research.
{"title":"Evaluating the Effectiveness of Problem Frames for Contextual Modeling of Cyber-Physical Systems: a Tool Suite with Adaptive User Interfaces","authors":"Waqas Junaid","doi":"10.1145/3463274.3463344","DOIUrl":"https://doi.org/10.1145/3463274.3463344","url":null,"abstract":"Bridging the gap between academic research and industrial application is an important issue to promote Jackson's Problem Frames approach (PF) to the software engineering community. Various attempts have been made to tackle this problem, such as defining formal semantics of PF for software development, and providing a semi-formal approach to model transformations of problem diagrams, with automated tool support. In this paper, we propose to exclusively focus on exploring and evaluating the effectiveness of Jackson's problem diagrams for modeling the context of cyber-physical systems, by developing a suite of support tools enhanced with adaptive user interfaces, and empirically and comprehensively assess its usability. This paper introduces the state of the art, corresponding research questions, research methodologies and current progress of our research.","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116879830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}