A Survey on the Importance of Object-Oriented Design Best Practices
Johannes Bräuer, Reinhold Plösch, Matthias Saft, Christian Körner
To measure object-oriented design quality, metric-based approaches have been established and later enhanced by identifying design smells in code. While these approaches are useful for identifying hot spots that should be refactored, they are still too vague to sufficiently guide software developers in implementing improvements. This is why our previous work focuses on measuring the compliance of source code with object-oriented design best practices. These design best practices were systematically derived from the literature and can be mapped to design principles, which helps reveal fundamental object-oriented design issues in a software product. Despite successful applications of this approach in industrial and open-source projects, there is little accepted knowledge about the importance of individual design best practices. Consequently, this paper presents the results of an online survey aimed at identifying the importance of 49 design best practices for design quality. In total, 214 people participated in the survey, providing an average of 138 opinions per practice. Based on these opinions, five very important, 21 important, 12 moderately important, and 11 unimportant design best practices could be derived. This information about importance helps manage design improvements in a focused way.
{"title":"A Survey on the Importance of Object-Oriented Design Best Practices","authors":"Johannes Bräuer, Reinhold Plösch, Matthias Saft, Christian Körner","doi":"10.1109/SEAA.2017.14","DOIUrl":"https://doi.org/10.1109/SEAA.2017.14","url":null,"abstract":"To measure object-oriented design quality, metric-based approaches have been established. These have then been enhanced by identifying design smells in code. While these approaches are useful for identifying hot spots that should be refactored, they are still too vague to sufficiently guide software developers to implement improvements. This is why our previous work focuses on measuring the compliance of source code with object-oriented design best practices. These design best practices were systematically derived from the literature and can be mapped to design principles, which can help reveal fundamental object-oriented design issues in a software product. Despite the successful applications of this approach in industrial and open source projects, there is little accepted knowledge about the importance of various design best practices. Consequently, this paper shows the result of an online survey aimed at identifying the importance of 49 design best practices on design quality. In total, 214 people participated in the survey, resulting in an average of 138 opinions for each practice. Based on these opinions, five very important, 21 important, 12 moderately important and 11 unimportant design best practices could be derived. This information about importance helps managing design improvements in a focused way.","PeriodicalId":151513,"journal":{"name":"2017 43rd Euromicro Conference on Software Engineering and Advanced Applications (SEAA)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122181982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Comparative Study on Linear Combination Rules for Ensemble Effort Estimation
S. Amasaki
Context: Software effort estimation is a critical factor for project success. A newer approach, ensemble effort estimation, has become popular because of its performance. While many combination rules have been proposed, they have so far only been compared in a systematic literature review. Objective: To compare linear combination rules proposed in past studies under the same conditions using an empirical approach. Method: We conducted an experiment with nine linear combination rules, seven datasets, and four effort estimation models. Results: We found that six out of nine linear combination rules never underperformed their base learners. No linear combination rule was superior to the others. Conclusion: No definitive rule was found, although some linear combination rules can give estimates that are competitive with or better than those of their base learners.
Published in: 2017 43rd Euromicro Conference on Software Engineering and Advanced Applications (SEAA), August 2017. DOI: 10.1109/SEAA.2017.11
Towards Execution Time Prediction for Manual Test Cases from Test Specification
S. Tahvili, Mehrdad Saadatmand, M. Bohlin, W. Afzal, Sharvathul Hasan Ameerjan
Knowing the execution time of test cases is important to perform test scheduling, prioritization and progress monitoring. This work-in-progress paper presents a novel approach for predicting the execution time of test cases based on test specifications and available historical data on previously executed test cases. Our approach works by extracting timing information (measured and maximum execution time) for various steps in manual test cases. This information is then used to estimate the maximum time for test steps that have not previously been executed, but for which textual specifications exist. As part of our approach, natural language parsing of the specifications is performed to identify word combinations in order to check whether existing timing information on various test activities is already available or not. Finally, linear regression is used to predict the actual execution time for test cases. A proof-of-concept use case at Bombardier Transportation serves to evaluate the proposed approach.
{"title":"Towards Execution Time Prediction for Manual Test Cases from Test Specification","authors":"S. Tahvili, Mehrdad Saadatmand, M. Bohlin, W. Afzal, Sharvathul Hasan Ameerjan","doi":"10.1109/SEAA.2017.10","DOIUrl":"https://doi.org/10.1109/SEAA.2017.10","url":null,"abstract":"Knowing the execution time of test cases is importantto perform test scheduling, prioritization and progressmonitoring. This work in progress paper presents a novelapproach for predicting the execution time of test cases basedon test specifications and available historical data on previouslyexecuted test cases. Our approach works by extractingtiming information (measured and maximum execution time)for various steps in manual test cases. This information is thenused to estimate the maximum time for test steps that have notpreviously been executed, but for which textual specificationsexist. As part of our approach, natural language parsing ofthe specifications is performed to identify word combinationsto check whether existing timing information on various testactivities is already available or not. Finally, linear regressionis used to predict the actual execution time for test cases. A proof-of-concept use case at Bombardier Transportationserves to evaluate the proposed approach.","PeriodicalId":151513,"journal":{"name":"2017 43rd Euromicro Conference on Software Engineering and Advanced Applications (SEAA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129148198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Automated Feedback-Based Approach to Support Mobile App Development
Simon André Scherr, Frank Elberzhager, Konstantin Holl
The acceptance of mobile applications depends heavily on the realized set of features and on the quality of the application. Information about their acceptance can be gained quickly by collecting and analyzing user feedback, such as explicit textual reviews provided by an application's users or implicitly provided usage data. By developing a minimal set of functions in order to realize a minimum viable product (MVP), it is possible to put a product on the market within a short amount of time. Currently, however, the elicitation, analysis, and processing of user feedback is unfocused and takes too much time and effort to mitigate the poor quality of the application. Hence, we outline an approach named Opti4Apps, which is aimed at tailored quality assurance as part of MVP development and which enables and expands the benefits of an MVP by providing a semi-automated feedback elicitation, analysis, and processing framework. This is intended to raise the effectiveness and efficiency of considering early user feedback during further development in order to assure the quality and acceptance of an app as an MVP. We present the overall structure as well as the process behind the Opti4Apps framework. As a proof of concept, we implemented an initial prototype of our idea, focused on textual user feedback.
{"title":"An Automated Feedback-Based Approach to Support Mobile App Development","authors":"Simon André Scherr, Frank Elberzhager, Konstantin Holl","doi":"10.1109/SEAA.2017.45","DOIUrl":"https://doi.org/10.1109/SEAA.2017.45","url":null,"abstract":"The acceptance of mobile applications is highly dependent on the realized set of features and on the quality of the application. Information about their acceptance can be gained quickly by collecting and analyzing user feedback such as explicit textual reviews provided by an application's users or implicitly provided usage data. With an approach based on developing a minimal set of functions in order to realize a minimum viable product (MVP), it is possible to put a product on the market within a short amount of time. Currently, the elicitation, analysis, and processing of user feedback is unfocused and takes too much time and effort to mitigate the poor quality of the application. Hence, we outline an approach named Opti4Apps, which is aimed at tailored quality assurance as part of MVP development and enables and expands the benefits of an MVP by providing a semiautomated feedback elicitation, analysis, and processing framework. This is intended to raise the effectiveness and efficiency of early user feedback consideration during further development in order to assure the quality and acceptance of an app as an MVP. We will present the overall structure as well as the process behind the Opti4Apps framework. As proof-ofconcept, we implemented an initial prototype of our idea, focused on textual user feedback.","PeriodicalId":151513,"journal":{"name":"2017 43rd Euromicro Conference on Software Engineering and Advanced Applications (SEAA)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129270171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Domain-Specific Language for Coordinating Collaboration
Christoph Mayr-Dorn, Christoph Laaber
Manually managing collaboration becomes a problem in distributed software engineering environments. Individual engineers easily lose track of whom to involve and when. The result is a lack of communication, or alternatively communication overload, leading to errors and rework. This paper presents a Domain-Specific Language (DSL) for scripting collaboration structures and their evolution. We demonstrate the DSL's benefits and expressiveness for setting up an iteration planning meeting in an agile development setting.
Published in: 2017 43rd Euromicro Conference on Software Engineering and Advanced Applications (SEAA), August 2017. DOI: 10.1109/SEAA.2017.33
Technical Debt Principal Assessment Through Structural Metrics
Makrina Viola Kosti, Apostolos Ampatzoglou, A. Chatzigeorgiou, Georgios Pallas, I. Stamelos, L. Angelis
One of the first steps towards effective Technical Debt (TD) management is the quantification and continuous monitoring of the TD principal. In the current state of research and practice, the most common ways to assess TD principal are the use of: (a) structural proxies, most commonly quality metrics; and (b) monetized proxies, most commonly the SQALE (Software Quality Assessment based on Lifecycle Expectations) method. Although both approaches have merit, they rely on different viewpoints of TD, and their level of agreement has not been evaluated so far. Therefore, in this paper, we empirically explore this relation by analyzing data obtained from 20 open source software projects and build a regression model that establishes a relationship between them. The results of the study suggest that a model of seven structural metrics, quantifying different aspects of quality (i.e., coupling, cohesion, complexity, size, and inheritance), can accurately estimate TD principal as appraised by SonarQube. The results of this case study are useful to both academia and industry. In particular, academia can gain knowledge on: (a) the reliability and agreement of TD principal assessment methods and (b) the structural characteristics of software that contribute to the accumulation of TD, whereas practitioners are provided with an alternative evaluation model with a reduced number of parameters that can accurately assess TD through traditional software quality metrics and tools.
{"title":"Technical Debt Principal Assessment Through Structural Metrics","authors":"Makrina Viola Kosti, Apostolos Ampatzoglou, A. Chatzigeorgiou, Georgios Pallas, I. Stamelos, L. Angelis","doi":"10.1109/SEAA.2017.59","DOIUrl":"https://doi.org/10.1109/SEAA.2017.59","url":null,"abstract":"One of the first steps towards the effective Technical Debt (TD) management is the quantification and continuous monitoring of the TD principal. In the current state-ofresearch and practice the most common ways to assess TD principal are the use of: (a) structural proxies—i.e., most commonly through quality metrics; and (b) monetized proxies—i.e., most commonly through the use of the SQALE (Software Quality Assessment based on Lifecycle Expectations) method. Although both approaches have merit, they seem to rely on different viewpoints of TD and their levels of agreement have not been evaluated so far. Therefore, in this paper, we empirically explore this relation by analyzing data obtained from 20 open source software projects and build a regression model that establishes a relationship between them. The results of the study suggest that a model of seven structural metrics, quantifying different aspects of quality (i.e., coupling, cohesion, complexity, size, and inheritance) can accurately estimate TD principal as appraised by SonarQube. The results of this case study are useful to both academia and industry. In particular, academia can gain knowledge on: (a) the reliability and agreement of TD principal assessment methods and (b) the structural characteristics of software that contribute to the accumulation of TD, whereas practitioners are provided with an alternative evaluation model with reduced number of parameters that can accurately assess TD, through traditional software quality metrics and tools.","PeriodicalId":151513,"journal":{"name":"2017 43rd Euromicro Conference on Software Engineering and Advanced Applications (SEAA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130844384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mining People Analytics from StackOverflow Job Advertisements
M. Papoutsoglou, N. Mittas, L. Angelis
The skills and competences of people participating in online professional networks constitute an ever-increasing source for data collection and analysis. An important sub-domain of human resources management (HRM) is the recruitment process. Job advertisements and people profiles are main parts of recruitment, and since they are now available online, they constitute a key factor of a new e-recruitment era. Data mining for e-recruitment analysis is important in order to extract a knowledge base for people analytics. Skills and competences are the key variables for people analytics and can be drawn from job advertisements. Leveraging the raw information of online job offers provides a rich source for people analytics. Detecting the appropriate skills and competences for a job from raw text data and associating them with a job seeker is an increasing challenge. The main objective of this paper is to propose a framework for collecting online job advertisements from a web source offering IT jobs and for extracting from the raw text the required skills and competences for specific jobs. The selected professional networking web source is StackOverflow, and multivariate statistical data analysis was used to test the correlations between skills and competences in the job offers dataset. The present work falls into a relatively new field of research concerning the competence mining of peopleware data, with a special focus on software development.
Published in: 2017 43rd Euromicro Conference on Software Engineering and Advanced Applications (SEAA), August 2017. DOI: 10.1109/SEAA.2017.50
Towards a Mapping of Software Technical Debt onto Testware
Emil Alégroth, J. Gonzalez-Huerta
Technical Debt (TD) is a metaphor used to explain the negative long-term impacts that sub-optimal design decisions have on a software project. Although TD is acknowledged by both researchers and practitioners to have a strong negative impact on software development, its study on testware has so far been very limited; a gap in knowledge that is important to address due to the growing popularity of testware (scripted automated testing) in software development practice. In this paper we present a mapping analysis that connects 21 well-known object-oriented software TD items to testware, establishing them as Testware Technical Debt (TTD) items. The analysis indicates that most software TD items are applicable or observable as TTD items, often in a similar form and with roughly the same impact as for software artifacts (e.g., reducing the quality of the produced artifacts and lowering the effectiveness and efficiency of the development process while increasing costs). In the analysis, we also identify three types of connections between software TD and TTD items with varying levels of impact and criticality. Additionally, the study finds support for previous research results in which specific TTD items unique to testware were identified. Finally, the paper outlines several areas of future research into TTD.
Published in: 2017 43rd Euromicro Conference on Software Engineering and Advanced Applications (SEAA), August 2017. DOI: 10.1109/SEAA.2017.65
An Interview Study on Sustainability Concerns in Software Development Projects
Iris Groher, R. Weinreich
In recent years, there has been growing interest in research on sustainability in software engineering. Despite active research in this area, there is still a lack of understanding of how sustainability is perceived by software professionals. To understand how software sustainability is currently dealt with in practice, we performed an interview study with 10 software project team leads from nine companies in Austria. Our study shows that practitioners regard software sustainability as important but are technically minded with respect to sustainability. Organizational and economic issues are addressed, but environmental considerations are missing. The perceived influence of various project factors on sustainability is partly diverse, suggesting that the meaning of sustainability needs to be refined for the specific project and application context.
Published in: 2017 43rd Euromicro Conference on Software Engineering and Advanced Applications (SEAA), August 2017. DOI: 10.1109/SEAA.2017.70
A Systematic Mapping Study on DSL Evolution
Jürgen Thanhofer-Pilisch, Alexander Lang, Michael Vierhauser, Rick Rabiser
Domain-specific languages (DSLs) are frequently used in software engineering. In contrast to general-purpose languages, DSLs are designed for a special purpose in a particular domain. Due to volatile user requirements and new technologies, DSLs, similar to the software systems they describe or produce, are subject to continuous evolution. This work explores existing research on DSL evolution to summarize, structure, and analyze this area of research, and to identify trends and open issues. We conducted a systematic mapping study and identified 98 papers as potentially relevant for our study. By applying inclusion and exclusion criteria, we selected a set of 34 papers relevant for DSL evolution. We classified and analyzed these papers to create a map of the research field. We conclude that DSL evolution is a topic of increasing relevance. However, research on language evolution has so far not focused much on the characteristics DSLs exhibit. Also, there are few cross-references between our primary studies, meaning researchers are often not aware of potentially useful work. Our study results help researchers and practitioners working on DSL-based approaches to get an overview of existing research on DSL evolution and its open challenges.
{"title":"A Systematic Mapping Study on DSL Evolution","authors":"Jürgen Thanhofer-Pilisch, Alexander Lang, Michael Vierhauser, Rick Rabiser","doi":"10.1109/SEAA.2017.25","DOIUrl":"https://doi.org/10.1109/SEAA.2017.25","url":null,"abstract":"Domain-specific languages (DSLs) are frequently used in software engineering. In contrast to general-purpose languages, DSLs are designed for a special purpose in a particular domain. Due to volatile user requirements and new technologies DSLs, similar to the software systems they describe or produce, are subject to continuous evolution. This work explores existing research on DSL evolution to summarize, structure and analyze this area of research, and to identify trends and open issues. We conducted a systematic mapping study and identified 98 papers as potentially relevant for our study. By applying inclusion and exclusion criteria we selected a set of 34 papers relevant for DSL evolution. We classified and analyzed these papers to create a map of the research field. We conclude that DSL evolution is a topic of increasing relevancy. However, research on language evolution so far did not focus much on the characteristics DSLs exhibit. Also, there are not many cross-references between our primary studies meaning researchers are often not aware of potentially useful work. Our study results help researchers and practitioners working on DSL-based approaches to get an overview of existing research on DSL evolution and open challenges.","PeriodicalId":151513,"journal":{"name":"2017 43rd Euromicro Conference on Software Engineering and Advanced Applications (SEAA)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115266501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}