A. Monteiro, A. D. Souza, B. Batista, Mauricio Zaparoli
Social media plays an important role in publishing information and news online. The quality of this information, combined with sentiment analysis, may help predict the prices of diverse market assets and lead to great gains or losses. In this scenario, many researchers have been studying the diverse aspects that influence this area. Recently, cryptocurrencies have gained the spotlight among financial assets; one of their characteristics is that their market is strongly influenced by opinion and speculation, making it a fitting area for sentiment analysis and data mining techniques. However, there is no complete theoretical and technical framework on this subject. Due to its interdisciplinary character, involving topics in economics, human behavior, and artificial intelligence, there is a lack of clarity about the techniques and tools used for sentiment analysis in the cryptocurrency scenario. The goal of this paper is to analyze related research on market prediction based on text mining and other artificial intelligence techniques and to produce a systematic mapping of the main research, identifying possible gaps in the field. This work may help the research community better structure this emerging area and identify more precisely the aspects that require research and are of essential importance.
"Market Prediction in Criptocurrency: A Systematic Literature Mapping". Proceedings of the XV Brazilian Symposium on Information Systems, 2019-05-20. DOI: 10.1145/3330204.3330272
F. Horita, D. Rhodes, Thiago J. Inocêncio, Gustavo R. Gonzales
Monitoring systems are essential for prompt action in case of a disaster; moreover, understanding these systems as constituent systems within a System-of-Systems (SoS) can provide new and unique features that no individual system can provide separately. Furthermore, identifying the existing stakeholders also plays an important role, as constituent systems may be associated with multiple requirements. Therefore, this work presents two artifacts - a conceptual architecture and a stakeholder map - of an SoS for disaster monitoring and early warning. A Design Science Research (DSR) study within a Brazilian early-warning center was conducted to design and evaluate the artifacts. An artifact generalization approach was also employed to generalize the artifacts from a specific to a broader, generic scenario. The findings show that an SoS for disaster monitoring and early warning should comprise nine constituents, which are used by four groups of stakeholders.
"Building a conceptual architecture and stakeholder map of a system-of-systems for disaster monitoring and early-warning: A case study in Brazil". Proceedings of the XV Brazilian Symposium on Information Systems, 2019-05-20. DOI: 10.1145/3330204.3330215
Elivaldo Lozer Fracalossi Ribeiro, E. L. Monteiro, Daniela Barreiro Claro, R. Maciel
Interoperability is the ability of heterogeneous systems to communicate with one another transparently. Interoperability is usually classified into syntactic, semantic, and pragmatic levels. The syntactic level concerns the grammar and vocabulary of the messages exchanged, the semantic level the meaning of the data, and the pragmatic level the understanding of the messages sent and received. A set of systems is pragmatically interoperable when they share the same expectations about the effect of the messages exchanged between them. Given the wide diversity of definitions and the lack of consensus, providing a pragmatic interoperability solution is a challenge. In this paper, we propose a conceptual framework that aims to contribute to unifying the concept of pragmatic interoperability and the common elements necessary for its realization. To this end, a unified definition and a conceptual framework are presented. The framework was applied in three different scenarios to demonstrate its applicability and, consequently, to validate the unified concept.
"A Conceptual Framework for Pragmatic Interoperability". Proceedings of the XV Brazilian Symposium on Information Systems, 2019-05-20. DOI: 10.1145/3330204.3330246
This paper proposes an approach to managing software projects that are subject to frequent and intense change requests demanding quick response times. Highly influenced by agile thinking, in particular Scrum, Kanban, and Lean Software Development, the solution directly captures the philosophy and main techniques of these popular agile "methodologies". Moreover, it includes specific measures to manage the project's health and give accurate information to top-level management. The proposal focuses on improving and optimizing the maintenance process of large-scale information systems that handle the change requests of thousands of users daily. Applying this approach in a real case study yielded significant results that indicate the potential of the proposal.
"An Agile Approach Applied to Intense Maintenance Projects". G. D. A. Júnior, A. Dantas. Proceedings of the XV Brazilian Symposium on Information Systems, 2019-05-20. DOI: 10.1145/3330204.3330255
Several studies have investigated how to use machine learning algorithms to recognize users based on keystroke dynamics. All of those studies required Feature Engineering (FE), i.e., a process in which specialists choose which attributes should be considered for learning. However, this process is susceptible to problems such as loss of original information or inappropriate attribute choices. Thus, the objective of this work is to demonstrate the hypothesis that user recognition algorithms applied to raw (original) keystroke dynamics data can perform better than those that depend on FE. To this end, this work proposes a deep neural network named DRK. The proposed network contains layers that learn data representations adequate for user recognition from raw keystroke dynamics data, avoiding FE. Experiments compared DRK with four other deep neural networks that use FE on four datasets with 280 users. The proposed network achieved better results on all datasets, providing strong evidence that the stated hypothesis is, in fact, valid.
"Deep Neural Networks Applied to User Recognition Based on Keystroke Dynamics: Learning from Raw Data". Marco Aurélio da Silva Cruz, R. Goldschmidt. Proceedings of the XV Brazilian Symposium on Information Systems, 2019-05-20. DOI: 10.1145/3330204.3330245
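To illustrate the FE step that DRK sidesteps, the sketch below derives two classic keystroke features, hold time and flight time, from raw press/release timestamps. This is a generic illustration, not code from the paper; the event tuples and timing values are hypothetical.

```python
# Illustrative sketch (not the paper's method): classic keystroke feature
# engineering reduces raw press/release timestamps to summary attributes,
# whereas a DRK-style network would consume the raw sequence directly.

def engineer_features(events):
    """events: list of (key, press_ms, release_ms) tuples ordered by press time.
    Returns hold times (release - press) and flight times
    (next press - current release) -- a typical FE step that can
    discard information present in the raw data."""
    holds = [release - press for _, press, release in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return holds, flights

# Hypothetical raw sample: a user typing "abc"
raw = [("a", 0, 90), ("b", 150, 230), ("c", 310, 400)]
holds, flights = engineer_features(raw)
```

Any attribute not computed here (e.g., pressure, key position) is simply lost to the learner, which is the information-loss risk the abstract mentions.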
Several approaches to measuring similarity between UML models have been proposed in recent years. However, they usually fall short of expectations in terms of precision and sensitivity. Consequently, software developers end up using imprecise similarity-measuring approaches to figure out how similar the design models of fast-changing information systems are. This article proposes UMLSim, a hybrid approach to measuring similarity between UML models. It is innovative in using multiple criteria - semantic, syntactic, structural, and design - to quantify how similar UML models are. A case study compared UMLSim with five state-of-the-art approaches across six evaluation scenarios in which the similarity between realistic UML models was computed. Our results, supported by empirical evidence, show that, on average, UMLSim achieved high precision (0.93), recall (0.63), and F-measure (0.67), outperforming the state-of-the-art approaches. The empirical knowledge and insights produced may serve as a starting point for future work. The results are encouraging and show the potential of using UMLSim in real-world settings.
"Towards a Hybrid Approach to Measure Similarity Between UML Models". Proceedings of the XV Brazilian Symposium on Information Systems, 2019-05-20. DOI: 10.1145/3330204.3330226
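The precision, recall, and F-measure figures reported above are the standard retrieval metrics; a minimal sketch of how they relate for a single evaluation scenario follows. The counts are hypothetical, chosen only to exercise the formulas (note that the paper's numbers are averages over scenarios, so its F-measure need not be the harmonic mean of its averaged precision and recall).

```python
def precision_recall_f1(tp, fp, fn):
    """Standard metrics for evaluating a similarity/matching approach:
    precision = TP/(TP+FP), recall = TP/(TP+FN),
    F-measure = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for one evaluation scenario
p, r, f = precision_recall_f1(tp=14, fp=1, fn=8)
```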
Luis Fernando Carvalho Dias, Fernando Silva Parreiras
Money laundering is a method used by criminals to give a lawful appearance to funds obtained illegally. Given the difficulty of identifying it with traditional investigative methods, technology has come to play an important role in this process. Knowledge discovery techniques were applied in an experimental study to verify their effectiveness and accuracy in identifying relationships in banking transactions drawn from real money laundering investigations. Graph matching techniques obtained the best results, but it was found that, despite their lower efficiency, frequent pattern mining techniques are important and should not be dismissed.
"Comparing Data Mining Techniques for Anti-Money Laundering". Proceedings of the XV Brazilian Symposium on Information Systems, 2019-05-20. DOI: 10.1145/3330204.3330283
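As a purely hypothetical illustration of the kind of relationship graph-based techniques can surface (not the study's actual method or data), a round-trip of funds shows up as a directed cycle in a transaction graph:

```python
def find_cycle_accounts(transfers):
    """transfers: list of (src, dst) bank transfers. Returns the set of
    accounts that lie on some directed cycle -- funds that can eventually
    return to their origin, a pattern often worth flagging for review."""
    graph = {}
    for src, dst in transfers:
        graph.setdefault(src, set()).add(dst)

    def reachable(start):
        # Iterative DFS over accounts reachable from `start`.
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            for nxt in graph.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    # An account is on a cycle iff it can reach itself.
    return {acc for acc in graph if acc in reachable(acc)}

# Hypothetical transactions: A -> B -> C -> A round-trips; D is a payout
txs = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]
suspicious = find_cycle_accounts(txs)
```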
Érick S. Florentino, Argus A. B. Cavalcante, R. Goldschmidt
Link prediction is a graph mining task that aims to identify pairs of non-connected vertices that have a high probability of connecting in the future. This task is frequently implemented by recommendation systems that suggest new interactions between users in social networks. In general, state-of-the-art link prediction methods consider only data from the most complete and recent state of the network. They do not take into account information about the topology that existed when new edges were added to the network's structure. This study raises the hypothesis that recovering such data may contribute to building predictive models more precise than the available ones, since those data enrich the description of the application's context with examples that represent exactly the kind of event to be foreseen: the appearance of new connections. Hence, this paper evaluates this hypothesis. For this purpose, it proposes a link prediction method based on the historical evolution of the topology of social networks. Results from experiments with ten real coauthorship networks show the adequacy of the proposed method and confirm the raised hypothesis.
"A Topological Data Evolution Based Method to Predict Links in Social Networks". Proceedings of the XV Brazilian Symposium on Information Systems, 2019-05-20. DOI: 10.1145/3330204.3330236
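A widely used baseline that relies only on the network's current snapshot, which is exactly the limitation the paper targets, is common-neighbors scoring. The sketch below is a generic illustration with a made-up coauthorship graph, not the paper's proposed method:

```python
from itertools import combinations

def common_neighbor_scores(adj):
    """adj: dict mapping each vertex to its set of neighbors.
    Scores every non-connected pair by its number of shared neighbors,
    using only the network's final state (no topological history)."""
    scores = {}
    for u, v in combinations(sorted(adj), 2):
        if v not in adj[u]:
            scores[(u, v)] = len(adj[u] & adj[v])
    return scores

# Hypothetical coauthorship network
adj = {
    "ana":  {"bia", "caio"},
    "bia":  {"ana", "caio", "davi"},
    "caio": {"ana", "bia"},
    "davi": {"bia"},
}
scores = common_neighbor_scores(adj)
```

The paper's method would additionally weigh when each existing edge appeared, information this baseline discards.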
Bruno Prece, E. Pacheco, R. Barros, Sylvio Barbon Junior
In the last few years, several new tools addressing maturity level management have been proposed, e.g., diagnostic assessment questionnaires (DAQs). In practice, the use of questionnaires presents drawbacks related to subjectivity, time cost, and applicant bias. Moreover, questionnaires may contain a large number of questions, some of them redundant. Another important aspect of the real-life application of DAQs is the use of multiple questionnaires, which amplifies these shortcomings. To pave the way to a more convenient tool to support and facilitate the achievement of organizational strategies and objectives, we propose an intelligent reduction of DAQs through single-label and multilabel feature selection. In this paper, we reduced four DAQs (Risk Management, Infrastructure, Governance, and Service Catalogs) with our proposal and compared it with different feature selection algorithms (χ2, Information Gain, Random Forest Importance, and ReliefF). The reduction was driven by a machine learning prediction model to ensure that the new subset of questions yields the same score. Results showed that, by removing irrelevant and/or redundant questions, it was possible to improve model fit while cutting about one-third of the questions with the same predictive capacity.
"Improvements on diagnostic assessment questionnaires of Maturity Level Management with feature selection". Proceedings of the XV Brazilian Symposium on Information Systems, 2019-05-20. DOI: 10.1145/3330204.3330216
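One of the single-label selectors compared above, the χ2 test, can be sketched in a few lines for binary question responses and a binary outcome. The responses and labels below are toy data invented for the example, not the paper's DAQs:

```python
def chi2_score(feature, labels):
    """Chi-squared statistic between a binary question response and a
    binary outcome: higher scores mean the question is more informative,
    so low-scoring (irrelevant) questions are candidates for removal."""
    n = len(feature)
    obs = {(f, y): 0 for f in (0, 1) for y in (0, 1)}
    for f, y in zip(feature, labels):
        obs[(f, y)] += 1
    score = 0.0
    for f in (0, 1):
        for y in (0, 1):
            row = obs[(f, 0)] + obs[(f, 1)]   # marginal count for response f
            col = obs[(0, y)] + obs[(1, y)]   # marginal count for outcome y
            expected = row * col / n
            if expected:
                score += (obs[(f, y)] - expected) ** 2 / expected
    return score

# Toy responses: q1 tracks the outcome perfectly, q2 is noise
labels = [1, 1, 1, 0, 0, 0]
q1 = [1, 1, 1, 0, 0, 0]
q2 = [1, 0, 1, 0, 1, 0]
```

Ranking questions by this score and dropping the tail is the basic mechanism behind the questionnaire reduction described above; multilabel selection extends the idea to several outcomes at once.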
Vinícius Bischoff, Kleinner Farias, L. Gonçales, J. Barbosa
The integration of feature models plays a key role in many software engineering tasks, e.g., adding new features to software product lines (SPLs) of information systems. Previous empirical studies have revealed that integrating design models is still a time-consuming and error-prone task. Unfortunately, integration approaches with tool support are still severely lacking. Even worse, little is known about the effort developers invest to integrate models manually, or how correct the integrated models are. This paper proposes FMIT, a semiautomatic tool to support the integration of feature models. It adopts a strategy-based approach to reduce the effort developers invest in combining feature models and to increase the number of correctly integrated models. A controlled experiment was run with 10 volunteers across six realistic integration scenarios. Our results, supported by statistical tests, show that our semiautomatic approach not only reduced integration effort by 73.01% but also increased the number of correctly integrated feature models by 43.01%, compared with the manual approach. Our main contributions are a semiautomatic, strategy-based approach with tool support and empirical evidence of its benefits. These encouraging results open the way for new heuristics and tools to support developers during the evolution of feature models.
"Towards a Semiautomatic Tool to Support the Integration of Feature Models". Proceedings of the XV Brazilian Symposium on Information Systems, 2019-05-20. DOI: 10.1145/3330204.3330249