"Multi-objective Evolutionary Algorithms for Power Distribution System Optimal Reconfiguration" — Ivo Benitez Cattani. DOI: 10.19153/cleiej.22.3.9

In this paper, two reconfiguration methodologies for three-phase electric power distribution systems, based on multi-objective optimization algorithms, are developed to simultaneously minimize two objective functions: (1) power losses and (2) three-phase voltage unbalance. The proposed optimization considers only radial topologies, the most common configuration in electric distribution systems. The problem formulation treats radiality as a constraint, which increases the computational complexity; the Prim and Kruskal algorithms are tested as mechanisms for repairing infeasible configurations. In distribution systems, voltage unbalance and power losses limit the power that can be supplied to the loads and may even cause overheating in distribution lines, transformers, and other equipment. One way to mitigate this problem is a reconfiguration process: opening and/or closing switches to alter the configuration of the distribution system under operation. Hence, in this work the voltage unbalance and power losses of radial distribution systems are addressed as a multi-objective optimization problem, first with a weighted-sum method and then with the NSGA-II algorithm. An example distribution system is presented to demonstrate the effectiveness of the proposed methods.
"Classical Machine Learning Techniques in the Search of Extrasolar Planets" — F. Mena, M. Bugueño, Mauricio Araya. DOI: 10.19153/cleiej.22.3.3

The field of astronomical data analysis has experienced an important paradigm shift in recent years. The automation of certain analysis procedures is no longer just a desirable feature for reducing human effort, but a must-have asset for coping with the extremely large datasets that new instrumentation technologies are producing. In particular, the detection of transiting planets — bodies that move across the face of another body — is an ideal setting for intelligent automation. Knowing whether the variation within a light curve is evidence of a planet requires applying advanced pattern recognition methods to a very large number of candidate stars. Here we present a supervised learning approach that refines the results produced by a case-by-case analysis of light curves, harnessing the generalization power of machine learning techniques to predict the currently unclassified light curves. The method uses feature engineering to find suitable representations for classification, and different performance criteria to evaluate and choose among them. Our results show that this automatic technique can help to speed up the very time-consuming manual process currently performed by expert scientists.
"MoFQA: A TDD Process and Tool for Automatic Test Case Generation from MDD Models" — Linda Riquelme, Magalí González, Nathalie Aquino, L. Cernuzzi. DOI: 10.19153/cleiej.22.3.4

Quality assurance techniques have to deal with the complexity of software systems and the high probability of errors appearing at any stage of the software life cycle. Software testing is a widely used approach but, due to the costs involved in this process, development teams often debate its applicability to their projects. In an effort to reduce the complexity of this process, this study presents an approach to software development based on Test-Driven Development (TDD) and supported by Model-Based Testing (MBT) tools that allow the automatic generation of test cases. The approach, called MoFQA (Model-First Quality Assurance), consists of two main parts: i) a process based on testing techniques, which drives software development by defining steps and recommended practices; and ii) a toolset for testers, end users, and stakeholders, which allows them to model system requirements that represent unit and abstract tests for the system and, ultimately, to generate executable tests. The tools that MoFQA provides are applicable to web applications. To evaluate the usability of the MoFQA tools, two preliminary validation studies were conducted, and their results are presented.
"An emotional model for swarm robotics" — Angel Gil, E. Puerto, J. Aguilar, E. Dapena. DOI: 10.19153/cleiej.22.3.6

A robot swarm is a system of multiple robots in which a desired collective behavior emerges from the interactions between the robots and with the environment. This paper proposes an emotional model for the robots that enables emergent behaviors. The model uses four universal emotions — joy, sadness, fear, and anger — assigned to each robot based on the level of satisfaction of its basic needs. These emotions lie on a spectrum, and where a robot's emotion falls on it can affect both its own behavior and that of its neighboring robots. The more negative a robot's emotion, the more individualistic its decisions become; the more positive its emotion, the more it considers the group and its global goals. Each robot is able to recognize the emotion of any other robot in the system from that robot's current state, using the AR2P recognition algorithm. Specifically, the paper addresses the influence of emotions on the behavior of the system at the individual and collective levels, and their effects on the emergent behaviors of a multi-robot system. The paper analyzes two emergent scenarios, nectar harvesting and object transportation, and shows the importance of emotions in the emergent behavior of a multi-robot system.
"A New Statistical and Verbal-Semantic Approach to Pattern Extraction in Text Mining Applications" — D. G. Vasques, P. Martins, S. O. Rezende. DOI: 10.19153/cleiej.22.3.5

Knowledge discovery in textual databases seeks implicit relationships between concepts scattered across different natural-language documents in order to identify new, useful knowledge, and Text Mining techniques can assist in this process. Despite all the progress made, researchers in this area must still deal with a large volume of information and with the challenge of identifying the causal relationships between concepts in a given field. A statistical and verbal-semantic approach that supports the understanding of the semantic logic between concepts can help extract relevant information and knowledge. The objective of this work is to help the user identify implicit relationships between concepts present in different texts, taking their causal relationships into account. We propose a hybrid approach to the discovery of implicit knowledge in a text corpus that combines analysis based on association rules with metrics from complex networks to identify relevant associations, verbal semantics to determine the causal relationships, and causal concept maps for their visualization. In a case study on a set of texts from alternative medicine, the different extractions showed that the proposed approach facilitates the user's identification of implicit knowledge.
"Filtered-ARN: Asymmetric objective measures applied to filter Association Rules Networks" — D. Calçada, S. O. Rezende. DOI: 10.19153/cleiej.22.3.2

In this paper, the Filtered Association Rules Network (Filtered-ARN) is presented as a way to structure, prune, and analyze a set of association rules in order to construct candidate hypotheses. The Filtered-ARN algorithm selects association rules using the asymmetric objective measures Added Value and Gain, and then builds a network that allows more information to be explored. The Filtered-ARN was validated using three datasets available online: Lenses, Hayes-Roth, and Soybean Large. We also carried out a proof-of-concept experiment on a real dataset on organic fertilization (green manure) to test the proposed method. The results were validated by comparing the Filtered-ARN with the conventional ARN and with a decision tree. The approach showed promising results, demonstrating its ability to explain a set of objective items and to help build more consolidated hypotheses by guaranteeing statistical dependence through the use of objective measures.
"Medical Terminology Server for the Hospital of Clinics of Paraguay using Apache Lucene and the UMLS Metathesaurus" — Evelyn Maria Aranda Acuna, José Luis Vázquez Noguera, Cynthia Villalba. DOI: 10.19153/cleiej.22.3.8

The coding of medical terminology into health standards at the Hospital of Clinics of Paraguay is currently performed by physicians, normally interns or residents, who look up the codes in printed coding manuals or on the internet using their cell phones. This search takes up a great deal of time during the medical consultation. This work proposes and evaluates a user-friendly medical terminology server exposed through web services, using Apache Lucene as the search engine library and the Metathesaurus of the Unified Medical Language System (UMLS) as the information source. The server is developed for Spanish speakers. Results show that, using friendly or familiar terms, physicians can find a medical terminology code with the terminology server 18 times faster than with the current search process. The user satisfaction degree is "Good" according to the adjective rating of the System Usability Scale (SUS). In addition, a comparison with MetamorphoSys, a search engine for medical terminology, shows that the implemented terminology server is quite competitive and responds in a similar average time.
"Set-Based Models for Cryptocurrency Software" — Gustavo Betarte, M. Cristiá, C. Luna, Adrián Silveira, D. Zanarini. DOI: 10.19153/cleiej.24.3.0

Formal methods (FM) are mathematics-based software development methods aimed at producing ``code for a nuclear power reactor''. That is, the due application of FM can produce bug-free, zero-defect, correct-by-construction, guaranteed, certified software. However, the software industry seldom uses FM. One of the main reasons for this situation is the perception (which might well be a fact) that FM increase software costs. On the other hand, FM can be applied partially, producing high-quality, although not necessarily bug-free, software. In this paper we outline some FM-related techniques whose application the cryptocurrency community should take into consideration, because they could bridge the gap between ``loose web code'' and ``code for a nuclear power reactor''. We include relevant case studies in the area of cryptocurrency.
"An Ontology for Specifying and Tracing Requirements Engineering Artifacts and Test Artifacts" — M. L. Roldán, Marcela Vegetti, S. Gonnet, M. M. Marciszack, H. Leone. DOI: 10.19153/CLEIEJ.22.1.2

This paper proposes an ontology that defines and integrates the concepts adopted for the specification of use cases and test cases. These concepts belong to the metamodels of different Requirements Engineering and testing management supporting tools; formalizing them in an ontology language prevents ambiguous use of the concepts and enables interoperability among the involved tools, in order to achieve semantic consistency and artifact tracing.
"Preface to the CIbSE 2018 Special Issue" — L. Cernuzzi, T. Conte, Giovanni Giachetti. DOI: 10.19153/CLEIEJ.22.1.0

This special issue of the CLEIej consists of extended and revised versions of selected papers presented at the XXI Ibero-American Conference on Software Engineering (CIbSE 2018), held in Bogotá, Colombia, in April 2018.