This paper presents a transformation-based software development model that automatically derives classes in object-oriented programming languages (Ada 95, C++, Eiffel, and Java) from formal specifications. The transformations that make up the model form a systematic sequence of steps starting from the algebraic specification of a type. This algebraic specification describes the abstract behavior of a type (the type of interest) in terms of another type, previously specified and implemented (the representation type). Step by step, the transformations produce a program (a class) that approximates the initial specification (the type of interest). In the first step, the transformations yield an intermediate specification (a class specification) that describes the operations of the type of interest by means of pre- and post-conditions. The intermediate specification is then used to obtain imperative code in a language-independent notation (a pseudo-class); finally, the pseudo-class is translated into any object-oriented programming language for which transformations have been defined.
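The pipeline's final artifact can be pictured with a toy example: a minimal sketch, assuming a Stack as the type of interest and a Python list as the representation type, with the class-specification pre- and post-conditions kept as executable assertions. The names and the language are ours for illustration; the paper itself targets Ada 95, C++, Eiffel, and Java.

```python
# Hypothetical final stage of the model: a class whose operations carry the
# pre- and post-conditions of the intermediate "class specification" as
# runtime assertions.  Stack and _rep are invented names, not from the paper.

class Stack:
    """Type of interest, represented by a Python list (representation type)."""

    def __init__(self):
        self._rep = []                     # representation type: a sequence

    def push(self, x):
        old_size = len(self._rep)          # capture state for the post-condition
        self._rep.append(x)
        assert len(self._rep) == old_size + 1 and self._rep[-1] == x  # post

    def pop(self):
        assert self._rep, "pre-condition: stack must be non-empty"
        old = list(self._rep)
        x = self._rep.pop()
        assert self._rep == old[:-1]       # post-condition: top element removed
        return x
```

In the model, such assertions would be discharged (or compiled away) by the later transformation steps rather than checked at run time.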
{"title":"A Software Development Model for the Automatic Generation of Classes","authors":"Eugenio Scalise, N. Zambrano","doi":"10.19153/cleiej.4.2.2","DOIUrl":"https://doi.org/10.19153/cleiej.4.2.2","url":null,"abstract":"This paper presents a transformation-based software development model that automatically derives classes in object-oriented programming languages (Ada 95, C++, Eiffel, and Java) from formal specifications. The transformations that make up the model form a systematic sequence of steps starting from the algebraic specification of a type. This algebraic specification describes the abstract behavior of a type (the type of interest) in terms of another type, previously specified and implemented (the representation type). Step by step, the transformations produce a program (a class) that approximates the initial specification (the type of interest). In the first step, the transformations yield an intermediate specification (a class specification) that describes the operations of the type of interest by means of pre- and post-conditions. The intermediate specification is then used to obtain imperative code in a language-independent notation (a pseudo-class); finally, the pseudo-class is translated into any object-oriented programming language for which transformations have been defined.","PeriodicalId":418941,"journal":{"name":"CLEI Electron. J.","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115447501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Incremental evolution has proved to be an extremely useful mechanism for learning complex action sequences. It is based on decomposing the original problem into increasingly complex stages that are learned sequentially, starting from the simplest stage and progressively increasing generality and difficulty. The present work proposes neural arrays as a novel mechanism for learning complex action sequences. Each array is composed of several neural networks obtained through an evolutionary process that allows them to acquire different degrees of specialization. The networks constituting an array are organized so that, in each evaluation, exactly one of them is in charge of the response. The proposed strategy is applied to obstacle evasion and target reaching problems to show its capability to solve complex problems. The measurements carried out show the superiority of evolved neural arrays over traditional neuroevolutionary methods that handle neural network populations; SANE is used as a comparative reference because of its high performance. The ability of neural arrays to recover from defective earlier evolutionary stages has also been tested, showing highly successful final outcomes even in those adverse cases. Finally, conclusions are presented as well as some future lines of work.
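A toy reading of the array mechanism described above: several evolved members with different degrees of specialization, and in each evaluation exactly one of them is in charge of the response. Real members would be evolved neural networks; here each one is a plain callable paired with a gating predicate, and all names and sensor fields are invented.

```python
# Sketch under our own assumptions: a NeuralArray holds (is_specialist,
# respond) pairs; per evaluation the first member whose gate fires answers,
# falling back to the last (most general) member otherwise.

class NeuralArray:
    def __init__(self, members):
        self.members = members                # list of (is_specialist, respond)

    def evaluate(self, sensors):
        for is_specialist, respond in self.members:
            if is_specialist(sensors):        # first matching specialist answers
                return respond(sensors)
        return self.members[-1][1](sensors)   # fall back to the most general net

avoider = (lambda s: s["obstacle_dist"] < 1.0, lambda s: "turn")
seeker = (lambda s: True, lambda s: "advance")
array = NeuralArray([avoider, seeker])
```

The design point carried over from the paper is that specialization lives in the members while the array only routes: one net per evaluation is responsible for the output.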
{"title":"Evolving Neural Arrays A new mechanism for learning complex action sequences","authors":"Leonardo Corbalán, L. Lanzarini","doi":"10.19153/cleiej.6.1.5","DOIUrl":"https://doi.org/10.19153/cleiej.6.1.5","url":null,"abstract":"Incremental evolution has proved to be an extremely useful mechanism for learning complex action sequences. It is based on decomposing the original problem into increasingly complex stages that are learned sequentially, starting from the simplest stage and progressively increasing generality and difficulty. The present work proposes neural arrays as a novel mechanism for learning complex action sequences. Each array is composed of several neural networks obtained through an evolutionary process that allows them to acquire different degrees of specialization. The networks constituting an array are organized so that, in each evaluation, exactly one of them is in charge of the response. The proposed strategy is applied to obstacle evasion and target reaching problems to show its capability to solve complex problems. The measurements carried out show the superiority of evolved neural arrays over traditional neuroevolutionary methods that handle neural network populations; SANE is used as a comparative reference because of its high performance. The ability of neural arrays to recover from defective earlier evolutionary stages has also been tested, showing highly successful final outcomes even in those adverse cases. Finally, conclusions are presented as well as some future lines of work.","PeriodicalId":418941,"journal":{"name":"CLEI Electron. J.","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116099237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Simão, A. Vincenzi, Antônio Carlos Lima de Santana
Instrumentation is a technique frequently used in software engineering for several different purposes, e.g., tracing program and/or specification execution, analyzing testing criteria coverage, and reverse engineering. Instrumenting a software product can be divided into two main tasks: (i) deriving the software product's structure and (ii) inserting statements that collect runtime/simulation information. Most instrumentation approaches are specific to a given domain or language, so it is very difficult to reuse the effort spent developing an instrumenter, even when the target languages are quite similar. To tackle this problem, in this paper we propose an instrumentation-oriented meta-language, named IDeL, designed to describe both main tasks of the instrumentation process: (i) deriving the product structure and (ii) inserting the instrumentation statements. To apply IDeL to a specific language L, it is instantiated with a context-free grammar of L. To promote IDeL's practical use, we also developed a supporting tool, named idelgen, which can be thought of as an application generator based on the transformational programming paradigm and tailored to the instrumentation process. We illustrate the main concepts of our proposal with examples describing the instrumentation required by some traditional data-flow testing criteria for the C language.
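The two tasks can be made concrete with a deliberately small sketch: (i) the product structure is derived by the language's own parser (Python's `ast` module stands in for a grammar-driven front end), and (ii) probe statements that collect runtime information are inserted before every statement. IDeL/idelgen work from an arbitrary context-free grammar; this sketch is hard-wired to Python and is not IDeL syntax.

```python
import ast

COVERED = set()   # line numbers reached at run time

class Probe(ast.NodeTransformer):
    """Insert a coverage probe before every statement in every body list."""

    def generic_visit(self, node):
        super().generic_visit(node)              # instrument children first
        for field in ("body", "orelse", "finalbody"):
            stmts = getattr(node, field, None)
            if isinstance(stmts, list) and stmts:
                instrumented = []
                for s in stmts:
                    probe = ast.parse(f"COVERED.add({s.lineno})").body[0]
                    instrumented += [probe, s]   # record the line, then run it
                setattr(node, field, instrumented)
        return node

source = "def absval(x):\n    if x < 0:\n        return -x\n    return x\n"
tree = ast.fix_missing_locations(Probe().visit(ast.parse(source)))
env = {"COVERED": COVERED}
exec(compile(tree, "<instrumented>", "exec"), env)
absval = env["absval"]
```

Calling `absval(-3)` then marks lines 1-3 as covered while line 4 (`return x`) stays uncovered, which is exactly the kind of data-flow/coverage information the inserted statements are meant to collect.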
{"title":"A Language for the Description of Program Instrumentation and Automatic Generation of Instrumenters","authors":"A. Simão, A. Vincenzi, Antônio Carlos Lima de Santana","doi":"10.19153/cleiej.6.1.7","DOIUrl":"https://doi.org/10.19153/cleiej.6.1.7","url":null,"abstract":"Instrumentation is a technique frequently used in software engineering for several different purposes, e.g., tracing program and/or specification execution, analyzing testing criteria coverage, and reverse engineering. Instrumenting a software product can be divided into two main tasks: (i) deriving the software product's structure and (ii) inserting statements that collect runtime/simulation information. Most instrumentation approaches are specific to a given domain or language, so it is very difficult to reuse the effort spent developing an instrumenter, even when the target languages are quite similar. To tackle this problem, in this paper we propose an instrumentation-oriented meta-language, named IDeL, designed to describe both main tasks of the instrumentation process: (i) deriving the product structure and (ii) inserting the instrumentation statements. To apply IDeL to a specific language L, it is instantiated with a context-free grammar of L. To promote IDeL's practical use, we also developed a supporting tool, named idelgen, which can be thought of as an application generator based on the transformational programming paradigm and tailored to the instrumentation process. We illustrate the main concepts of our proposal with examples describing the instrumentation required by some traditional data-flow testing criteria for the C language.","PeriodicalId":418941,"journal":{"name":"CLEI Electron. J.","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133738088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rogério Luís de Carvalho Costa, Sérgio Lifschitz, M. V. Salles
Using software agents as Database Management System components leads to database systems that can be configured and extended to support new requirements. We focus here on the self-tuning feature, which demands a somewhat intelligent behavior that agents can add to traditional DBMS modules. We propose in this paper an agent-based database architecture to deal with automatic index creation. Implementation issues are also discussed for an architecture that integrates the agents into the DBMS.
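One behavior such an index-creation agent could implement, sketched under our own assumptions rather than the paper's actual architecture: observe the workload, count how often each column appears in selection predicates, and propose an index once a column crosses a threshold.

```python
from collections import Counter

# Hypothetical sketch: all class, method, and threshold names are ours.

class IndexAgent:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.predicate_counts = Counter()
        self.suggested = set()

    def observe(self, table, column):
        """Called by a DBMS module for each predicate in an executed query.

        Returns a DDL suggestion the first time a column gets hot, else None.
        """
        key = (table, column)
        self.predicate_counts[key] += 1
        if key not in self.suggested and self.predicate_counts[key] >= self.threshold:
            self.suggested.add(key)
            return f"CREATE INDEX ON {table} ({column})"
        return None
```

A real self-tuning agent would also weigh update costs and drop unused indexes; the point of the sketch is only the intelligent-observer role that agents add to traditional DBMS modules.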
{"title":"Index Self-tunning with Agent-based Databases","authors":"Rogério Luís de Carvalho Costa, Sérgio Lifschitz, M. V. Salles","doi":"10.19153/cleiej.6.1.6","DOIUrl":"https://doi.org/10.19153/cleiej.6.1.6","url":null,"abstract":"Using software agents as Database Management System components leads to database systems that can be configured and extended to support new requirements. We focus here on the self-tuning feature, which demands a somewhat intelligent behavior that agents can add to traditional DBMS modules. We propose in this paper an agent-based database architecture to deal with automatic index creation. Implementation issues are also discussed for an architecture that integrates the agents into the DBMS.","PeriodicalId":418941,"journal":{"name":"CLEI Electron. J.","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116805990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce Pure Type Systems with Pairs, generalising earlier work on program extraction in Typed Lambda Calculus. We model the process of program extraction in these systems by means of a reduction relation called o-reduction, and give strategies for βo-reduction which will be useful for an implementation of a proof assistant. More precisely, we give an algorithm to compute the o-normal form of a term in a Pure Type System with Pairs, and show that this defines a projection from Pure Type Systems with Pairs to standard Pure Type Systems. This result shows that o-reduction is an operational description of program extraction that is independent of the particular Typed Lambda Calculus specified as a Pure Type System. For β-reduction, we define weak and strong reduction strategies using Interaction Nets, generalising well-known efficient strategies for the λ-calculus to the general setting of Pure Type Systems.
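The weak strategy has a familiar shape in the plain untyped λ-calculus fragment: reduce head β-redexes only, never under a λ (weak head normal form). The term encoding and helper names below are ours, and the paper's Interaction Net machinery and o-reduction are omitted entirely; this only sketches what a weak β-reduction strategy computes.

```python
# Terms: ("var", name) | ("lam", name, body) | ("app", fun, arg)

def subst(t, name, val):
    """Substitution b[name := val]; assumes binders have distinct names,
    so no capture-avoiding renaming is attempted in this sketch."""
    kind = t[0]
    if kind == "var":
        return val if t[1] == name else t
    if kind == "lam":
        return t if t[1] == name else ("lam", t[1], subst(t[2], name, val))
    return ("app", subst(t[1], name, val), subst(t[2], name, val))

def whnf(t):
    """Weak strategy: reduce only head beta-redexes, never under a lambda."""
    while t[0] == "app":
        f = whnf(t[1])
        if f[0] == "lam":
            t = subst(f[2], f[1], t[2])   # beta-step: (lam x. b) a -> b[x := a]
        else:
            return ("app", f, t[2])
    return t

identity = ("lam", "x", ("var", "x"))
term = ("app", identity, ("var", "y"))    # (lam x. x) y
```

A strong strategy would additionally normalise under binders; weak head reduction is the one most proof assistants use during type checking, which is why the paper singles it out for implementation.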
{"title":"Reduction Strategies for Program Extraction","authors":"M. Fernández, I. Mackie, P. Severi, Nora Szasz","doi":"10.19153/cleiej.6.1.2","DOIUrl":"https://doi.org/10.19153/cleiej.6.1.2","url":null,"abstract":"We introduce Pure Type Systems with Pairs, generalising earlier work on program extraction in Typed Lambda Calculus. We model the process of program extraction in these systems by means of a reduction relation called o-reduction, and give strategies for βo-reduction which will be useful for an implementation of a proof assistant. More precisely, we give an algorithm to compute the o-normal form of a term in a Pure Type System with Pairs, and show that this defines a projection from Pure Type Systems with Pairs to standard Pure Type Systems. This result shows that o-reduction is an operational description of program extraction that is independent of the particular Typed Lambda Calculus specified as a Pure Type System. For β-reduction, we define weak and strong reduction strategies using Interaction Nets, generalising well-known efficient strategies for the λ-calculus to the general setting of Pure Type Systems.","PeriodicalId":418941,"journal":{"name":"CLEI Electron. J.","volume":"29 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123476728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper introduces a new algorithm for the induction of decision trees, based on adaptive techniques. One of the main features of this algorithm is the application of automata theory to formalize the problem of decision tree induction, together with a hybrid approach that integrates both syntactical and statistical strategies. Some experimental results are also presented, indicating that the adaptive approach is useful in the construction of efficient learning algorithms.
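The adaptive-automata formalization is the paper's contribution; the statistical ingredient such a hybrid integrates can be as simple as the classic entropy-based split selection, sketched here on invented data (this is standard decision-tree machinery, not the paper's algorithm).

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction from splitting (rows, labels) on attribute attr."""
    gain = entropy(labels)
    n = len(rows)
    for value in set(r[attr] for r in rows):
        subset = [l for r, l in zip(rows, labels) if r[attr] == value]
        gain -= len(subset) / n * entropy(subset)
    return gain
```

In the hybrid scheme, a statistical score like this one would drive which transition the adaptive finite-state automaton adds next, while the automaton itself carries the syntactical side of the induction.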
{"title":"Decision Tree Induction using Adaptive FSA","authors":"H. Pistori, J. J. Neto","doi":"10.19153/cleiej.6.1.4","DOIUrl":"https://doi.org/10.19153/cleiej.6.1.4","url":null,"abstract":"This paper introduces a new algorithm for the induction of decision trees, based on adaptive techniques. One of the main features of this algorithm is the application of automata theory to formalize the problem of decision tree induction, together with a hybrid approach that integrates both syntactical and statistical strategies. Some experimental results are also presented, indicating that the adaptive approach is useful in the construction of efficient learning algorithms.","PeriodicalId":418941,"journal":{"name":"CLEI Electron. J.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130094106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Marta Jacinto, G. Librelotto, J. Ramalho, P. Henriques
After being able to mark up text and validate its structure according to a document type specification, it is natural to want to validate some non-structural issues in the documents as well. This paper formally discusses these semantic-related aspects. In that context, we introduce a domain-specific language developed for this purpose: XCSL. XCSL is not just a language; it is also a processing model. Furthermore, we discuss the general philosophy underlying the proposed approach, present the architecture of our semantic validation system, and detail the respective processor. To illustrate the use of the XCSL language and the subsequent processing, we present two case studies. Nowadays, other languages exist to restrict XML documents to those semantically valid, namely Schematron and XML-Schema, so before concluding the paper we compare XCSL to those approaches.
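A hypothetical instance of the kind of non-structural constraint this targets: the document below can be structurally valid against any reasonable DTD or schema, yet the invoice total must also equal the sum of its item prices, which is a semantic rule. The XML and the checker are ours and are not XCSL syntax.

```python
import xml.etree.ElementTree as ET

# Invented document: structurally fine, semantically inconsistent (10+25 != 30).
doc = """<invoice total="30">
  <item price="10"/>
  <item price="25"/>
</invoice>"""

def check_total(xml_text):
    """Semantic check: the declared total must equal the sum of item prices."""
    root = ET.fromstring(xml_text)
    declared = float(root.get("total"))
    computed = sum(float(i.get("price")) for i in root.findall("item"))
    if declared != computed:
        return f"semantic error: total={declared} but items sum to {computed}"
    return "ok"
```

In XCSL such a rule would be written declaratively and compiled into a validator by the processing model, rather than hand-coded per constraint as above.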
{"title":"XCSL: XML Constraint Specification Language","authors":"Marta Jacinto, G. Librelotto, J. Ramalho, P. Henriques","doi":"10.19153/cleiej.6.1.3","DOIUrl":"https://doi.org/10.19153/cleiej.6.1.3","url":null,"abstract":"After being able to mark up text and validate its structure according to a document type specification, it is natural to want to validate some non-structural issues in the documents as well. This paper formally discusses these semantic-related aspects. In that context, we introduce a domain-specific language developed for this purpose: XCSL. XCSL is not just a language; it is also a processing model. Furthermore, we discuss the general philosophy underlying the proposed approach, present the architecture of our semantic validation system, and detail the respective processor. To illustrate the use of the XCSL language and the subsequent processing, we present two case studies. Nowadays, other languages exist to restrict XML documents to those semantically valid, namely Schematron and XML-Schema, so before concluding the paper we compare XCSL to those approaches.","PeriodicalId":418941,"journal":{"name":"CLEI Electron. J.","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116533775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Process integration in Software Engineering Environments (SEEs) is very important for enabling tool integration. In this paper, we present a knowledge-based approach to improving process integration in ODE, an ontology-based SEE.
{"title":"Knowledge-based Support to Process Integration in ODE","authors":"F. B. Ruy, Gleidson Bertollo, R. Falbo","doi":"10.19153/cleiej.7.1.3","DOIUrl":"https://doi.org/10.19153/cleiej.7.1.3","url":null,"abstract":"Process integration in Software Engineering Environments (SEEs) is very important for enabling tool integration. In this paper, we present a knowledge-based approach to improving process integration in ODE, an ontology-based SEE.","PeriodicalId":418941,"journal":{"name":"CLEI Electron. J.","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129384885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Paulo de Tarso Costa de Sousa, H. Prado, E. Moresi, M. Ladeira
Knowledge Discovery in Databases (KDD), like any organizational process, is carried out under the Knowledge Management (KM) model adopted (even informally) by a corporation. KDD is broadly described in three steps: pre-processing, data mining, and post-processing. The last step is mainly concerned with turning the patterns issued by the data mining step into knowledge. KM, in turn, comprises the following phases, in which knowledge is the subject of the actions: identification of abilities, acquisition, selection and validation, organization and storage, sharing, application, and creation. Although there are many overlaps between KDD and KM, one of them is broadly recognized: the point at which knowledge arises. This paper concerns a study aimed at clarifying the relations between the overlapping areas of KDD and knowledge creation in KM. The work is conducted by means of a case study using data from the Electoral Court of the Federal District (ECFD), Brazil. The study was developed over a data set of 1,717,000 citizens, from which data mining models were built by applying algorithms from Weka. It was observed that, although the importance of Information Technology is well recognized in the KM realm, the techniques of KDD deserve a special place in the knowledge creation phase of KM. Moreover, beyond the overlap between post-processing and knowledge creation, other steps of KDD can contribute significantly to KM. For example, one important decision by the ECFD board was made on the basis of knowledge acquired during the pre-processing step of KDD.
{"title":"Contributions of KDD to the Knowledge Management Process: a Case Study","authors":"Paulo de Tarso Costa de Sousa, H. Prado, E. Moresi, M. Ladeira","doi":"10.19153/cleiej.7.1.2","DOIUrl":"https://doi.org/10.19153/cleiej.7.1.2","url":null,"abstract":"Knowledge Discovery in Databases (KDD), like any organizational process, is carried out under the Knowledge Management (KM) model adopted (even informally) by a corporation. KDD is broadly described in three steps: pre-processing, data mining, and post-processing. The last step is mainly concerned with turning the patterns issued by the data mining step into knowledge. KM, in turn, comprises the following phases, in which knowledge is the subject of the actions: identification of abilities, acquisition, selection and validation, organization and storage, sharing, application, and creation. Although there are many overlaps between KDD and KM, one of them is broadly recognized: the point at which knowledge arises. This paper concerns a study aimed at clarifying the relations between the overlapping areas of KDD and knowledge creation in KM. The work is conducted by means of a case study using data from the Electoral Court of the Federal District (ECFD), Brazil. The study was developed over a data set of 1,717,000 citizens, from which data mining models were built by applying algorithms from Weka. It was observed that, although the importance of Information Technology is well recognized in the KM realm, the techniques of KDD deserve a special place in the knowledge creation phase of KM. Moreover, beyond the overlap between post-processing and knowledge creation, other steps of KDD can contribute significantly to KM. For example, one important decision by the ECFD board was made on the basis of knowledge acquired during the pre-processing step of KDD.","PeriodicalId":418941,"journal":{"name":"CLEI Electron. J.","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122784561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A major challenge in building groupware systems is to provide support for control and coordination of users' actions on shared resources. This support includes maintaining the current state of the collaborative multi-user environment, such as the control of group interaction rules and the coordination of users' actions or tasks. We propose an extension of the visual presentation/underlying data model currently followed when developing interactive single-user applications. We claim that groupware systems require two additional components: user-related data and group interaction rules. The former maintains information about active users, their roles, and their privileges, while the latter keeps the state of the current collaborative environment to control and coordinate user actions. Furthermore, our approach allows developers to build each system component separately, promoting the decomposition of the application's computational objects and its collaborative environment specification.
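The two extra components argued for above, user-related data and group interaction rules, can be sketched in deliberately minimal form; all class and role names here are ours, and a real groupware system would add state synchronization on top.

```python
class UserData:
    """User-related data: active users, their roles, and privileges."""

    def __init__(self):
        self.roles = {}                      # user -> role

    def join(self, user, role):
        self.roles[user] = role

class GroupRules:
    """Group interaction rules: decide whether a user's action on a
    shared resource is allowed in the current collaborative state."""

    def __init__(self, users, allowed):
        self.users = users
        self.allowed = allowed               # role -> set of permitted actions

    def authorize(self, user, action):
        role = self.users.roles.get(user)
        return role is not None and action in self.allowed.get(role, set())

users = UserData()
users.join("ana", "editor")
users.join("bob", "viewer")
rules = GroupRules(users, {"editor": {"read", "write"}, "viewer": {"read"}})
```

Keeping the two components separate is what lets developers build the application's computational objects and its collaborative environment specification independently, as the paper argues.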
{"title":"An Implementation Model for Collaborative Applications","authors":"M. Cortes, Prateek Mishra","doi":"10.19153/cleiej.1.1.2","DOIUrl":"https://doi.org/10.19153/cleiej.1.1.2","url":null,"abstract":"A major challenge in building groupware systems is to provide support for control and coordination of users' actions on shared resources. This support includes maintaining the current state of the collaborative multi-user environment, such as the control of group interaction rules and the coordination of users' actions or tasks. We propose an extension of the visual presentation/underlying data model currently followed when developing interactive single-user applications. We claim that groupware systems require two additional components: user-related data and group interaction rules. The former maintains information about active users, their roles, and their privileges, while the latter keeps the state of the current collaborative environment to control and coordinate user actions. Furthermore, our approach allows developers to build each system component separately, promoting the decomposition of the application's computational objects and its collaborative environment specification.","PeriodicalId":418941,"journal":{"name":"CLEI Electron. J.","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127463836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}