Title: Preface to the CLIHC 2019 Special Issue
Authors: P. C. Santana-Mancilla, Elba del Carmen Valderrama Bahamóndez
DOI: https://doi.org/10.19153/CLEIEJ.23.2.0 (CLEI Electronic Journal, 2020-09-22)
Title: 5PM and DoC - Formal Models for Game Design
Authors: R. Mora-Zamora, Esteban Brenes-Villalobos, F. Durán
DOI: https://doi.org/10.19153/CLEIEJ.23.2.1 (CLEI Electronic Journal, 2020-09-22)

Abstract: A formal language for game design is an endeavor that many academics and industry figures have been pursuing since the mid-nineties. One of the most renowned formal models for game design, the MDA Framework, includes steps to delimit and conceptualize the experience with a top-down approach. There is, however, a significant lack of highly detailed models for mechanic construction and for difficulty balancing. In this paper we propose two formal models for novice designers: the 5-Part-Model (5PM), for building and diagnosing game mechanics, and the Dimensions of Challenge (DoC) formal model, for balancing and fine-tuning difficulty in games.
Title: Applying a Usability Technique in the LibreOffice Writer Project
Authors: Lucrecia Llerena, Nancy Rodríguez, John W. Castro, S. T. Acuña
DOI: https://doi.org/10.19153/CLEIEJ.23.2.4 (CLEI Electronic Journal, 2020-09-22)

Abstract: As a result of the growth in non-developer users of OSS applications, usability has over the last ten years begun to attract the interest of the OSS community. The OSS community has some special characteristics that are an obstacle to the direct adoption of many usability techniques as specified in the HCI field. The aim of this research is to adapt the Personas usability technique and evaluate the feasibility of applying it to an OSS project from the viewpoint of the development team. The research method was a case study of the LibreOffice Writer OSS project. We formalized the application procedure of the adapted usability technique. We found that procedures for adopting usability techniques in OSS either did not exist or were not fully systematized. Additionally, we identified the adverse conditions that obstruct their adoption in OSS and propose the special adaptations required to overcome these obstacles. To avoid some of the adverse conditions, we created web artefacts (e.g., a wiki) that are very popular in the OSS field. Although some obstacles to applying the technique remain, it is feasible to apply the adapted Personas technique in an OSS project.
Title: Developing a Taxonomy for Software Engineering Education Through an Empirical Approach
Authors: Sebastián Pizard, Diego Vallespir
DOI: https://doi.org/10.19153/CLEIEJ.23.2.5 (CLEI Electronic Journal, 2020-09-22)

Abstract: Background: Just over 50 years after its birth, software engineering gathers more and more topics. This diversity, which shows how broad and prolific the area is, also greatly fragments its knowledge. Efforts to develop classifications and taxonomies can help order this knowledge. Objective: This work aims to contribute to organizing software engineering education knowledge, a sub-area in which formalization is still necessary. Method: We propose a process for the construction of controlled vocabularies. We instantiated this process twice: first, using automatic clustering techniques to analyze over 1,000 articles, and then focusing on concepts related to teaching techniques and methods. Results: We present a taxonomy of 60 terms that covers concepts to be taught, methods to use, and where to apply them. The 'teaching approaches and methods' category covers 26 terms with their definitions and most relevant references. Implications: The taxonomy can be used by teachers and researchers to understand the breadth of the field, to place their research initiatives in a broader context, and to conduct more rigorous literature searches. We believe it is necessary to continue expanding the taxonomy and to carry out validation activities, if possible including validation by experts.
Title: Preface to the CLEI 2019 Special Issue
Authors: Vladimir Villarreal, Gabriela Marín-Raventós, H. Cancela
DOI: https://doi.org/10.19153/cleiej.23.1.0 (CLEI Electronic Journal, 2020-04-01)

Abstract: This special issue of the CLEI Electronic Journal (CLEIej) consists of extended and revised versions of selected papers presented at the 44th Latin American Conference in Informatics (CLEI 2019), held in Panama City, Panama, from September 30th to October 4th, 2019.
Title: Semantic Segmentation of 3D Medical Images with 3D Convolutional Neural Networks
Authors: Alejandra Márquez Herrera, A. Cuadros-Vargas, H. Pedrini
DOI: https://doi.org/10.19153/cleiej.23.1.4 (CLEI Electronic Journal, 2020-04-01)

Abstract: A neural network is a mathematical model that can perform a task automatically or semi-automatically after learning from the knowledge provided to it. A Convolutional Neural Network (CNN) is a type of neural network that has been shown to efficiently learn tasks related to image analysis, such as image segmentation, whose main purpose is to find regions or separable objects within an image. A more specific type of segmentation, called semantic segmentation, guarantees that each region has a semantic meaning by assigning it a label or class. Since CNNs can automate image semantic segmentation, they have been very useful in the medical area, where they are applied to the segmentation of organs or abnormalities (tumors). This work aims to improve binary semantic segmentation of volumetric medical images acquired by Magnetic Resonance Imaging (MRI) using a pre-existing Three-Dimensional Convolutional Neural Network (3D CNN) architecture. We propose a loss function for training this 3D CNN that improves pixel-wise segmentation results. The loss function is formulated by adapting a similarity coefficient, used to measure the spatial overlap between the prediction and the ground truth, and then using it to train the network. As a contribution, the developed approach achieved good performance in a context where the pixel classes are imbalanced. We show how the choice of training loss function can affect the final quality of the segmentation. We validate our proposal on two medical image semantic segmentation datasets and compare the performance of the proposed loss function against other pre-existing loss functions used for binary semantic segmentation.
Title: A Real-Time Entity Monitoring based on States and Scenarios
Authors: M. Diván, M. Reynoso
DOI: https://doi.org/10.19153/cleiej.23.1.2 (CLEI Electronic Journal, 2020-04-01)

Abstract: Scenario: Current markets require online processing and analysis of data as soon as they arrive, in order to make decisions or take actions as soon as possible. PAbMM is a real-time processing architecture specialized in measurement projects, where processing is guided by measurement metadata derived from a measurement framework through the project definition. Objective: To extend the measurement framework by incorporating scenarios and entity states, so that the indicators' decision criteria can be interpreted online according to the current scenario and entity state, approximating their conditional likelihoods. Methodology: An extension based on entity and context states is proposed to implement scenarios and entity states. A memory structure based on an occurrence matrix is defined to approximate the associated conditional likelihoods while the data are processed. A new hierarchical complementary schema is introduced to foster project definition interoperability under the new concepts. The cincamipd library was extended to support the complementary schema. An application case is shown as a proof of concept. Results: A discrete simulation describes the times and sizes associated with the new schema as the volume of projects to update grows. The simulation results are very promising: only 0.308 seconds were needed to update 1,000 active projects. Conclusions: The simulation provides an applicability reference for analysing the approach's suitability against project requirements. This allows scenarios and entity states to be implemented so as to increase the fit between indicators and decision criteria according to the current scenario and entity state under analysis.
Title: Model-driven support for business process families with the Common Variability Language (CVL)
Authors: Daniel Calegari, Andrea Delgado, Leonel Peña
DOI: https://doi.org/10.19153/cleiej.23.1.3 (CLEI Electronic Journal, 2020-04-01)

Abstract: To achieve a business objective, organizations may require variants of the same business process, depending on the context in which they are enacted. Several proposals have emerged to deal with the variability of business processes, focused on modeling a so-called process family. These proposals try to avoid modeling each variant separately, which implies duplicating and maintaining the common parts. Few of them also address automatically deriving a process variant from the definition of a process family, which is a central and complex task. One of these proposals is the Common Variability Language (CVL), which allows variability to be represented transparently in a host language. This article explores the use of CVL together with the Business Process Model and Notation (BPMN 2.0) for modeling business process families, and the use of Model-Driven Engineering (MDE) techniques for the automatic generation of process variants. We also present a graphical tool supporting these ideas and a qualitative evaluation of the variability approach using the VIVACE framework.
Title: Tendencies in Multi-Agent Systems: A Systematic Literature Review
Authors: M. Falcó, Gabriela Robiolo
DOI: https://doi.org/10.19153/cleiej.23.1.1 (CLEI Electronic Journal, 2020-04-01)

Abstract: The application of Artificial Intelligence mechanisms allows the development of systems capable of solving very complex engineering problems. Multi-agent systems (MAS) are a paradigm that offers an alternative way to design distributed control systems. While research in this area grew exponentially before 2009, there is a need to understand the status quo of the field from 2009 to June 2017. This paper presents an extension of the results of a Systematic Literature Review (SLR) on Multi-Agent Systems, their applications, and research gaps, conducted following the Kitchenham and Wohlin guidelines. From the analysis of 279 papers (out of 3,522 candidates), our findings suggest that: a) there were 20 gaps related to agent-oriented methodologies; coordination, cooperation, and negotiation; and modelling, developing, testing, and debugging; b) 24 gaps related to specific domains (recycling, dynamic evacuation, hazard management, health care, industry, logistics and manufacturing, machine learning, ambient assisted living); and c) 14 gaps related to specific areas within MAS (A-Teams, dynamic MAS and mobile agents, ABMS, evolutionary MAS, and self-organizing MAS). These gaps specify lines of research where the MAS community must work to achieve the unification of the agent-oriented paradigm, as well as to strengthen ties with industry.
Title: A research model in didactics of programming
Authors: Sylvia da Rosa, F. Gómez
DOI: https://doi.org/10.19153/cleiej.23.1.5 (CLEI Electronic Journal, 2020-04-01)

Abstract: This paper presents a research model in didactics of programming elaborated within the theoretical framework of the epistemological theory of Jean Piaget. That theory explains the construction of scientific knowledge based on empirical studies carried out by Piaget over many years. The model arises from analyzing the results of applying principles of the theory, especially the triad of intra-inter-trans stages, to the empirical study of the construction of the concepts of algorithm, data structure, and program. The elaboration of the model contributes to the development of the didactics of programming and, more generally, of the didactics of computer science, since the model can be applied to other computer science topics. Didactics is a specific area within computer science, with its own foundations and methods, which studies in depth topics related to education in the discipline. Two empirical studies about the construction of knowledge of algorithms and data structures, and of the corresponding programs as executable objects, are briefly described to illustrate the model. Both examples use a search algorithm (binary and linear), and the implementations are in the programming language C.