Improving User Stories: A Case Study in the Chilean Banking Industry (doi: 10.1109/CLEI.2018.00020)
J. Gómez, Claudia López
Although a variety of user story refinement techniques have been proposed, there are still few empirical studies that assess their effectiveness in industry. This paper reports the results of a mixed-methods approach to evaluate the combined use of techniques for refining user stories in a Chilean banking organization. A combination of the INVEST, 3 Cs, and Specification by Example (SbE) techniques was proposed to improve the quality of the user stories being generated in the organization during its transition from traditional development methods to agility. To validate the proposal, a comparison of case studies was carried out: a group of user stories created and refined with the method previously used in the organization was contrasted with a second and a third group of stories for which the proposed technique was used. The results show that the combined use of INVEST, 3 Cs, and SbE improved the quality of the user stories and was associated with positive changes in the development team's motivation.
{"title":"Improving User Stories: A Case Study in the Chilean Banking Industry","authors":"J. Gómez, Claudia López","doi":"10.1109/CLEI.2018.00020","DOIUrl":"https://doi.org/10.1109/CLEI.2018.00020","url":null,"abstract":"Although a variety of user story refinement techniques have been proposed, there are still few empirical studies that assess their effectiveness in the industry. This paper reports the results of a mixed methods approach to evaluate the combined use of techniques for refining user stories in a Chilean banking organization. A combination of INVEST, 3 Cs and Specification by Example (SbE) techniques was proposed to improve the quality of user stories that were being generated in the organization during its transition from traditional development methods to agility. To validate the proposal, a comparison of case studies was carried out. A group of user stories that were created and refined with the method previously used in the organization was contrasted with a second and third group of stories for which the proposed technique was used. The results show that the combined use of INVEST, 3 Cs and SbE improved the user stories quality and was related to positive changes in the development team motivation.","PeriodicalId":379986,"journal":{"name":"2018 XLIV Latin American Computer Conference (CLEI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129563078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Refining Exoplanet Detection Using Supervised Learning and Feature Engineering (doi: 10.1109/CLEI.2018.00041)
M. Bugueño, F. Mena, Mauricio Araya
The field of astronomical data analysis has experienced an important paradigm shift in recent years. The automation of certain analysis procedures is no longer merely a desirable feature for reducing human effort, but a must-have asset for coping with the extremely large datasets that new instrumentation technologies are producing. In particular, the detection of transiting planets (bodies that move across the face of another body) is an ideal setting for intelligent automation. Knowing whether the variation within a light curve is evidence of a planet requires applying advanced pattern recognition methods to a very large number of candidate stars. Here we present a supervised learning approach to refine the results produced by a case-by-case analysis of light curves, harnessing the generalization power of machine learning techniques to predict the currently unclassified light curves. The method uses feature engineering to find a suitable representation for classification, and different performance criteria to evaluate the candidate representations and choose among them. Our results show that this automatic technique can help to speed up the very time-consuming manual process currently done by scientific experts.
{"title":"Refining Exoplanet Detection Using Supervised Learning and Feature Engineering","authors":"M. Bugueño, F. Mena, Mauricio Araya","doi":"10.1109/CLEI.2018.00041","DOIUrl":"https://doi.org/10.1109/CLEI.2018.00041","url":null,"abstract":"The field of astronomical data analysis has experienced an important paradigm shift in the recent years. The automation of certain analysis procedures is no longer a desirable feature for reducing the human effort, but a must have asset for coping with the extremely large datasets that new instrumentation technologies are producing. In particular, the detection of transit planets — bodies that move across the face of another body — is an ideal setup for intelligent automation. Knowing if the variation within a light curve is evidence of a planet, requires applying advanced pattern recognition methods to a very large number of candidate stars. Here we present a supervised learning approach to refine the results produced by a case-by-case analysis of light-curves, harnessing the generalization power of machine learning techniques to predict the currently unclassified light-curves. The method uses feature engineering to find a suitable representation for classification, and different performance criteria to evaluate them and decide. Our results show that this automatic technique can help to speed up the very time-consuming manual process that is currently done by scientific experts.","PeriodicalId":379986,"journal":{"name":"2018 XLIV Latin American Computer Conference (CLEI)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126644778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Examiner: Automatic Generation of "Good" Exams (doi: 10.1109/CLEI.2018.00096)
F. Torres-Rojas
As educators, we design, prepare, proctor, and grade hundreds of exams during our careers. From this overwhelming task, we collect little or no objective evidence about the quality of the exams themselves. Thus, at most there is an intuitive sense of what characterizes a good or a bad exam, and it is very likely that we blindly repeat in our exams the rights and wrongs of the past. There are metrics for the quality of an exam, and even metrics for the quality of each individual item in the exam. Using actual college courses, our research found experimental evidence that it is possible to predict with high accuracy, starting from historical statistical data, the quality metrics that an exam will exhibit even before applying it to a standard group of college students. With this result, we built an automatic system that generates "good" exams from an item bank enriched with statistical information from previous exams. In addition, powerful tools for the analysis and controlled adjustment of each exam and each item were developed.
{"title":"The Examiner: Automatic Generation of \"Good\" Exams","authors":"F. Torres-Rojas","doi":"10.1109/CLEI.2018.00096","DOIUrl":"https://doi.org/10.1109/CLEI.2018.00096","url":null,"abstract":"As educators, we must design, prepare, proctor and grade hundreds of exams during their careers. From this overwhelming task, we collect little or none objective evidence about the quality of the exams themselves. Thus, at most there is an intuitive learning about what characterizes a good or a bad exam. It is very likely that we blindly repeat in our exams rights and wrongs of the past. There exist metrics about the quality of an exam, and even metrics about the quality of each of the individual items in the exam. Using actual college courses, our research found experimental evidence that proves that it is possible to predict with great accuracy, parting from historical statistical data, the quality metrics that an exam will show even before applying it to a standard group of college students. With this result, we built an automatic system that generates \"good\" exams from an item bank enriched with statistical information from previous exams. Besides, powerful tools for analysis and controlled adjustment of each exam and each item were developed.","PeriodicalId":379986,"journal":{"name":"2018 XLIV Latin American Computer Conference (CLEI)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134575014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Hybrid Iterated Local Search Metaheuristic for the Flexible Job Shop Scheduling Problem (doi: 10.1109/CLEI.2018.00026)
Dayan de C. Bissoli, André R. S. Amaral
In the flexible job shop scheduling problem (FJSP) we have a set of jobs and a set of machines. A job is characterized by a set of operations that must be processed in a predetermined order. Each operation can be processed on a specific set of machines, and each of these machines can process at most one operation at a time, respecting the restriction that the current operation must be finished before a new one starts. A schedule is an assignment of operations to time intervals on machines. The classic objective of the FJSP is to find a schedule that minimizes the completion time of the jobs, called the makespan. Since the FJSP is NP-hard, solution methods based on metaheuristics are a good alternative: they explore the solution space in an intelligent way, obtaining high-quality, though not necessarily optimal, solutions at a reduced computational cost. To solve the FJSP, this article describes a hybrid iterated local search (HILS) algorithm that uses the simulated annealing (SA) metaheuristic as its local search. Computational experiments on a standard set of problem instances indicate that the proposed HILS implementation is robust and competitive with the best algorithms in the literature.
{"title":"A Hybrid Iterated Local Search Metaheuristic for the Flexible job Shop Scheduling Problem","authors":"Dayan de C. Bissoli, André R. S. Amaral","doi":"10.1109/CLEI.2018.00026","DOIUrl":"https://doi.org/10.1109/CLEI.2018.00026","url":null,"abstract":"In the flexible job shop scheduling problem (FJSP) we have a set of jobs and a set of machines. A job is characterized by a set of operations that must be processed in a predetermined order. Each operation can be processed in a specific set of machines and each of these machines can process at most one operation at a time, respecting the restriction that before starting a new operation, the current one must be finished. Scheduling is an assignment of operations at time intervals on machines. The classic objective of the FJSP is to find a schedule that minimizes the completion time of the jobs, called makespan. Considering that the FJSP is an NP-hard problem, solution methods based on metaheuristics become a good alternative, since they aim to explore the space of solutions in an intelligent way, obtaining high-quality but not necessarily optimal solutions at a reduced computational cost. Thus, to solve the FJSP, this article describes a hybrid iterated local search (HILS) algorithm, which uses the simulated annealing (SA) metaheuristic as local search. Computational experiments with a standard set of instances of the problem indicated that the proposed HILS implementation is robust and competitive when compared with the best algorithms of the literature.","PeriodicalId":379986,"journal":{"name":"2018 XLIV Latin American Computer Conference (CLEI)","volume":"137 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134526083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cluster Analysis of Homicide Rates in the Brazilian State of Goiás from 2002 to 2014 (doi: 10.1109/CLEI.2018.00060)
S. Sousa, Ronaldo de Castro Del-Fiaco, Lilian Berton
Homicide mortality is a worldwide concern and has occupied the agenda of researchers and public managers. In Brazil, homicide is the third leading cause of death in the general population and the first in the 15-39 age group. In South America, Brazil has the third-highest homicide mortality, behind Venezuela and Colombia. Measuring the impacts of violence is important for assessing health systems and criminal justice, as well as other areas. In this paper, we analyze the spatial distribution of homicide mortality in the state of Goiás, in the Center-West of Brazil, where the homicide rate increased from 24.5 per 100,000 in 2002 to 42.6 per 100,000 in 2014; moreover, the state ranked fifth in homicides in Brazil in 2014. We considered socio-demographic variables for the state, performed correlation analysis, and employed three clustering algorithms: K-means, density-based, and hierarchical clustering. The results indicate that homicide rates are higher in cities neighboring large urban centers, even though these cities have the best socioeconomic indicators.
{"title":"Cluster Analysis of Homicide Rates in the Brazilian State of Goiás from 2002 to 2014","authors":"S. Sousa, Ronaldo de Castro Del-Fiaco, Lilian Berton","doi":"10.1109/CLEI.2018.00060","DOIUrl":"https://doi.org/10.1109/CLEI.2018.00060","url":null,"abstract":"Homicide mortality is a worldwide concern and has occupied the agenda of researchers and public managers. In Brazil, homicide is the third leading cause of death in the general population and the first in the 15-39 age group. In South America, Brazil has the third highest homicide mortality, behind Venezuela and Colombia. To measure the impacts of violence it is important to assess health systems and criminal justice, as well as other areas. In this paper, we analyze the spatial distribution of homicide mortality in the state of Goiás, Center-West of Brazil, since the homicide rate increased from 24.5 per 100,000 in 2002 to 42.6 per 100,000 in 2014 in this location. Moreover, this state had the fifth position of homicides in Brazil in 2014. We considered socio-demographic variables for the state, performed analysis about correlation and employed three clustering algorithms: K-means, Density-based and Hierarchical. The results indicate the homicide rates are higher in cities neighbors of large urban centers, although these cities have the best socioeconomic indicators.","PeriodicalId":379986,"journal":{"name":"2018 XLIV Latin American Computer Conference (CLEI)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115249268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of Reproducibility and Accuracy of the Business Process Point Analysis Technique (doi: 10.1109/CLEI.2018.00073)
Natália Pereira de Oliveira, M. Fantinato, L. Thom
Functional size measurement techniques are easily found in the literature; however, how these techniques were evaluated is not always reported, which makes their validity questionable. This article evaluates the Business Process Point Analysis (BPPA) technique, aiming to consistently assess its reproducibility and accuracy and to identify its limitations. BPPA was proposed so that project managers can systematically estimate the functional size of a business process automation project. The article presents a quasi-experiment carried out with 58 graduate and postgraduate students, who measured the functional size of three business process models. The results of this experiment show the technique's low reproducibility and accuracy, as well as its limitations.
{"title":"Evaluation of Reproducibility and Accuracy of the Business Process Point Analysis Technique","authors":"Natália Pereira de Oliveira, M. Fantinato, L. Thom","doi":"10.1109/CLEI.2018.00073","DOIUrl":"https://doi.org/10.1109/CLEI.2018.00073","url":null,"abstract":"Techniques of functional size measurement are easily found in the literature, however, in the evaluation process of these techniques is not always approached which makes its validity questionable. The evaluation of the Business Process Point Analysis (BPPA) technique is the object of study of this article that aims to consistently evaluate its reproducibility and accuracy, identifying its limitations. BPPA was proposed so that project managers can systematically estimate the functional size of a business process automation project. Thus, this article presents the execution of a quasi-experiment realized with 58 graduate and postgraduate students, who measured the functional size of three business process models. The results of this experiment present the low reproducibility and accuracy of the technique as well as its limitations.","PeriodicalId":379986,"journal":{"name":"2018 XLIV Latin American Computer Conference (CLEI)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122215335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Genetic Algorithm for the Knapsack Problem with Irregular Shaped Items (doi: 10.1109/CLEI.2018.00031)
Layane Rodrigues de Souza Queiroz, L. Mundim, M. Andretta
This work solves the two-dimensional knapsack problem with irregularly shaped items. The concepts of inner-fit raster and no-fit raster are used to verify packing feasibility, i.e., that items do not overlap and are entirely contained inside the bin. The solution is obtained with a biased random-key genetic algorithm in which each chromosome encodes the order in which items should be packed into the bin and the rotation of each item. The chromosome also encodes which heuristic is used to place the items and the probability of an offspring inheriting genes from an elite parent. Three heuristics are adopted for positioning items: bottom-left, left-bottom, and horizontal zig-zag. Experiments on literature instances showed that the developed genetic algorithm is very effective: it obtained an optimal solution for 53.4% of the instances and improved the bin occupancy ratio by about 2.1% across all instances.
{"title":"Genetic Algorithm for the Knapsack Problem with Irregular Shaped Items","authors":"Layane Rodrigues de Souza Queiroz, L. Mundim, M. Andretta","doi":"10.1109/CLEI.2018.00031","DOIUrl":"https://doi.org/10.1109/CLEI.2018.00031","url":null,"abstract":"The two-dimensional knapsack problem with irregularly shaped items is solved in this work. It is utilized the concept of inner-fit raster and no-fit raster to verify packing feasibility, which stands for non-overlapping between items that are entirely contained inside the bin. The problem solution is obtained with a biased random-key genetic algorithm in which each chromosome contains information related to the order and rotation where each item should be packed into the bin. The chromosome also contains information about which heuristic has to be used to pack items and the probability of an offspring inheriting information from an elite parent. It is adopted three heuristics for positioning items, which are: bottom-left, left-bottom, and horizontal zig-zag. The experiments over literature instances showed that the developed genetic algorithm is very effective since it could obtain an optimal solution for 53.4% of the instances and improved the bin's occupancy ratio in about 2.1% when observing all the instances.","PeriodicalId":379986,"journal":{"name":"2018 XLIV Latin American Computer Conference (CLEI)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117090666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Application of Fuzzy Control in Embedded Sensing Systems for Beekeeping Monitoring (doi: 10.1109/CLEI.2018.00033)
Marcelly Homem Coelho, Vinicius Ferri Pereira, Edilene Cristiano de Figuere Valeriano, L. Frigo, Eliane Pozzebon
Beekeeping involves routine inspections and production management, and in some cases the beekeeper must intervene. For such interventions the hives must be opened, but opening them for inspection should be done only when necessary and in a way that interferes as little as possible with the activity of the colonies, because openings wear down the swarm: revisions usually cause honey consumption, bee mortality, and exposure of the frames to the environment. The objective of this article is to analyze the internal temperature and humidity, the smoke index, and the meteorological conditions associated with the apiary. Based on this objective, the following questions arise: how to monitor the hive, and how to generate knowledge from the collected data? To answer the first question, an embedded system consisting of internal and external sensing modules for the hives was developed; to address the second, a computational model was built using MATLAB. From the obtained results, it can be concluded that fuzzy logic is applicable to the treatment of multi-valued data and that the developed system helps beekeepers extract information without the need to interfere with the work of the colonies.
{"title":"Application of Fuzzy Control in Embedded Sensing Systems for Beekeeping Monitoring","authors":"Marcelly Homem Coelho, Vinicius Ferri Pereira, Edilene Cristiano de Figuere Valeriano, L. Frigo, Eliane Pozzebon","doi":"10.1109/CLEI.2018.00033","DOIUrl":"https://doi.org/10.1109/CLEI.2018.00033","url":null,"abstract":"In the activity of beekeeping, there are routine reviews, production anagement, in some eventuality it is necessary for the beekeeper to intervene. For this intervention, the hives must be opened. The opening for inspection should be done only when necessary and in order to interfere as little as possible in the activity of the colonies. Due to the fact that the openings cause wear to the swarm, since, during the revisions, usually occurs honey consumption, bee mortality and exposure of the pictures to the environment. The objective of the article is to analyze the temperature and internal humidity, smoke index and the meteorological conditions associated with the apiary. Based on this objective, the following questions arise: How to monitor the hive? And how to generate knowledge with the data collected? In order to answer the questions an embedded system was developed consisting of internal and external sensing modules for the hives. While the question of generating knowledge with the collected data was developed a computational model used the software MATLAB. From the obtained results, it can be concluded that the Fuzzy Logic is applicable in the treatment of data with multiple values and that the developed system helps the beekeepers to extract several information without the need to influence the work of the colonies.","PeriodicalId":379986,"journal":{"name":"2018 XLIV Latin American Computer Conference (CLEI)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122495183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Anonymization Service for Software-Defined Networks (doi: 10.1109/CLEI.2018.00089)
L. H. S. Bomfim, E. M. Salgueiro, R. J. P. B. Salgueiro
This paper presents an anonymization service for SDN that guarantees the availability of network services by concealing the stations. To this end, the BomIP anonymizer was developed in C using the libpcap library and then included as a service in the RunOS OpenFlow controller. To validate this service, three case studies were carried out in a simulated Denial of Service attack environment. The results show that BomIP has a running time 65% better than other anonymizers; in addition, it guarantees that all packets can be tracked and it mitigated 80% of the attack attempts, allowing the services provided by the network to continue running.
{"title":"An Anonymization Service for Software-Defined Networks","authors":"L. H. S. Bomfim, E. M. Salgueiro, R. J. P. B. Salgueiro","doi":"10.1109/CLEI.2018.00089","DOIUrl":"https://doi.org/10.1109/CLEI.2018.00089","url":null,"abstract":"This paper presents a service to make the anonymization in SDN to guarantee the availability of the services in the network through the concealment of the stations. Because of this it was developed the anonymizer BomIP using C Language and Libpcap Library, then it was included as a service on RunOS OpenFlow Controller. To validate this service it were made three cases studies on a simulated environment of a Denial of Service attack. The results shows that BomIP has a running time 65% better than others anonymizers, besides BomIP guarantees that all packets can be tracked and a mitigation of 80% from the attacks trials, supporting the services provides by the newtork to continue running.","PeriodicalId":379986,"journal":{"name":"2018 XLIV Latin American Computer Conference (CLEI)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123620400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gesture3DFramework: A Generic Gesture-Based Interaction Middleware Applied to 3D Environments (doi: 10.1109/CLEI.2018.00076)
Diego Passos Costa, P. Sampaio, Valéria Farinazzo Martins Salvador
Technological advances have enabled the development and wide adoption of different kinds of human-machine interfaces, which leads to new applications such as those based on multimedia and virtual reality (3D). In particular, interaction metaphors applied to 3D environments, which aim to replicate real-world concepts in the virtual environment, facilitate user interaction. The use of gestural interaction metaphors within a 3D environment can make the user experience more familiar and concrete, shortening the learning curve. However, in order to apply interaction metaphors, they must be classified and generalized so that they can be widely deployed in different applications. This paper introduces a generic and customizable solution for mediating user gestural interaction (selection, manipulation, and navigation) with heterogeneous rendering engines for virtual reality environments. This solution, called Gesture3DFramework, allows the user context and gesture-metaphor configuration to be easily customized so that it can adapt to multiple 3D virtual environments. With Gesture3DFramework, the end user (and developer) is provided with a higher level of abstraction for developing interactive virtual reality applications: once the configuration directives have been described, the system adapts itself to the specific interaction routines of the chosen rendering engine.
{"title":"Gesture3DFramework: A Generic Gesture-Based Interaction Middleware Applied to 3D Environments","authors":"Diego Passos Costa, P. Sampaio, Valéria Farinazzo Martins Salvador","doi":"10.1109/CLEI.2018.00076","DOIUrl":"https://doi.org/10.1109/CLEI.2018.00076","url":null,"abstract":"The technological advances provide the development and wide adoption of different kinds of humanmachine interfaces, which leads to the creation of new applications such as those based on multimedia and virtual reality (3D). In particular, the proposal of interaction metaphors applied to 3D environments which aim at replicating real world concepts into the virtual environment, facilitates user's interaction. The utilization of gestural interaction metaphors within a 3D environment can turn the user experience more familiar and concrete, making the training curb smaller. However, in order to apply interaction metaphors it is necessary their classification and generalization, so that they can be widely deployed in different applications. This paper introduces the development of a generic and customizable solution for the mediation of user gestural interaction (selection, manipulation and navigation) with heterogeneous rendering engines for virtual reality environments. This solution, called Gesture3DFramework, allows the users context and gesture-metaphors configuration to be easily customized so that it can be adaptable to multiple 3D virtual environments. With Gesture3DFramework, the final user (and developer) will be provided with a higher level of abstraction when it comes to the development of interactive virtual reality applications, since once the configuration directives have been described, the system will adapt itself to the specific interaction routines of the applied rendering engine.","PeriodicalId":379986,"journal":{"name":"2018 XLIV Latin American Computer Conference (CLEI)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128118238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}