The discovery of knowledge in textual databases is an approach that seeks implicit relationships between concepts scattered across different documents written in natural language, in order to identify new, useful knowledge. Text Mining techniques can assist in this process. Despite all the progress made, researchers in this area must still deal with the large number of false relationships generated by most of the available processes. A semantic approach that supports the understanding of the relationships may bridge this gap. Thus, the objective of this work is to support the identification of implicit relationships between concepts present in different texts, considering the verbal semantics of those relationships. To this end, analyses based on association rules were used together with metrics from complex networks and a verbal-semantics approach. In a case study, a set of texts on alternative medicine was selected, and the different extractions showed that the proposed approach facilitates the identification of implicit causal relationships.
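The association-rule analysis this abstract relies on can be illustrated with plain support/confidence mining over per-document concept sets, in the spirit of Swanson-style literature-based discovery. This is a minimal sketch, not the authors' implementation; the example concept sets, the thresholds and the `association_rules` helper are all hypothetical.

```python
from itertools import combinations

def association_rules(docs, min_support=0.4, min_confidence=0.6):
    """Mine pairwise rules A -> B from per-document concept sets."""
    n = len(docs)
    single, pair = {}, {}
    for concepts in docs:
        for c in concepts:
            single[c] = single.get(c, 0) + 1
        for a, b in combinations(sorted(concepts), 2):
            pair[(a, b)] = pair.get((a, b), 0) + 1
    rules = []
    for (a, b), count in pair.items():
        supp = count / n
        if supp < min_support:
            continue
        for ante, cons in ((a, b), (b, a)):
            conf = count / single[ante]
            if conf >= min_confidence:
                rules.append((ante, cons, supp, conf))
    return rules

# Toy corpus: each document reduced to its set of extracted concepts.
docs = [
    {"fish oil", "blood viscosity"},
    {"fish oil", "blood viscosity", "platelet aggregation"},
    {"blood viscosity", "Raynaud"},
    {"fish oil", "platelet aggregation"},
]
for ante, cons, supp, conf in association_rules(docs):
    print(f"{ante} -> {cons} (support={supp:.2f}, confidence={conf:.2f})")
```

Rules that pass both thresholds link concepts that frequently co-occur; the verbal-semantics layer described in the abstract would then be needed to filter the many spurious links such purely statistical mining produces.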
{"title":"A Semantic Approach to Uncovering Implicit Relationships in Textual Databases","authors":"D. G. Vasques, P. Martins, S. O. Rezende","doi":"10.1109/CLEI.2018.00065","DOIUrl":"https://doi.org/10.1109/CLEI.2018.00065","url":null,"abstract":"The discovery of knowledge in textual databases is an approach that seeks implicit relationships between concepts scattered across different documents written in natural language, in order to identify new, useful knowledge. Text Mining techniques can assist in this process. Despite all the progress made, researchers in this area must still deal with the large number of false relationships generated by most of the available processes. A semantic approach that supports the understanding of the relationships may bridge this gap. Thus, the objective of this work is to support the identification of implicit relationships between concepts present in different texts, considering the verbal semantics of those relationships. To this end, analyses based on association rules were used together with metrics from complex networks and a verbal-semantics approach. In a case study, a set of texts on alternative medicine was selected, and the different extractions showed that the proposed approach facilitates the identification of implicit causal relationships.","PeriodicalId":379986,"journal":{"name":"2018 XLIV Latin American Computer Conference (CLEI)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134106432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fernanda Papa, Valeria de Castro, Pablo Becker, E. Marcos, L. Olsina
In 2014, Rey Juan Carlos University (Spain) launched its Service Science, Management and Engineering degree program. A study conducted in 2016 formulated the hypothesis that the performance students show in the different modules of the program (i.e., Fundamentals, Technology, Personal Skills, Services and Companies) is affected by the modality of the secondary school they come from. Aiming to monitor the performance of first-year students of this program, in the present work we propose the use of a Holistic Evaluation Approach for Quality. This approach comprises a family of integrated evaluation strategies that support goals with diverse purposes, such as understanding, comparing and monitoring. In a nutshell, a strategy integrates a conceptual base with process and method specifications, which ensures that evaluation projects' results are consistent and comparable over time. In particular, in this work we use the monitoring strategy to understand the performance of students in the first year of the abovementioned program over the 2014-2017 period. The outcomes will serve as a basis for the development of a predictive model, which will allow the University to propose instruments aimed at improving student performance with respect to the predominant modality of provenance.
{"title":"Monitoring Strategy for Analyzing the First-Year University Students' Performance According to Their Profiles of Provenance","authors":"Fernanda Papa, Valeria de Castro, Pablo Becker, E. Marcos, L. Olsina","doi":"10.1109/CLEI.2018.00104","DOIUrl":"https://doi.org/10.1109/CLEI.2018.00104","url":null,"abstract":"In 2014, Rey Juan Carlos University (Spain) launched its Service Science, Management and Engineering degree program. A study conducted in 2016 formulated the hypothesis that the performance students show in the different modules of the program (i.e., Fundamentals, Technology, Personal Skills, Services and Companies) is affected by the modality of the secondary school they come from. Aiming to monitor the performance of first-year students of this program, in the present work we propose the use of a Holistic Evaluation Approach for Quality. This approach comprises a family of integrated evaluation strategies that support goals with diverse purposes, such as understanding, comparing and monitoring. In a nutshell, a strategy integrates a conceptual base with process and method specifications, which ensures that evaluation projects' results are consistent and comparable over time. In particular, in this work we use the monitoring strategy to understand the performance of students in the first year of the abovementioned program over the 2014-2017 period. The outcomes will serve as a basis for the development of a predictive model, which will allow the University to propose instruments aimed at improving student performance with respect to the predominant modality of provenance.","PeriodicalId":379986,"journal":{"name":"2018 XLIV Latin American Computer Conference (CLEI)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116625596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Aliaga, Ernesto Dufrechu, P. Ezzatti, E. S. Quintana‐Ortí
The solution of sparse linear systems of large dimension is an important stage in problems spanning a diverse range of applications. For this reason, a number of iterative solvers have been developed, among which ILUPACK integrates an inverse-based multilevel ILU preconditioner with appealing numerical properties. In this work we extend the iterative methods available in ILUPACK. Concretely, we develop a data-parallel implementation of the BiCGStab method for GPU hardware platforms that completes the functionality of ILUPACK-preconditioned solvers for general linear systems. The experimental evaluation, carried out on a hybrid platform comprising a multicore CPU and an Nvidia GPU, shows that our proposal achieves speedups between 5 and 10× over its CPU counterpart, and runtime reductions of up to 8.2× over other GPU solvers.
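The BiCGStab recurrence this work ports to the GPU is van der Vorst's classic scheme. Below is a minimal unpreconditioned NumPy sketch of that recurrence for reference; ILUPACK's actual solver applies the multilevel ILU preconditioner and runs on CUDA, so the `bicgstab` function and the small test system here are illustrative only.

```python
import numpy as np

def bicgstab(A, b, tol=1e-10, max_iter=1000):
    """Unpreconditioned BiCGStab for A x = b (van der Vorst, 1992)."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x
    r_hat = r.copy()                 # fixed shadow residual
    rho = alpha = omega = 1.0
    v = np.zeros(n)
    p = np.zeros(n)
    for _ in range(max_iter):
        rho_new = r_hat @ r
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho_new / (r_hat @ v)
        s = r - alpha * v
        if np.linalg.norm(s) < tol:  # early convergence on the half step
            x += alpha * p
            break
        t = A @ s
        omega = (t @ s) / (t @ t)
        x += alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            break
        rho = rho_new
    return x

# Small nonsymmetric, diagonally dominant test system.
A = np.array([[4.0, 1.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
x = bicgstab(A, b)
print(np.allclose(A @ x, b))
```

The data parallelism the paper exploits is visible in the structure: each iteration is dominated by two matrix-vector products and a handful of dot products and vector updates, all of which map naturally onto a GPU.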
{"title":"Extending ILUPACK with a GPU Version of the BiCGStab Method","authors":"J. Aliaga, Ernesto Dufrechu, P. Ezzatti, E. S. Quintana‚ÄëOrt√≠","doi":"10.1109/CLEI.2018.00092","DOIUrl":"https://doi.org/10.1109/CLEI.2018.00092","url":null,"abstract":"The solution of sparse linear systems of large dimension is an important stage in problems spanning a diverse range of applications. For this reason, a number of iterative solvers have been developed, among which ILUPACK integrates an inverse-based multilevel ILU preconditioner with appealing numerical properties. In this work we extend the iterative methods available in ILUPACK. Concretely, we develop a data-parallel implementation of the BiCGStab method for GPU hardware platforms that completes the functionality of ILUPACK-preconditioned solvers for general linear systems. The experimental evaluation, carried out on a hybrid platform comprising a multicore CPU and an Nvidia GPU, shows that our proposal achieves speedups between 5 and 10× over its CPU counterpart, and runtime reductions of up to 8.2× over other GPU solvers.","PeriodicalId":379986,"journal":{"name":"2018 XLIV Latin American Computer Conference (CLEI)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125792023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Derlis A. Garcete, José Luis Vázquez Noguera, Cynthia Villalba
Locating objects or people in an indoor environment quickly, accurately and at low cost is a pressing need in many scenarios. Examples include locating products in a warehouse, or quickly locating patients, medical personnel or equipment in a hospital. A location system is useful for health care, home care, stock control and inventory. In this context, this paper presents the design and implementation of a low-cost centralized indoor location system that uses BLE (Bluetooth Low Energy) technology together with a particle filter algorithm. Experimental results show that the system achieves an accuracy of 1.8 m at best, locating targets within 3 m in 74% of cases. The infrastructure cost scales with the number of objects or people to be located.
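A BLE positioning pipeline of this kind typically converts RSSI readings into ranges with a path-loss model and then runs a particle filter over candidate positions. The sketch below makes assumed choices throughout (a log-distance model with `tx_power` calibrated at 1 m, a random-walk motion model, a Gaussian range likelihood, and hypothetical beacon positions); it is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model; tx_power is the assumed RSSI at 1 m."""
    return 10 ** ((tx_power - np.asarray(rssi)) / (10 * n))

def particle_filter_step(particles, beacons, rssi, sigma=1.0):
    """One predict/update/resample cycle over 2D position particles."""
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0, 0.2, particles.shape)
    # Update: weight particles by agreement with RSSI-derived ranges.
    ranges = rssi_to_distance(rssi)
    dists = np.linalg.norm(particles[:, None, :] - beacons[None, :, :], axis=2)
    weights = np.exp(-((dists - ranges) ** 2).sum(axis=1) / (2 * sigma**2))
    weights /= weights.sum()
    # Resample proportionally to weight.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
# Noise-free simulated RSSI readings for a target at true_pos.
rssi = [-59.0 - 20 * np.log10(np.linalg.norm(true_pos - b)) for b in beacons]
particles = rng.uniform(0, 10, (2000, 2))
for _ in range(30):
    particles = particle_filter_step(particles, beacons, rssi)
print(particles.mean(axis=0))  # estimate clusters near (3, 4)
```

In a real deployment the RSSI values are noisy and the centralized server would run one such filter per tracked tag, which is why the paper notes that infrastructure cost grows with the number of targets.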
{"title":"Centralized indoor positioning system using bluetooth low energy","authors":"Derlis A. Garcete, José Luis Vázquez Noguera, Cynthia Villalba","doi":"10.1109/CLEI.2018.00109","DOIUrl":"https://doi.org/10.1109/CLEI.2018.00109","url":null,"abstract":"Locating objects or people in an indoor environment quickly, accurately and at low cost is a pressing need in many scenarios. Examples include locating products in a warehouse, or quickly locating patients, medical personnel or equipment in a hospital. A location system is useful for health care, home care, stock control and inventory. In this context, this paper presents the design and implementation of a low-cost centralized indoor location system that uses BLE (Bluetooth Low Energy) technology together with a particle filter algorithm. Experimental results show that the system achieves an accuracy of 1.8 m at best, locating targets within 3 m in 74% of cases. The infrastructure cost scales with the number of objects or people to be located.","PeriodicalId":379986,"journal":{"name":"2018 XLIV Latin American Computer Conference (CLEI)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122136915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Vargas, S. Novaes, Raphael Cóbe, R. Iope, S. Stanzani, T. Tomei
Deep neural networks provide the canvas to create models with millions of parameters that fit distributions involving an equally large number of random variables. The contribution of this study is twofold. First, we introduce a diffraction dataset containing computer-based simulations of a Young's interference experiment. Then, we demonstrate the adeptness of variational autoencoders to learn diffraction patterns and extract a latent feature that correlates with the physical wavelength.
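The core mechanics of a variational autoencoder — the reparameterization trick and the two terms of the negative ELBO — fit in a few lines of NumPy. This forward pass uses untrained linear layers and random inputs purely for illustration; the paper's architecture, training loop and diffraction data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def vae_forward(x, W_enc, W_mu, W_logvar, W_dec):
    """One VAE forward pass with toy linear layers, returning the loss."""
    h = np.tanh(x @ W_enc)
    mu, logvar = h @ W_mu, h @ W_logvar
    # Reparameterization trick: sample z in a differentiable way.
    eps = rng.standard_normal(mu.shape)
    z = mu + eps * np.exp(0.5 * logvar)
    x_hat = z @ W_dec
    # Negative ELBO = reconstruction error + KL(q(z|x) || N(0, I)).
    recon = ((x - x_hat) ** 2).sum(axis=1).mean()
    kl = 0.5 * (np.exp(logvar) + mu**2 - 1 - logvar).sum(axis=1).mean()
    return recon + kl

x = rng.standard_normal((8, 16))             # batch of 8 toy input vectors
W_enc = rng.standard_normal((16, 32)) * 0.1
W_mu = rng.standard_normal((32, 2)) * 0.1    # 2-d latent space
W_logvar = rng.standard_normal((32, 2)) * 0.1
W_dec = rng.standard_normal((2, 16)) * 0.1
loss = vae_forward(x, W_enc, W_mu, W_logvar, W_dec)
print(loss > 0)
```

A low-dimensional latent space like the 2-d one above is what makes the paper's result interpretable: after training, one latent coordinate can end up tracking a single physical factor of variation such as the wavelength.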
{"title":"Shedding Light on Variational Autoencoders","authors":"J. Vargas, S. Novaes, Raphael Cóbe, R. Iope, S. Stanzani, T. Tomei","doi":"10.1109/CLEI.2018.00043","DOIUrl":"https://doi.org/10.1109/CLEI.2018.00043","url":null,"abstract":"Deep neural networks provide the canvas to create models of millions of parameters to fit distributions involving an equally large number of random variables. The contribution of this study is twofold. First, we introduce a diffraction dataset containing computer-based simulations of a Young's interference experiment. Then, we demonstrate the adeptness of variational autoencoders to learn diffraction patterns and extract a latent feature that correlates with the physical wavelength.","PeriodicalId":379986,"journal":{"name":"2018 XLIV Latin American Computer Conference (CLEI)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126495600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diogo Alberto do Espírito Santo Saraiva, Bruno Rafael de Oliveira Rodrigues, Fernando Hadad Zaidan, Fernando Silva Parreiras
Computer Supported Cooperative Work (CSCW) is a research field focused on understanding the characteristics of interdependent group work, with the objective of designing adequate computer-based technology to support cooperative work processes. One of the key concepts of CSCW is the provision of relevant information to the workers in a team, a concept named awareness. Since the market and the research community have already perceived the importance of fast and reliable information flow among team workers, they share CSCW's interest in improving awareness. This leads to the following research question: what is the quality of awareness support in agile collaborative tools? To answer it, a survey was performed with 200 users, who provided feedback scores for each design element related to the support of different awareness aspects. We used a Formal Technical Review (FTR) method specifically focused on awareness assessment, named the Awareness Checklist. According to this method, there are 54 design elements that influence or contribute to awareness support. These elements can be grouped into 14 design categories, which are directly related to six awareness types: Collaboration, Location, Context, Social, Workspace and Situation. We found that Microsoft Team Foundation Server, Jira and Trello offer stronger support for collaboration awareness, whereas DotProject obtained the highest scores for Location, Context, Social and Situation awareness. The results make it possible to assess the quality of awareness support in any collaborative software used in business projects of any size, and to point out aspects of the software that can be improved to increase user satisfaction. The same concept can also be used to outline a tool's main advantages and disadvantages, acting as a quality review that helps choose which collaborative tools to adopt according to their strengths and weaknesses per category.
{"title":"Quality Assessment of Awareness Support in Agile Collaborative Tools","authors":"Diogo Alberto do Espírito Santo Saraiva, Bruno Rafael de Oliveira Rodrigues, Fernando Hadad Zaidan, Fernando Silva Parreiras","doi":"10.1109/CLEI.2018.00013","DOIUrl":"https://doi.org/10.1109/CLEI.2018.00013","url":null,"abstract":"Computer Supported Cooperative Work (CSCW) is a research field focused on understanding the characteristics of interdependent group work, with the objective of designing adequate computer-based technology to support cooperative work processes. One of the key concepts of CSCW is the provision of relevant information to the workers in a team, a concept named awareness. Since the market and the research community have already perceived the importance of fast and reliable information flow among team workers, they share CSCW's interest in improving awareness. This leads to the following research question: what is the quality of awareness support in agile collaborative tools? To answer it, a survey was performed with 200 users, who provided feedback scores for each design element related to the support of different awareness aspects. We used a Formal Technical Review (FTR) method specifically focused on awareness assessment, named the Awareness Checklist. According to this method, there are 54 design elements that influence or contribute to awareness support. These elements can be grouped into 14 design categories, which are directly related to six awareness types: Collaboration, Location, Context, Social, Workspace and Situation. We found that Microsoft Team Foundation Server, Jira and Trello offer stronger support for collaboration awareness, whereas DotProject obtained the highest scores for Location, Context, Social and Situation awareness. The results make it possible to assess the quality of awareness support in any collaborative software used in business projects of any size, and to point out aspects of the software that can be improved to increase user satisfaction. The same concept can also be used to outline a tool's main advantages and disadvantages, acting as a quality review that helps choose which collaborative tools to adopt according to their strengths and weaknesses per category.","PeriodicalId":379986,"journal":{"name":"2018 XLIV Latin American Computer Conference (CLEI)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114188152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computer Security is an increasingly important area, given the growing number and sophistication of threats in the digital world. The need for information protection contrasts with the shortage of professionals and the limited space dedicated to the area in Information Technology (IT) courses. Games and competitions have been used to motivate Computing students to improve their practical knowledge of the subject and to foster the interest of potential students and professionals in Security. Creating these games requires specialized knowledge to develop new problems, since novelty is important to reach the desired level of difficulty and to ensure competitiveness. This work proposes the use of randomization to generate problems, and entire competitions, in an automated way, yielding exclusive problem instances for each player. As proof of concept, a challenge-generation tool was developed to evaluate the proposal. Competitions with automatically generated problems were held with students of undergraduate and professional qualification courses in Computing at two different institutions. The students' performance in the competitions and their perceived satisfaction, interest and learning were analyzed. The results show that automatic challenge generation is feasible, and that the use of competitions in the teaching of Computer Security is motivating and effective for didactic purposes.
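One common way to obtain exclusive problem instances per player, as proposed here, is to derive a deterministic per-player seed from a shared competition secret, so each student gets a unique but reproducible challenge. The sketch below is an assumed design: the Caesar cipher, the flag format and all names are hypothetical, not the paper's tool.

```python
import hashlib
import random

def generate_challenge(player_id, competition_seed="clei2018"):
    """Derive a player-unique Caesar-cipher challenge from a shared seed."""
    # Deterministic per-player seed: the same player always gets the same
    # instance, which also allows the grader to regenerate the solution.
    seed = hashlib.sha256(f"{competition_seed}:{player_id}".encode()).hexdigest()
    prng = random.Random(seed)
    shift = prng.randrange(1, 26)
    flag = "flag{" + "".join(prng.choices("abcdef0123456789", k=12)) + "}"
    # The published artifact: the flag encrypted with the per-player shift.
    ciphertext = "".join(
        chr((ord(c) - 97 + shift) % 26 + 97) if c.isalpha() else c
        for c in flag
    )
    return {"ciphertext": ciphertext, "solution": flag}

a = generate_challenge("alice")
b = generate_challenge("bob")
print(a != b)                              # distinct instances per player
print(generate_challenge("alice") == a)    # reproducible for re-grading
```

Because instances are derived rather than stored, the organizer only needs to keep the competition seed secret, and players cannot share solutions.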
{"title":"Automatic Challenge Generation for Teaching Computer Security","authors":"Ricardo de la Rocha Ladeira, R. Obelheiro","doi":"10.1109/CLEI.2018.00098","DOIUrl":"https://doi.org/10.1109/CLEI.2018.00098","url":null,"abstract":"Computer Security is an increasingly important area, given the growing number and sophistication of threats in the digital world. The need for information protection contrasts with the shortage of professionals and the limited space dedicated to the area in Information Technology (IT) courses. Games and competitions have been used to motivate Computing students to improve their practical knowledge of the subject and to foster the interest of potential students and professionals in Security. Creating these games requires specialized knowledge to develop new problems, since novelty is important to reach the desired level of difficulty and to ensure competitiveness. This work proposes the use of randomization to generate problems, and entire competitions, in an automated way, yielding exclusive problem instances for each player. As proof of concept, a challenge-generation tool was developed to evaluate the proposal. Competitions with automatically generated problems were held with students of undergraduate and professional qualification courses in Computing at two different institutions. The students' performance in the competitions and their perceived satisfaction, interest and learning were analyzed. The results show that automatic challenge generation is feasible, and that the use of competitions in the teaching of Computer Security is motivating and effective for didactic purposes.","PeriodicalId":379986,"journal":{"name":"2018 XLIV Latin American Computer Conference (CLEI)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126589252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. Serafino, Benjamin Cicerchia, Juan P. Mitriatti, Agustín Balmer, Martín Faroppa, C. Russo, Hugo Ramón
This paper applies image registration to temporal sequences of maize-crop images digitally sensed in field trials with a robotic platform. Navigating the trials with this platform, equipped with visible-light and multispectral sensors, and applying the registration processes allows us to generate "image stacks" for different genetic varieties of maize, with the aim of obtaining a phenological characterization of each variety and comparing them. Beyond the basic problems of displacement, lighting and capture angle, there is the added difficulty that the objects (plants) are not the same and in many cases are not even similar to those in previous images (different phenological stages). Added to this is the complexity of sensing images in outdoor environments, and in particular the general conditions in the field (uneven path surface, climatic conditions). The proposed algorithms contribute to the construction of multilayer digital image data banks that, through further digital analysis and processing techniques, enable the automated identification and interpretation of the characteristics that determine the response of the different genetic varieties of maize under study.
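A standard building block for aligning displaced captures, which could serve as a starting point for pipelines like the one described, is FFT-based phase correlation. The sketch below recovers an integer translation between two images; the paper's registration process also has to cope with lighting, angle and phenological change, which this toy example does not address.

```python
import numpy as np

def phase_correlation(ref, img):
    """Estimate the integer (dy, dx) translation aligning img to ref."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(img)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12          # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real          # peaks at the shift offset
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks past the midpoint to negative shifts.
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
# Apply a known circular shift of (5, -3) and recover it.
shifted = np.roll(np.roll(ref, 5, axis=0), -3, axis=1)
print(phase_correlation(ref, shifted))
```

Once consecutive captures are aligned this way, stacking them per plot yields the "image stacks" from which per-variety phenological trajectories can be extracted.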
{"title":"Digital Recording of Temporal Sequences of Images Applied to the Analysis of the Phenological Evolution of Maize Crops","authors":"S. Serafino, Benjamin Cicerchia, Juan P. Mitriatti, Agustín Balmer, Martín Faroppa, C. Russo, Hugo Ramón","doi":"10.1109/CLEI.2018.00083","DOIUrl":"https://doi.org/10.1109/CLEI.2018.00083","url":null,"abstract":"This paper applies image registration to temporal sequences of maize-crop images digitally sensed in field trials with a robotic platform. Navigating the trials with this platform, equipped with visible-light and multispectral sensors, and applying the registration processes allows us to generate \"image stacks\" for different genetic varieties of maize, with the aim of obtaining a phenological characterization of each variety and comparing them. Beyond the basic problems of displacement, lighting and capture angle, there is the added difficulty that the objects (plants) are not the same and in many cases are not even similar to those in previous images (different phenological stages). Added to this is the complexity of sensing images in outdoor environments, and in particular the general conditions in the field (uneven path surface, climatic conditions). The proposed algorithms contribute to the construction of multilayer digital image data banks that, through further digital analysis and processing techniques, enable the automated identification and interpretation of the characteristics that determine the response of the different genetic varieties of maize under study.","PeriodicalId":379986,"journal":{"name":"2018 XLIV Latin American Computer Conference (CLEI)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114515457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
G. Plichoski, Chidambaram Chidambaram, R. S. Parpinelli
It is well known that face recognition (FR) systems cannot perform well under uncontrolled conditions, and there is no general, robust approach that is immune to all such conditions. Hence, we present an adjustable FR framework built around the Differential Evolution (DE) optimization algorithm. The framework implements several preprocessing and feature extraction techniques aimed at compensating for illumination variation. The main feature of this work is the use of DE to choose which strategies to apply and to tune the parameters involved. In this case study, we address the illumination compensation problem on the well-known Extended Yale B face dataset. Within the proposed FR framework, DE can choose any combination of the following techniques and tune their parameters to optimized values: Gamma Intensity Correction (GIC), Wavelet-Based Illumination Normalization (WBIN), Gaussian Blur, Laplacian Edge Detection, the Discrete Wavelet Transform (DWT), the Discrete Cosine Transform (DCT), and Local Binary Patterns (LBP). Our experimental analysis confirms that the proposed approach is suitable for FR on images captured under varying conditions, as demonstrated by the average recognition rate of 99.95% obtained across four different datasets.
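The DE/rand/1/bin scheme that typically underlies this kind of parameter tuning can be sketched as a plain NumPy loop. The fitness function below is a toy stand-in (the sphere function) for the paper's recognition-error objective, and all settings are illustrative; in the paper's setting each candidate vector would encode which preprocessing techniques to enable and their parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

def differential_evolution(fitness, bounds, pop_size=20, F=0.8, CR=0.9, gens=100):
    """Classic DE/rand/1/bin minimizer over box-constrained parameters."""
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (pop_size, dim))
    scores = np.array([fitness(ind) for ind in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation: combine three random individuals (i not excluded,
            # for brevity).
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with the current individual.
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            # Greedy selection: keep the better of trial and current.
            s = fitness(trial)
            if s < scores[i]:
                pop[i], scores[i] = trial, s
    return pop[scores.argmin()], scores.min()

best, score = differential_evolution(lambda p: float((p**2).sum()),
                                     bounds=[(-5, 5)] * 4)
print(score)  # converges toward 0
```

The greedy selection step is what lets DE tune continuous parameters (e.g., a gamma value) and near-discrete strategy choices in the same vector without gradients, which is why it suits pipeline configuration problems like this one.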
{"title":"An Adjustable Face Recognition System for Illumination Compensation Based on Differential Evolution","authors":"G. Plichoski, Chidambaram Chidambaram, R. S. Parpinelli","doi":"10.1109/CLEI.2018.00036","DOIUrl":"https://doi.org/10.1109/CLEI.2018.00036","url":null,"abstract":"It is well known that face recognition (FR) systems cannot perform well under uncontrolled conditions, and there is no general, robust approach that is immune to all such conditions. Hence, we present an adjustable FR framework built around the Differential Evolution (DE) optimization algorithm. The framework implements several preprocessing and feature extraction techniques aimed at compensating for illumination variation. The main feature of this work is the use of DE to choose which strategies to apply and to tune the parameters involved. In this case study, we address the illumination compensation problem on the well-known Extended Yale B face dataset. Within the proposed FR framework, DE can choose any combination of the following techniques and tune their parameters to optimized values: Gamma Intensity Correction (GIC), Wavelet-Based Illumination Normalization (WBIN), Gaussian Blur, Laplacian Edge Detection, the Discrete Wavelet Transform (DWT), the Discrete Cosine Transform (DCT), and Local Binary Patterns (LBP). Our experimental analysis confirms that the proposed approach is suitable for FR on images captured under varying conditions, as demonstrated by the average recognition rate of 99.95% obtained across four different datasets.","PeriodicalId":379986,"journal":{"name":"2018 XLIV Latin American Computer Conference (CLEI)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121785496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Clodis Boscarioli, L. Torres, G. Krüger, M. Oyamada
Data Warehouses have consolidated as the decision-support technology used by organizations that rely on OLAP applications to access stored data. As data volumes increase, more efficient approaches to processing them are needed. Both traditional (tuple-oriented) relational database management systems and columnar ones can be used, each with its own advantages depending on the Data Warehouse modeling. More normalized models are traditional among tuple-oriented relational databases, whereas denormalized ones perform better in columnar DBMSs. A comparative study between the MonetDB and PostgreSQL DBMSs, using TPC-H as a benchmark, is presented here to investigate which is better suited to managing a Data Warehouse. The results confirmed that MonetDB excels in denormalized environments, while PostgreSQL is better for normalized modeling. Overall, MonetDB stands out compared to PostgreSQL, with performance gains of almost 500% on the normalized model and over 1000% on the denormalized one.
{"title":"Evaluating the Impact of Data Modeling on OLAP Applications using Relacional and Columnar DBMS","authors":"Clodis Boscarioli, L. Torres, G. Krüger, M. Oyamada","doi":"10.1109/CLEI.2018.00062","DOIUrl":"https://doi.org/10.1109/CLEI.2018.00062","url":null,"abstract":"Data Warehouses have consolidated as the decision-support technology used by organizations that rely on OLAP applications to access stored data. As data volumes increase, more efficient approaches to processing them are needed. Both traditional (tuple-oriented) relational database management systems and columnar ones can be used, each with its own advantages depending on the Data Warehouse modeling. More normalized models are traditional among tuple-oriented relational databases, whereas denormalized ones perform better in columnar DBMSs. A comparative study between the MonetDB and PostgreSQL DBMSs, using TPC-H as a benchmark, is presented here to investigate which is better suited to managing a Data Warehouse. The results confirmed that MonetDB excels in denormalized environments, while PostgreSQL is better for normalized modeling. Overall, MonetDB stands out compared to PostgreSQL, with performance gains of almost 500% on the normalized model and over 1000% on the denormalized one.","PeriodicalId":379986,"journal":{"name":"2018 XLIV Latin American Computer Conference (CLEI)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129011945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}