Pub Date: 2021-10-25 | DOI: 10.1109/CLEI53233.2021.9640217
Rafael Adorno, Diego Galeano, D. Stalder, L. Cernuzzi, Alberto Paccanaro
Emerging infectious diseases such as COVID-19, caused by the SARS-CoV-2 virus, require systematic strategies to assist in the discovery of effective treatments. Drug repositioning, the process of finding new therapeutic indications for commercialized drugs, is a promising alternative to the development of new drugs, with lower costs and shorter development times. In this paper, we propose a recommendation system called geometric confidence non-negative matrix factorization (GcNMF) to assist in the repositioning of 126 broad-spectrum antiviral drugs for 80 viruses, including SARS-CoV-2. GcNMF models the non-Euclidean structure of the space using graphs and produces a ranked list of drugs for each virus. Our experiments reveal that GcNMF significantly outperforms other matrix decomposition methods at predicting missing drug-virus associations. Our analysis suggests that GcNMF could assist pharmacological experts in the search for effective drugs against viral diseases.
{"title":"A Recommender System Approach for Predicting Effective Antivirals","authors":"Rafael Adorno, Diego Galeano, D. Stalder, L. Cernuzzi, Alberto Paccanaro","doi":"10.1109/CLEI53233.2021.9640217","DOIUrl":"https://doi.org/10.1109/CLEI53233.2021.9640217","url":null,"abstract":"Emerging infectious diseases such as COVID-19, caused by the SARS-CoV-2 virus, require systematic strategies to assist in the discovery of effective treatments. Drug repositioning, the process of finding new therapeutic indications for commercialized drugs, is a promising alternative to the development of new drugs, with lower costs and shorter development times. In this paper, we propose a recommendation system called geometric confidence non-negative matrix factorization (GcNMF) to assist in the repositioning of 126 broad spectrum antiviral drugs for 80 viruses, including SARS-CoV-2. GcNMF models the non-Euclidean structure of the space using graphs, and produces a ranked list of drugs for each virus. Our experiments reveal that GcNMF significanlty outperforms other matrix decomposition methods at predicting missing drug-virus associations. Our analysis suggests that GcNMF could assist pharmacological experts in the search for effective drugs against viral diseases.","PeriodicalId":6803,"journal":{"name":"2021 XLVII Latin American Computing Conference (CLEI)","volume":"101 1","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77341945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-25 | DOI: 10.1109/CLEI53233.2021.9640155
Alethia Hume, Nicolás Ferreira, L. Cernuzzi
In today's world, characterized by the massive generation of data, a major problem arises when data manipulation violates people's privacy. To address this situation, different regulations and solutions have emerged to help users gain control over their data. However, many of these solutions require a certain level of knowledge in the field of privacy or present unclear information that does not give users real control over their data. Unfriendly user interface design adds to this, as one of the factors preventing users from managing their privacy settings effectively. Thus, in this work we explore the effect of applying Participatory Design (PD) techniques to the implementation of privacy-enhancing technologies. In particular, we focus on the use of PD to design a privacy dashboard that encourages users to engage with privacy issues and gives them greater control through a user interface that meets usability criteria. The evaluation of the PD process, which resulted in a high-fidelity prototype of the dashboard, shows encouraging results and greater user engagement in privacy management.
{"title":"The design of a privacy dashboard for an academic environment based on participatory design","authors":"Alethia Hume, Nicolás Ferreira, L. Cernuzzi","doi":"10.1109/CLEI53233.2021.9640155","DOIUrl":"https://doi.org/10.1109/CLEI53233.2021.9640155","url":null,"abstract":"In today's world, characterized by the massive generation of data, a major problem related to data manipulation occurs when people's privacy is violated. To face this situation different regulations and solutions, to help users get in control over their data, emerged. However, many of the solutions require a certain level of knowledge in the field of privacy or offer unclear information that does not facilitate a real control of their data by the users. Added to this is the unfriendly user interface design as one of the factors that prevents users from managing their privacy settings effectively. Thus, in this work we explore the effect of the application of Participatory Design (PD) techniques in the implementation of privacy enhancing technologies. In particular, we focus on the use of PD for the design of a privacy dashboard that encourages the immersion of users with privacy issues and gives them greater control with a user interface according to usability criteria. The evaluation of the PD process, which has resulted in a high-fidelity prototype of the dashboard, shows encouraging results and greater user immersion in privacy management.","PeriodicalId":6803,"journal":{"name":"2021 XLVII Latin American Computing Conference (CLEI)","volume":"110 1","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79299811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-25 | DOI: 10.1109/CLEI53233.2021.9639903
W. Silva, Elias de Oliveira, M. Curi, Jean-Rémi Bourguet
A subject's ability to express themselves demonstrates their ability to understand reality. Text production is one way to verify proficiency in such a skill. This correlation can help the teaching-learning process, since diagnosing learning depends on identifying possible instructional gaps, which in turn inform the design of better teaching strategies. In this article, we present an approach for characterizing learning profiles and estimating grades in the assessment of writing tests. For that, we used item response theory and machine learning techniques on the dataset of test scores from the Exame Nacional do Ensino Médio carried out in 2019. The results show that, using only 2k training instances out of the 3.7M available and only one of the five competencies evaluated, it is possible to predict the skill correctly with a p-value of 0.06 and a Pearson correlation of 0.94. Our approach shows the benefits of employing such techniques in a real-world scenario.
{"title":"Writing Proficiency Assessment: Regression Analysis of Item Response Theory supported by Machine Learning Techniques","authors":"W. Silva, Elias de Oliveira, M. Curi, Jean-Rémi Bourguet","doi":"10.1109/CLEI53233.2021.9639903","DOIUrl":"https://doi.org/10.1109/CLEI53233.2021.9639903","url":null,"abstract":"A subject's ability to express himself demonstrates his ability to understand reality. Text production is a way to verify the proficiency of such a skill. This correlation can help in the teaching-learning process since the learning diagnosis depends on the identification of possible instructional gaps, which subsidize the composition of better teaching strategies. In this article, we present an approach to characterizing learning profiles and estimating grades in the assessment of writing tests. For that, we used item response theory and machine learning techniques in the dataset of test scores of the Exame Nacional do Ensino Médio carried out in 2019. The results show that using a portion of only 2k training instances of the 3; 7M instances and only one of the five competencies evaluated, it is possible to have a correct prediction of the skill with a p-value 0:06 and pearson correlation of 0:94. Our approach shows the benefits of employing such techniques in a real-world scenario.","PeriodicalId":6803,"journal":{"name":"2021 XLVII Latin American Computing Conference (CLEI)","volume":"38 1","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84914828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-25 | DOI: 10.1109/CLEI53233.2021.9639975
Julio César Mello Román, José Luis Vázquez Noguera, H. Legal-Ayala, Diego Pinto, M. Monteiro, Jesús César Ariel López Colmán
The acquisition of microscopic images of minerals with good contrast is critical for the identification and analysis of their properties. However, in many cases, the microscopic images of minerals obtained are unclear due to the imaging environment, imperfect adjustments by microscope operators, or improper collection of samples. In this paper, we present an algorithm to enhance microscopic images of minerals with a multiscale top-hat transform using contrast adjustment weights. First, the multiple dark and bright features of the mineral image are extracted using the top-hat transform. Second, the bright scale differences and dark scale differences from the previous step are calculated. Third, the intensities of the multiple dark and bright features from the previous steps are summed separately. Finally, the bright features, adjusted by a contrast weight, are added to the image, and the dark features, adjusted by the same weight, are subtracted from it. Experimental results on various kinds of microscopic mineral images verified that the proposed method effectively enhances contrast and improves the detail and spatial information of the images.
{"title":"Microscopy Mineral Image Enhancement Using Multiscale Top-Hat Transform","authors":"Julio César Mello Román, José Luis Vázquez Noguera, H. Legal-Ayala, Diego Pinto, M. Monteiro, Jesús César Ariel López Colmán","doi":"10.1109/CLEI53233.2021.9639975","DOIUrl":"https://doi.org/10.1109/CLEI53233.2021.9639975","url":null,"abstract":"The acquisition of microscopic images of minerals with good contrast is critical for the identification and analysis of their properties. However, in many cases, the microscopic images of minerals obtained are unclear due to the image environment, imperfect adjustment of the microscopy operators or improper collection of samples. In this paper, we present an algorithm to enhance the microscopic images of minerals by multiscale Top-Hat transform using contrast adjustment weights. First, the multiple dark and bright features of the mineral image are extracted using the top-hat transform. Secondly, bright scale differences and dark scale differences obtained in the previous step are calculated. Third, all the intensities of the multiple dark and bright features from the previous steps are summed separately. Finally, the bright features adjusted for a contrast weight are then added to the image and dark features adjusted for the same weight are subtracted from the image. Experimental results on various kinds of microscopic mineral images verified the effective performance of this proposed enhancing the contrast, improving the detail and spatial information about the images","PeriodicalId":6803,"journal":{"name":"2021 XLVII Latin American Computing Conference (CLEI)","volume":"70 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82126320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-25 | DOI: 10.1109/CLEI53233.2021.9640055
Harry Jonathan Márquez Muñoz, Yudith Cardinale
User-centric applications have recently gained popularity, particularly in the tourism domain, in order to satisfy the individual needs of users and to provide personalized information. In turn, there is a need to model user profiles, considering different aspects of the information related to the users themselves and their context. However, there is a lack of standardization in representing such information. The Semantic Web appears to be a clear solution for the formal representation of this knowledge, due to its capacity for organization and reasoning, in particular through ontologies. Existing works propose ontologies to model the user profile, but they cover only partial aspects of the required information, are applicable only to specific applications, and do not propose a generalized user profile model for the tourism domain. This work proposes the development of the GENTE ontology, a GENeral ontology for Tourism Environments, which represents the different dimensions of the information related to users and their context. In addition, techniques to infer characteristics, preferences, interests, and behaviors of users from their social networks are proposed and developed.
{"title":"GENTE: An Ontology to Represent Users in the Tourism Context","authors":"Harry Jonathan Márquez Muñoz, Yudith Cardinale","doi":"10.1109/CLEI53233.2021.9640055","DOIUrl":"https://doi.org/10.1109/CLEI53233.2021.9640055","url":null,"abstract":"User-centric applications have recently gained popularity in particular in the tourism domain, in order to satisfy the individual needs of users and to provide personalized information. In turn, there is a need on modeling user profiles, considering different aspects of the information related to the user himself and his context. However, there is a lack of standardization to represent such information. The Semantic Web seems to be a clear solution for the formal representation of this knowledge, due to its capacity for organization and reasoning, in particular through ontologies. There are works that propose ontologies to model the user profile, but only cover partial aspects of the information required and are only applicable to specific applications, they do not propose a generalized user profile model applicable to the domain of tourism. This work proposes the development of GENTE ontology, a GENeral ontology for Tourism Environments, that represents the different dimensions of the information related to users and their context. In addition, techniques to infer characteristics, preferences, interests, and behaviors of users, from their social networks are proposed and developed.","PeriodicalId":6803,"journal":{"name":"2021 XLVII Latin American Computing Conference (CLEI)","volume":"22 1","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81296039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-25 | DOI: 10.1109/CLEI53233.2021.9640012
R. Vásquez, José Eduardo Ochoa Luna
Personality detection is a well-known field in Artificial Intelligence. Similar to sentiment analysis, it classifies a text into labels that denote common patterns according to personality models such as the Big-5 or the Myers-Briggs Type Indicator (MBTI). Personality detection could be useful for recommendation systems, improvements in health care and counseling, forensics, and job screening, to name a few applications. Most works on personality detection use traditional machine learning approaches that rely on open dictionaries and tokenizers, resulting in low performance and replication issues. In contrast, Deep Learning Transformer models have gained popularity for their high performance. In this research, we propose several Transformer approaches for detecting personality according to the MBTI personality model and compare them to determine the most suitable one for this task. In our experiments on the MBTI Kaggle benchmark dataset, we achieved 88.63% accuracy and an 88.97% F1-score, which allows us to outperform current state-of-the-art results.
{"title":"Transformer-based Approaches for Personality Detection using the MBTI Model","authors":"R. Vásquez, José Eduardo Ochoa Luna","doi":"10.1109/CLEI53233.2021.9640012","DOIUrl":"https://doi.org/10.1109/CLEI53233.2021.9640012","url":null,"abstract":"Personality Detection is a well-known field in Artificial Intelligence. Similar to Sentiment Analysis, it classifies a text in various labels that denote common patterns according to personality models such as Big-5 or Myers-Briggs Type Indicator (MBTI). Personality detection could be useful for recommendation systems, improvements in health care and counseling, forensics, job screening, to name a few applications. Most of the works on personality detection use traditional machine learning approaches which rely on open dictionaries and tokenizers resulting in low performance and replication issues. In contrast, Deep Learning Transformer models have gained popularity for their high performance. In this research, we propose several Transformer approaches for detecting personality according to the MBTI personality model and compare them to find out the most suitable for this task. In our experiments on the MBTI Kaggle benchmark dataset, we achieved 88.63% in terms of accuracy and 88.97% of F1-Score which allow us to outperform current state-of-the-art results.","PeriodicalId":6803,"journal":{"name":"2021 XLVII Latin American Computing Conference (CLEI)","volume":"75 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88361567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-25 | DOI: 10.1109/CLEI53233.2021.9640110
Tadeo R. Saldivar-Patiño, Jorge L. Recalde-Ramírez, María M. López, Diego Pinto
The educational infrastructure in the Department of Caazapá, as in other regions of Paraguay, presents characteristics that do not favor the development of the educational process. The department currently has 469 schools, with an average of 83 students per school. Considering also that 62% of schools have fewer than 15 students per class, it can be inferred that infrastructure is underutilized and that a large share of schools incur cost overruns. In contrast, 1% of the schools have, on average, more than 49 students per classroom. This inefficient distribution of schools leads to high investment costs for improving and maintaining schools, as well as resource management problems. It is therefore imperative to apply strategies oriented toward optimizing the available resources. This study adopts a mixed-integer linear programming model to select schools so as to minimize operating costs, investment in infrastructure, and transportation. We combine operations research techniques with geographic information systems to analyze the problem and interpret the results. The results show opportunities for improvement in the design of the educational network, and it is feasible to reduce investment costs by consolidating demand into fewer establishments than currently exist. Additionally, this would also generate economies of scale that optimize the operating costs of the establishments.
{"title":"Schools selection in the Department of Caazapá applying mathematical programming","authors":"Tadeo R. Saldivar-Patiño, Jorge L. Recalde-Ramírez, María M. López, Diego Pinto","doi":"10.1109/CLEI53233.2021.9640110","DOIUrl":"https://doi.org/10.1109/CLEI53233.2021.9640110","url":null,"abstract":"The educational infrastructure in the Department of Caazapá, as in other regions of Paraguay, presents characteristics that do not favor the development of the educational process. Caazapá currently has 469 schools in this department, and the average number of students per school is 83. If we also consider that 62% of schools have less than 15 students per class, it can be inferred that there is an underutilization of the infrastructure and cost overruns in large part of the schools. In contrast, 1% of the schools have on average more than 49 students per classroom. This inefficient distribution of schools causes high investment costs for improving and maintaining schools and resource management problems. It is imperative to the application of strategies that are oriented to the optimization of available resources. This study adopts a mixed-integer linear programming model to select schools to minimize operating costs, investment in infrastructure, and transportation. We combine operation research techniques with geographic information systems to analyze the problem and interpret the results. The results show opportunities for improvement in the design of the educational network, and it is feasible to reduce investment costs by consolidating the demand in fewer establishments than currently exists. Additionally, this result would also allow generating economies of scale to optimize the operating costs of the establishments.","PeriodicalId":6803,"journal":{"name":"2021 XLVII Latin American Computing Conference (CLEI)","volume":"161 1","pages":"1-9"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78533698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-25 | DOI: 10.1109/CLEI53233.2021.9640139
P. Montaña, Adriana Marotta
This article presents an experience carried out in a Uruguayan health institution to evaluate and adapt the quality of its patient data to the national requirements for integration into the National Electronic Medical Record. First, the international and national context is presented with respect to the standards and methods applied to health information. Then, the process followed by the institution is described, from the initial analysis of the state of its data to the final evaluation results and the outline of an action plan to improve data quality.
{"title":"Data Quality Management oriented to the Electronic Medical Record","authors":"P. Montaña, Adriana Marotta","doi":"10.1109/CLEI53233.2021.9640139","DOIUrl":"https://doi.org/10.1109/CLEI53233.2021.9640139","url":null,"abstract":"This article presents an experience carried out in a Uruguayan health institution to evaluate and adapt the quality of its patient data to the national requirements for integration into the National Electronic Medical Record. First, the international and national context is presented with respect to the standards and methods applied for health information. Then the process followed by the institution is described, from the initial analysis of the situation of its data to the final results of the evaluation and perspectives of an action plan to improve the quality of its data.","PeriodicalId":6803,"journal":{"name":"2021 XLVII Latin American Computing Conference (CLEI)","volume":"313 1","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73233312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-25 | DOI: 10.1109/CLEI53233.2021.9640031
Fabian Fallas-Moya, Jeremiah Duncan, Tabitha K. Samuel, Amir Sadovnik
Reinforcement Learning has been widely applied to playing classic games, where agents learn the rules by playing the game themselves. Recent work in general Reinforcement Learning uses many improvements, such as memory replay, to boost results and reduce training time, but we have not found research that focuses on the impact of memory replay in agents that play simple classic video games. In this research, we present an analysis of the impact of three different memory replay techniques on the performance of a Deep Q-Learning model, using different difficulty levels of the Pacman video game. We also propose a multi-channel image, a novel way to create input tensors for training the model inspired by one-hot encoding, and we show in the experiments that it improves performance. We find that our model is able to learn faster than previous work and is even able to learn to consistently win on the mediumClassic board after only 3,000 training episodes, previously thought to take much longer.
{"title":"Measuring the Impact of Memory Replay in Training Pacman Agents using Reinforcement Learning","authors":"Fabian Fallas-Moya, Jeremiah Duncan, Tabitha K. Samuel, Amir Sadovnik","doi":"10.1109/CLEI53233.2021.9640031","DOIUrl":"https://doi.org/10.1109/CLEI53233.2021.9640031","url":null,"abstract":"Reinforcement Learning has been widely applied to play classic games where the agents learn the rules by playing the game by themselves. Recent works in general Reinforcement Learning use many improvements such as memory replay to boost the results and training time but we have not found research that focuses on the impact of memory replay in agents that play simple classic video games. In this research, we present an analysis of the impact of three different techniques of memory replay in the performance of a Deep Q-Learning model using different levels of difficulty of the Pacman video game. Also, we propose a multi-channel image - a novel way to create input tensors for training the model - inspired by one-hot encoding, and we show in the experiment section that the performance is improved by using this idea. We find that our model is able to learn faster than previous work and is even able to learn how to consistently win on the mediumClassic board after only 3,000 training episodes, previously thought to take much longer.","PeriodicalId":6803,"journal":{"name":"2021 XLVII Latin American Computing Conference (CLEI)","volume":"11 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84321695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-25 | DOI: 10.1109/CLEI53233.2021.9639958
Helver Novoa Mendoza, W. J. Giraldo, Emilio Granell, F. Giraldo
This work is framed within the domain of software engineering, specifically the subdomain of user interface evaluation. Its context comprises the phenomenon of visual attention and its evaluation through indicators of user interface quality. Specifically, it presents a model for the prediction of visual attention based on saliency maps, machine learning, and biometric data. Its objective is to serve as support for promoting the usability of user interfaces. Eye-tracking experiments conducted by the Institute for Cognitive Sciences at the University of Osnabrück and the University Medical Center Hamburg-Eppendorf, including free-viewing tasks on user interfaces such as web pages, formed the input with which the model was developed. Its general structure consists of two elements: a convolutional neural network and Guided Grad-CAM (a convolutional-layer visualization method). Biometric components were used to train the network: images whose size was set as a function of the foveal radius and the user's distance from the interface. Natural units of information (nats) were used as a measure to evaluate the accuracy of the model.
{"title":"Visual Attention Prediction Model Based on Prominence Maps, Machine Learning and Biometric Data","authors":"Helver Novoa Mendoza, W. J. Giraldo, Emilio Granell, F. Giraldo","doi":"10.1109/CLEI53233.2021.9639958","DOIUrl":"https://doi.org/10.1109/CLEI53233.2021.9639958","url":null,"abstract":"This work is framed in the domain of software engineering. Specifically, it is situated in the subdomain of user interface evaluation. The context of the same comprises the phenomenon of visual attention and its evaluation through indicators that allow evaluating the quality of these interfaces. Specifically, it presents a model for the prediction of visual attention based on saliency maps, machine learning and biometric data. Its objective is to serve as a support to promote the usability of user interfaces. Experiments carried out with the eye tracker by the Institute for Cognitive Sciences at the University of Osnabrück and the University Medical Center in Hamburg-Eppendorf, among which free visualization tasks on user interfaces such as web pages, formed the input with which the model was developed. Its general structure consists of two elements: a convolutional neural network and Guided Grad-CAM (a convolutional layer visualization method). Biometric components were used to train the network: images whose size was set as a function of the foveal radius and the user's distance from the interface. The natural units of information (nats) were used as a measure to evaluate the accuracy of the model.","PeriodicalId":6803,"journal":{"name":"2021 XLVII Latin American Computing Conference (CLEI)","volume":"35 1","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82656560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}