Gustavo Betarte, J. Campo, Andrea Delgado, Laura González, Álvaro Martín, R. Martínez, Bárbara Muracciole
Since the beginning of 2020, COVID-19 has had a strong impact on the health of the world population. The most widely used approach to stop the epidemic is the application of classic epidemic controls such as case isolation, contact monitoring, and quarantine, as well as physical distancing and hygiene measures. Tracing the contacts of infected people is one of the main strategies for controlling the pandemic. Manual contact tracing is a slow, error-prone process (contacts may be omitted or forgotten) and is vulnerable in terms of security and privacy. Furthermore, it must be carried out by specially trained personnel and is not effective in identifying contacts with strangers (for example, in public transport or supermarkets). Given the high rates of contagion, which make effective manual contact tracing difficult, multiple initiatives arose to develop digital proximity tracing technologies. In this paper, we discuss in depth the security and personal data protection requirements that these technologies must satisfy, and we present an exhaustive and detailed list of the various applications that have been deployed globally, as well as the underlying infrastructure models and technologies they use. In particular, we identify potential threats that could undermine the satisfaction of the analyzed requirements, violating prevailing personal data protection regulations.
"Contact tracing solutions for COVID-19: applications, data privacy and security". CLEI Electron. J., published 2022-05-24. DOI: https://doi.org/10.19153/cleiej.25.2.4
Claudina Rattaro, Gabriela Pereyra, Lucas Inglés, P. Belzarena
5G is the new 3GPP technology designed to satisfy a wide range of requirements. On the one hand, it must support high bit rates and ultra-low-latency services; on the other, it should connect a massive number of devices with loose bandwidth and delay requirements. Network Slicing is a key paradigm in 5G, and future 6G networks will inherit it for the concurrent provisioning of diverse quality of service. Since scheduling is always a sensitive, vendor-specific topic and few free and complete simulation tools support all 5G features, in this paper we present Py5cheSim. Py5cheSim is a flexible, open-source simulator based on Python and oriented specifically toward simulating cell capacity in 3GPP 5G networks and beyond. To the best of our knowledge, Py5cheSim is the first simulator that supports Network Slicing at the Radio Access Network level. It offers an environment for developing new scheduling algorithms in a researcher-friendly way, without requiring detailed knowledge of the tool's core. The present work describes its design and implementation choices, the validation process and results, and different use cases.
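To give an idea of the kind of slice-aware resource allocation such a simulator deals with, here is a minimal sketch (not Py5cheSim's actual API; the function and slice names are invented for illustration) of splitting a cell's physical resource blocks (PRBs) between two slices according to configured weights:

```python
# Toy illustration only: proportional PRB split between network slices.
# "eMBB" and "URLLC" are standard 5G service categories; the weights are made up.

def allocate_prbs(total_prbs, slice_shares):
    """Split total_prbs between slices proportionally to their relative weights.

    Leftover PRBs from integer rounding go to the slice with the largest weight.
    """
    total_weight = sum(slice_shares.values())
    alloc = {s: (total_prbs * w) // total_weight for s, w in slice_shares.items()}
    leftover = total_prbs - sum(alloc.values())
    heaviest = max(slice_shares, key=slice_shares.get)
    alloc[heaviest] += leftover
    return alloc

alloc = allocate_prbs(100, {"eMBB": 3, "URLLC": 1})
print(alloc)  # {'eMBB': 75, 'URLLC': 25}
```

A real RAN scheduler would of course re-run such an allocation every transmission interval and combine it with per-user channel state; this sketch only shows the inter-slice split.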
"An Open Source Multi-Slice Cell Capacity Framework". CLEI Electron. J., published 2022-05-24. DOI: https://doi.org/10.19153/cleiej.25.2.2
Clarice de Azevedo Souza, Micael Oliveira, J. Bessa, Kelson Mota, Rosiane de Freitas
This article addresses a fast way to reconstruct and validate the three-dimensional molecular conformation of SARS-CoV-2 proteins, focusing on the most worrying variant discovered in patients from Brazil up to late 2021, the lineage B.1.1.28/P.1. The proposed methodology is based on the sequencing of virus proteins and the incorporation of mutations in silico; the structures are then computationally reconstructed using an enumerative feasibility algorithm and validated via the Ramachandran diagram and structural alignment, followed by a study of structural stability through classical molecular dynamics. For the resulting ACE2-RBD complex, the valid solution placed 97.06% of the residues in the most favorable region, while the reference crystallographic structure reached 95.0%, a very small difference that reveals the consistency of the developed algorithm. Another important result was the low RMSD between the best solution found by the BP algorithm and the reference structure: 0.483 Å. Finally, molecular dynamics indicated greater structural stability in the ACE2-RBD interaction with the P.1 strain, which could be a plausible explanation for a convergent evolution that increases the interaction affinity with the ACE2 receptor.
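The RMSD figure quoted above is the standard root-mean-square deviation between corresponding atom positions of two superposed structures. As a minimal sketch (with made-up coordinates, and assuming the structures have already been optimally aligned, e.g. by the Kabsch algorithm):

```python
# Standard RMSD between two conformations, assuming prior superposition.
# Coordinates below are illustrative, not actual protein data.
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two equal-length lists of (x, y, z)."""
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

reference = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
model     = [(0.0, 0.0, 0.1), (1.5, 0.1, 0.0)]
print(round(rmsd(reference, model), 3))  # 0.1
```

An RMSD of 0.483 Å, as reported, indicates near-atomic agreement with the crystallographic reference.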
"3D structural prediction, analysis and validation of Sars-Cov-2 protein molecules". CLEI Electron. J., published 2022-05-24. DOI: https://doi.org/10.19153/cleiej.25.2.9
Alethia Hume, L. Cernuzzi, José Luis Zarza, Ivano Bison, D. Gática-Pérez
In the context of the project "WeNet: Internet of us" we are studying the role of diversity in Internet-mediated social interactions. In this paper, in particular, we analyze a possible relationship between personality and social interaction mediated by digital platforms. More specifically, we rely on the five personality traits (Extraversion, Agreeableness, Conscientiousness, Emotional Stability and Openness to Experience), commonly referred to as the "Big-Five", and associate them with automatically extracted behavioral characteristics derived from the experience of using a Chatbot in a closed community of students at the Universidad Católica "Nuestra Señora de la Asunción" (UC). The personality data come from self-reports made by the users through questionnaires. According to a survey of the participants, the results show very positive overall appraisals of the Chatbot in terms of user experience and its main functionalities, which is very encouraging for future pilots. As for the role of personality in Chatbot use, although further experiments are required to confirm trends, the results suggest that the Big-Five personality traits are to some extent correlated with: active participation (Agreeableness and Openness); the type of contribution, in terms of the length of questions/requests for help and answers (Agreeableness, Neuroticism and Openness); and the evolution of the network of interactions over time (Openness and Neuroticism).
"Analysis of the Big-Five personality traits in the Chatbot "UC - Paraguay"". CLEI Electron. J., published 2022-05-24. DOI: https://doi.org/10.19153/cleiej.25.2.10
Ear recognition has recently gained attention within the biometrics community. Ear images can be captured from a distance without contact, and the explicit cooperation of the subject is not needed. In addition, ears do not change drastically over time and are not affected by facial expressions. All these characteristics are convenient when implementing surveillance and security applications. At the same time, applying any Deep Learning (DL) algorithm usually demands large amounts of samples to train networks. Thus, we introduce a large-scale database and explore fine-tuning pre-trained Convolutional Neural Networks (CNNs) to adapt them to ear images taken under uncontrolled conditions. We built an ear dataset from the VGGFace dataset, leveraging advances in the face recognition field. Moreover, according to our experiments, adapting the VGGFace model to the ear domain leads to better performance than using a model trained on general image recognition. The efficiency of the trained models has been tested on the UERC dataset, achieving a significant improvement of around 9% over approaches in the literature. Additionally, a score-level fusion technique was explored by combining the matching scores of two models, which resulted in a further improvement of around 4%. Open-set and closed-set experiments have been performed and evaluated using the Rank-1 and Rank-5 recognition rate metrics.
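Score-level fusion, as mentioned above, typically normalizes each model's matching scores to a common range and combines them, here with a weighted sum. This is a generic sketch (the scores, weight, and function names are illustrative, not the paper's exact scheme):

```python
# Generic min-max normalization + weighted-sum score fusion.
# Scores and the 0.5 weight below are made up for illustration.

def minmax(scores):
    """Rescale a list of scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(scores_a, scores_b, w=0.5):
    """Weighted sum of two normalized score lists (one score per gallery identity)."""
    return [w * a + (1 - w) * b for a, b in zip(minmax(scores_a), minmax(scores_b))]

gallery_scores_model1 = [0.2, 0.9, 0.5]     # e.g. cosine similarities
gallery_scores_model2 = [10.0, 80.0, 40.0]  # e.g. scores on a different scale
fused = fuse(gallery_scores_model1, gallery_scores_model2)
best_match = max(range(len(fused)), key=fused.__getitem__)
print(best_match)  # 1  (gallery identity with the highest fused score)
```

Rank-1 accuracy then counts how often `best_match` is the true identity; Rank-5 counts how often the true identity is among the five highest fused scores.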
"Domain Adaptation for Unconstrained Ear Recognition with Convolutional Neural Networks", Solange Ramos-Cooper, Guillermo Cámara Chávez. CLEI Electron. J., published 2022-05-24. DOI: https://doi.org/10.19153/cleiej.25.2.8
Laura González, Andrea Delgado, Juan Canaparo, Fabián Gambetta
The daily operation of organizations leaves a trail of the execution of business processes (BPs), including the activities, events and decisions taken by participants. Compliance requirements add specific control elements to process execution, e.g. domain and/or country regulations to be fulfilled, an enforced order of interaction messages or activities, or security checks on roles and permissions. As the amount of available data in organizations grows every day, using execution data to detect compliance violations and their causes can help organizations take corrective actions to improve their processes and comply with applicable rules. Compliance violations can be detected at runtime, to prevent further execution, or post mortem, using Process Mining to evaluate process execution data against the compliance requirements specified for the process. In this paper we present a BP Compliance Requirements Model (BPCRM) defining generic compliance controls that can be used to specify compliance requirements over BPs, which in turn serve as input to assess compliance violations with process mining. This model can be seen as a catalogue gathering a set of predefined compliance rules or patterns in one place, helping organizations to specify and evaluate the compliance of their processes.
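A post-mortem check of an ordering control of the kind described can be sketched as follows. This is not the BPCRM itself; the control ("Approve must not occur before Review") and the activity names are invented for illustration:

```python
# Illustrative post-mortem compliance check over an event log:
# a precedence control stating that "Approve" must never occur before "Review".

def violates_precedence(trace, first, then):
    """True if `then` appears in the trace before any occurrence of `first`."""
    for activity in trace:
        if activity == then:
            return True   # saw `then` before `first`: violation
        if activity == first:
            return False  # `first` came first: control satisfied
    return False          # `then` never occurred: nothing to violate

log = [
    ["Receive", "Review", "Approve", "Archive"],  # compliant
    ["Receive", "Approve", "Review"],             # violation
]
violations = [i for i, t in enumerate(log) if violates_precedence(t, "Review", "Approve")]
print(violations)  # [1]
```

A catalogue such as the BPCRM would gather many controls of this shape (precedence, response, role separation, etc.) and evaluate them against event logs extracted from process engines.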
"Evaluation of Compliance Requirements for collaborative business process with process mining and a model of generic compliance controls". CLEI Electron. J., published 2022-05-24. DOI: https://doi.org/10.19153/cleiej.25.2.7
Realistic real-time water-solid interaction has been an open problem in Computer Graphics since its beginnings, mainly due to the complex interactions that happen at the interface between solid objects and liquids, both when objects are completely or partially wet and when they are fully submerged. In this paper we present a method that tackles the two main aspects of this problem, namely the buoyancy of objects submerged in fluids, and the superficial liquid propagation and appearance changes that arise at the interface between the surface of solid objects and the liquids in contact with them. For the first problem (buoyancy), a method is proposed to realistically compute the fluid-to-solid coupling. Our proposal is suitable for a wide spectrum of cases, such as rigid or deformable objects, hollow or filled, permeable or impermeable, and with variable mass distribution. In the case of permeable materials, which allow liquid to pass through the object, the presented method incorporates the dynamics of the fluid in which the object is submerged, and decouples the computation of the physical quantities involved in the buoyancy force of the empty object from those of the liquid contained within it. On the other hand, the visual appearance of certain materials depends on their intrinsic light transfer properties, the lighting present and other environmental contributions. Thus, complementing the first approach, a new technique is introduced to model and render the appearance changes of absorbent materials when there is liquid on their surface. In addition, a new method was developed to solve the problem of the interaction between the object surface and liquids, taking advantage of texture coordinates, and an algorithm was proposed to model the main physical processes that occur on the surface of a wet solid object.
Finally, we model the change in appearance that typically arises in most materials in contact with fluids, with an implementation that achieves real-time performance. The complete solution is designed to take advantage of superscalar architectures and GPU acceleration, allowing flexible integration with the pipelines of current graphics engines.
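The core of the buoyancy computation described, decoupling the empty object from the liquid it contains, reduces to Archimedes' principle. A minimal sketch (densities, masses and volumes are illustrative; the real method additionally accounts for fluid dynamics and mass distribution):

```python
# Archimedes' principle for a hollow, possibly partially filled object.
# The shell's mass and the contained liquid are treated separately,
# mirroring the decoupling described in the abstract. Values are made up.

WATER_DENSITY = 1000.0  # kg/m^3
G = 9.81                # m/s^2

def net_vertical_force(submerged_volume, shell_mass, contained_liquid_volume,
                       contained_liquid_density=WATER_DENSITY):
    """Buoyant force minus total weight, in newtons (positive = object rises)."""
    buoyancy = WATER_DENSITY * submerged_volume * G
    weight = (shell_mass + contained_liquid_density * contained_liquid_volume) * G
    return buoyancy - weight

# A 0.01 m^3 hollow shell of 4 kg, half-filled with 0.005 m^3 of water:
f = net_vertical_force(0.01, 4.0, 0.005)
print(f > 0)  # True: the half-filled shell still floats upward
```

The same function shows why filling the shell sinks it: with `contained_liquid_volume=0.01` the weight term exceeds the buoyancy term and the net force becomes negative.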
"A Comprehensive Method for Liquid-to-Solid Interactions", J. M. Bajo, C. Delrieux, G. Patow. CLEI Electron. J., published 2022-04-04. DOI: https://doi.org/10.19153/cleiej.25.1.4
Pattern-set matching refers to a class of problems where learning takes place over sets rather than individual elements. Widely used in computer vision, this approach is robust to variations such as illumination, intrinsic parameters of the signal capture devices, and the pose of the analyzed object. Inspired by applications of subspace analysis, three new collections of methods are presented in this paper: (1) new representations for two-dimensional sets; (2) shallow networks for image classification; and (3) subspaces for tensor representation and classification. The new representations are proposed with the aim of preserving spatial structure while maintaining fast processing times. We also introduce a technique that preserves temporal structure even though it relies on principal component analysis, which classically does not model sequences. For shallow networks, we present two convolutional neural networks that do not require backpropagation, employing only subspaces for their convolution filters. These networks are advantageous when training time and hardware resources are scarce. Finally, to handle tensor data, such as video, we propose methods that employ subspaces for representation in a compact and discriminative way. Our work has been applied to several problems, including 2D data representation, shallow networks for image classification, and tensor representation and learning.
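The subspace-analysis idea underlying this line of work can be illustrated in a few lines: each set is summarized by a PCA subspace, and two sets are compared through the principal angles between their subspaces (whose cosines are the singular values of the product of the orthonormal bases). The data and similarity definition below are a generic sketch, not the paper's exact method:

```python
# Subspace-based set comparison: PCA basis per set, then similarity from
# principal angles between subspaces. Random data for illustration only.
import numpy as np

def pca_basis(X, dim):
    """Orthonormal basis (columns) of the top-`dim` principal directions of rows of X."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:dim].T

def subspace_similarity(A, B):
    """Mean squared cosine of the principal angles between two orthonormal bases."""
    cosines = np.linalg.svd(A.T @ B, compute_uv=False)
    return float(np.mean(cosines ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))     # one "image set": 50 samples, 10 features
A = pca_basis(X, 3)
sim_self = subspace_similarity(A, A)
print(round(sim_self, 6))  # 1.0 -- a subspace is maximally similar to itself
```

Because the similarity depends only on the spanned subspaces, it is invariant to which particular samples (e.g. which poses or illuminations) generated each set, which is the source of the robustness mentioned above.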
"Pattern-set Representations using Linear, Shallow and Tensor Subspaces", B. Gatto, E. M. Santos, Waldir S. S. Júnior. CLEI Electron. J., published 2022-04-03. DOI: https://doi.org/10.19153/cleiej.25.1.5
This document presents the development of a statistical HPSG parser for Spanish. HPSG is a deep linguistic formalism that combines syntactic and semantic information in the same representation and is capable of elegantly modeling many linguistic phenomena. We describe the HPSG grammar we designed, adapted to Spanish, and the construction of our corpus. We then present the different parsing algorithms we implemented for our corpus and grammar: a bottom-up strategy, a CKY-with-supertagger approach, and an LSTM top-down approach. We show the experimental results obtained by our parsers, compared among themselves and against other external Spanish parsers, for some global metrics and for some particular phenomena we wanted to test. The LSTM top-down approach obtained the best results on most of the metrics (for our parsers and the external parsers as well), including constituency metrics (87.57 unlabeled F1, 82.06 labeled F1), dependency metrics (91.32 UAS, 88.96 LAS), and SRL (87.68 unlabeled, 80.66 labeled), as well as most of the metrics for particular phenomena, such as clitic reduplication, relative referent detection and coordination chain identification.
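The UAS/LAS figures quoted are the standard dependency-parsing metrics: UAS counts tokens whose predicted head is correct, and LAS additionally requires the correct dependency label. A minimal sketch (the toy trees are invented):

```python
# Standard UAS/LAS computation over one sentence.
# Each token is (head_index, dependency_label); trees below are made up.

def uas_las(gold, pred):
    """Return (UAS, LAS) for parallel lists of (head, label) per token."""
    n = len(gold)
    head_ok = sum(g[0] == p[0] for g, p in zip(gold, pred))  # head correct
    both_ok = sum(g == p for g, p in zip(gold, pred))        # head AND label correct
    return head_ok / n, both_ok / n

gold = [(2, "nsubj"), (0, "root"), (2, "obj"), (2, "punct")]
pred = [(2, "nsubj"), (0, "root"), (2, "iobj"), (3, "punct")]
uas, las = uas_las(gold, pred)
print(uas, las)  # 0.75 0.5
```

Corpus-level scores, like the 91.32 UAS / 88.96 LAS reported, are computed the same way over all tokens of all sentences rather than per sentence.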
{"title":"Statistical Deep Parsing for Spanish: Abridged Version","authors":"Luis Chiruzzo","doi":"10.19153/cleiej.25.1.2","DOIUrl":"https://doi.org/10.19153/cleiej.25.1.2","url":null,"abstract":"This document presents the development of a statistical HPSG parser for Spanish. HPSG is a deep linguistic formalism that combines syntactic and semantic information in the same representation, and is capable of elegantly modeling many linguistic phenomena. We describe the HPSG grammar adapted to Spanish we designed and the construction of our corpus. Then we present the different parsing algorithms we implemented for our corpus and grammar: a bottom-up strategy, a CKY with supertagger approach, and a LSTM top-down approach. We then show the experimental results obtained by our parsers compared among themselves and also to other external Spanish parsers for some global metrics and for some particular phenomena we wanted to test. The LSTM top-down approach was the strategy that obtained the best results on most of the metrics (for our parsers and external parsers as well), including constituency metrics (87.57 unlabeled F1, 82.06 labeled F1), dependency metrics (91.32 UAS, 88.96 LAS), and SRL (87.68 unlabeled, 80.66 labeled), and most of the particular phenomenon metrics such as clitics reduplication, relative referents detection and coordination chain identification.","PeriodicalId":418941,"journal":{"name":"CLEI Electron. J.","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122465058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
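The CKY strategy mentioned in the abstract above fills a chart bottom-up over all spans of the sentence, combining smaller constituents into larger ones. The following minimal CKY recognizer illustrates the mechanism over a toy grammar in Chomsky normal form; the lexicon and rules are invented for illustration and are not the paper's Spanish HPSG grammar or supertagger:

```python
def cky_recognize(words, lexicon, rules):
    """Minimal CKY recognizer for a CNF grammar.
    lexicon: word -> set of nonterminals (A -> word);
    rules: (B, C) -> set of nonterminals A (A -> B C).
    Returns the chart mapping spans (i, k) to sets of labels."""
    n = len(words)
    chart = {}
    for i, w in enumerate(words):                 # width-1 spans from the lexicon
        chart[(i, i + 1)] = set(lexicon.get(w, ()))
    for width in range(2, n + 1):                 # wider spans bottom-up
        for i in range(n - width + 1):
            k = i + width
            cell = set()
            for j in range(i + 1, k):             # try every split point
                for B in chart[(i, j)]:
                    for C in chart[(j, k)]:
                        cell |= rules.get((B, C), set())
            chart[(i, k)] = cell
    return chart

# hypothetical toy Spanish fragment: "el gato duerme"
lexicon = {"el": {"DET"}, "gato": {"N"}, "duerme": {"V"}}
rules = {("DET", "N"): {"NP"}, ("NP", "V"): {"S"}}
chart = cky_recognize(["el", "gato", "duerme"], lexicon, rules)
print("S" in chart[(0, 3)])  # True: the sentence is recognized
```

In the paper's setting, the supertagger would first restrict the candidate labels in each width-1 cell, which prunes the chart before the bottom-up combination step.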
Quality indicators (QIs) are functions that assign a real value to a set that represents the Pareto front approximation of a multi-objective optimization problem. In the evolutionary multi-objective optimization community, QIs have been mainly employed in two ways: (1) for the performance assessment of multi-objective evolutionary algorithms (MOEAs), which produce Pareto front approximations, and (2) as the backbone of the selection mechanisms of MOEAs. Despite the continuing advances in QIs and their use in MOEAs, there is currently a vast number of open questions in this research area. In this doctoral thesis, we have focused on two main research directions. The first is the design of new selection mechanisms based on the competition and cooperation of multiple QIs, aiming to compensate for the weaknesses (in terms of convergence and diversity properties) of individual QIs with the strengths of the others. The second is the generation of new QIs that are compliant with the Pareto dominance relation extended to sets. Such QIs have a direct impact on the type of conclusions that can be drawn about the performance of MOEAs. Our experimental results have shown that the use of multiple QIs, either to design new selection mechanisms or to construct new Pareto-compliant QIs, is a promising research direction that can improve the capabilities of MOEAs and allows for a performance assessment of MOEAs with a higher degree of confidence.
{"title":"New Findings on Indicator-based Multi-Objective Evolutionary Algorithms: A Brief Summary","authors":"Jesús Guillermo Falcón-Cardona","doi":"10.19153/cleiej.25.1.3","DOIUrl":"https://doi.org/10.19153/cleiej.25.1.3","url":null,"abstract":"Quality indicators (QIs) are functions that assign a real value to a set that represents the Pareto front approximation of a multi-objective optimization problem. In the evolutionary multi-objective optimization community, QIs have been mainly employed in two ways: (1) for the performance assessment of multi-objective evolutionary algorithms (MOEAs), which produce Pareto front approximations, and (2) to be adopted as the backbone of selection mechanisms of MOEAs. Regardless of the continuing advances on QIs and their utilization in MOEAs, there are currently a vast number of open questions in this research area. In this doctoral thesis, we have focused on two main research directions: the design of new selection mechanisms based on the competition and cooperation of multiple QIs, aiming to compensate for the weaknesses (in terms of convergence and diversity properties) of individual QIs with the strengths of the others. The second research axis is the generation of new QIs that are compliant with the Pareto dominance relation extended to sets. Such QIs have a direct impact on the type of conclusions that can be drawn about the performance of MOEAs. Our experimental results have shown that the use of multiple QIs either to design new selection mechanisms or to construct new Pareto-compliant QIs is a promising research direction that can improve the capabilities of MOEAs and that allows for a performance assessment of MOEAs with a higher degree of confidence.","PeriodicalId":418941,"journal":{"name":"CLEI Electron. J.","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123458728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
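A concrete example of what the quality indicators in the abstract above compute is the hypervolume, one of the best-known Pareto-compliant QIs. The sketch below (an illustration of the concept, not one of the thesis's proposed indicators) measures, for two minimization objectives, the area dominated by a Pareto front approximation and bounded by a reference point:

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective (minimization) Pareto front
    approximation w.r.t. a reference point ref: the area dominated
    by the set and bounded above by ref. Assumes the points are
    mutually non-dominated and dominated by ref."""
    pts = sorted(front)            # ascending f1 implies descending f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        # each point contributes a horizontal strip of dominated area
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# toy non-dominated set with a hypothetical reference point (4, 5)
front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(4.0, 5.0)))  # 8.0
```

A larger hypervolume means the set is closer to the true Pareto front and better spread, which is why such indicators can serve both for performance assessment and inside an MOEA's selection mechanism.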