Diagnosis for the adoption of e-commerce platforms in micro and small enterprises in rural areas: Case study of the region of Río Cuarto, Alajuela
Pub Date: 2021-10-25 | DOI: 10.1109/CLEI53233.2021.9640057 | Pages: 1-10
Michael Barquero Salazar, Gabriela Marín-Raventós
Adopting electronic commerce as a new channel of commercialization is not only a business decision, since neither all companies nor all consumers in a region are technologically prepared for it. Before implementing this type of trade, companies should carry out a diagnostic study, both to determine whether they have the necessary technological resources internally and to assess whether their potential clients are similarly prepared. From this perspective, this work designs an instrument for diagnosing the technological readiness and acceptance of micro and small businesses, and of their potential consumers, in rural areas. The instruments were elaborated through an iterative process composed of three iterations of design and evaluation. Academic experts with extensive experience in instrument design carried out the first evaluation; the second iteration was evaluated in a pilot field study in which 6 companies and 10 consumers from regions such as Pococí and San Carlos participated; and the third iteration was evaluated through a case study in the canton of Río Cuarto de Alajuela, with 29 companies and 261 consumers from the region. The diagnosis shows that most of these companies state that they are not technologically prepared to implement e-commerce, while most of the participating consumers indicate that they do feel technologically prepared to use such platforms.
{"title":"Diagnosis for the adoption of e-commerce platforms in micro and small enterprises in rural areas: Case study of the region of Río Cuarto, Alajuela","authors":"Michael Barquero Salazar, Gabriela Marín-Raventós","doi":"10.1109/CLEI53233.2021.9640057","DOIUrl":"https://doi.org/10.1109/CLEI53233.2021.9640057","url":null,"abstract":"The implementation of electronic commerce as a new way of commercialization is not only a business decision, since not all companies, nor all consumers in a region are technologically prepared to adopt electronic commerce. To implement this type of trade, companies should carry out a diagnostic study, both to know if they have the technological resources internally, and to diagnose if their potential clients also have such preparation. From this perspective, this work seeks to design an instrument that allows diagnosing the technological preparation and acceptance of micro and small businesses, and their potential consumers in rural areas. For the elaboration of the instruments, an iterative process was used, composed of three iterations of design and evaluation. Academic experts with extensive experience in instrument design carried out the first evaluation; the second iteration was evaluated in a pilot field study where 6 companies and 10 consumers from regions such as Pococí and San Carlos participated. Finally, the third iteration was developed with a case study in the canton of Río Cuarto de Alajuela, 29 companies and 261 consumers from the region participated in this study. The diagnosis highlights that most of these companies state that they are not technologically prepared to implement e-commerce, while most of the participating consumers indicate that they do feel technologically prepared to use these platforms.","PeriodicalId":6803,"journal":{"name":"2021 XLVII Latin American Computing Conference (CLEI)","volume":"90 1","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75919678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The impact of using CouchDB on Hyperledger Fabric performance for heterogeneous medical data storage
Pub Date: 2021-10-25 | DOI: 10.1109/CLEI53233.2021.9640180 | Pages: 1-10
A. Spengler, P. S. Souza
The popularization of Bitcoin and other cryptocurrencies has motivated interest in using blockchain infrastructure in contexts other than the original one, because blockchain allows data to be distributed under decentralized management and in a secure environment. In this scenario, the goal of this work is to evaluate the impact of database usage when blockchain is employed to manipulate large volumes of heterogeneous data. Our evaluation uses Hyperledger Fabric to set up a network for sharing medical data obtained from a real database. The performance of this network was measured in experimental studies with the Hyperledger Caliper benchmark, collecting the throughput and latency of the network with and without the CouchDB database. The results show the overhead imposed by the database when it is used in a blockchain network, which should help future developers of blockchain applications understand the impact of database usage on such applications.
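As a rough illustration of the comparison described in this abstract (not the paper's actual tooling, since Caliper reports these metrics directly), the hypothetical Python sketch below derives throughput and latency from per-transaction submit/commit timestamps and contrasts a run without CouchDB against a run with it. The function names and sample numbers are assumptions made for illustration only.

```python
# Hypothetical throughput/latency comparison between two benchmark rounds.
from statistics import mean, quantiles

def summarize(txs):
    """txs: list of (submit_time_s, commit_time_s) pairs for one benchmark round."""
    latencies = [commit - submit for submit, commit in txs]
    duration = max(c for _, c in txs) - min(s for s, _ in txs)
    return {
        "tps": len(txs) / duration,                      # throughput (tx/s)
        "avg_latency_s": mean(latencies),                # average latency
        "p95_latency_s": quantiles(latencies, n=20)[18], # 95th-percentile latency
    }

# Made-up timestamp pairs; real values would come from a Caliper report.
without_couchdb = [(t * 0.02, t * 0.02 + 0.15) for t in range(500)]
with_couchdb    = [(t * 0.02, t * 0.02 + 0.40) for t in range(500)]

base, couch = summarize(without_couchdb), summarize(with_couchdb)
overhead = couch["avg_latency_s"] / base["avg_latency_s"] - 1
print(base, couch, f"latency overhead: {overhead:.0%}")
```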
{"title":"The impact of using CouchDB on Hyperledger Fabric performance for heterogeneous medical data storage","authors":"A. Spengler, P. S. Souza","doi":"10.1109/CLEI53233.2021.9640180","DOIUrl":"https://doi.org/10.1109/CLEI53233.2021.9640180","url":null,"abstract":"The popularization of Bitcoin and other cryptocurrencies has motivated the interest in using blockchain infrastructure in contexts other than the original. This is due to blockchain allows for a distribution of data with decentralized management and in a secure environment. In this scenario, the goal of this work is to evaluate the impact of database usage when blockchain is employed to manipulate large volumes of heterogeneous data. The methodology used in our evaluation considers the Hyperledger Fabric to set up a network for sharing medical data, which is obtained from a real database. The performance of this network was collected through experimental studies, with the Hyperledger Caliper benchmark, by measuring the throughput and latency of the network with and without the CouchDB database. Our results show the impact of the overhead imposed by the database when it is used in a blockchain network. This work contributes to future developers of blockchain applications as it shows the impact of database usage on such applications.","PeriodicalId":6803,"journal":{"name":"2021 XLVII Latin American Computing Conference (CLEI)","volume":"68 4 1","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76494224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MicroIP: A general-purpose microservice-based integration platform
Pub Date: 2021-10-25 | DOI: 10.1109/CLEI53233.2021.9640179 | Pages: 1-10
A. Nebel, Laura González, Guzmán Llambías
The development of large-scale software systems is usually supported by integration platforms, which provide connectivity and mediation capabilities to facilitate the integration of heterogeneous and distributed applications. Integration platforms have traditionally been built as monolithic systems which, in some current contexts (e.g. a fast-paced market, large numbers of users and large volumes of data), present issues in terms of scalability, maintainability and fault tolerance, among others. In turn, the microservices architecture is an approach for developing applications as a set of small independent services, which may help address such limitations (e.g. by maintaining and scaling services independently, according to their specific needs). Indeed, various integration platform proposals leveraging this approach have emerged in recent years. However, those proposals are domain-specific and/or do not provide insights into the architecture and implementation of the platform. This paper proposes a general-purpose microservice-based integration platform, which aims to address the limitations of monolithic solutions and of the aforementioned existing proposals. Our work comprises the definition of the platform and its main functionality, a description of its microservice-based architecture, and implementation alternatives as well as prototypes for some of its main components.
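To make the mediation idea concrete, here is a minimal, hypothetical sketch (not taken from the paper or its prototypes) of one small integration microservice: an HTTP content-based router that forwards incoming messages to different backend applications depending on a field in the payload. The endpoint name, routing field and backend URLs are all assumptions.

```python
# Minimal content-based routing microservice sketch (hypothetical, Flask-based).
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# Routing table: message type -> backend application URL (assumed values).
ROUTES = {
    "order":   "http://orders-service:8081/messages",
    "invoice": "http://billing-service:8082/messages",
}

@app.route("/route", methods=["POST"])
def route_message():
    message = request.get_json(force=True)
    target = ROUTES.get(message.get("type"))
    if target is None:
        return jsonify(error="no route for message type"), 400
    # Mediation step: forward the payload to the selected backend application.
    backend_response = requests.post(target, json=message, timeout=5)
    return jsonify(forwarded_to=target, backend_status=backend_response.status_code)

if __name__ == "__main__":
    app.run(port=8080)
```

Scaling only this router, or only one backend, independently of the rest is the kind of benefit the microservice-based approach is meant to provide.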
{"title":"MicroIP: A general-purpose microservice-based integration platform","authors":"A. Nebel, Laura González, Guzmán Llambías","doi":"10.1109/CLEI53233.2021.9640179","DOIUrl":"https://doi.org/10.1109/CLEI53233.2021.9640179","url":null,"abstract":"The development of large-scale software systems is usually supported by integration platforms, which provide connectivity and mediation capabilities to facilitate the integration of heterogeneous and distributed applications. Integration platforms have traditionally been built as monolithic systems which, in some of the current contexts (e.g. market's high pace of demand, large amount of users and data), present issues in terms of scalability, maintainability and fault tolerance, among others. In turn, microservices architecture is an approach for developing applications as a set of small independent services, which may contribute to address such limitations (e.g. maintaining and scaling services independently, according to their specific needs). Indeed, various integration platform proposals leveraging this approach have emerged during the last years. However, those proposals are domain-specific and/or they do not provide insights regarding the architecture and implementation of the platform. This paper proposes a general-purpose microservice-based integration platform, which aims to address limitations of monolithic solutions and of the aforementioned existing proposals. Our work comprises the definition of the platform and its main functionality, a description of its microservice-based architecture, and implementation alternatives as well as prototypes for some of its main components.","PeriodicalId":6803,"journal":{"name":"2021 XLVII Latin American Computing Conference (CLEI)","volume":"9 1","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77460637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Disruptive Innovation: A valuable experience in the teaching and learning process of Artificial Intelligence
Pub Date: 2021-10-25 | DOI: 10.1109/CLEI53233.2021.9640067 | Pages: 1-9
R. Costaguta, N. Salazar
The pandemic caused by Covid-19 affected all educational levels throughout the world. Several countries established periods of confinement for their inhabitants, which led many teachers to adapt their courses for virtual delivery. In many cases this adaptation was reduced to a rapid, emergency response that simply digitized what had been designed for a face-to-face environment, for example by giving classes through videoconferences, redesigning activities to be solved in a virtual environment, and administering online questionnaires. In contrast, this article presents the adaptation of a university course that was specifically designed for virtuality: an educational innovation carried out in the Artificial Intelligence subject, designed to take advantage of the potential of web 3.0 tools and to mitigate the drawbacks of non-face-to-face classes. The results obtained highlight the innovative academic value of this initiative, as well as its transferability and sustainability over time.
{"title":"Disruptive Innovation: A valuable experience in the teaching and learning process of Artificial Intelligence","authors":"R. Costaguta, N. Salazar","doi":"10.1109/CLEI53233.2021.9640067","DOIUrl":"https://doi.org/10.1109/CLEI53233.2021.9640067","url":null,"abstract":"The pandemic caused by Covid-19 affected all educational levels throughout the world. Several countries established periods of confinement for their inhabitants, and this led many teachers to adapt their courses for virtual development. This adaptation reduced in many cases to generating a rapid and emergency response through the digitization of what designed for a face-to-face environment, for example, with classes through videoconferences, redesigning activities to solve in a virtual environment, and solving online questionnaires. However, this article presents the adaptation of a university curricular space, specially designed for virtuality. It's about the educational innovation carried out on the Artificial Intelligence subject, which designed to take advantage of the potential of web 3.0 tools and mitigate the inconveniences of non-face-to-face classes. The results obtained highlight the innovative academic value of this initiative, and its transferability and sustainability over time.","PeriodicalId":6803,"journal":{"name":"2021 XLVII Latin American Computing Conference (CLEI)","volume":"11 1","pages":"1-9"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88618946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Program Committees
Pub Date: 2021-10-25 | DOI: 10.1109/clei53233.2021.9640121
Approaches of predictive and clustering methods used in emergency events: A Systematic Literature Review
Pub Date: 2021-10-25 | DOI: 10.1109/CLEI53233.2021.9640022 | Pages: 1-8
Bernis Loor-Zambrano, Frank Tello-Salvador, Roberth Alcivar-Cevallos, Leticia Vaca Cárdenas
Currently, social networks play a fundamental role in disseminating information on natural disasters and urban emergencies. This article presents a Systematic Literature Review (SLR) on using social media data as a basis for applying different classification, clustering, and prediction algorithms in emergency response scenarios. The first part focuses on information sources; the studies that applied classification, clustering, and prediction techniques or algorithms are then described. Finally, the results obtained can be used to make optimal allocation and resource management decisions according to the emergency event.
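As a toy illustration of the kind of clustering technique surveyed (not an approach taken from any specific reviewed paper), the hypothetical Python sketch below groups short emergency-related messages using TF-IDF features and k-means; the sample messages and the number of clusters are made up.

```python
# Toy clustering of emergency-related messages (hypothetical example, scikit-learn).
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

messages = [
    "flooding reported downtown, streets closed",
    "river overflow, families evacuated near the bridge",
    "earthquake felt across the city, minor damage",
    "aftershock this morning, buildings inspected",
    "fire in the market area, firefighters on site",
    "smoke visible from the highway, traffic diverted",
]

# Represent each message as a TF-IDF vector and cluster into 3 groups.
vectors = TfidfVectorizer(stop_words="english").fit_transform(messages)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for label, text in sorted(zip(labels, messages)):
    print(label, text)
```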
{"title":"Approaches of predictive and clustering methods used in emergency events: A Systematic Literature Review","authors":"Bernis Loor-Zambrano, Frank Tello-Salvador, Roberth Alcivar-Cevallos, Leticia Vaca Cárdenas","doi":"10.1109/CLEI53233.2021.9640022","DOIUrl":"https://doi.org/10.1109/CLEI53233.2021.9640022","url":null,"abstract":"Currently, social networks play a fundamental role in disseminating information on natural disasters and urban emergencies. This article presents a Systematic Literature Review (SLR) on using social media data as a basis for applying different classification, clustering, and prediction algorithms in emergency response scenarios. The first part focuses on information sources; after, the investigations that used classification, clustering, and prediction techniques or algorithms are described. Finally, the results obtained can be used to make optimal allocation and resource management decisions according to the emergency event.","PeriodicalId":6803,"journal":{"name":"2021 XLVII Latin American Computing Conference (CLEI)","volume":"12 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76233600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compliance Requirements Model for collaborative business process and evaluation with process mining
Pub Date: 2021-10-25 | DOI: 10.1109/CLEI53233.2021.9640197 | Pages: 1-10
Laura González, Andrea Delgado
The daily operation of organizations leaves a trail of the execution of business processes (BPs), including activities, events and decisions taken by participants. Compliance requirements add specific control elements to process execution, e.g. domain and/or country regulations to be fulfilled, enforced ordering of interaction messages or activities, or security checks on roles and permissions. As the amount of data available in organizations grows every day, using execution data to detect compliance violations and their causes can help organizations take corrective actions to improve their processes and comply with the applicable rules. Compliance violations can be detected at runtime, to prevent further execution, or post mortem, using Process Mining to evaluate process execution data against the compliance requirements specified for the process. In this paper we present a BP Compliance Requirements Model (BPCRM) defining generic compliance controls that can be used to specify concrete compliance requirements over BPs, which in turn serve as input to assess compliance violations with process mining. This model can be seen as a catalogue that gathers a set of predefined compliance rules or patterns in one place, helping organizations to specify and evaluate the compliance of their processes.
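The sketch below is a minimal, hypothetical illustration of the kind of generic control such a catalogue could contain — "activity A must occur before activity B in every trace" — checked post mortem over a toy event log. The rule, activity names and log are invented for illustration and are not taken from the BPCRM.

```python
# Post-mortem check of a simple ordering rule over a toy event log (hypothetical).
def violates_precedence(trace, first, then):
    """True if activity `then` occurs without `first` having occurred before it."""
    seen_first = False
    for activity in trace:
        if activity == first:
            seen_first = True
        elif activity == then and not seen_first:
            return True
    return False

# Each trace is the ordered list of activities of one process instance.
event_log = {
    "case-001": ["Receive order", "Check credit", "Ship goods", "Send invoice"],
    "case-002": ["Receive order", "Ship goods", "Check credit", "Send invoice"],
}

for case_id, trace in event_log.items():
    if violates_precedence(trace, first="Check credit", then="Ship goods"):
        print(f"{case_id}: violation - 'Ship goods' executed before 'Check credit'")
```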
{"title":"Compliance Requirements Model for collaborative business process and evaluation with process mining","authors":"Laura González, Andrea Delgado","doi":"10.1109/CLEI53233.2021.9640197","DOIUrl":"https://doi.org/10.1109/CLEI53233.2021.9640197","url":null,"abstract":"The daily operation of organizations leaves a trail of the execution of business processes (BPs) including activities, events and decisions taken by participants. Compliance requirements add specific control elements to process execution, e.g. domain and/or country regulations to be fulfilled, enforcing order of interaction messages or activities, or security checks on roles and permissions. As the amount of available data in organizations grows everyday, using execution data to detect compliance violations and its causes, can help organizations to take corrective actions for improving their processes and comply to applying rules. Compliance requirements violations can be detected at runtime to prevent further execution, or in a post mortem way using Process Mining to evaluate process execution data against the specified compliance requirements for the process. In this paper we present a BP compliance Requirements Model (BPCRM) defining generic compliance controls that can be used to specify specific compliance requirements over BPs, that are used as input to assess compliance violations with process mining. This model can be seen as a catalogue that includes a set of predefined compliance rules or patterns in one place, helping organizations to specify and evaluate the compliance of their processes.","PeriodicalId":6803,"journal":{"name":"2021 XLVII Latin American Computing Conference (CLEI)","volume":"19 1","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75352461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Analysis of the Time Aggregation Influence on Patients Forecasting in Emergency Services
Pub Date: 2021-10-25 | DOI: 10.1109/CLEI53233.2021.9640117 | Pages: 1-10
Hugo Álvarez-Chaves, David F. Barrero, Helena Hernández Martínez, M. Benito
The COVID-19 pandemic has underlined that Emergency Department (ED) overcrowding is a critical factor in care services. An approximation of the number of patients attending the department can assist in planning service resources and prevent overcrowding. In this manuscript we present forecasting results for the ED admissions, inpatients and discharges series using different time aggregations (eight hours, twelve hours, one day and the workers' official service shifts) and classical time series algorithms. Series forecasting is performed over two horizons: long (four months ahead) and short (seven days ahead). The results show that the time aggregation strongly influences the forecast quality, with one-day aggregations decreasing the effectiveness. In addition, the best metrics are not obtained with the same aggregation, so there is no single best aggregation for all cases. Therefore, it is essential to analyse the specific ED-related problem at hand when selecting the time aggregation.
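The sketch below is a hypothetical, simplified illustration of the kind of pipeline the abstract describes: the same stream of arrival timestamps is aggregated at eight-hour, twelve-hour and one-day resolutions with pandas, and a naive previous-bin forecast stands in for the classical time series algorithms actually evaluated in the paper. The data, the forecasting rule and the error metric are assumptions.

```python
# Aggregating synthetic ED arrivals at different time resolutions (hypothetical).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic arrival timestamps over two weeks (stand-in for real admissions data).
arrivals = pd.Series(
    1,
    index=pd.to_datetime("2021-01-01")
    + pd.to_timedelta(np.sort(rng.uniform(0, 14 * 24 * 3600, 5000)), unit="s"),
)

for rule in ["8h", "12h", "1D"]:                  # eight hours, twelve hours, one day
    counts = arrivals.resample(rule).sum()        # patients per aggregation bin
    forecast = counts.shift(1)                    # naive forecast: previous bin
    mae = (counts - forecast).abs().mean()        # in-sample mean absolute error
    print(f"{rule}: mean patients per bin = {counts.mean():.1f}, naive MAE = {mae:.1f}")
```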
{"title":"An Analysis of the Time Aggregation Influence on Patients Forecasting in Emergency Services","authors":"Hugo Álvarez-Chaves, David F. Barrero, Helena Hernández Martínez, M. Benito","doi":"10.1109/CLEI53233.2021.9640117","DOIUrl":"https://doi.org/10.1109/CLEI53233.2021.9640117","url":null,"abstract":"The COVID-19 pandemic has underlined that Emergency Department (ED) overcrowding is a critical factor in care services. Getting an approximation of the number of patients attending the department can assist in service resources planning and prevent overcrowding. In this manuscript we present the forecasting results for the admissions, inpatients and discharges series in ED by using different time aggregations (eight hours, twelve hours, one day and the service workers official shifts) and classical time series algorithms. Moreover, series forecasting is performed in two terms: long (four months ahead) and short (seven days ahead). The results show that time aggregations strongly influence the forecast quality, decreasing the effectiveness for one-day aggregations. In addition, best metrics are not obtained in the same aggregation, so there is no best aggregation for all cases. Therefore, it is essential to analyse the ED-related problem faced for the time aggregation selection.","PeriodicalId":6803,"journal":{"name":"2021 XLVII Latin American Computing Conference (CLEI)","volume":"37 1","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80914730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing the impact of mobility reduction in the second wave of COVID-19
Pub Date: 2021-10-25 | DOI: 10.1109/CLEI53233.2021.9639974 | Pages: 1-10
Álvaro Cabana, Lorena Etcheverry, M. Fariello, P. Bermolen, Marcelo Fiori
By February 2021, Uruguay was experiencing the first wave of the COVID-19 pandemic, while many countries were already suffering the second wave. Several countries took various measures to prevent the saturation of their health systems, ranging from the closure of restaurants and the suspension of classes to nighttime traffic restrictions. In this paper, we explore the effect of mobility restriction measures on infection incidence in countries that are in some way similar to Uruguay: they have between one and twelve million inhabitants, a reasonable testing effort, and they had the epidemic under control at some point. For these countries, we study mobility indexes provided by Google, an index of governmental measures compiled by the University of Oxford, and the daily new cases per 100,000 inhabitants. First, we observe that the mobility reported by Google is directly related to government measures: the higher the level of restrictive measures, the lower the mobility index. Then, we analyze the influence of mobility reduction on the growth/decrease speed of the 7-day average of new cases per 100,000 inhabitants (P7) and show that high levels of mobility reduction lead to a decrease in the index. Finally, we relate the required duration of mobility restrictions to the P7 maximum and also point out the risk of lifting the measures too early.
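A minimal sketch, under stated assumptions, of how the P7 indicator described above could be computed from a daily case series: the population figure and case counts below are made up, and the paper's own data come from public COVID-19 and mobility datasets rather than this toy series.

```python
# Computing P7 (7-day average of new cases per 100,000 inhabitants) and its speed of change.
import pandas as pd

population = 3_500_000                       # hypothetical country size
dates = pd.date_range("2021-02-01", periods=14, freq="D")
new_cases = pd.Series(
    [410, 450, 500, 480, 530, 600, 640, 700, 690, 720, 760, 800, 790, 830],
    index=dates,
)

p7 = new_cases.rolling(7).mean() / population * 100_000   # P7 indicator
p7_speed = p7.pct_change()                                # daily growth/decrease speed of P7

print(pd.DataFrame({"P7": p7.round(2), "P7 speed": p7_speed.round(3)}).dropna())
```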
{"title":"Assessing the impact of mobility reduction in the second wave of COVID-19","authors":"Álvaro Cabana, Lorena Etcheverry, M. Fariello, P. Bermolen, Marcelo Fiori","doi":"10.1109/CLEI53233.2021.9639974","DOIUrl":"https://doi.org/10.1109/CLEI53233.2021.9639974","url":null,"abstract":"By February 2021, Uruguay was experiencing the first wave of the COVID-19 pandemic, while many countries were already suffering the second wave. Several countries took various measures to prevent the saturation of the health system, ranging from closure of restaurants and suspension of classes to nighttime traffic restrictions. In this paper, we explore the effect of mobility restriction measures on the infection incidence in countries that are in some way similar to Uruguay: they have between one and twelve million inhabitants, a reasonable testing effort and they had the epidemic under control at some point. For these countries, we study mobility indexes provided by Google, an index on governmental measures compiled by the University of Oxford, and the daily new cases per 100,000 inhabitants. First, we observed that the mobility reported by Google is directly related to government measures: the higher the level of restrictive measures, the lower the mobility index. Then, we analyze the influence of mobility reduction on the growth/decrease speed of the 7-day average of new cases per 100,000 inhabitants (P7) and show that high levels of mobility reduction lead to a decrease in the index. Finally, we related the required duration of mobility restrictions with the P7 maximum and also point out the risk of lifting the measures too early.","PeriodicalId":6803,"journal":{"name":"2021 XLVII Latin American Computing Conference (CLEI)","volume":"6 1","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78636159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving the performance of graph database queries using linear algebra operations
Pub Date: 2021-10-25 | DOI: 10.1109/CLEI53233.2021.9640043 | Pages: 1-10
Bruno Amaral, Juan Manuel Tirado Martin, Lorena Etcheverry, P. Ezzatti
The application of graph databases to different domains is gaining momentum. The Resource Description Framework (RDF) is one of the data models supported by graph databases, and SPARQL is the standard query language for RDF graphs. These databases are also known as RDF triplestores. Many triplestores are implemented over the relational data model, using tables to store graphs and translating SPARQL queries into SQL queries, and this approach can lead to unnecessary overheads. On the other hand, in the context of High-Performance Computing (HPC), implementations over hybrid hardware platforms using Numerical Linear Algebra (NLA) operations have become an effective and efficient computing strategy in the last decade. In particular, Graphics Processing Units (GPUs) have been adopted to perform general-purpose computations due to their high performance, reasonable prices, and an attractive relationship between computing capacity and energy consumption. In the context described above, this paper presents an initial study on the efficient implementation of a set of SPARQL queries in terms of NLA operations. Additionally, we evaluate the performance of implementing these operations on GPUs.
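To illustrate the general idea of expressing a SPARQL query as NLA operations (not the paper's actual implementation, which targets GPUs), here is a hypothetical CPU-side sketch: each predicate of a toy RDF graph is stored as a sparse boolean adjacency matrix, and a two-hop basic graph pattern is answered with a boolean matrix product. The graph, entity names and predicates are invented.

```python
# Answering a simple SPARQL-style pattern with sparse boolean matrix products (hypothetical).
from scipy.sparse import csr_matrix

# Toy RDF graph with 4 entities: 0=alice, 1=bob, 2=acme, 3=montevideo.
entities = ["alice", "bob", "acme", "montevideo"]
n = len(entities)

# One boolean adjacency matrix per predicate: M[s, o] = 1 iff triple (s, p, o) exists.
works_for  = csr_matrix(([1, 1], ([0, 1], [2, 2])), shape=(n, n), dtype=bool)  # alice/bob -> acme
located_in = csr_matrix(([1],    ([2],    [3])),    shape=(n, n), dtype=bool)  # acme -> montevideo

# Pattern: ?person :worksFor ?org . ?org :locatedIn ?city
# The join on ?org corresponds to a boolean matrix product over the two predicate matrices.
person_city = works_for @ located_in

for s, o in zip(*person_city.nonzero()):
    print(f"{entities[s]} works in {entities[o]}")
```

On a GPU, the same pattern could be evaluated with an equivalent sparse matrix multiplication kernel, which is where the HPC angle of the paper comes in.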
{"title":"Improving the performance of graph database queries using linear algebra operations","authors":"Bruno Amaral, Juan Manuel Tirado Martin, Lorena Etcheverry, P. Ezzatti","doi":"10.1109/CLEI53233.2021.9640043","DOIUrl":"https://doi.org/10.1109/CLEI53233.2021.9640043","url":null,"abstract":"The application of graph databases to different domains is gaining momentum. The Resource Description Framework (RDF) is one of the data models supported by graph databases, and SPARQL is the standard query language for RDF graphs. These databases are also known as RDF triplestores. Many triplestores are implemented over the relational data model, using tables to store graphs and translating SPARQL queries into SQL queries, and this approach can lead to unnecessary overheads. On the other hand, in the context of High- Performance Computing (HPC), implementations over hybrid hardware platforms using Numerical Linear Algebra (NLA) operations have become an effective and efficient computing strategy in the last decade. In particular, Graphics Processing Units (GPUs) have been adopted to perform general-purpose computations due to their high performance, reasonable prices, and an attractive relationship between computing capacity and energy consumption. In the context described above, this paper presents an initial study on the efficient implementation of a set of SPARQL queries in terms of NLA operations. Additionally, we evaluate the performance of implementing these operations on GPUs.","PeriodicalId":6803,"journal":{"name":"2021 XLVII Latin American Computing Conference (CLEI)","volume":"59 1","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80534155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}