A re-engineering approach for extension of the Tourist Guide Knowledge Base
Pub Date: 2020-11-24 | DOI: 10.1109/CloudTech49835.2020.9365875
Asya Stojanova-Doycheva, T. Glushkova, E. Doychev, N. Moraliyska
The paper presents an extension of the knowledge base of the Tourist Guide with the rich information about Bulgarian cultural, historical, and natural sites available in the databases created under the BECC project. To accomplish this task, the architecture of the Tourist Guide, which was created as a reference architecture of the Virtual-Physical Space (ViPS), is presented, and the restructuring of the components in this architecture is described. In order to use the databases created in the BECC project, we had to re-engineer them on the basis of standards for the presentation of cultural and historical sites, such as UNESCO and CCO (Cataloging Cultural Objects).
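To make the standards-based re-engineering step concrete, here is a minimal, hypothetical sketch of the kind of mapping it implies: legacy BECC-style records rewritten against CCO-inspired target fields. All field names are invented for illustration; the paper's actual schemas are not shown here.

```python
# Hypothetical legacy-to-CCO field mapping; none of these names come from the paper.
CCO_FIELD_MAP = {
    "obekt_ime": "work_title",        # CCO: Title
    "tip": "work_type",               # CCO: Work Type
    "period": "creation_date",        # CCO: Creation Date
    "mestopolozhenie": "location",    # CCO: Current Location
    "opisanie": "description",        # CCO: Descriptive Note
}

def to_cco(legacy_record: dict) -> dict:
    """Map a legacy record onto CCO-style fields, keeping unmapped keys aside."""
    cco, extras = {}, {}
    for key, value in legacy_record.items():
        target = CCO_FIELD_MAP.get(key)
        if target:
            cco[target] = value
        else:
            extras[key] = value          # preserved for manual review
    cco["unmapped"] = extras
    return cco

print(to_cco({"obekt_ime": "Rila Monastery", "tip": "monastery", "period": "10th c."}))
```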
{"title":"A re-engineering approach for extension of the Tourist Guide Knowledge Base","authors":"Asya Stojanova-Doycheva, T. Glushkova, E. Doychev, N. Moraliyska","doi":"10.1109/CloudTech49835.2020.9365875","DOIUrl":"https://doi.org/10.1109/CloudTech49835.2020.9365875","url":null,"abstract":"The paper presents an extension of the knowledge-base of the Tourist Guide with the rich information about Bulgarian cultural, historical and natural sites, available in the databases created under the BECC project. To accomplish this task, the architecture of Tourist Guide that was created as the reference architecture of Virtual-Physical Space (ViPS) is presented, and the restructuring process of the components in this architecture is described. In order to use the created databases in BECC project, we had to re-engineered them on the basis of standards for the presentation of cultural and historical sites such as UNESCO and CCO (Cataloging Cultural Objects).","PeriodicalId":272860,"journal":{"name":"2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121364821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Secure, Efficient and Dynamic Data Search using Searchable Symmetric Encryption
Pub Date: 2020-11-24 | DOI: 10.1109/CloudTech49835.2020.9365907
M. S. Shaikh, Jatna Bavishi, Reema Patel
Fog computing, which works complementary to cloud computing, is being developed to overcome issues of cloud computing such as latency in cases where data must be retrieved immediately. But along with solving the latency problem, fog computing brings a different set of security issues than cloud computing. The storage and processing capabilities of fog nodes are limited, and hence the security issues must be solved with these constrained resources. One of the problems faced when data is stored outside the internal network is loss of confidentiality, so the data must be encrypted. But whenever a document needs to be searched, all the related documents must first be decrypted and only then can the required document be fetched; within this time frame, the document data can be accessed by an unauthorized person. So, in this paper, a searchable symmetric encryption scheme is proposed wherein the authorized members of an organization can search over the encrypted data and retrieve the required document, thereby preserving the security and privacy of the data. Also, the searching complexity of the algorithm is low enough to make it suitable for a fog computing environment.
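As an illustration of the general searchable-symmetric-encryption idea (not the authors' specific scheme), the sketch below builds an encrypted inverted index: search tokens are HMACs of keywords, so the index holder can match documents without ever seeing plaintext keywords or contents. The keys, documents, and index layout are illustrative assumptions.

```python
# Minimal SSE sketch: HMAC trapdoors over an inverted index, Fernet for contents.
import hmac, hashlib
from cryptography.fernet import Fernet  # pip install cryptography

enc_key = Fernet.generate_key()
idx_key = b"secret-index-key"           # in practice: a separately derived key
f = Fernet(enc_key)

def trapdoor(keyword: str) -> str:
    """Deterministic search token; reveals nothing about the keyword itself."""
    return hmac.new(idx_key, keyword.encode(), hashlib.sha256).hexdigest()

# --- client side: encrypt documents and build the index ---
docs = {"d1": "invoice march budget", "d2": "budget report q3"}
encrypted_store = {doc_id: f.encrypt(text.encode()) for doc_id, text in docs.items()}
index = {}
for doc_id, text in docs.items():
    for word in set(text.split()):
        index.setdefault(trapdoor(word), []).append(doc_id)

# --- server side: match a trapdoor without decrypting anything ---
def search(token: str):
    return index.get(token, [])

hits = search(trapdoor("budget"))       # client sends only the HMAC token
print([f.decrypt(encrypted_store[h]).decode() for h in hits])  # client decrypts hits
```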
{"title":"Secure, Efficient and Dynamic Data Search using Searchable Symmetric Encryption","authors":"M. S. Shaikh, Jatna Bavishi, Reema Patel","doi":"10.1109/CloudTech49835.2020.9365907","DOIUrl":"https://doi.org/10.1109/CloudTech49835.2020.9365907","url":null,"abstract":"Fog computing, which works complementary to cloud computing, is being developed to overcome the issue of cloud computing such as latency in the cases where the data is to be retrieved immediately. But, along with solving the problem of latency, fog computing brings along with it different set of security issues than cloud computing. The storage and processing capabilities of fog computing is limited and hence, the security issues must be solved with these constrained resources. One of the problems faced when the data is stored outside the internal network is loss of confidentiality. For this, the data must be encrypted. But, whenever document needs to be searched, all the related documents must be decrypted first and later the required document is to be fetched. Within this time frame, the document data can be accessed by an unauthorized person. So, in this paper, a searchable symmetric encryption scheme is proposed wherein the authorized members of an organization can search over the encrypted data and retrieve the required document in order to preserve the security and privacy of the data. Also, the searching complexity of the algorithm is much less so that it suitable to fog computing environment.","PeriodicalId":272860,"journal":{"name":"2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126973807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Artificial Neural Networks Based Ensemble System to Forecast Bitcoin Daily Trading Volume
Pub Date: 2020-11-24 | DOI: 10.1109/CloudTech49835.2020.9365913
S. Lahmiri, R. Saadé, Danielle Morin, F. Nebebe
Cryptocurrencies are digital assets that are gaining popularity and generating huge numbers of transactions on electronic platforms. We develop an ensemble predictive system based on artificial neural networks to forecast the Bitcoin daily trading volume level. Although ensemble forecasts are increasingly employed in various forecasting tasks, developing an intelligent predictive system for Bitcoin trading volume based on ensemble forecasts has not been addressed yet. Ensemble Bitcoin trading volume forecasts are produced using two specific artificial neural networks, namely radial basis function neural networks (RBFNN) and generalized regression neural networks (GRNN), adopted to capture local and general patterns in Bitcoin trading volume data, respectively. Finally, a feedforward artificial neural network (FFNN) generates the final Bitcoin trading volume forecast by aggregating the forecasts from the RBFNN and GRNN; in this regard, the FFNN merges local and global forecasts in a nonlinear framework. Overall, our proposed ensemble predictive system reduced the forecasting errors by 18.81% and 62.86% compared to its components RBFNN and GRNN, respectively. In addition, the ensemble system reduced the forecasting error by 90.49% compared to a single FFNN used as a basic reference model. Thus, the empirical outcomes show that our proposed ensemble predictive model achieves an improvement in forecasting accuracy. In practical terms, while remaining fast, the proposed artificial-neural-network ensemble is recommended for simultaneously addressing the local and global patterns that characterize Bitcoin trading data. We conclude that the proposed ensemble forecasting model is easy to implement and efficient for Bitcoin daily volume forecasting.
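A hedged sketch of the two-stage ensemble idea follows: a simple RBF network and a GRNN each forecast the next value from lagged volumes, and a small feedforward network merges the two forecasts nonlinearly. The synthetic series, lag length, and network settings are stand-ins, not the authors' data or exact architectures.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic stand-in "volume" series (periodic + noise), not real Bitcoin data.
series = 100 + 20 * np.sin(np.arange(500) / 10) + rng.normal(0, 2, 500)
lags = 5
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

def fit_rbf(X_tr, y_tr, n_centers=50, sigma=5.0):
    """RBF network: random centers, Gaussian features, linear output weights."""
    centers = X_tr[rng.choice(len(X_tr), n_centers, replace=False)]
    def design(X):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        return np.exp(-d ** 2 / (2 * sigma ** 2))
    w, *_ = np.linalg.lstsq(design(X_tr), y_tr, rcond=None)
    return lambda X: design(X) @ w

def grnn(X_tr, y_tr, X_te, sigma=3.0):
    """GRNN as Nadaraya-Watson kernel regression over the training set."""
    d = np.linalg.norm(X_te[:, None, :] - X_tr[None, :, :], axis=2)
    K = np.exp(-d ** 2 / (2 * sigma ** 2))
    return (K @ y_tr) / K.sum(axis=1)

# Stage 1: "local" (RBF) and "global" (GRNN) forecasts.
rbf = fit_rbf(X_tr, y_tr)
rbf_tr, rbf_te = rbf(X_tr), rbf(X_te)
grnn_tr, grnn_te = grnn(X_tr, y_tr, X_tr), grnn(X_tr, y_tr, X_te)

# Stage 2: a feedforward net merges the two forecasts nonlinearly.
merger = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
merger.fit(np.column_stack([rbf_tr, grnn_tr]), y_tr)
ensemble = merger.predict(np.column_stack([rbf_te, grnn_te]))
print("ensemble RMSE:", np.sqrt(np.mean((ensemble - y_te) ** 2)))
```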
{"title":"An Artificial Neural Networks Based Ensemble System to Forecast Bitcoin Daily Trading Volume","authors":"S. Lahmiri, R. Saadé, Danielle Morin, F. Nebebe","doi":"10.1109/CloudTech49835.2020.9365913","DOIUrl":"https://doi.org/10.1109/CloudTech49835.2020.9365913","url":null,"abstract":"Cryptocurrencies are digital assets gaining popularity and generating huge transactions on electronic platforms. We develop an ensemble predictive system based on artificial neural networks to forecast Bitcoin daily trading volume level. Indeed, although ensemble forecasts are increasingly employed in various forecasting tasks, developing an intelligent predictive system for Bitcoin trading volume based on ensemble forecasts has not been addressed yet. Ensemble Bitcoin trading volume are forecasted using two specific artificial neural networks; namely, radial basis function neural networks (RBFNN) and generalized regression neural networks (GRNN). They are adopted to respectively capture local and general patterns in Bitcoin trading volume data. Finally, the feedforward artificial neural network (FFNN) is implemented to generate Bitcoin final trading volume after having aggregated the forecasts from RBFNN and GRNN. In this regard, FFNN is executed to merge local and global forecasts in a nonlinear framework. Overall, our proposed ensemble predictive system reduced the forecasting errors by 18.81% and 62.86% when compared to its components RBFNN and GRNN, respectively. In addition, the ensemble system reduced the forecasting error by 90.49% when compared to a single FFNN used as a basic reference model. Thus, the empirical outcomes show that our proposed ensemble predictive model allows achieving an improvement in terms of forecasting. Regarding the practical results of this work, while being fast, applying the artificial neural networks to develop an ensemble predictive system to forecast Bitcoin daily trading volume is recommended to apply for addressing simultaneously local and global patterns used to characterize Bitcoin trading data. We conclude that the proposed artificial neural networks ensemble forecasting model is easy to implement and efficient for Bitcoin daily volume forecasting.","PeriodicalId":272860,"journal":{"name":"2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122182837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-task Offloading and Computational Resources Management in a Mobile Edge Computing Environment
Pub Date: 2020-11-24 | DOI: 10.1109/CloudTech49835.2020.9365903
Mohamed El Ghmary, Youssef Hmimz, T. Chanyour, Ali Ouacha, Mohammed Ouçamah Cherkaoui Malki
In Mobile Cloud Computing, Smart Mobile Devices (SMDs) and Cloud Computing are combined to create a new infrastructure that allows data processing and storage outside the device. The Internet of Things refers to the billions of physical devices that are connected to the Internet. With their rapid development, it is clear that requirements are largely driven by the need for autonomous devices that facilitate services for applications demanding rapid response times and flexible mobility. In this article, we study the management of computational resources and the trade-off between the energy consumed by an SMD and the processing time of its tasks. To this end, we define a system model and a problem formulation, and offer heuristic task-offloading solutions that jointly optimize the allocation of computing resources under limited energy and latency sensitivity. In addition, we use the residual energy of the SMD battery and the latency sensitivity of its tasks to define the weighting factor between energy consumption and processing time.
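The following minimal sketch illustrates the flavor of that trade-off: a weighting factor derived from residual battery level and latency sensitivity, then a per-task local-versus-offload decision minimizing the weighted cost. The weighting formula and all task and device numbers are assumptions for illustration, not the paper's model.

```python
# Hypothetical weighted energy/latency trade-off for offloading decisions.

def weight_factor(residual_battery: float, latency_sensitivity: float) -> float:
    """More weight on energy when the battery is low; more on time when the
    task is latency-sensitive. Both inputs normalized to [0, 1]."""
    w_energy = 1 - residual_battery
    w_time = latency_sensitivity
    total = (w_energy + w_time) or 1.0
    return w_energy / total            # alpha in [0, 1]; (1 - alpha) weighs time

def decide(task: dict, alpha: float) -> str:
    """Compare weighted cost of local execution vs. offloading to the edge."""
    local = alpha * task["e_local"] + (1 - alpha) * task["t_local"]
    offload = alpha * task["e_tx"] + (1 - alpha) * (task["t_tx"] + task["t_edge"])
    return "offload" if offload < local else "local"

alpha = weight_factor(residual_battery=0.2, latency_sensitivity=0.7)
tasks = [  # illustrative per-task energy (J) and time (s) estimates
    {"e_local": 0.9, "t_local": 1.2, "e_tx": 0.2, "t_tx": 0.3, "t_edge": 0.4},
    {"e_local": 0.1, "t_local": 0.2, "e_tx": 0.2, "t_tx": 0.5, "t_edge": 0.3},
]
for i, t in enumerate(tasks):
    print(f"task {i}: {decide(t, alpha)}")
```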
{"title":"Multi-task Offloading and Computational Resources Management in a Mobile Edge Computing Environment","authors":"Mohamed El Ghmary, Youssef Hmimz, T. Chanyour, Ali Ouacha, Mohammed Ouçamah Cherkaoui Malki","doi":"10.1109/CloudTech49835.2020.9365903","DOIUrl":"https://doi.org/10.1109/CloudTech49835.2020.9365903","url":null,"abstract":"In Mobile Cloud Computing, Smart Mobile Devices (SMDs) and Cloud Computing are combined to create a new infrastructure that allows data processing and storage outside the device. The Internet of Things refers to the billions of physical devices that are connected to the Internet. With the rapid development of these, it is clear that the requirements are largely based on the need for autonomous devices to facilitate the services required by applications that require rapid response time and flexible mobility. In this article, we study the management of computational resources and the trade-off between the consumed energy by an SMD and the processing time of its tasks. For this, we define a system model, a problem formulation and offer heuristic solutions for offloading tasks in order to jointly optimize the allocation of computing resources under limited energy and sensitive latency. In addition, we use the residual energy of the SMD battery and the sensitive latency of its tasks in defining the weighting factor of energy consumption and processing time.","PeriodicalId":272860,"journal":{"name":"2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129082066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Big Data Placement Strategy in Geographically Distributed Datacenters
Pub Date: 2020-11-24 | DOI: 10.1109/CloudTech49835.2020.9365881
L. Bouhouch, M. Zbakh, C. Tadonki
With the pervasiveness of "Big Data" together with the expansion of geographically distributed datacenters in the Cloud computing context, processing large-scale data applications has become a crucial issue. Indeed, finding the most efficient way of storing massive data across distributed locations is increasingly complex. Furthermore, the execution time of a given task that requires several datasets might be dominated by the cost of data migrations/exchanges, which depends on the initial placement of the input datasets over the set of datacenters in the Cloud and also on the dynamic data management strategy. In this paper, we propose a data placement strategy to improve workflow execution time through the reduction of the cost associated with data movements between geographically distributed datacenters, considering their characteristics such as storage capacity and read/write speeds. We formalize the overall problem and then propose a data placement algorithm structured into two phases. First, we compute the estimated transfer time to move all involved datasets from their respective locations to the one where the corresponding tasks are executed. Second, we apply a greedy algorithm to assign each dataset to the optimal datacenter with respect to the overall cost of data migrations. The heterogeneity of the datacenters and their characteristics (storage and bandwidth) are both taken into account. Our experiments are conducted using the CloudSim simulator. The obtained results show that our proposed strategy produces an efficient placement and actually reduces the data movement overhead compared to both a random assignment and a selected placement algorithm from the literature.
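A small sketch of the two-phase strategy, under illustrative assumptions (the cost model, capacities, and bandwidths are invented): phase one estimates the movement cost of a dataset for each candidate home datacenter, and phase two greedily places datasets, largest first, at the cheapest datacenter that still has capacity.

```python
# Hypothetical datacenters and datasets; units are illustrative only.
datacenters = {                     # capacity in GB, bandwidth in GB/s
    "dc1": {"capacity": 100, "bw": 1.0},
    "dc2": {"capacity": 60,  "bw": 0.5},
}
datasets = {                        # size in GB, and how often each DC's tasks read it
    "ds1": {"size": 40, "readers": {"dc1": 3, "dc2": 1}},
    "ds2": {"size": 50, "readers": {"dc1": 1, "dc2": 4}},
    "ds3": {"size": 30, "readers": {"dc1": 2, "dc2": 2}},
}

def movement_cost(ds: dict, home: str) -> float:
    """Phase 1: estimated time to ship ds to every other DC that reads it,
    bottlenecked here (as an assumption) by the reader's bandwidth."""
    return sum(ds["size"] / datacenters[dc]["bw"] * reads
               for dc, reads in ds["readers"].items() if dc != home)

free = {dc: spec["capacity"] for dc, spec in datacenters.items()}
placement = {}
# Phase 2: place the largest datasets first, each at its cheapest feasible DC.
for name, ds in sorted(datasets.items(), key=lambda kv: -kv[1]["size"]):
    feasible = [dc for dc in datacenters if free[dc] >= ds["size"]]
    best = min(feasible, key=lambda dc: movement_cost(ds, dc))
    placement[name] = best
    free[best] -= ds["size"]

print(placement)
```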
{"title":"A Big Data Placement Strategy in Geographically Distributed Datacenters","authors":"L. Bouhouch, M. Zbakh, C. Tadonki","doi":"10.1109/CloudTech49835.2020.9365881","DOIUrl":"https://doi.org/10.1109/CloudTech49835.2020.9365881","url":null,"abstract":"With the pervasiveness of the \"Big Data\" characteristic together with the expansion of geographically distributed datacenters in the Cloud computing context, processing large- scale data applications has become a crucial issue. Indeed, the task of finding the most efficient way of storing massive data across distributed locations is increasingly complex. Furthermore, the execution time of a given task that requires several datasets might be dominated by the cost of data migrations/exchanges, which depends on the initial placement of the input datasets over the set of datacenters in the Cloud and also on the dynamic data management strategy. In this paper, we propose a data placement strategy to improve the workflow execution time through the reduction of the cost associated to data movements between geographically distributed datacenters, considering their characteristics such as storage capacity and read/write speeds. We formalize the overall problem and then propose a data placement algorithm structured into two phases. First, we compute the estimated transfer time to move all involved datasets from their respective locations to the one where the corresponding tasks are executed. Second, we apply a greedy algorithm in order to assign each dataset to the optimal datacenter w.r.t the overall cost of data migrations. The heterogeneity of the datacenters together with their characteristics (storage and bandwidth) are both taken into account. Our experiments are conducted using Cloudsim simulator. The obtained results show that our proposed strategy produces an efficient placement and actually reduces the overheads of the data movement compared to both a random assignment and a selected placement algorithm from the literature.","PeriodicalId":272860,"journal":{"name":"2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)","volume":"47 12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129794931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine Learning for anomaly detection. Performance study considering anomaly distribution in an imbalanced dataset
Pub Date: 2020-11-24 | DOI: 10.1109/CloudTech49835.2020.9365887
S. E. Hajjami, Jamal Malki, M. Berrada, Bouziane Fourka
The continuous dematerialization of real-world data contributes greatly to the rapid growth of exchanged data. Anomaly detection is therefore becoming an increasingly important data analysis task, aiming to detect abnormal data that is of particular interest and may require action. Recent advances in artificial intelligence approaches, such as machine learning, are making an important breakthrough in this area. Typically, these techniques have been designed for balanced data sets or rely on certain assumptions about the distribution of data. However, real applications are instead confronted with an imbalanced data distribution, where normal data is present in large quantities and abnormal cases are generally very few, which makes anomaly detection similar to looking for a needle in a haystack. In this article, we develop an experimental setup for comparative analysis of two types of machine learning techniques as applied to anomaly detection systems, and we study their performance taking into account the anomaly distribution in an imbalanced dataset.
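As a sketch of the kind of comparison described (models and data are illustrative, not the paper's setup), the snippet below evaluates two learners on a synthetic, heavily imbalanced dataset using per-class precision and recall, which, unlike plain accuracy, do not reward always predicting "normal".

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic data with ~2% anomalies (class 1), standing in for a real dataset.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.98, 0.02],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for model in (LogisticRegression(max_iter=1000, class_weight="balanced"),
              RandomForestClassifier(class_weight="balanced", random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__)
    # Per-class precision/recall/F1 exposes performance on the rare class.
    print(classification_report(y_te, model.predict(X_te), digits=3))
```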
{"title":"Machine Learning for anomaly detection. Performance study considering anomaly distribution in an imbalanced dataset","authors":"S. E. Hajjami, Jamal Malki, M. Berrada, Bouziane Fourka","doi":"10.1109/CloudTech49835.2020.9365887","DOIUrl":"https://doi.org/10.1109/CloudTech49835.2020.9365887","url":null,"abstract":"The continuous dematerialization of real-world data greatly contributes to the important growing of the exchanged data. In this case, anomaly detection is increasingly becoming an important task of data analysis in order to detect abnormal data, which is of particular interest and may require action. Recent advances in artificial intelligence approaches, such as machine learning, are making an important breakthrough in this area. Typically, these techniques have been designed for balanced data sets or that have certain assumptions about the distribution of data. However, the real applications are rather confronted with an imbalanced data distribution, where normal data are present in large quantities and abnormal cases are generally very few. This makes anomaly detection similar to looking for the needle in a haystack. In this article, we develop an experimental setup for comparative analysis of two types of machine learning techniques in their application to anomaly detection systems. We study their performance taking into account anomaly distribution in an imbalanced dataset.","PeriodicalId":272860,"journal":{"name":"2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126503651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deployment of Containerized Deep Learning Applications in the Cloud
Pub Date: 2020-11-24 | DOI: 10.1109/CloudTech49835.2020.9365868
Rim Doukha, S. Mahmoudi, M. Zbakh, P. Manneback
In recent years, the use of Cloud computing environments has increased as a result of the various services offered by Cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure, etc.). Many companies are moving their data and applications to the Cloud to avoid complex configuration effort and to gain flexibility, easier maintenance, and resource availability. However, it is important to mention the challenges that developers may face when using a Cloud solution, such as the variation of application requirements (in terms of computation, memory, and energy consumption) over time, which makes deployment and migration a hard process. In fact, the deployment will depend not only on the application but also on the related services and hardware required for its proper functioning. In this paper, we propose a Cloud infrastructure for automatic deployment of applications using the services of Kubernetes, Docker, Ansible, and Slurm. Our architecture includes a script that deploys the application according to its resource requirements. Experiments are conducted with the analysis and deployment of Deep Learning (DL) applications, in particular image classification and object localization.
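A hedged sketch of such an automatic-deployment driver, limited to the Docker and kubectl pieces (the Ansible and Slurm parts are omitted): it builds and pushes an image, renders a Kubernetes Deployment sized from the application's declared resource needs, and applies it. The image name, file paths, and the requirements dict are hypothetical, not from the paper.

```python
import subprocess

# Hypothetical application descriptor driving the deployment.
app = {"name": "image-classifier", "image": "registry.local/image-classifier:v1",
       "cpu": "2", "memory": "4Gi", "replicas": 2}

# Render a Kubernetes Deployment manifest from the app's resource requirements.
manifest = f"""apiVersion: apps/v1
kind: Deployment
metadata:
  name: {app['name']}
spec:
  replicas: {app['replicas']}
  selector:
    matchLabels: {{app: {app['name']}}}
  template:
    metadata:
      labels: {{app: {app['name']}}}
    spec:
      containers:
      - name: {app['name']}
        image: {app['image']}
        resources:
          requests: {{cpu: "{app['cpu']}", memory: "{app['memory']}"}}
"""

subprocess.run(["docker", "build", "-t", app["image"], "."], check=True)
subprocess.run(["docker", "push", app["image"]], check=True)
with open("deployment.yaml", "w") as fh:
    fh.write(manifest)
subprocess.run(["kubectl", "apply", "-f", "deployment.yaml"], check=True)
```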
{"title":"Deployment of Containerized Deep Learning Applications in the Cloud","authors":"Rim Doukha, S. Mahmoudi, M. Zbakh, P. Manneback","doi":"10.1109/CloudTech49835.2020.9365868","DOIUrl":"https://doi.org/10.1109/CloudTech49835.2020.9365868","url":null,"abstract":"During the last years, the use of Cloud computing environment has increased as a result of the various services offered by Cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure, etc.). Many companies are moving their data and applications to the Cloud in order to tackle the complex configuration effort, for having more flexibility, maintenance, and resource availability. However, it is important to mention the challenges that developers may face when using a Cloud solution such as the variation of applications requirements (in terms of computation, memory and energy consumption) over time, which makes the deployment and migration a hard process. In fact, the deployment will not depend only on the application, but it will also rely on the related services and hardware for the proper functioning of the application. In this paper, we propose a Cloud infrastructure for automatic deployment of applications using the services of Kubernetes, Docker, Ansible and Slurm. Our architecture includes a script to deploy the application depending of its requirement needs. Experiments are conducted with the analysis and the deployment of Deep Learning (DL) applications and more particularly images classification and object localization.","PeriodicalId":272860,"journal":{"name":"2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)","volume":"105 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122675899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Image processing Player Acquisition and Tracking System for E-sports
Pub Date: 2020-11-24 | DOI: 10.1109/CloudTech49835.2020.9365885
Joaquim Vieira, N. Luwes
Studying one's enemy has been the key to winning wars for centuries, as can be seen in the historical successes of the Chinese, Greek, and Mongolian empires. Over the years, this dogma of studying one's enemy before the battle has made its way into modern life as well, not only in real battle but also on the electronic battlefield. In E-Sports, teams often study one another to find the strengths and weaknesses that can be exploited or avoided in their next encounters. This is done by observing prior matches to determine patterns in the way opponents operate, play, and react. The process is extremely time-consuming, as a single match lasts at least an hour. This paper demonstrates a program able to acquire and track player positions throughout a match in order to simplify and automate this process. The resulting tracking data can be used with machine learning and/or neural networks as part of a professional E-Sport prediction model.
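To give a concrete flavor of the acquisition-and-tracking step (this is an OpenCV-based sketch, not the paper's implementation), the snippet below template-matches player icons frame by frame in a recorded match and logs their positions. The video path, icon templates, and confidence threshold are assumptions.

```python
import cv2  # pip install opencv-python

cap = cv2.VideoCapture("replay.mp4")            # hypothetical recorded match
templates = {"player1": cv2.imread("icon_p1.png"),
             "player2": cv2.imread("icon_p2.png")}
tracks = {name: [] for name in templates}

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for name, tmpl in templates.items():
        # Normalized cross-correlation over the whole frame.
        res = cv2.matchTemplate(frame, tmpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)   # best match and its position
        if score > 0.8:                         # accept only confident matches
            tracks[name].append((frame_idx, loc))
    frame_idx += 1
cap.release()

print({name: positions[:3] for name, positions in tracks.items()})
```

A real system would add occlusion handling and restrict the search to the minimap region, but the position log above is already the kind of feature stream a prediction model could consume.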
{"title":"An Image processing Player Acquisition and Tracking System for E-sports","authors":"Joaquim Vieira, N. Luwes","doi":"10.1109/CloudTech49835.2020.9365885","DOIUrl":"https://doi.org/10.1109/CloudTech49835.2020.9365885","url":null,"abstract":"Studying one’s enemy has been the key to winning a war for centuries. This can be seen in the success of the historical success of the Chinese, Greek, and Mongolian empires. Over the years, these dogmas of studying one’s enemy before the battle have now made its way into our modern lives as well, not only in real battle but also on the electronic battlefield. In E-Sports, teams will often study one another to find the strengths and weaknesses that can be exploited and avoided in the next encounters. The manner in which this is done is through the observing of prior matches to determine patterns in the manner in which they operate, play and react. This process is extremely time-consuming as one match is a minimum of an hour in duration. This paper demonstrates a program able to acquire and track player positions throughout a match in order to simplify and automate this process. This tracking data can be used with machine learning and or neural networks as part of a professional E-Sport prediction model.","PeriodicalId":272860,"journal":{"name":"2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122905205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Review of Credit Card Fraud Detection Using Machine Learning Techniques
Pub Date: 2020-11-24 | DOI: 10.1109/CloudTech49835.2020.9365916
Nadia Boutaher, A. Elomri, N. Abghour, K. Moussaid, M. Rida
Big Data technologies concern several critical areas such as healthcare, finance, manufacturing, transport, and e-commerce. They play an indispensable role in the financial sector, especially within banking services, which are impacted by the digitalization of services and the growth of e-commerce transactions. The expansion of credit card use and the increasing number of fraudsters have generated various issues for the banking sector. Unfortunately, these issues hamper the performance of Fraud Control Systems (Fraud Detection Systems and Fraud Prevention Systems) and undermine the transparency of online payments. Financial institutions therefore aim to secure credit card transactions and allow their customers to use e-banking services safely and efficiently. To reach this goal, they try to develop more effective fraud detection techniques that can identify more fraudulent transactions and decrease fraud. The purpose of this article is to define the fundamental aspects of fraud detection, the current fraud detection systems, the issues and challenges of fraud in the banking sector, and the existing solutions based on machine learning techniques.
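As a small example of one technique such reviews typically cover (illustrative only; synthetic data, not a result from the paper), the snippet below rebalances a highly skewed "fraud" dataset with SMOTE before training a classifier, then reports precision and recall on the held-out minority class.

```python
from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic transactions with ~0.5% "fraud" labels.
X, y = make_classification(n_samples=10000, weights=[0.995, 0.005],
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# Oversample the minority (fraud) class in the training split only.
X_bal, y_bal = SMOTE(random_state=1).fit_resample(X_tr, y_tr)
clf = GradientBoostingClassifier(random_state=1).fit(X_bal, y_bal)
pred = clf.predict(X_te)
print("precision:", precision_score(y_te, pred), "recall:", recall_score(y_te, pred))
```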
{"title":"A Review of Credit Card Fraud Detection Using Machine Learning Techniques","authors":"Nadia Boutaher, A. Elomri, N. Abghour, K. Moussaid, M. Rida","doi":"10.1109/CloudTech49835.2020.9365916","DOIUrl":"https://doi.org/10.1109/CloudTech49835.2020.9365916","url":null,"abstract":"Big Data technologies concern several critical areas such as Healthcare, Finance, Manufacturing, Transport, and E-Commerce. Hence, they play an indispensable role in the financial sector, especially within the banking services which are impacted by the digitalization of services and the evolvement of e-commerce transactions. Therefore, the emergence of the credit card use and the increasing number of fraudsters have generated different issues that concern the banking sector. Unfortunately, these issues obstruct the performance of Fraud Control Systems (Fraud Detection Systems & Fraud Prevention Systems) and abuse the transparency of online payments. Thus, financial institutions aim to secure credit card transactions and allow their customers to use e-banking services safely and efficiently. To reach this goal, they try to develop more relevant fraud detection techniques that can identify more fraudulent transactions and decrease frauds. The purpose of this article is to define the fundamental aspects of fraud detection, the current systems of fraud detection, the issues and challenges of frauds related to the banking sector, and the existing solutions based on machine learning techniques.","PeriodicalId":272860,"journal":{"name":"2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)","volume":"153 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115168969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Cross-Layered Interference in Multichannel MAC of VANET
Pub Date: 2020-11-24 | DOI: 10.1109/CloudTech49835.2020.9365870
Fadlallah Chbib, L. Khoukhi, W. Fahs, Jamal Haydar, R. Khatoun
Recently, with the various issues experienced in street mobility, the presence and improvement of Vehicular Ad hoc Networks (VANETs) have become a necessity. VANETs have distinctly positive effects on vehicular safety, street security, traffic management, driver comfort, and passenger convenience. Roads have seen an increase in traffic congestion owing to growth in population density; as a result, people face drastic delays that have a radical impact on the economy, society, and the environment. In this paper, we propose a novel Signal to Interference Routing (SIR) protocol that aims to discover the best routing path between source and destination. SIR differs from existing protocols by proposing a new metric that depends on the signal-to-interference ratio at each Service Channel (SCH) of each node from source to destination. The proposed protocol aims to minimize interference in a vehicular environment, considering both the MAC and routing layers. We use the network simulator NS-2.35 to implement and evaluate the proposed protocol. The results show significant improvements in end-to-end delay, throughput, and packet delivery ratio.
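The sketch below conveys the flavor of an interference-aware path metric, assuming a max-min objective: choose the path whose worst per-hop SIR is largest, via a bottleneck (widest-path) variant of Dijkstra. The topology, SIR values, and the max-min formulation are illustrative assumptions, not necessarily the paper's exact metric.

```python
import heapq

links = {  # (u, v): assumed SIR in dB on the service channel between u and v
    ("S", "A"): 18, ("S", "B"): 9,
    ("A", "C"): 12, ("B", "C"): 20,
    ("C", "D"): 15, ("A", "D"): 7,
}
graph = {}
for (u, v), sir in links.items():
    graph.setdefault(u, []).append((v, sir))
    graph.setdefault(v, []).append((u, sir))

def best_sir_path(src, dst):
    """Widest-path Dijkstra: maximize the minimum SIR along the route."""
    heap = [(-float("inf"), src, [src])]    # max-heap on bottleneck SIR
    best = {}
    while heap:
        neg_bottleneck, node, path = heapq.heappop(heap)
        bottleneck = -neg_bottleneck
        if node == dst:
            return bottleneck, path
        if best.get(node, -1) >= bottleneck:
            continue                         # already reached with a wider path
        best[node] = bottleneck
        for nxt, sir in graph[node]:
            heapq.heappush(heap, (-min(bottleneck, sir), nxt, path + [nxt]))
    return None

print(best_sir_path("S", "D"))  # S-A-C-D (bottleneck 12) beats S-B-C-D (bottleneck 9)
```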
{"title":"A Cross-Layered Interference in Multichannel MAC of VANET","authors":"Fadlallah Chbib, L. Khoukhi, W. Fahs, Jamal Haydar, R. Khatoun","doi":"10.1109/CloudTech49835.2020.9365870","DOIUrl":"https://doi.org/10.1109/CloudTech49835.2020.9365870","url":null,"abstract":"Recently, with the various issues experienced in street mobility, the presence and improvement of the Vehicular Ad hoc Networks (VANETs) become a necessity. VANETs has distinctive positive effects on vehicular safety, street security, traffic the executives, drivers' comfort and passengers' convenience. Roads have encountered an increase in traffic congestion as a reason for the growth in population density. Thus, people face drastic delays that lead to a radical impact on the economics, society and environment. In this paper, we propose a novel Signal to Interference Routing (SIR) protocol that aims to discover the best routing path between source and destination. SIR differs from existing protocols by proposing a new metric that depends on Signal to Interference (SIR) at each Service Channel (SCH) of each node from source to destination. The proposed protocol aims to minimize interferences in vehicular environment considering both MAC and routing layers. We use Network Simulator NS-2.35 in order to implement and evaluate the proposed protocol. The results show significant improvements concerning an end to end delay, throughput and packet delivery ratio.","PeriodicalId":272860,"journal":{"name":"2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133949253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}