
Latest publications: 2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)

A re-engineering approach for extension of the Tourist Guide Knowledge Base
Asya Stojanova-Doycheva, T. Glushkova, E. Doychev, N. Moraliyska
The paper presents an extension of the knowledge base of the Tourist Guide with rich information about Bulgarian cultural, historical, and natural sites available in the databases created under the BECC project. To accomplish this task, the architecture of the Tourist Guide, which was created following the reference architecture of the Virtual-Physical Space (ViPS), is presented, and the restructuring process of the components in this architecture is described. In order to use the databases created in the BECC project, we had to re-engineer them on the basis of standards for the presentation of cultural and historical sites such as UNESCO and CCO (Cataloging Cultural Objects).
{"title":"A re-engineering approach for extension of the Tourist Guide Knowledge Base","authors":"Asya Stojanova-Doycheva, T. Glushkova, E. Doychev, N. Moraliyska","doi":"10.1109/CloudTech49835.2020.9365875","DOIUrl":"https://doi.org/10.1109/CloudTech49835.2020.9365875","url":null,"abstract":"The paper presents an extension of the knowledge-base of the Tourist Guide with the rich information about Bulgarian cultural, historical and natural sites, available in the databases created under the BECC project. To accomplish this task, the architecture of Tourist Guide that was created as the reference architecture of Virtual-Physical Space (ViPS) is presented, and the restructuring process of the components in this architecture is described. In order to use the created databases in BECC project, we had to re-engineered them on the basis of standards for the presentation of cultural and historical sites such as UNESCO and CCO (Cataloging Cultural Objects).","PeriodicalId":272860,"journal":{"name":"2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121364821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Secure, Efficient and Dynamic Data Search using Searchable Symmetric Encryption
M. S. Shaikh, Jatna Bavishi, Reema Patel
Fog computing, which works as a complement to cloud computing, is being developed to overcome issues of cloud computing such as latency in cases where data must be retrieved immediately. However, along with solving the latency problem, fog computing brings a different set of security issues than cloud computing. The storage and processing capabilities of fog computing are limited, and hence the security issues must be solved with these constrained resources. One of the problems faced when data is stored outside the internal network is loss of confidentiality. For this reason, the data must be encrypted. However, whenever a document needs to be searched, all the related documents must first be decrypted before the required document can be fetched. Within this time frame, the document data can be accessed by an unauthorized person. Therefore, this paper proposes a searchable symmetric encryption scheme in which the authorized members of an organization can search over the encrypted data and retrieve the required document, preserving the security and privacy of the data. In addition, the search complexity of the algorithm is low, making it suitable for a fog computing environment.
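The abstract gives only the high-level idea, so the following Python sketch illustrates the general searchable-symmetric-encryption pattern it builds on: an encrypted inverted index keyed by HMAC search tokens, with document identifiers encrypted under a separate key. All names, keys, and the tokenization are illustrative assumptions, not the authors' scheme.

```python
# Minimal searchable-symmetric-encryption sketch (illustrative only, not the paper's scheme).
import hmac, hashlib, os
from cryptography.fernet import Fernet  # pip install cryptography

index_key = os.urandom(32)                # secret key for deriving search tokens
enc = Fernet(Fernet.generate_key())       # secret key for encrypting document ids

def token(keyword: str) -> bytes:
    """Deterministic keyed token so the server can match entries without seeing the keyword."""
    return hmac.new(index_key, keyword.lower().encode(), hashlib.sha256).digest()

def build_index(docs: dict[str, str]) -> dict[bytes, list[bytes]]:
    """docs: {doc_id: plaintext}. Returns an encrypted index {token(word): [enc(doc_id), ...]}."""
    index: dict[bytes, list[bytes]] = {}
    for doc_id, text in docs.items():
        for word in set(text.lower().split()):
            index.setdefault(token(word), []).append(enc.encrypt(doc_id.encode()))
    return index

def search(index: dict[bytes, list[bytes]], keyword: str) -> list[str]:
    """The data owner computes the token; the server only ever sees tokens and ciphertexts."""
    return [enc.decrypt(c).decode() for c in index.get(token(keyword), [])]

if __name__ == "__main__":
    idx = build_index({"doc1": "fog computing security", "doc2": "cloud latency"})
    print(search(idx, "security"))   # ['doc1']
```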
{"title":"Secure, Efficient and Dynamic Data Search using Searchable Symmetric Encryption","authors":"M. S. Shaikh, Jatna Bavishi, Reema Patel","doi":"10.1109/CloudTech49835.2020.9365907","DOIUrl":"https://doi.org/10.1109/CloudTech49835.2020.9365907","url":null,"abstract":"Fog computing, which works complementary to cloud computing, is being developed to overcome the issue of cloud computing such as latency in the cases where the data is to be retrieved immediately. But, along with solving the problem of latency, fog computing brings along with it different set of security issues than cloud computing. The storage and processing capabilities of fog computing is limited and hence, the security issues must be solved with these constrained resources. One of the problems faced when the data is stored outside the internal network is loss of confidentiality. For this, the data must be encrypted. But, whenever document needs to be searched, all the related documents must be decrypted first and later the required document is to be fetched. Within this time frame, the document data can be accessed by an unauthorized person. So, in this paper, a searchable symmetric encryption scheme is proposed wherein the authorized members of an organization can search over the encrypted data and retrieve the required document in order to preserve the security and privacy of the data. Also, the searching complexity of the algorithm is much less so that it suitable to fog computing environment.","PeriodicalId":272860,"journal":{"name":"2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126973807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Artificial Neural Networks Based Ensemble System to Forecast Bitcoin Daily Trading Volume
S. Lahmiri, R. Saadé, Danielle Morin, F. Nebebe
Cryptocurrencies are digital assets that are gaining popularity and generating huge volumes of transactions on electronic platforms. We develop an ensemble predictive system based on artificial neural networks to forecast the daily trading volume of Bitcoin. Although ensemble forecasts are increasingly employed in various forecasting tasks, an intelligent predictive system for Bitcoin trading volume based on ensemble forecasts has not yet been addressed. Bitcoin trading volume is forecasted using two specific artificial neural networks: radial basis function neural networks (RBFNN) and generalized regression neural networks (GRNN), adopted to capture local and general patterns in Bitcoin trading volume data, respectively. Finally, a feedforward artificial neural network (FFNN) is implemented to generate the final trading volume forecast after aggregating the forecasts from the RBFNN and GRNN; in this way, the FFNN merges local and global forecasts in a nonlinear framework. Overall, our proposed ensemble predictive system reduced the forecasting errors by 18.81% and 62.86% compared to its components RBFNN and GRNN, respectively. In addition, the ensemble system reduced the forecasting error by 90.49% compared to a single FFNN used as a basic reference model. The empirical outcomes thus show that the proposed ensemble predictive model achieves an improvement in forecasting accuracy. In practical terms, and while remaining fast, applying artificial neural networks to build an ensemble predictive system for Bitcoin daily trading volume is recommended for simultaneously addressing the local and global patterns that characterize Bitcoin trading data. We conclude that the proposed artificial neural network ensemble forecasting model is easy to implement and efficient for Bitcoin daily volume forecasting.
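As a rough illustration of the ensemble architecture described above: scikit-learn has no RBFNN or GRNN estimators, so the sketch below uses SVR with an RBF kernel and KernelRidge as stand-ins for the two base learners, and an MLPRegressor as a stand-in for the FFNN combiner, all trained on synthetic data. It shows the stacking pattern only, not the paper's actual models or results.

```python
# Stacking-style ensemble sketch (stand-in models and synthetic data; illustrative only).
import numpy as np
from sklearn.svm import SVR
from sklearn.kernel_ridge import KernelRidge
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))                      # stand-in features, e.g. lagged volumes
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=600)

# Three-way split: base learners, combiner ("meta") training, and final evaluation.
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_meta, X_test, y_meta, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

base_local = SVR(kernel="rbf").fit(X_tr, y_tr)         # plays the RBFNN role (local patterns)
base_global = KernelRidge(alpha=1.0).fit(X_tr, y_tr)   # plays the GRNN role (global patterns)

def stack(X_):
    """Feature matrix for the combiner: one column per base forecast."""
    return np.column_stack([base_local.predict(X_), base_global.predict(X_)])

# The combiner mirrors the FFNN aggregation step: it learns to merge the two forecasts.
combiner = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
combiner.fit(stack(X_meta), y_meta)

print("ensemble MAE:", mean_absolute_error(y_test, combiner.predict(stack(X_test))))
```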
{"title":"An Artificial Neural Networks Based Ensemble System to Forecast Bitcoin Daily Trading Volume","authors":"S. Lahmiri, R. Saadé, Danielle Morin, F. Nebebe","doi":"10.1109/CloudTech49835.2020.9365913","DOIUrl":"https://doi.org/10.1109/CloudTech49835.2020.9365913","url":null,"abstract":"Cryptocurrencies are digital assets gaining popularity and generating huge transactions on electronic platforms. We develop an ensemble predictive system based on artificial neural networks to forecast Bitcoin daily trading volume level. Indeed, although ensemble forecasts are increasingly employed in various forecasting tasks, developing an intelligent predictive system for Bitcoin trading volume based on ensemble forecasts has not been addressed yet. Ensemble Bitcoin trading volume are forecasted using two specific artificial neural networks; namely, radial basis function neural networks (RBFNN) and generalized regression neural networks (GRNN). They are adopted to respectively capture local and general patterns in Bitcoin trading volume data. Finally, the feedforward artificial neural network (FFNN) is implemented to generate Bitcoin final trading volume after having aggregated the forecasts from RBFNN and GRNN. In this regard, FFNN is executed to merge local and global forecasts in a nonlinear framework. Overall, our proposed ensemble predictive system reduced the forecasting errors by 18.81% and 62.86% when compared to its components RBFNN and GRNN, respectively. In addition, the ensemble system reduced the forecasting error by 90.49% when compared to a single FFNN used as a basic reference model. Thus, the empirical outcomes show that our proposed ensemble predictive model allows achieving an improvement in terms of forecasting. Regarding the practical results of this work, while being fast, applying the artificial neural networks to develop an ensemble predictive system to forecast Bitcoin daily trading volume is recommended to apply for addressing simultaneously local and global patterns used to characterize Bitcoin trading data. We conclude that the proposed artificial neural networks ensemble forecasting model is easy to implement and efficient for Bitcoin daily volume forecasting.","PeriodicalId":272860,"journal":{"name":"2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122182837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Multi-task Offloading and Computational Resources Management in a Mobile Edge Computing Environment
Mohamed El Ghmary, Youssef Hmimz, T. Chanyour, Ali Ouacha, Mohammed Ouçamah Cherkaoui Malki
In Mobile Cloud Computing, Smart Mobile Devices (SMDs) and Cloud Computing are combined to create a new infrastructure that allows data processing and storage outside the device. The Internet of Things refers to the billions of physical devices that are connected to the Internet. With the rapid development of these devices, it is clear that the requirements are largely driven by the need for autonomous devices to support services required by applications that demand fast response times and flexible mobility. In this article, we study the management of computational resources and the trade-off between the energy consumed by an SMD and the processing time of its tasks. For this, we define a system model and a problem formulation, and we offer heuristic solutions for offloading tasks in order to jointly optimize the allocation of computing resources under limited energy and latency sensitivity. In addition, we use the residual energy of the SMD battery and the latency sensitivity of its tasks to define the weighting factor between energy consumption and processing time.
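The abstract does not spell out the weighting heuristic, so the sketch below is only one plausible reading of it: a task is either executed locally or offloaded to the edge based on a weighted energy/time cost, with the weight derived from the residual battery level. All device and channel parameters are assumed values, not the paper's model.

```python
# Illustrative offloading heuristic (assumed parameters; not the authors' exact formulation).
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float       # CPU cycles required
    data_bits: float    # input size to upload if offloaded
    deadline_s: float   # latency sensitivity (deadline in seconds)

# Hypothetical device/edge parameters.
F_LOCAL = 1e9        # local CPU frequency (Hz)
F_EDGE = 8e9         # edge-server CPU frequency (Hz)
K_LOCAL = 1e-27      # effective switched capacitance of the local CPU
RATE = 5e6           # uplink rate (bit/s)
P_TX = 0.5           # transmit power (W)

def decide(task: Task, battery_level: float) -> str:
    """battery_level in [0, 1]; a low battery pushes the weight toward saving energy."""
    alpha = 1.0 - battery_level                     # weight on energy vs. time
    t_loc = task.cycles / F_LOCAL
    e_loc = K_LOCAL * F_LOCAL**2 * task.cycles
    t_off = task.data_bits / RATE + task.cycles / F_EDGE
    e_off = P_TX * (task.data_bits / RATE)          # the device only pays for the upload
    cost_loc = alpha * e_loc + (1 - alpha) * t_loc / task.deadline_s
    cost_off = alpha * e_off + (1 - alpha) * t_off / task.deadline_s
    return "offload" if cost_off < cost_loc else "local"

print(decide(Task(cycles=2e9, data_bits=4e6, deadline_s=1.0), battery_level=0.2))  # -> offload
```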
{"title":"Multi-task Offloading and Computational Resources Management in a Mobile Edge Computing Environment","authors":"Mohamed El Ghmary, Youssef Hmimz, T. Chanyour, Ali Ouacha, Mohammed Ouçamah Cherkaoui Malki","doi":"10.1109/CloudTech49835.2020.9365903","DOIUrl":"https://doi.org/10.1109/CloudTech49835.2020.9365903","url":null,"abstract":"In Mobile Cloud Computing, Smart Mobile Devices (SMDs) and Cloud Computing are combined to create a new infrastructure that allows data processing and storage outside the device. The Internet of Things refers to the billions of physical devices that are connected to the Internet. With the rapid development of these, it is clear that the requirements are largely based on the need for autonomous devices to facilitate the services required by applications that require rapid response time and flexible mobility. In this article, we study the management of computational resources and the trade-off between the consumed energy by an SMD and the processing time of its tasks. For this, we define a system model, a problem formulation and offer heuristic solutions for offloading tasks in order to jointly optimize the allocation of computing resources under limited energy and sensitive latency. In addition, we use the residual energy of the SMD battery and the sensitive latency of its tasks in defining the weighting factor of energy consumption and processing time.","PeriodicalId":272860,"journal":{"name":"2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129082066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
A Big Data Placement Strategy in Geographically Distributed Datacenters
L. Bouhouch, M. Zbakh, C. Tadonki
With the pervasiveness of "Big Data" together with the expansion of geographically distributed datacenters in the Cloud computing context, processing large-scale data applications has become a crucial issue. Indeed, the task of finding the most efficient way of storing massive data across distributed locations is increasingly complex. Furthermore, the execution time of a given task that requires several datasets might be dominated by the cost of data migrations/exchanges, which depends on the initial placement of the input datasets over the set of datacenters in the Cloud and also on the dynamic data management strategy. In this paper, we propose a data placement strategy to improve workflow execution time through the reduction of the cost associated with data movements between geographically distributed datacenters, considering characteristics such as storage capacity and read/write speeds. We formalize the overall problem and then propose a data placement algorithm structured into two phases. First, we compute the estimated transfer time to move all involved datasets from their respective locations to the one where the corresponding tasks are executed. Second, we apply a greedy algorithm to assign each dataset to the optimal datacenter with respect to the overall cost of data migrations. The heterogeneity of the datacenters together with their characteristics (storage and bandwidth) are both taken into account. Our experiments are conducted using the CloudSim simulator. The obtained results show that our proposed strategy produces an efficient placement and actually reduces the overhead of data movement compared to both a random assignment and a selected placement algorithm from the literature.
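Following the two-phase outline in the abstract, a minimal sketch of the idea might look like the following, with assumed datacenter capacities, write speeds, and link bandwidths; the paper's actual cost model and CloudSim experiments are more detailed.

```python
# Two-phase placement sketch: (1) estimate transfer times, (2) greedily assign each dataset
# to the datacenter with the lowest estimated migration cost that still has enough storage.
datacenters = {                       # hypothetical datacenters
    "dc1": {"free_gb": 500, "write_mb_s": 200},
    "dc2": {"free_gb": 120, "write_mb_s": 400},
}
bandwidth_mb_s = {("dc1", "dc2"): 50, ("dc2", "dc1"): 50}   # inter-datacenter links

datasets = [                          # (name, size_gb, datacenter where its tasks run)
    ("d1", 100, "dc2"),
    ("d2", 300, "dc1"),
    ("d3", 80, "dc2"),
]

def transfer_time(size_gb, src, dst):
    """Phase 1: estimated seconds to move a dataset, bounded by link and write speed."""
    if src == dst:
        return 0.0
    eff = min(bandwidth_mb_s[(src, dst)], datacenters[dst]["write_mb_s"])
    return size_gb * 1024 / eff

placement = {}
# Phase 2 (greedy): place the largest datasets first so capacity is used where it matters most.
for name, size, task_dc in sorted(datasets, key=lambda d: -d[1]):
    candidates = [dc for dc, c in datacenters.items() if c["free_gb"] >= size]
    best = min(candidates, key=lambda dc: transfer_time(size, dc, task_dc))
    placement[name] = best
    datacenters[best]["free_gb"] -= size

print(placement)   # e.g. {'d2': 'dc1', 'd1': 'dc2', 'd3': 'dc1'}
```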
{"title":"A Big Data Placement Strategy in Geographically Distributed Datacenters","authors":"L. Bouhouch, M. Zbakh, C. Tadonki","doi":"10.1109/CloudTech49835.2020.9365881","DOIUrl":"https://doi.org/10.1109/CloudTech49835.2020.9365881","url":null,"abstract":"With the pervasiveness of the \"Big Data\" characteristic together with the expansion of geographically distributed datacenters in the Cloud computing context, processing large- scale data applications has become a crucial issue. Indeed, the task of finding the most efficient way of storing massive data across distributed locations is increasingly complex. Furthermore, the execution time of a given task that requires several datasets might be dominated by the cost of data migrations/exchanges, which depends on the initial placement of the input datasets over the set of datacenters in the Cloud and also on the dynamic data management strategy. In this paper, we propose a data placement strategy to improve the workflow execution time through the reduction of the cost associated to data movements between geographically distributed datacenters, considering their characteristics such as storage capacity and read/write speeds. We formalize the overall problem and then propose a data placement algorithm structured into two phases. First, we compute the estimated transfer time to move all involved datasets from their respective locations to the one where the corresponding tasks are executed. Second, we apply a greedy algorithm in order to assign each dataset to the optimal datacenter w.r.t the overall cost of data migrations. The heterogeneity of the datacenters together with their characteristics (storage and bandwidth) are both taken into account. Our experiments are conducted using Cloudsim simulator. The obtained results show that our proposed strategy produces an efficient placement and actually reduces the overheads of the data movement compared to both a random assignment and a selected placement algorithm from the literature.","PeriodicalId":272860,"journal":{"name":"2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)","volume":"47 12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129794931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Machine Learning for anomaly detection. Performance study considering anomaly distribution in an imbalanced dataset
S. E. Hajjami, Jamal Malki, M. Berrada, Bouziane Fourka
The continuous dematerialization of real-world data contributes greatly to the rapid growth of exchanged data. In this context, anomaly detection is increasingly becoming an important task of data analysis, aiming to detect abnormal data that is of particular interest and may require action. Recent advances in artificial intelligence approaches, such as machine learning, are making important breakthroughs in this area. Typically, these techniques have been designed for balanced datasets or rely on certain assumptions about the distribution of the data. Real applications, however, are confronted with an imbalanced data distribution, where normal data is present in large quantities and abnormal cases are generally very few. This makes anomaly detection similar to looking for a needle in a haystack. In this article, we develop an experimental setup for the comparative analysis of two types of machine learning techniques applied to anomaly detection systems. We study their performance taking into account the anomaly distribution in an imbalanced dataset.
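The abstract does not name the two technique families compared, so the sketch below merely illustrates one common pairing on an imbalanced synthetic dataset: a supervised classifier with class weighting versus an unsupervised isolation forest, both evaluated with per-class precision and recall.

```python
# Illustrative comparison on an imbalanced dataset (assumed setup, not the paper's experiments).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, IsolationForest
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# 1% anomalies, mimicking the imbalance discussed in the abstract.
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.99, 0.01], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Supervised: class_weight compensates for the rarity of the anomalous class.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))

# Unsupervised: IsolationForest flags outliers (-1) without using labels at training time.
iso = IsolationForest(contamination=0.01, random_state=0).fit(X_tr)
y_pred = (iso.predict(X_te) == -1).astype(int)
print(classification_report(y_te, y_pred, digits=3))
```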
{"title":"Machine Learning for anomaly detection. Performance study considering anomaly distribution in an imbalanced dataset","authors":"S. E. Hajjami, Jamal Malki, M. Berrada, Bouziane Fourka","doi":"10.1109/CloudTech49835.2020.9365887","DOIUrl":"https://doi.org/10.1109/CloudTech49835.2020.9365887","url":null,"abstract":"The continuous dematerialization of real-world data greatly contributes to the important growing of the exchanged data. In this case, anomaly detection is increasingly becoming an important task of data analysis in order to detect abnormal data, which is of particular interest and may require action. Recent advances in artificial intelligence approaches, such as machine learning, are making an important breakthrough in this area. Typically, these techniques have been designed for balanced data sets or that have certain assumptions about the distribution of data. However, the real applications are rather confronted with an imbalanced data distribution, where normal data are present in large quantities and abnormal cases are generally very few. This makes anomaly detection similar to looking for the needle in a haystack. In this article, we develop an experimental setup for comparative analysis of two types of machine learning techniques in their application to anomaly detection systems. We study their performance taking into account anomaly distribution in an imbalanced dataset.","PeriodicalId":272860,"journal":{"name":"2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126503651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Deployment of Containerized Deep Learning Applications in the Cloud
Rim Doukha, S. Mahmoudi, M. Zbakh, P. Manneback
During the last years, the use of Cloud computing environments has increased as a result of the various services offered by Cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure, etc.). Many companies are moving their data and applications to the Cloud in order to reduce the complex configuration effort and to gain flexibility, easier maintenance, and better resource availability. However, it is important to mention the challenges that developers may face when using a Cloud solution, such as the variation of application requirements (in terms of computation, memory, and energy consumption) over time, which makes deployment and migration a hard process. In fact, the deployment will not depend only on the application; it will also rely on the related services and hardware for the proper functioning of the application. In this paper, we propose a Cloud infrastructure for the automatic deployment of applications using the services of Kubernetes, Docker, Ansible, and Slurm. Our architecture includes a script that deploys the application according to its requirements. Experiments are conducted with the analysis and deployment of Deep Learning (DL) applications, and more particularly image classification and object localization.
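As an illustration of the Kubernetes part of such a pipeline, the sketch below uses the official Kubernetes Python client to create a Deployment for a containerized DL service. The image name, resource limits, and namespace are hypothetical, and the Docker, Ansible, and Slurm steps of the paper's infrastructure are not shown.

```python
# Sketch of programmatic deployment with the Kubernetes Python client (pip install kubernetes).
from kubernetes import client, config

def deploy_dl_app(name: str = "dl-classifier", image: str = "registry.example.com/dl-app:latest"):
    config.load_kube_config()                       # authenticate using the local kubeconfig
    container = client.V1Container(
        name=name,
        image=image,
        resources=client.V1ResourceRequirements(limits={"cpu": "2", "memory": "4Gi"}),
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": name}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": name}),
        template=template,
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=name),
        spec=spec,
    )
    # Submit the Deployment object to the cluster in the default namespace.
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

if __name__ == "__main__":
    deploy_dl_app()
```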
{"title":"Deployment of Containerized Deep Learning Applications in the Cloud","authors":"Rim Doukha, S. Mahmoudi, M. Zbakh, P. Manneback","doi":"10.1109/CloudTech49835.2020.9365868","DOIUrl":"https://doi.org/10.1109/CloudTech49835.2020.9365868","url":null,"abstract":"During the last years, the use of Cloud computing environment has increased as a result of the various services offered by Cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure, etc.). Many companies are moving their data and applications to the Cloud in order to tackle the complex configuration effort, for having more flexibility, maintenance, and resource availability. However, it is important to mention the challenges that developers may face when using a Cloud solution such as the variation of applications requirements (in terms of computation, memory and energy consumption) over time, which makes the deployment and migration a hard process. In fact, the deployment will not depend only on the application, but it will also rely on the related services and hardware for the proper functioning of the application. In this paper, we propose a Cloud infrastructure for automatic deployment of applications using the services of Kubernetes, Docker, Ansible and Slurm. Our architecture includes a script to deploy the application depending of its requirement needs. Experiments are conducted with the analysis and the deployment of Deep Learning (DL) applications and more particularly images classification and object localization.","PeriodicalId":272860,"journal":{"name":"2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)","volume":"105 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122675899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
An Image processing Player Acquisition and Tracking System for E-sports
Joaquim Vieira, N. Luwes
Studying one's enemy has been the key to winning a war for centuries. This can be seen in the historical success of the Chinese, Greek, and Mongolian empires. Over the years, these dogmas of studying one's enemy before the battle have made their way into our modern lives as well, not only in real battle but also on the electronic battlefield. In E-Sports, teams will often study one another to find the strengths and weaknesses that can be exploited and avoided in the next encounters. This is done by observing prior matches to determine patterns in the manner in which teams operate, play, and react. This process is extremely time-consuming, as one match lasts a minimum of an hour. This paper demonstrates a program able to acquire and track player positions throughout a match in order to simplify and automate this process. The tracking data can be used with machine learning and/or neural networks as part of a professional E-Sports prediction model.
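The abstract does not describe the acquisition method itself; as one plausible building block, the sketch below uses OpenCV template matching to extract per-frame player positions from a recorded match. The file names and threshold are hypothetical.

```python
# Minimal template-matching tracker (illustrative; not the authors' actual pipeline).
import cv2

TEMPLATE_PATH = "player_icon.png"    # hypothetical minimap icon of a player
VIDEO_PATH = "match_recording.mp4"   # hypothetical recorded match
THRESHOLD = 0.8                      # minimum normalized correlation to accept a detection

template = cv2.imread(TEMPLATE_PATH, cv2.IMREAD_GRAYSCALE)
cap = cv2.VideoCapture(VIDEO_PATH)

positions = []                       # (frame_index, x, y) trajectory of the tracked player
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    if max_val >= THRESHOLD:
        positions.append((frame_idx, max_loc[0], max_loc[1]))
    frame_idx += 1

cap.release()
print(f"tracked {len(positions)} frames")   # positions can feed an ML prediction model
```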
{"title":"An Image processing Player Acquisition and Tracking System for E-sports","authors":"Joaquim Vieira, N. Luwes","doi":"10.1109/CloudTech49835.2020.9365885","DOIUrl":"https://doi.org/10.1109/CloudTech49835.2020.9365885","url":null,"abstract":"Studying one’s enemy has been the key to winning a war for centuries. This can be seen in the success of the historical success of the Chinese, Greek, and Mongolian empires. Over the years, these dogmas of studying one’s enemy before the battle have now made its way into our modern lives as well, not only in real battle but also on the electronic battlefield. In E-Sports, teams will often study one another to find the strengths and weaknesses that can be exploited and avoided in the next encounters. The manner in which this is done is through the observing of prior matches to determine patterns in the manner in which they operate, play and react. This process is extremely time-consuming as one match is a minimum of an hour in duration. This paper demonstrates a program able to acquire and track player positions throughout a match in order to simplify and automate this process. This tracking data can be used with machine learning and or neural networks as part of a professional E-Sport prediction model.","PeriodicalId":272860,"journal":{"name":"2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122905205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
A Review of Credit Card Fraud Detection Using Machine Learning Techniques
Nadia Boutaher, A. Elomri, N. Abghour, K. Moussaid, M. Rida
Big Data technologies concern several critical areas such as Healthcare, Finance, Manufacturing, Transport, and E-Commerce. Hence, they play an indispensable role in the financial sector, especially within banking services, which are impacted by the digitalization of services and the growth of e-commerce transactions. The emergence of credit card use and the increasing number of fraudsters have generated different issues that concern the banking sector. Unfortunately, these issues obstruct the performance of Fraud Control Systems (Fraud Detection Systems & Fraud Prevention Systems) and undermine the transparency of online payments. Thus, financial institutions aim to secure credit card transactions and allow their customers to use e-banking services safely and efficiently. To reach this goal, they try to develop more relevant fraud detection techniques that can identify more fraudulent transactions and decrease fraud. The purpose of this article is to define the fundamental aspects of fraud detection, the current fraud detection systems, the issues and challenges of fraud in the banking sector, and the existing solutions based on machine learning techniques.
{"title":"A Review of Credit Card Fraud Detection Using Machine Learning Techniques","authors":"Nadia Boutaher, A. Elomri, N. Abghour, K. Moussaid, M. Rida","doi":"10.1109/CloudTech49835.2020.9365916","DOIUrl":"https://doi.org/10.1109/CloudTech49835.2020.9365916","url":null,"abstract":"Big Data technologies concern several critical areas such as Healthcare, Finance, Manufacturing, Transport, and E-Commerce. Hence, they play an indispensable role in the financial sector, especially within the banking services which are impacted by the digitalization of services and the evolvement of e-commerce transactions. Therefore, the emergence of the credit card use and the increasing number of fraudsters have generated different issues that concern the banking sector. Unfortunately, these issues obstruct the performance of Fraud Control Systems (Fraud Detection Systems & Fraud Prevention Systems) and abuse the transparency of online payments. Thus, financial institutions aim to secure credit card transactions and allow their customers to use e-banking services safely and efficiently. To reach this goal, they try to develop more relevant fraud detection techniques that can identify more fraudulent transactions and decrease frauds. The purpose of this article is to define the fundamental aspects of fraud detection, the current systems of fraud detection, the issues and challenges of frauds related to the banking sector, and the existing solutions based on machine learning techniques.","PeriodicalId":272860,"journal":{"name":"2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)","volume":"153 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115168969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
A Cross-Layered Interference in Multichannel MAC of VANET
Fadlallah Chbib, L. Khoukhi, W. Fahs, Jamal Haydar, R. Khatoun
Recently, with the various issues experienced in street mobility, the presence and improvement of Vehicular Ad hoc Networks (VANETs) have become a necessity. VANETs have distinct positive effects on vehicular safety, street security, traffic management, drivers' comfort, and passengers' convenience. Roads have seen an increase in traffic congestion as a result of the growth in population density; thus, people face drastic delays that have a radical impact on the economy, society, and the environment. In this paper, we propose a novel Signal to Interference Routing (SIR) protocol that aims to discover the best routing path between source and destination. SIR differs from existing protocols by proposing a new metric that depends on the Signal to Interference ratio at each Service Channel (SCH) of each node from source to destination. The proposed protocol aims to minimize interference in the vehicular environment, considering both the MAC and routing layers. We use the Network Simulator NS-2.35 to implement and evaluate the proposed protocol. The results show significant improvements in end-to-end delay, throughput, and packet delivery ratio.
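As a small illustration of the metric the abstract describes, the sketch below computes a per-service-channel signal-to-interference ratio for candidate next hops and forwards through the neighbor whose best channel offers the highest value. The power values are hypothetical and the MAC-layer scheduling of the full protocol is not modeled.

```python
# Illustrative per-channel SIR computation and next-hop selection (hypothetical values).
def sir(signal_mw: float, interference_mw: list[float]) -> float:
    """Signal-to-interference ratio on one service channel (linear scale)."""
    total_interference = sum(interference_mw) or 1e-12   # avoid division by zero
    return signal_mw / total_interference

# For each candidate next hop: (received signal power, interferer powers) per SCH.
neighbors = {
    "veh_A": {"sch1": (2.0, [0.2, 0.1]), "sch2": (1.5, [0.05])},
    "veh_B": {"sch1": (1.0, [0.5]),      "sch2": (2.5, [0.4, 0.3])},
}

# A node keeps, per neighbor, the SIR of its best channel and forwards through the neighbor
# offering the highest value, so the selected path avoids the most interference.
best_next_hop = max(
    neighbors,
    key=lambda n: max(sir(sig, interf) for sig, interf in neighbors[n].values()),
)
print("forward via", best_next_hop)   # -> forward via veh_A
```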
{"title":"A Cross-Layered Interference in Multichannel MAC of VANET","authors":"Fadlallah Chbib, L. Khoukhi, W. Fahs, Jamal Haydar, R. Khatoun","doi":"10.1109/CloudTech49835.2020.9365870","DOIUrl":"https://doi.org/10.1109/CloudTech49835.2020.9365870","url":null,"abstract":"Recently, with the various issues experienced in street mobility, the presence and improvement of the Vehicular Ad hoc Networks (VANETs) become a necessity. VANETs has distinctive positive effects on vehicular safety, street security, traffic the executives, drivers' comfort and passengers' convenience. Roads have encountered an increase in traffic congestion as a reason for the growth in population density. Thus, people face drastic delays that lead to a radical impact on the economics, society and environment. In this paper, we propose a novel Signal to Interference Routing (SIR) protocol that aims to discover the best routing path between source and destination. SIR differs from existing protocols by proposing a new metric that depends on Signal to Interference (SIR) at each Service Channel (SCH) of each node from source to destination. The proposed protocol aims to minimize interferences in vehicular environment considering both MAC and routing layers. We use Network Simulator NS-2.35 in order to implement and evaluate the proposed protocol. The results show significant improvements concerning an end to end delay, throughput and packet delivery ratio.","PeriodicalId":272860,"journal":{"name":"2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech)","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133949253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2