Towards Timely, Resource-Efficient Analyses Through Spatially-Aware Constructs within Spark
Pub Date: 2020-12-01 DOI: 10.1109/UCC48980.2020.00024
Daniel Rammer, S. Pallickara, S. Pallickara
Across several domains there has been substantial growth in data volumes, and a majority of the generated data are geotagged. These data include a wealth of information that can inform insights, planning, and decision-making. The proliferation of open-source analytical engines has democratized access to tools and processing frameworks for analyzing data. However, several of these engines lack streamlined support for spatial data wrangling and processing. Here, we present our language-agnostic methodology for effective analyses over voluminous spatiotemporal datasets using Spark. In particular, we introduce support for spatial data processing within the foundational constructs underpinning the development of Spark programs: DataFrames, Datasets, and RDDs. Our empirical benchmarks demonstrate the suitability of our methodology; in contrast to alternative distributed spatial analytics frameworks, we achieve over a 2x speed-up for spatial range queries. Our methodology also makes effective use of resources, reducing disk I/O by a factor of 18, network I/O by 5 orders of magnitude, and peak memory utilization by 58% for the same set of analytic tasks.
{"title":"Towards Timely, Resource-Efficient Analyses Through Spatially-Aware Constructs within Spark","authors":"Daniel Rammer, S. Pallickara, S. Pallickara","doi":"10.1109/UCC48980.2020.00024","DOIUrl":"https://doi.org/10.1109/UCC48980.2020.00024","url":null,"abstract":"Across several domains there has been a substantial growth in data volumes. A majority of the generated data are geotagged. This data includes a wealth of information that can inform insights, planning, and decision-making. The proliferation of open-source analytical engines has democratized access to tools and processing frameworks to analyze data. However, several of the analytical engines do not include streamlined support for spatial data wrangling and processing. Here, we present our language-agnostic methodology for effective analyses over voluminous spatiotemporal datasets using Spark. In particular, we introduce support for spatial data processing within the foundational constructs underpinning development of Spark programs DataFrames, Datasets, and RDDs. Our empirical benchmarks demonstrate the suitability of our methodology; in contrast to alternative distribution spatial analytics frameworks, we achieve over 2x speed-up for spatial range queries. Our methodology also makes effective utilization of resources by reducing disk I/O by a factor of 18, network I/O by 5 orders of magnitude, and peak memory utilization by 58% for the same set of analytic tasks.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122820068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Misconfiguration Discovery with Principal Component Analysis for Cloud-Native Services
Pub Date: 2020-12-01 DOI: 10.1109/UCC48980.2020.00045
Alif Akbar Pranata, Olivier Barais, Johann Bourcier, L. Noirie
Cloud applications and services have significantly increased the importance of system and service configuration activities. These activities include updating (i) the services themselves, (ii) their dependencies on third parties, (iii) their configurations, (iv) the configuration of the execution environment, and (v) network configurations. The high frequency of updates results in significant configuration complexity that can lead to failures or performance drops. To mitigate these risks, service providers rely extensively on testing techniques, such as metamorphic testing, to detect these failures before moving to production. However, the development and maintenance of these tests are costly, especially the oracle, which must determine whether a system's performance remains within acceptable boundaries. This paper explores the use of a learning method, Principal Component Analysis (PCA), to learn acceptable performance metrics for cloud-native services and identify a metamorphic relationship between the nominal service behavior and the values of these metrics. We investigate the following research question: is it possible to combine metamorphic testing with learning methods over service monitoring data to detect error-prone reconfigurations before moving to production? Our approach removes the developers' burden of defining a specific oracle for detecting these configuration issues. For validation, we applied this proposal to a distributed media streaming application whose authentication is managed by an external identity and access management service. This application illustrates both the heterogeneity of the technologies used to build this type of service and its large configuration space. Our proposal demonstrated the ability to identify error-prone reconfigurations using PCA.
{"title":"Misconfiguration Discovery with Principal Component Analysis for Cloud-Native Services","authors":"Alif Akbar Pranata, Olivier Barais, Johann Bourcier, L. Noirie","doi":"10.1109/UCC48980.2020.00045","DOIUrl":"https://doi.org/10.1109/UCC48980.2020.00045","url":null,"abstract":"Cloud applications and services have significantly increased the importance of system and service configuration activities. These activities include updating (i) these services, (ii) their dependencies on third parties, (iii) their configurations, (iv) the configuration of the execution environment, (v) network configurations. The high frequency of updates results in significant configuration complexity that can lead to failures or performance drops. To mitigate these risks, service providers extensively rely on testing techniques, such as metamorphic testing, to detect these failures before moving to production. However, the development and maintenance of these tests are costly, especially the oracle, which must determine whether a system’s performance remains within acceptable boundaries. This paper explores the use of a learning method called Principal Component Analysis (PCA) to learn about acceptable performance metrics on cloudnative services and identify a metamorphic relationship between the nominal service behavior and the value of these metrics. We investigate the following research question: Is it possible to combine the metamorphic testing technique with learning methods on service monitoring data to detect error-prone reconfigurations before moving to production? We remove the developers’ burden to define a specific oracle in detecting these configuration issues. For validation, we applied this proposal on a distributed media streaming application whose authentication was managed by an external identity and access management services. This application illustrates both the heterogeneity of the technologies used to build this type of service and its large configuration space. Our proposal demonstrated the ability to identify error-prone reconfigurations using PCA.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131504344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DDoS detection and defense mechanism for SDN controllers with K-Means
Pub Date: 2020-12-01 DOI: 10.1109/UCC48980.2020.00062
Jie Cui, Jing Zhang, Jiantao He, Hong Zhong, Yao Lu
Software-defined networks (SDNs) are a key part of next-generation networks owing to the high programmability and agility that traditional networks lack. However, the SDN controller is vulnerable to Distributed Denial-of-Service (DDoS) attacks: once the controller becomes unavailable due to such an attack, all real-time services go down immediately. Since a key advantage of SDN is its ability to process massive amounts of network data quickly, a real-time detection algorithm is needed to reduce the impact of an attack. To ensure the security of both users and the SDN, we propose a detection and defense mechanism against DDoS attacks in SDN environments. Detection is based on imbalance in the traffic distribution, which can be identified by a clustering algorithm such as K-Means. Furthermore, we use a Packet_IN message register to filter malicious packets, and we experimentally evaluate the performance of our scheme in terms of detection accuracy, defense effect, communication delay, and packet loss rate. The results show that our detection method adapts to attacks of different scales and types while minimizing the decline in quality of service.
{"title":"DDoS detection and defense mechanism for SDN controllers with K-Means","authors":"Jie Cui, Jing Zhang, Jiantao He, Hong Zhong, Yao Lu","doi":"10.1109/UCC48980.2020.00062","DOIUrl":"https://doi.org/10.1109/UCC48980.2020.00062","url":null,"abstract":"Software-defined networks (SDNs) are key parts of the next generation networks owing to their high programmability and agility that traditional networks lack. However, the SDN controller is vulnerable to Distributed Denial-of-Service (DDoS) attacks. Once the SDN controller was unavailable due to the DDoS attack, all real-time services will be down immediately. Since the advantage of SDN is to process massive network data much faster, we need a real-time detecting algorithm to reduce the impact caused by the attack. To ensure the security of both the users and the SDN, we proposed a detection and defense mechanism against DDoS attacks in Software-defined networking (SDN) environments. The implementation of detection was based on the unbalance in the traffic distribution. The traffic unbalance can be detected by a clustering algorithm such as the K-Means algorithm. Furthermore, we used a Packet_IN message register to filter malicious packets and experimentally evaluated the performance of our scheme in terms of detection accuracy, defense effect, communication delay, and packet loss rate. The results show that our detection method is adaptable to defend against attacks of different scales and types and ensures the least possible decline in the quality of services.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128530725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hathi: An MCDM-based Approach to Capacity Planning for Cloud-hosted DBMS
Pub Date: 2020-12-01 DOI: 10.1109/UCC48980.2020.00033
Jörg Domaschka, Simon Volpert, Daniel Seybold
The evolution of distributed Database Management Systems (DBMSs) has led to heterogeneity in DBMS technologies. In particular, DBMSs applying a shared-nothing approach enable distributed operation and support fine-grained configuration of distribution characteristics such as replication degree and consistency. Operating such DBMSs on IaaS clouds therefore leads to a large configuration space involving different cloud providers, cloud resources, and pricing models. The selection of a specific configuration impacts non-functional features such as performance, availability, and consistency, as well as the cost of the deployment. Consequently, these need to be traded off against each other to find a suitable configuration that satisfies both technical and operational aspects. Yet, due to the strong interdependencies between non-functional features and the large number of DBMSs, configuration options, and cloud providers, a manual analysis and comparison is not feasible. In this paper, we present Hathi, an evaluation-driven Multi-Criteria Decision Making (MCDM) framework for planning cloud-hosted distributed DBMSs. Given DBMS configurations, workloads, and cloud offers, Hathi automatically performs experiments and evaluates their results, which are then matched against a list of user-defined preferences using an MCDM algorithm. Our evaluation shows that Hathi is capable of performing large-scale evaluation scenarios involving multiple DBMSs in various cluster sizes, cloud providers, and cloud offers. Hathi weights the resulting data and derives deployment recommendations with respect to throughput, latency, cost, consistency, availability, and stability.
{"title":"Hathi: An MCDM-based Approach to Capacity Planning for Cloud-hosted DBMS","authors":"Jörg Domaschka, Simon Volpert, Daniel Seybold","doi":"10.1109/UCC48980.2020.00033","DOIUrl":"https://doi.org/10.1109/UCC48980.2020.00033","url":null,"abstract":"The evolution of distributed Database Management Systems (DBMSs) has led to heterogeneity in DBMS technologies. Particularly DBMSs applying a shared-nothing approach enable distributed operation and support fine-grained configurations of distribution characteristics such as replication degree and consistency. Overall, the operation of such DBMSs on IaaS clouds leads to a large configuration space involving different cloud providers, cloud resources and pricing models.The selection of a specific configuration impacts nonfunctional features such as performance, availability, consistency, but also costs of the deployment. In consequence, these need to be traded-off against each other and a suitable configuration needs to be found, satisfying technical and operational aspects. Yet, due to the strong interdependency between different non-functional features as well as the large number of DBMSs, configuration options, and cloud providers, a manual analysis and comparison is not possible.In this paper, we present Hathi, an evaluation-driven Multi Criteria Decision Making (MCDM) framework for planning of cloud-hosted distributed DBMS. By specifying DBMS configurations, workloads, and cloud offers, Hathi automatically performs experiments and evaluates their results. These are then matched against a list of user-defined preferences using an MCDM algorithm.Our evaluation shows that Hathi is able of performing largescale evaluation scenarios involving multiple DBMS in various cluster sizes, cloud providers, and cloud offers. Hathi can weight the resulting data and derives deployment recommendations with respect to throughput, latency, cost, consistency, availability, and stability.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123502152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Approach Adopting Ethereum as a Wallet Domain Name System within the Economy of Things Context
Pub Date: 2020-12-01 DOI: 10.1109/UCC48980.2020.00036
Bruno Machado Agostinho, Fellipe Bratti Pasini, F. Gomes, A. R. Pinto, M. Dantas
Interest in blockchain technologies has been growing in recent years. Decentralization, scalability, and data integrity are characteristics that can solve problems in different application fields. However, some issues need to be addressed to facilitate the use of these systems. The number of users increases every day, and the possibility of using multiple cryptocurrencies, each with several wallets, raises a question: how can all of these addresses be managed? This work proposes a novel architecture, named Wallet Domain Name System (WDNS), to handle several wallets and contracts across different blockchains. Following a DNS-like approach on the Ethereum network, WDNS uses smart contracts to store and resolve domains and supports multiple user-managed subdomains. Our work provides an open and free data architecture to which any person or system can connect and from which they can consume data. Initial tests showed an average transaction time of almost 15 seconds and a price of 3.30 USD plus 0.0012 USD per character for domain requests, as well as a fixed cost of 0.71 USD plus 1 USD per synchronized instruction. Comparing the proposed domain price with average renewal prices for internet domains demonstrates the feasibility of our proposal.
{"title":"An Approach Adopting Ethereum as a Wallet Domain Name System within the Economy of Things Context","authors":"Bruno Machado Agostinho, Fellipe Bratti Pasini, F. Gomes, A. R. Pinto, M. Dantas","doi":"10.1109/UCC48980.2020.00036","DOIUrl":"https://doi.org/10.1109/UCC48980.2020.00036","url":null,"abstract":"The interest in blockchain technologies has been growing in the last years. Decentralization, scalability, and data integrity are characteristics that can solve problems in different application fields. However, some issues need to be addressed, aiming to facilitate using these systems. Every day the number of users increases, and the possibility of using multiple cryptocurrencies, with several wallets in each one, also brought an issue: How to manage all these addresses? This work proposes a novel architecture, named Wallet Domain Name System (WDNS), to handle several wallets and contracts in different blockchains. Using the Ethereum network to develop a DNS approach, the WDNS uses smart contracts to store and resolve domains and enable multiple subdomains that are managed by the users. Our work provides an open and free data architecture where any person and system can connect and consume. The initial tests showed an average transaction time of almost 15 seconds and a price of 3.30 USD plus 0.0012 USD per character for the domain requests. Also, the tests showed a fixed cost of 0.71 USD plus 1 USD per each synchronized instruction. Comparing the proposed domain price and the average renewal prices for internet domains makes it possible to ensure our proposal’s feasibility.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"8 Suppl 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121508270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud Energy Micro-Moment Data Classification: A Platform Study
Pub Date: 2020-10-16 DOI: 10.1109/UCC48980.2020.00066
A. Alsalemi, Ayman Al-Kababji, Yassine Himeur, F. Bensaali, A. Amira
Energy efficiency is a crucial factor in the well-being of our planet. In parallel, Machine Learning (ML) plays an instrumental role in automating our lives and creating convenient workflows for improving behavior. Analyzing energy behavior can therefore help identify weak points and pave the way towards better interventions. Moving towards higher performance, cloud platforms can assist researchers in conducting classification trials that require high computational power. Under the larger umbrella of the Consumer Engagement Towards Energy Saving Behavior by means of Exploiting Micro Moments and Mobile Recommendation Systems (EM)3 framework, we aim to influence consumers' behavioral change by improving their awareness of power consumption. In this paper, common cloud artificial intelligence platforms are benchmarked and compared for micro-moment classification. Amazon Web Services, Google Cloud Platform, Google Colab, and Microsoft Azure Machine Learning are employed on simulated and real energy consumption datasets using KNN, DNN, and SVM classifiers. The selected cloud platforms show strong and relatively close performance, although the nature of some algorithms limits training performance.
{"title":"Cloud Energy Micro-Moment Data Classification: A Platform Study","authors":"A. Alsalemi, Ayman Al-Kababji, Yassine Himeur, F. Bensaali, A. Amira","doi":"10.1109/UCC48980.2020.00066","DOIUrl":"https://doi.org/10.1109/UCC48980.2020.00066","url":null,"abstract":"Energy efficiency is a crucial factor in the wellbeing of our planet. In parallel, Machine Learning (ML) plays an instrumental role in automating our lives and creating convenient workflows for enhancing behavior. So, analyzing energy behavior can help understand weak points and lay the path towards better interventions. Moving towards higher performance, cloud platforms can assist researchers in conducting classification trials that need high computational power. Under the larger umbrella of the Consumer Engagement Towards Energy Saving Behavior by means of Exploiting Micro Moments and Mobile Recommendation Systems (EM)3 framework, we aim to influence consumers’ behavioral change via improving their power consumption consciousness. In this paper, common cloud artificial intelligence platforms are benchmarked and compared for micromoment classification. Amazon Web Services, Google Cloud Platform, Google Colab, and Microsoft Azure Machine Learning are employed on simulated and real energy consumption datasets. The KNN, DNN, and SVM classifiers have been employed. Superb performance has been observed in the selected cloud platforms, showing relatively close performance. Yet, the nature of some algorithms limits the training performance.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125463195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High-Performance Mining of COVID-19 Open Research Datasets for Text Classification and Insights in Cloud Computing Environments
Pub Date: 2020-09-16 DOI: 10.1109/UCC48980.2020.00048
Jie Zhao, M. A. Rodriguez, R. Buyya
The COVID-19 global pandemic is an unprecedented health crisis, and researchers around the world have produced an extensive body of literature since the outbreak. Analysing this information to extract knowledge and provide meaningful insights in a timely manner requires considerable computational power. Cloud platforms are designed to provide this computational power in an on-demand and elastic manner. In particular, hybrid clouds, composed of private and public data centers, are well suited to deploying computationally intensive workloads in a cost-efficient yet scalable manner. In this paper, we develop a system that uses the Aneka Platform as a Service middleware, with parallel processing and multi-cloud capability, to accelerate the data processing pipeline and the machine learning-based article categorisation process on a hybrid cloud. The results are persisted for further referencing, searching, and visualisation. The performance evaluation shows that the system reduces processing time and achieves linear scalability. Beyond COVID-19, the application can be used directly for broader scholarly article indexing and analysis.
{"title":"High-Performance Mining of COVID-19 Open Research Datasets for Text Classification and Insights in Cloud Computing Environments","authors":"Jie Zhao, M. A. Rodriguez, R. Buyya","doi":"10.1109/UCC48980.2020.00048","DOIUrl":"https://doi.org/10.1109/UCC48980.2020.00048","url":null,"abstract":"The COVID-19 global pandemic is an unprecedented health crisis. Many researchers around the world have produced an extensive collection of literature since the outbreak. Analysing this information to extract knowledge and provide meaningful insights in a timely manner requires a considerable amount of computational power. Cloud platforms are designed to provide this computational power in an on-demand and elastic manner. Specifically, hybrid clouds, composed of private and public data centers, are particularly well suited to deploy computationally intensive workloads in a cost-efficient, yet scalable manner. In this paper, we developed a system utilising the Aneka Platform as a Service middleware with parallel processing and multi-cloud capability to accelerate the data process pipeline and article categorising process using machine learning on a hybrid cloud. The results are then persisted for further referencing, searching and visualising. The performance evaluation shows that the system can help with reducing processing time and achieving linear scalability. Beyond COVID-19, the application might be used directly in broader scholarly article indexing and analysing.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124657137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
WattsApp: Power-Aware Container Scheduling
Pub Date: 2020-05-30 DOI: 10.1109/UCC48980.2020.00027
H. Mehta, P. Harvey, O. Rana, R. Buyya, B. Varghese
Containers are popular for deploying workloads. However, there are limited software-based methods (hardware-based methods are expensive) for obtaining the power consumed by containers to facilitate power-aware container scheduling. This paper presents WattsApp, a tool underpinned by a six-step software-based method for power-aware container scheduling that minimizes power cap violations on a server. The proposed method relies on a neural network-based power estimation model and a power-capped container scheduling technique. Experimental studies are pursued in a lab-based environment with 10 benchmarks on Intel and ARM processors. The results highlight that power estimation has negligible overhead: nearly 90% of all data samples can be estimated with less than 10% error, and the Mean Absolute Percentage Error (MAPE) is below 6%. The power-aware scheduling of WattsApp is more effective than power capping based on Intel's Running Average Power Limit (RAPL), as it does not degrade the performance of all running containers.
{"title":"WattsApp: Power-Aware Container Scheduling","authors":"H. Mehta, P. Harvey, O. Rana, R. Buyya, B. Varghese","doi":"10.1109/UCC48980.2020.00027","DOIUrl":"https://doi.org/10.1109/UCC48980.2020.00027","url":null,"abstract":"Containers are popular for deploying workloads. However, there are limited software-based methods (hardware- based methods are expensive) for obtaining the power consumed by containers to facilitate power-aware container scheduling. This paper presents WattsApp, a tool underpinned by a six step software-based method for power-aware container scheduling to minimize power cap violations on a server. The proposed method relies on a neural network-based power estimation model and a power capped container scheduling technique. Experimental studies are pursued in a lab-based environment on 10 benchmarks on Intel and ARM processors. The results highlight that power estimation has negligible overheads - nearly 90% of all data samples can be estimated with less than a 10% error, and the Mean Absolute Percentage Error (MAPE) is less than 6%. The power-aware scheduling of WattsApp is more effective than Intel’s Running Power Average Limit (RAPL) based power capping as it does not degrade the performance of all running containers.","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"139 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131424442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Workshop Program Committees
Pub Date: 2019-11-01 DOI: 10.1109/icdmw.2009.7
Wahabou Abdou, Ana Roxin, B. Yenke, J. Toutouh, F. Ababsa
Program Committee:
K. Pasupa, King Mongkut's Institute of Technology Ladkrabang, Thailand
X. Chen, Nanjing University of Posts and Communications, China
X. Yang, Sichuan University, China
W. Lei, University of Jinan, China
A. Paul, Kyungpook National University, South Korea
W. Wu, Sichuan University, China
Y. Fang, Northwest A&F University, China
J. Wu, Xidian University, China
F. Frati, Università degli Studi di Milano, Italy
L. Arnone, STMicroelectronics, Italy
M. Sacco, ITIA-CNR, Italy
G. Gianini, EBTIC/Khalifa University of Science and Technology, UAE
Bin Ye, Queen's University Belfast, United Kingdom
{"title":"Workshop Program Committees","authors":"Wahabou Abdou, Ana Roxin, B. Yenke, J. Toutouh, F. Ababsa","doi":"10.1109/icdmw.2009.7","DOIUrl":"https://doi.org/10.1109/icdmw.2009.7","url":null,"abstract":"Program Committee K. Pasupa, King Mongkut's Institute of Technology Ladkrabang, Thailand X. Chen, Nanjing University of Posts and Communications, China X. Yang, Sichuan University, China W. Lei, University of Jinan, China A. Paul, Kyungpook National University, South Korea W. Wu, Sichuan University, China Y. Fang, Northwest A&F University, China J. Wu, Xidian University, China F. Frati, Università degli Studi di Milano, Italy L. Arnone, STMicroelectronics, Italy M. Sacco, ITIA-CNR, Italy G. Gianini, EBTIC/Khalifa University of Science and Technology, UAE Bin Ye, Queen's University Belfast, United Kingdom","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"162 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123258423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Steering Committee
Pub Date: 2018-12-01 DOI: 10.1109/icspcs.2018.8631775
M. Fateh
{"title":"Steering Committee","authors":"M. Fateh","doi":"10.1109/icspcs.2018.8631775","DOIUrl":"https://doi.org/10.1109/icspcs.2018.8631775","url":null,"abstract":"","PeriodicalId":125849,"journal":{"name":"2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)","volume":"149 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114157655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}