Amir Teshome Wonjiga, S. Peisert, Louis Rilling, C. Morin
Migrating an application from local compute resources to commercial cloud resources involves giving up full control of the physical infrastructure, as the cloud service provider (CSP) is responsible for managing the physical infrastructure, including its security. The reliance of a tenant on a CSP can create a trust issue around whether the CSP is upholding its end of the bargain. CSPs acknowledge this and provide a guarantee through a Service Level Agreement (SLA). SLAs need to be verified to ensure that the defined objectives are satisfied. To avoid raising the trust issue again, such a verification procedure needs to be unbiased and independently achievable by both tenants and CSPs, without either party relying on the other. In this paper, we consider an SLA offered by the provider that guarantees the integrity of tenants' data, and propose to verify the SLA using an integrity checking method based on a distributed ledger. Our proposed method allows both CSPs and tenants to perform integrity checking without one party relying on the other. The method uses a blockchain as a distributed ledger to store evidence of data integrity. Assuming the ledger is a secure, trusted source of information, the evidence can be used to resolve conflicts between providers and tenants. In addition, we present a prototype implementation and an experimental evaluation to show the feasibility of our verification method and to measure the time overhead.
{"title":"Blockchain as a Trusted Component in Cloud SLA Verification","authors":"Amir Teshome Wonjiga, S. Peisert, Louis Rilling, C. Morin","doi":"10.1145/3368235.3368872","DOIUrl":"https://doi.org/10.1145/3368235.3368872","url":null,"abstract":"Migrating an application from local compute resources to commercial cloud resources involves giving up full control of the physical infrastructure, as the cloud service provider (CSP) is responsible for managing the physical infrastructure, including its security. The reliance of a tenant on a CSP can create a trust issue around whether the CSP is upholding its end of the bargain. CSPs acknowledge this and provide a guarantee through a Service Level Agreement (SLA). SLAs need to be verified to ensure that the defined objectives are satisfied. To avoid raising the trust issue again, such a verification procedure needs to be unbiased and independently achievable by both tenants and CSPs, without either party relying on the other. In this paper, we consider an SLA offered by the provider that guarantees the integrity of tenants' data, and propose to verify the SLA using an integrity checking method based on a distributed ledger. Our proposed method allows both CSPs and tenants to perform integrity checking without one party relying on the other. The method uses a blockchain as a distributed ledger to store evidence of data integrity. Assuming the ledger is a secure, trusted source of information, the evidence can be used to resolve conflicts between providers and tenants. In addition, we present a prototype implementation and an experimental evaluation to show the feasibility of our verification method and to measure the time overhead.","PeriodicalId":166357,"journal":{"name":"Proceedings of the 12th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115300646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
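The verification scheme this abstract describes can be illustrated with a minimal sketch: evidence (a cryptographic digest of the tenant's data) is appended to an append-only, hash-chained ledger, and either party later re-hashes the stored data independently and compares it with the ledger entry. The toy `Ledger` class, the object IDs, and the single-record protocol below are illustrative assumptions, not the paper's implementation.

```python
import hashlib
import json

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class Ledger:
    """Toy append-only, hash-chained ledger standing in for a blockchain."""
    def __init__(self):
        self.blocks = []  # each block links to the previous via prev_hash

    def append(self, record: dict) -> None:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = json.dumps(record, sort_keys=True)
        self.blocks.append({"record": record, "prev_hash": prev_hash,
                            "hash": digest((prev_hash + body).encode())})

    def chain_valid(self) -> bool:
        # Recompute every link; any tampering with a block breaks the chain.
        prev = "0" * 64
        for b in self.blocks:
            body = json.dumps(b["record"], sort_keys=True)
            if b["prev_hash"] != prev or b["hash"] != digest((prev + body).encode()):
                return False
            prev = b["hash"]
        return True

# Evidence of the stored object's integrity is published on the ledger.
ledger = Ledger()
data = b"tenant object v1"
ledger.append({"object_id": "obj-42", "digest": digest(data)})

def verify(ledger: Ledger, object_id: str, stored: bytes) -> bool:
    """Either tenant or CSP re-hashes the stored data and checks the ledger;
    neither party relies on the other for the result."""
    expected = next(b["record"]["digest"] for b in ledger.blocks
                    if b["record"]["object_id"] == object_id)
    return ledger.chain_valid() and digest(stored) == expected
```

Because both parties run `verify` against the same trusted ledger entry, a mismatch is attributable evidence rather than one party's claim.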
With new paradigms to deliver software, a programmer's utopia is close to becoming reality, where she focuses solely on the realization of an application without wrestling with infrastructural limitations and deployment considerations. Currently, this vision is supported by a paradigm shift in cloud providers' service models, where new abstraction layers enable, in particular, serverless computing. Moreover, the Internet of Things (IoT) requires a shift from cloud paradigms to a fog computing perspective, where the functionality of a system needs to be allocated in the cloud-to-fog continuum. In this regard, we analyze the applicability of a Function as a Service (FaaS) framework on an IoT service platform, SensIoT, which monitors environmental factors. Additionally, we deliver functions to cheap, energy-efficient Single Board Computers, which are rapidly emerging as nodes of the IoT. We evaluate our approach by analyzing the resource usage of a FaaS-enabled SensIoT and outline whether the combination of serverless computing, fog computing, and the IoT is going to enable the era of cloudless computing.
{"title":"Applicability of Serverless Computing in Fog Computing Environments for IoT Scenarios","authors":"Marcel Großmann, Christos Ioannidis, D. Le","doi":"10.1145/3368235.3368834","DOIUrl":"https://doi.org/10.1145/3368235.3368834","url":null,"abstract":"With new paradigms to deliver software, a programmer's utopia is close to becoming reality, where she focuses solely on the realization of an application without wrestling with infrastructural limitations and deployment considerations. Currently, this vision is supported by a paradigm shift in cloud providers' service models, where new abstraction layers enable, in particular, serverless computing. Moreover, the Internet of Things (IoT) requires a shift from cloud paradigms to a fog computing perspective, where the functionality of a system needs to be allocated in the cloud-to-fog continuum. In this regard, we analyze the applicability of a Function as a Service (FaaS) framework on an IoT service platform, SensIoT, which monitors environmental factors. Additionally, we deliver functions to cheap, energy-efficient Single Board Computers, which are rapidly emerging as nodes of the IoT. We evaluate our approach by analyzing the resource usage of a FaaS-enabled SensIoT and outline whether the combination of serverless computing, fog computing, and the IoT is going to enable the era of cloudless computing.","PeriodicalId":166357,"journal":{"name":"Proceedings of the 12th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"71 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114120275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abdessalam Elhabbash, Yehia El-khatib, G. Blair, Yuhui Lin, A. Barker, John Thomson
The current large selection of functionally equivalent cloud instances makes selecting the right cloud service a challenging decision. We envision a model-driven engineering (MDE) approach to raise the level of abstraction for cloud service selection. One way to achieve this is through a domain-specific language (DSL) for modelling the service level objectives (SLOs) and a brokerage system that utilises the SLO model to select services. However, this demands an understanding of provider SLAs and the capabilities of current cloud modelling languages (CMLs). This paper investigates the state of the art of SLO support in both cloud providers' SLAs and CMLs in order to identify the gaps in SLO support. We then outline research directions towards achieving MDE-based cloud brokerage.
{"title":"Envisioning SLO-driven Service Selection in Multi-cloud Applications","authors":"Abdessalam Elhabbash, Yehia El-khatib, G. Blair, Yuhui Lin, A. Barker, John Thomson","doi":"10.1145/3368235.3368831","DOIUrl":"https://doi.org/10.1145/3368235.3368831","url":null,"abstract":"The current large selection of functionally equivalent cloud instances makes selecting the right cloud service a challenging decision. We envision a model-driven engineering (MDE) approach to raise the level of abstraction for cloud service selection. One way to achieve this is through a domain-specific language (DSL) for modelling the service level objectives (SLOs) and a brokerage system that utilises the SLO model to select services. However, this demands an understanding of provider SLAs and the capabilities of current cloud modelling languages (CMLs). This paper investigates the state of the art of SLO support in both cloud providers' SLAs and CMLs in order to identify the gaps in SLO support. We then outline research directions towards achieving MDE-based cloud brokerage.","PeriodicalId":166357,"journal":{"name":"Proceedings of the 12th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124334075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An ever-increasing number of different types of objects are connecting to the Internet, a phenomenon called the Internet of Things (IoT). Processing IoT-generated data in the Cloud causes high latency. Fog Computing, a hosting environment between the IoT and Cloud layers, is a new approach to resolving the latency issue. IoT applications face three significant challenges: big data, device heterogeneity, and Fog resiliency. To address these challenges, this proposal introduces a microservice software framework for implementing automatic functions in the IoT-Fog-Cloud ecosystem. The proposed microservice framework will also enable the development of IoT-based context-aware intelligent decision-making systems. We describe the functionality and contribution of each automatic function in the paper.
{"title":"A Framework of Automation on Context-Aware Internet of Things (IoT) Systems","authors":"Hossein Chegini, Aniket Mahanti","doi":"10.1145/3368235.3368848","DOIUrl":"https://doi.org/10.1145/3368235.3368848","url":null,"abstract":"An ever-increasing number of different types of objects are connecting to the Internet, a phenomenon called the Internet of Things (IoT). Processing IoT-generated data in the Cloud causes high latency. Fog Computing, a hosting environment between the IoT and Cloud layers, is a new approach to resolving the latency issue. IoT applications face three significant challenges: big data, device heterogeneity, and Fog resiliency. To address these challenges, this proposal introduces a microservice software framework for implementing automatic functions in the IoT-Fog-Cloud ecosystem. The proposed microservice framework will also enable the development of IoT-based context-aware intelligent decision-making systems. We describe the functionality and contribution of each automatic function in the paper.","PeriodicalId":166357,"journal":{"name":"Proceedings of the 12th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"154 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117177649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper addresses the question of whether resources suffering nonlinear fluctuations can maintain their stability as a system expands to perform computing tasks in a distributed manner. To this end, we suggest that by evolving individual resources following the self-organized criticality of the sandpile model, the whole load distribution system can reach a stable state after a small but extremely localized overload occurs, leading to numerous avalanches. The proposed load balancing approach is evaluated in terms of latency minimization.
{"title":"Towards Self-Organized Load Distribution over Chaotic Resources","authors":"Yong-Hyuk Moon, Yong-Ju Lee","doi":"10.1145/3368235.3369366","DOIUrl":"https://doi.org/10.1145/3368235.3369366","url":null,"abstract":"This paper addresses the question of whether resources suffering nonlinear fluctuations can maintain their stability as a system expands to perform computing tasks in a distributed manner. To this end, we suggest that by evolving individual resources following the self-organized criticality of the sandpile model, the whole load distribution system can reach a stable state after a small but extremely localized overload occurs, leading to numerous avalanches. The proposed load balancing approach is evaluated in terms of latency minimization.","PeriodicalId":166357,"journal":{"name":"Proceedings of the 12th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126813603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
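The sandpile dynamics invoked in this abstract can be sketched as follows: each resource accumulates load until it crosses a threshold, then "topples" and sheds load to its neighbours, possibly triggering an avalanche that leaves the whole chain stable. The open-chain topology, the threshold of 4, and the shedding rule below are illustrative assumptions, not the paper's model.

```python
def relax(load, threshold=4):
    """Topple overloaded nodes on an open chain until every node is below
    the threshold. A toppling node sheds two units to each neighbour
    (load shed past either end of the chain is dropped). Returns the
    stable load vector and the number of topplings (the avalanche size)."""
    load = list(load)
    n = len(load)
    topplings = 0
    unstable = True
    while unstable:
        unstable = False
        for i in range(n):
            if load[i] >= threshold:
                load[i] -= 4
                if i > 0:
                    load[i - 1] += 2
                if i < n - 1:
                    load[i + 1] += 2
                topplings += 1
                unstable = True
    return load, topplings
```

A single unit pushed onto one node can trigger an avalanche whose size varies wildly (the hallmark of self-organized criticality), yet the system always settles into a state where every node is below the threshold.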
Serverless computing is a growing industry trend, with a corresponding rise in interest from scholars and tinkerers. Increasingly, open source and academic system prototypes are being proposed, especially in relation to cloud, edge and fog computing, among other distributed computing specialisations. Due to the strict separation between elastically scalable stateless microservices and the stateful backend services they are bound to, which is prevalent in this computing paradigm, the resulting applications are inherently distributed, with favourable characteristics such as elastic scalability and disposability. Still, software application developers are confronted with a multitude of different methods and tools to build, test and deploy their function-based applications in today's serverless ecosystems. The logical next step is therefore a methodical development approach with key enablers based on a classification of languages, tools, systems, system behaviours, patterns, pitfalls, application architectures, compositions and cloud services around the serverless application development process.
{"title":"Serverless Computing and Cloud Function-based Applications","authors":"Josef Spillner","doi":"10.1145/3368235.3370269","DOIUrl":"https://doi.org/10.1145/3368235.3370269","url":null,"abstract":"Serverless computing is a growing industry trend, with a corresponding rise in interest from scholars and tinkerers. Increasingly, open source and academic system prototypes are being proposed, especially in relation to cloud, edge and fog computing, among other distributed computing specialisations. Due to the strict separation between elastically scalable stateless microservices and the stateful backend services they are bound to, which is prevalent in this computing paradigm, the resulting applications are inherently distributed, with favourable characteristics such as elastic scalability and disposability. Still, software application developers are confronted with a multitude of different methods and tools to build, test and deploy their function-based applications in today's serverless ecosystems. The logical next step is therefore a methodical development approach with key enablers based on a classification of languages, tools, systems, system behaviours, patterns, pitfalls, application architectures, compositions and cloud services around the serverless application development process.","PeriodicalId":166357,"journal":{"name":"Proceedings of the 12th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126270409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The cryptocurrency market is highly volatile: trading prices for some tokens can experience a sudden spike or downturn in a matter of minutes. As a result, traders find it difficult to follow all trading price movements unless they monitor them manually. Hence, we propose a real-time alert system for monitoring trading prices, sending notifications to users if any target price matches or an anomaly occurs. We adopt a streaming platform as the backbone of our system. It can handle thousands of messages per second, with an average latency of 19 seconds in our testing environment. A Long Short-Term Memory (LSTM) model is used as the anomaly detector. We compare the impact of five different data normalisation approaches with the LSTM model on a Bitcoin price dataset. The results show that decimal scaling produces a Mean Absolute Percentage Error (MAPE) of only 8.4 per cent on daily price data, the best performance among the observed methods. However, with the one-minute price dataset, our model produces a higher prediction error, making it impractical for distinguishing between normal and anomalous points of price movement.
{"title":"Intelligent Price Alert System for Digital Assets - Cryptocurrencies","authors":"Sronglong Chhem, A. Anjum, Bilal Arshad","doi":"10.1145/3368235.3368874","DOIUrl":"https://doi.org/10.1145/3368235.3368874","url":null,"abstract":"The cryptocurrency market is highly volatile: trading prices for some tokens can experience a sudden spike or downturn in a matter of minutes. As a result, traders find it difficult to follow all trading price movements unless they monitor them manually. Hence, we propose a real-time alert system for monitoring trading prices, sending notifications to users if any target price matches or an anomaly occurs. We adopt a streaming platform as the backbone of our system. It can handle thousands of messages per second, with an average latency of 19 seconds in our testing environment. A Long Short-Term Memory (LSTM) model is used as the anomaly detector. We compare the impact of five different data normalisation approaches with the LSTM model on a Bitcoin price dataset. The results show that decimal scaling produces a Mean Absolute Percentage Error (MAPE) of only 8.4 per cent on daily price data, the best performance among the observed methods. However, with the one-minute price dataset, our model produces a higher prediction error, making it impractical for distinguishing between normal and anomalous points of price movement.","PeriodicalId":166357,"journal":{"name":"Proceedings of the 12th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132202120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
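Two of the concrete ingredients named in this abstract, decimal scaling normalisation and MAPE, are standard and can be sketched as follows (the sample prices are illustrative, not the paper's dataset):

```python
import math

def decimal_scale(series):
    """Decimal scaling normalisation: divide every value by 10^j, where j
    is the smallest integer such that max(|x|) / 10^j < 1."""
    j = math.ceil(math.log10(max(abs(x) for x in series) + 1))
    return [x / 10 ** j for x in series], j

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in per cent."""
    return 100 * sum(abs((a - p) / a)
                     for a, p in zip(actual, predicted)) / len(actual)

# Example: daily Bitcoin-style prices scaled into (-1, 1) before training.
scaled, j = decimal_scale([9350.0, 8700.0, 10120.0])
```

Unlike min-max or z-score normalisation, decimal scaling only needs the magnitude of the largest observation, which makes it cheap to apply in a streaming setting.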
Concurrency bugs are difficult to diagnose and fix, due to the nature of the bugs and how they manifest themselves during execution. Traditional approaches for diagnosing concurrency bugs attempt to reproduce the exact execution schedule that reveals the bug, resulting in high runtime overhead. In this paper, we present our work on identifying concurrency bugs using resource consumption footprints. This is based on the observation that resource access and consumption patterns are critical indicators of the run-time behavior of concurrent software, and can be used as a powerful mechanism to guide the software debugging process. We demonstrate that monitoring resource footprints at runtime can effectively help detect software bugs. Specifically, for MPI programs, a simple SVM classifier can detect deadlocks with high accuracy using only CPU usage patterns.
{"title":"Deadlock Detection for Concurrent Programs Using Resource Footprints","authors":"Sonam Sherpa, Abdi Vicenciodelmoral, Xinghui Zhao","doi":"10.1145/3368235.3369370","DOIUrl":"https://doi.org/10.1145/3368235.3369370","url":null,"abstract":"Concurrency bugs are difficult to diagnose and fix, due to the nature of the bugs and how they manifest themselves during execution. Traditional approaches for diagnosing concurrency bugs attempt to reproduce the exact execution schedule that reveals the bug, resulting in high runtime overhead. In this paper, we present our work on identifying concurrency bugs using resource consumption footprints. This is based on the observation that resource access and consumption patterns are critical indicators of the run-time behavior of concurrent software, and can be used as a powerful mechanism to guide the software debugging process. We demonstrate that monitoring resource footprints at runtime can effectively help detect software bugs. Specifically, for MPI programs, a simple SVM classifier can detect deadlocks with high accuracy using only CPU usage patterns.","PeriodicalId":166357,"journal":{"name":"Proceedings of the 12th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134638873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
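As a rough sketch of classifying CPU-usage footprints: the intuition is that a deadlocked MPI rank's utilisation flatlines (e.g., busy-waiting near 100%), while a healthy rank fluctuates with its computation/communication cycle. The paper trains an SVM; the dependency-free nearest-centroid rule, the two summary features, and the toy traces below are stand-in assumptions for illustration, not the paper's classifier or data.

```python
from statistics import mean, pstdev

def features(cpu_trace):
    """Summarise a window of per-second CPU-utilisation samples (0-100)
    as (mean, standard deviation); deadlock traces have near-zero spread."""
    return (mean(cpu_trace), pstdev(cpu_trace))

class NearestCentroid:
    """Dependency-free stand-in for the paper's SVM classifier."""
    def fit(self, X, y):
        self.centroids = {}
        for label in set(y):
            pts = [x for x, l in zip(X, y) if l == label]
            self.centroids[label] = tuple(mean(col) for col in zip(*pts))
        return self

    def predict(self, x):
        # Assign the label whose centroid is closest in feature space.
        def dist2(c):
            return sum((a - b) ** 2 for a, b in zip(x, c))
        return min(self.centroids, key=lambda lbl: dist2(self.centroids[lbl]))

# Toy training traces: deadlocked ranks busy-wait near 100% with almost
# no variation; healthy ranks alternate between compute and communication.
train = [features(t) for t in ([99, 99, 99, 99], [97, 98, 97, 98],
                               [80, 30, 75, 25], [60, 90, 20, 85])]
labels = ["deadlock", "deadlock", "normal", "normal"]
clf = NearestCentroid().fit(train, labels)
```

In the same spirit as the paper's approach, only passively collected CPU usage is consulted, so no execution schedule has to be reproduced.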
The practice of collecting big performance data has changed how infrastructure providers model and manage their systems in the past decade. There has been a methodology shift from domain-knowledge-based white-box models, e.g., queueing [1] and simulation [2], to black-box data-driven models, e.g., machine learning. Such a game change for resource management, from workload characterization [3] and dependability prediction [4,5] to sprinting policy [6], can be seen at major IT infrastructure providers, e.g., IBM and Google. While applying higher-order deep neural networks shows promise in predicting performance [4,5], the scalability of such an approach is often limited. A plethora of prior work focuses on deriving complex and highly accurate models, such as deep neural networks, overlooking the constraints of computational efficiency and scalability. Their applicability to resource management problems in production systems is thus hindered. A crucial aspect of deriving accurate and scalable predictive performance models lies in leveraging domain expertise, white-box models, and black-box models. Examples include scalable ticket management services at IBM [4] and job failure prediction at Google [5]. Model-driven computation sprinting [6] dynamically scales the frequency and the allocation of computing cores based on grey-box models, which outperform deep neural networks. The aforementioned case studies strongly argue for the importance of combining domain-driven and data-driven models. At the same time, various acceleration techniques have been developed to reduce the computation overhead of (deep) machine learning models in small-scale, isolated testbeds. Managing the performance of clusters that are dominated by machine learning workloads remains challenging and calls for novel solutions.
SlimML [9] accelerates ML model training time by processing only the critical data set at a slight cost in accuracy, whereas Dias [7] simultaneously explores data dropping and frequency sprinting for ML clusters that support multiple priorities of different training workloads. The aforementioned studies point out the complexity of managing the accuracy-efficiency tradeoff of ML jobs in a cluster-like environment, where jobs interfere with each other by sharing the underlying resources and common data sets.
{"title":"Opportunities and Challenges for Resource Management and Machine Learning Clusters","authors":"L. Chen","doi":"10.1145/3368235.3369376","DOIUrl":"https://doi.org/10.1145/3368235.3369376","url":null,"abstract":"The practice of collecting big performance data has changed how infrastructure providers model and manage their systems in the past decade. There has been a methodology shift from domain-knowledge-based white-box models, e.g., queueing [1] and simulation [2], to black-box data-driven models, e.g., machine learning. Such a game change for resource management, from workload characterization [3] and dependability prediction [4,5] to sprinting policy [6], can be seen at major IT infrastructure providers, e.g., IBM and Google. While applying higher-order deep neural networks shows promise in predicting performance [4,5], the scalability of such an approach is often limited. A plethora of prior work focuses on deriving complex and highly accurate models, such as deep neural networks, overlooking the constraints of computational efficiency and scalability. Their applicability to resource management problems in production systems is thus hindered. A crucial aspect of deriving accurate and scalable predictive performance models lies in leveraging domain expertise, white-box models, and black-box models. Examples include scalable ticket management services at IBM [4] and job failure prediction at Google [5]. Model-driven computation sprinting [6] dynamically scales the frequency and the allocation of computing cores based on grey-box models, which outperform deep neural networks. The aforementioned case studies strongly argue for the importance of combining domain-driven and data-driven models. At the same time, various acceleration techniques have been developed to reduce the computation overhead of (deep) machine learning models in small-scale, isolated testbeds. Managing the performance of clusters that are dominated by machine learning workloads remains challenging and calls for novel solutions. SlimML [9] accelerates ML model training time by processing only the critical data set at a slight cost in accuracy, whereas Dias [7] simultaneously explores data dropping and frequency sprinting for ML clusters that support multiple priorities of different training workloads. The aforementioned studies point out the complexity of managing the accuracy-efficiency tradeoff of ML jobs in a cluster-like environment, where jobs interfere with each other by sharing the underlying resources and common data sets.","PeriodicalId":166357,"journal":{"name":"Proceedings of the 12th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125470556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"UCC/BDCAT'19 Poster Chairs Welcome Message","authors":"Kenichi Kourai, Evangelos Pournaras","doi":"10.1145/3368235.3368880","DOIUrl":"https://doi.org/10.1145/3368235.3368880","url":null,"abstract":"","PeriodicalId":166357,"journal":{"name":"Proceedings of the 12th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122343216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}