{"title":"Session details: 1st International Workshop on Blockchain for Smart Cyber-Physical Systems (BlockCPS)","authors":"","doi":"10.1145/3517183","DOIUrl":"https://doi.org/10.1145/3517183","url":null,"abstract":"","PeriodicalId":440884,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128832943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
G. Wang and M. Nixon. "SoK: tokenization on blockchain." Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing Companion, 6 December 2021. DOI: 10.1145/3492323.3495577
Abstract: Blockchain, a potentially disruptive technology, advances many different applications, e.g., cryptocurrencies, supply chains, and the Internet of Things. Under the hood, a blockchain must handle many different kinds of digital assets and data. The next-generation blockchain ecosystem is expected to consist of numerous applications, each of which may represent digital assets differently. Digital assets cannot be recorded on the blockchain directly, however; a tokenization process is required to format them. Tokenization on blockchain will inevitably require proper standards to enable advanced functionality and interoperability for future applications. Because of the specific features of digital assets, it is hard to define a single token form that represents every kind of asset. For example, with respect to fungibility, some assets are divisible and identical, commonly referred to as fungible assets, whereas assets that are not fungible are widely referred to as non-fungible assets, and tokenizing them requires different tokenization processes. How to tokenize assets effectively is thus essential and is expected to face various unprecedented challenges. This paper provides a systematic and comprehensive study of the current progress of tokenization on blockchain. First, we explore general principles and practical schemes for tokenizing digital assets on blockchain and classify digitized tokens into three categories: fungible, non-fungible, and semi-fungible. We then focus on the well-known Ethereum standards for non-fungible tokens. Finally, we discuss several critical challenges and potential research directions for advancing research on the tokenization process on blockchain. To the best of our knowledge, this is the first systematic study of tokenization on blockchain.
{"title":"Session details: 3rd International Workshop on Cloud, IoT and Fog Systems (CIFS)","authors":"","doi":"10.1145/3517184","DOIUrl":"https://doi.org/10.1145/3517184","url":null,"abstract":"","PeriodicalId":440884,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125213028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Jadoon and A. Anjum. "Differentiation of bacterial and viral pneumonia in children under five using deep learning." Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing Companion, 6 December 2021. DOI: 10.1145/3492323.3495631
Abstract: With 92,000 deaths per year, accounting for 18 percent of total child mortality, pneumonia is the leading cause of death in children under five in Pakistan, which is among the top five countries worldwide for childhood pneumonia deaths. Bacteria and viruses are the most common infectious agents of pneumonia, and the standard diagnostic test is the chest x-ray, a basic facility available even at rural health centers. In the proposed study, a pre-trained convolutional neural network, VGG19, is fine-tuned on a dataset of 5,863 chest x-ray images of healthy patients and patients with viral or bacterial pneumonia. Model 1 is trained on viral and bacterial pneumonia images only, while model 2 is trained on the full multi-class data. Model 1 achieved a training accuracy of 0.83 and a validation accuracy of 0.84; model 2, covering normal, viral, and bacterial pneumonia, achieved a training accuracy of 0.84 and a validation accuracy of 0.85. The results show that the VGG19 model can identify the distinguishing features of the pneumonia types with reasonable accuracy even on a small, unbalanced dataset. They also suggest that already developed and trained models, if fine-tuned on a larger, balanced dataset with a few targeted changes, could serve as ready-to-use clinical diagnostic tools. Such tools can act as a second reader for physicians, processing thousands of images in limited time with high accuracy and relieving the patient burden on capacity-limited healthcare facilities.
Premathas Somasekaram and R. Calinescu. "Towards a Bayesian prognostic framework for high-availability clusters." Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing Companion, 6 December 2021. DOI: 10.1145/3492323.3495583
Abstract: Critical applications deployed on cloud and in-house information technology infrastructures use software solutions known as high-availability clusters (HACs) to ensure higher availability. Our paper introduces a Bayesian prognostic (BP) framework that improves the ability of HACs to (i) predict component failures that can be resolved by reinitialising the failed component and (ii) propagate and predict failures in high-level components when the component failure cannot be resolved through reinitialisation. Preliminary experiments presented in the paper demonstrate that this BP framework can reduce the downtime of an enterprise application subjected to a wide range of injected faults by between 5.5 and 7.9 times compared to the open-source HAC ClusterLabs stack (Pacemaker/Corosync).
M. Rózanska and G. Horn. "Marginal metric utility for autonomic cloud application management." Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing Companion, 6 December 2021. DOI: 10.1145/3492323.3495587
Abstract: Managing cloud applications whose resource requirements vary over time is a tedious task that could benefit from autonomic application management. The management platform then needs to know what the application owner considers a good deployment for the current execution context, which is normally captured by a utility function. However, it is often difficult to define such a function directly from first principles in a way that perfectly captures the application owner's preferences. This paper proposes a methodology for defining the utility function solely from the monitoring measurements taken to assess the state and context of the running application.
Erqian Tang and T. Stefanov. "Low-memory and high-performance CNN inference on distributed systems at the edge." Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing Companion, 6 December 2021. DOI: 10.1145/3492323.3495629
Abstract: Some applications nowadays need CNN inference on resource-constrained edge devices whose memory and computation capacity may be far too limited to fit a large CNN model. In such scenarios, deploying a large CNN model and performing inference on a single edge device is not feasible. A possible solution is to deploy the model on a (fully) distributed system at the edge and use all available edge devices to perform the CNN inference cooperatively. We have observed that existing methodologies, which use different partitioning strategies to deploy a CNN model and perform inference on a distributed system at the edge, have several disadvantages. Therefore, in this paper, we propose a novel partitioning strategy, called the Vertical Partitioning Strategy, together with a novel methodology for using this strategy efficiently for CNN inference on a distributed system at the edge. We compare our experimental results on the YOLOv2 CNN model with those obtained by three existing methodologies and show the advantages of our approach in terms of per-device memory requirement and overall system performance. Moreover, experimental results on other representative CNN models show that our methodology, using our partitioning strategy, delivers CNN inference with a greatly reduced memory requirement per edge device and improved overall system performance at the same time.
Y. Lai, Didik Sudyana, Ying-Dar Lin, Miel Verkerken, Laurens D’hooge, T. Wauters, B. Volckaert, and F. Turck. "Machine learning based intrusion detection as a service: task assignment and capacity allocation in a multi-tier architecture." Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing Companion, 6 December 2021. DOI: 10.1145/3492323.3495613
Abstract: Intrusion detection systems (IDS) play an important role in detecting network intrusions. Because intrusions have many variants and zero-day forms, traditional signature- and anomaly-based IDS often fail to detect them. Machine learning (ML), on the other hand, is better at detecting variants. In this paper, we adopt an ML-based IDS that consists of three in-sequence tasks: pre-processing, binary detection, and multi-class detection. We propose ten different task assignments, which map these three tasks onto a three-tier network for distributed IDS, and evaluate them with queueing theory to determine which assignments are more appropriate for particular service providers. Using simulated annealing, we allocate the total capacity appropriately to each tier. Our results suggest that service providers can choose the task assignments that best suit their needs. Using only the edge, or a combination of edge and cloud, offers shorter delay and greater operational simplicity. Using only the fog, or a combination of fog and edge, remains the most private, allowing tenants to avoid sharing their raw private data with other parties and saving more bandwidth. A combination of fog and cloud is easier to manage while still addressing privacy concerns, but its delay was 40% longer than that of the fog-and-edge combination. Our results also indicate that more than 85% of the total capacity is allocated and spread across nodes in the lowest tier for pre-processing in order to reduce delays.
Sheriffo Ceesay, Yuhui Lin, and A. Barker. "Adaptive brokerage framework for the cloud with functional testing." Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing Companion, 6 December 2021. DOI: 10.1145/3492323.3495624
Abstract: In this paper, we present an Adaptive Brokerage for the Cloud (ABC) that can be used to simplify application deployment, monitoring, and management in the cloud. The broker uses modern cloud infrastructure automation tools to test, deploy, monitor, and optimise cloud resources. We used an e-commerce application to evaluate the broker's full functionality and found that different deployment options, such as single-tier versus two-tier, yield interesting hardware and application performance insights. These insights are then used to make effective infrastructure optimisation decisions.
Andrés Heredia and Gabriel Barros-Gavilanes. "Dealing with multi-step verification processes for certification issuance in universities." Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing Companion, 6 December 2021. DOI: 10.1145/3492323.3495622
Abstract: As in any institution, universities define their processes in their own way, often involving many verification steps. Issuing end-of-course certificates requires multiple signatures from authorities, instructors, and, in some cases, administrative staff. Building on earlier work on single-signature certificates using blockchain for education, we developed a prototype that supports more than one signature on top of the infrastructure created for the Smart Ecosystem for Learning and Inclusion (SELI) project. The prototype records certificates in a private, non-monetary blockchain network and is provided as an open-source project. Countries such as Ecuador, Turkey, Uruguay, and Finland can share certificates from the SELI platform through local nodes. This article provides relevant implementation details of the system, always with the aim of reusing existing software to reduce implementation time.