Pub Date : 2022-07-01
DOI: 10.1109/CLOUD55607.2022.00066
Vikas Agarwal, Chris Butler, Lou Degenaro, Arun Kumar, A. Sailer, Gosia Steinder
Automation of cybersecurity processes has become crucial with the large-scale deployment of sensitive workloads in regulated on-prem, private, and public cloud environments. Regulatory and standards bodies such as the Payment Card Industry (PCI), the Federal Financial Institutions Examination Council (FFIEC), the International Organization for Standardization (ISO), and others govern the minimal set of cybersecurity controls that an organization must implement. To meet such requirements while maintaining business agility, organizations need to modernize from manual, document-based compliance management to automated processes for continuous compliance. This modernized process is called compliance-as-code. In this paper, we present an architecture for compliance-as-code based on a standardized framework. We identify several design choices and the rationale behind them. Specifically, we introduce a system for manipulating compliance information in a standardized manner and a data interchange protocol for interoperable communication of compliance information. We demonstrate the scalability of our approach and briefly describe deployment and experimental results in real-world settings.
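The continuous-compliance idea can be illustrated with a minimal sketch: a control check runs automatically and emits a machine-readable assessment result that downstream tools can exchange. The control identifier, field names, and rule below are hypothetical illustrations, not the paper's actual framework or protocol.

```python
import json

def check_password_min_length(config, minimum=14):
    """Evaluate one control: passwords must meet a minimum length."""
    observed = config.get("password_min_length", 0)
    return {
        "control-id": "ia-5",  # hypothetical control identifier
        "status": "pass" if observed >= minimum else "fail",
        "observed": observed,
        "expected": f">={minimum}",
    }

def assess(config, checks):
    """Run every check and emit a machine-readable assessment result."""
    return json.dumps({"results": [check(config) for check in checks]}, indent=2)

report = assess({"password_min_length": 8}, [check_password_min_length])
```

Because the result is structured data rather than a document, the same report can be produced continuously and consumed by any tool that speaks the interchange format.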
{"title":"Compliance-as-Code for Cybersecurity Automation in Hybrid Cloud","authors":"Vikas Agarwal, Chris Butler, Lou Degenaro, Arun Kumar, A. Sailer, Gosia Steinder","doi":"10.1109/CLOUD55607.2022.00066","DOIUrl":"https://doi.org/10.1109/CLOUD55607.2022.00066","url":null,"abstract":"Automation of cybersecurity processes has become crucial with large scale deployment of sensitive workloads in regulated on-prem, private, and public cloud environments. Regulatory and standards bodies such as Payment Card Industry (PCI), Federal Financial Institutions Examination Council (FFIEC), International Organization for Standardization (ISO), and others govern the minimal set of cybersecurity controls that an organization must implement. To meet such requirements while maintaining business agility, organizations need to modernize from manual document based compliance management to automated processes for continuous compliance. This modernized process is called compliance-as-code. In this paper, we present an architecture for compliance-as-code based on a standardized framework. We identify several design choices and our rationale behind those. Specifically, we introduce a system for manipulating compliance information in a standardized manner and a data interchange protocol for inter-operable communication of compliance information. 
We demonstrate the scalability of our approach and briefly describe deployment and experimental results in real world settings.","PeriodicalId":54281,"journal":{"name":"IEEE Cloud Computing","volume":"57 57 1","pages":"427-437"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79830244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-07-01
DOI: 10.1109/CLOUD55607.2022.00061
Damian Borowiec, G. Yeung, A. Friday, Richard Harper, P. Garraghan
Provisioning high-performance Machine Learning-as-a-Service (MLaaS) at reduced resource cost in Cloud datacenters is achieved via auto-tuning: automated tensor program optimization of Deep Learning models to minimize inference latency on a hardware device. However, given the extensive heterogeneity of Deep Learning models, libraries, and hardware devices, performing auto-tuning within Cloud datacenters incurs a significant time, compute resource, and energy cost that state-of-the-art auto-tuning is not designed to mitigate. In this paper we propose Trimmer, a high-performance and cost-efficient Deep Learning auto-tuning framework for Cloud datacenters. Trimmer maximizes DL model performance and tensor program cost-efficiency by preempting tensor program implementations exhibiting poor optimization improvement, and by applying an ML-based filtering method to replace expensive, low-performing tensor programs, increasing the likelihood of selecting low-latency tensor programs. Through an empirical study exploring the cost of DL model optimization techniques, our analysis indicates that 26–43% of total energy is expended on measuring tensor program implementations that do not positively contribute towards auto-tuning. Experiment results show that Trimmer achieves high auto-tuning cost-efficiency across different DL models, and reduces auto-tuning energy use by 21.8–40.9% for Cloud clusters whilst achieving DL model latency equivalent to state-of-the-art techniques.
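The filtering step can be sketched as follows: a cheap learned predictor ranks candidate tensor programs, and only a small shortlist is ever measured on hardware, saving the energy otherwise spent on unpromising candidates. The function names and the toy integer candidates are illustrative; they are not Trimmer's actual components.

```python
def filter_candidates(candidates, predict_latency, measure_budget):
    """Keep only the measure_budget candidates with the best predicted latency."""
    return sorted(candidates, key=predict_latency)[:measure_budget]

def tune(candidates, predict_latency, measure_latency, measure_budget):
    """Hardware-measure only the filtered shortlist; return the fastest candidate."""
    shortlist = filter_candidates(candidates, predict_latency, measure_budget)
    return min(shortlist, key=measure_latency)

# Toy run: candidates stand in for tensor programs (here just their latencies),
# and the cheap predictor happens to be exact.
best = tune([8, 3, 5, 1], predict_latency=lambda c: c,
            measure_latency=lambda c: c, measure_budget=2)
```

The saving comes from `measure_latency` being invoked only `measure_budget` times instead of once per candidate; the risk, which the ML-based filter is meant to control, is that a mispredicting ranker drops the true optimum before measurement.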
{"title":"Trimmer: Cost-Efficient Deep Learning Auto-tuning for Cloud Datacenters","authors":"Damian Borowiec, G. Yeung, A. Friday, Richard Harper, P. Garraghan","doi":"10.1109/CLOUD55607.2022.00061","DOIUrl":"https://doi.org/10.1109/CLOUD55607.2022.00061","url":null,"abstract":"Cloud datacenters capable of provisioning high performance Machine Learning-as-a-Service (MLaaS) at reduced resource cost is achieved via auto-tuning: automated tensor program optimization of Deep Learning models to minimize inference latency within a hardware device. However given the extensive heterogeneity of Deep Learning models, libraries, and hardware devices, performing auto-tuning within Cloud datacenters incurs a significant time, compute resource, and energy cost of which state-of-the-art auto-tuning is not designed to mitigate. In this paper we propose Trimmer, a high performance and cost-efficient Deep Learning auto-tuning framework for Cloud datacenters. Trimmer maximizes DL model performance and tensor program cost-efficiency by preempting tensor program implementations exhibiting poor optimization improvement; and applying an ML-based filtering method to replace expensive low performing tensor programs to provide greater likelihood of selecting low latency tensor programs. Through an empirical study exploring the cost of DL model optimization techniques, our analysis indicates that 26–43% of total energy is expended on measuring tensor program implementations that do not positively contribute towards auto-tuning. 
Experiment results show that Trimmer achieves high auto-tuning cost-efficiency across different DL models, and reduces auto-tuning energy use by 21.8–40.9% for Cloud clusters whilst achieving DL model latency equivalent to state-of-the-art techniques.","PeriodicalId":54281,"journal":{"name":"IEEE Cloud Computing","volume":"84 1","pages":"374-384"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85834221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-07-01
DOI: 10.1109/CLOUD55607.2022.00031
P. V. Seshadri, Harikrishnan Balagopal, Akash Nayak, Ashok Pon Kumar, Pablo Loyola
We present Move2Kube, a replatforming framework that automates the creation and transformation of the DevOps artifacts of an application for deployment in a Cloud Native environment. Our contributions include a customizable transformer framework that allows complete control over the artifacts being processed and the output generated. We provide case studies and open-source benchmark-based evidence comparing Move2Kube with similar state-of-the-art tools to demonstrate its effectiveness in terms of effort reduction and diverse utility, and we highlight future lines of work. Move2Kube is being developed as an open-source community project and is available at: https://move2kube.konveyor.io/
{"title":"Konveyor Move2Kube: A Framework For Automated Application Replatforming","authors":"P. V. Seshadri, Harikrishnan Balagopal, Akash Nayak, Ashok Pon Kumar, Pablo Loyola","doi":"10.1109/CLOUD55607.2022.00031","DOIUrl":"https://doi.org/10.1109/CLOUD55607.2022.00031","url":null,"abstract":"We present Move2Kube, a replatforming framework that automates the creation and transformation of DevOps artifacts of an application for deployment in a Cloud Native environment. Our contributions include a customizable transformer framework that allows for complete control over the artifacts being processed, and output generated. We provide case studies and open-source benchmark-based evidence comparing Move2Kube with similar state-of-the-art tools to demonstrate its effectiveness in terms of effort reduction, diverse utility, and highlight future lines of work. Move2Kube is being developed as an open-source community project and it is available at: https://move2kube.konveyor.io/","PeriodicalId":54281,"journal":{"name":"IEEE Cloud Computing","volume":"36 1","pages":"115-124"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82069107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-07-01
DOI: 10.1109/CLOUD55607.2022.00064
Jingoo Han, Ahmad Faraz Khan, Syed Zawad, Ali Anwar, Nathalie Baracaldo Angel, Yi Zhou, Feng Yan, A. Butt
In federated learning (FL), clients collectively train a global machine learning model with their own local data. To address privacy and security concerns, each client in FL sends only updated weights rather than sensitive raw data. Most existing FL work focuses mainly on improving model accuracy and training time; only a few works address FL incentive mechanisms. To build a high-performance model after FL training, clients need to provide high-quality and large amounts of data. However, in real FL scenarios, high-quality clients are reluctant to participate in the FL process without reasonable compensation, because clients are self-interested and other clients can be business competitors. Moreover, participation itself incurs a cost for contributing to the FL model with a local dataset. To address this problem, we propose TIFF, a novel tokenized incentive mechanism, where tokens are used as a means of paying the participants who provide data and the training infrastructure. Without payment delays, participation can be monetized both as providers and as consumers, which promotes continued long-term participation of high-quality data parties. Additionally, paid tokens are reimbursed to each client as a consumer according to our newly proposed metrics (such as token reduction ratio and utility improvement ratio), which keeps clients engaged in the FL process as consumers. To measure data quality, accuracy is calculated during training without additional overheads. We leverage historical accuracy records and random exploration to select high-utility participants and to prevent overfitting. Results show that TIFF provides up to 6.9% more tokens to normal providers and up to 18.1% fewer tokens to malicious providers, improving final model accuracy by up to 7.4% compared to the default approach.
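A toy version of the token accounting conveys the idea: consumers pay tokens into a pool, and the pool is split among data providers in proportion to their measured contribution (e.g., accuracy improvement), so low- or zero-contribution providers earn little. The proportional split rule here is an assumption for illustration; TIFF's actual metrics (token reduction ratio, utility improvement ratio) are defined in the paper.

```python
def distribute_tokens(pool, contributions):
    """Split a token pool among providers in proportion to their contribution."""
    total = sum(contributions.values())
    if total <= 0:  # nobody improved the model this round
        return {provider: 0.0 for provider in contributions}
    return {provider: pool * c / total for provider, c in contributions.items()}

# contributions as per-round accuracy improvements (hypothetical values)
payout = distribute_tokens(100.0, {"alice": 0.03, "bob": 0.01, "mallory": 0.0})
```

Under such a rule a malicious provider whose updates do not improve accuracy receives nothing, which is the incentive-alignment property the evaluation above quantifies.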
{"title":"TIFF: Tokenized Incentive for Federated Learning","authors":"Jingoo Han, Ahmad Faraz Khan, Syed Zawad, Ali Anwar, Nathalie Baracaldo Angel, Yi Zhou, Feng Yan, A. Butt","doi":"10.1109/CLOUD55607.2022.00064","DOIUrl":"https://doi.org/10.1109/CLOUD55607.2022.00064","url":null,"abstract":"In federated learning (FL), clients collectively train a global machine learning model with their own local data. Without sharing sensitive raw data, each client in FL only sends updated weights to consider privacy and security concerns. Most of existing FL works focus mainly on improving model accuracy and training time, but only a few works focus on FL incentive mechanisms. To build a high performance model after FL training, clients need to provide high quality and large amounts of data. However, in real FL scenarios, high-quality clients are reluctant to participate in FL process without reasonable compensation, because clients are self-interested and other clients can be business competitors. Even participation incurs some cost for contributing to the FL model with their local dataset. To address this problem, we propose TIFF, a novel tokenized incentive mechanism, where tokens are used as a means of paying for the services of providing participants and the training infrastructure. Without payment delays, participation can be monetized as both providers and consumers, which promotes continued long-term participation of high-quality data parties. Additionally, paid tokens are reimbursed to each client as consumers according to our newly proposed metrics (such as token reduction ratio and utility improvement ratio), which keeps clients engaged in FL process as consumers. To measure data quality, accuracy is calculated in training without additional overheads. We leverage historical accuracy records and random exploration to select high-utility participants and to prevent overfitting. 
Results show that TIFF provides more tokens to normal providers by up to 6.9% and less tokens to malicious providers by up to 18.1%, achieving improvement of the final model accuracy by up to 7.4%, compared to the default approach.","PeriodicalId":54281,"journal":{"name":"IEEE Cloud Computing","volume":"103 1","pages":"407-416"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80649205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-07-01
DOI: 10.1109/CLOUD55607.2022.00049
Mohamed Mouine, M. Saied
The Internet of Things (IoT) has greatly benefited the technological advances of a variety of fields, such as manufacturing and medicine, to name a few. The context surrounding these use cases is, however, often widely different from conventional Cloud Computing and web applications. Cyberphysical environments present us with major concerns and constraints surrounding the resilience of systems, which often rely on critical infrastructure and important workloads to prevent major losses for businesses or even the endangerment of individuals. The supervision of these infrastructures, outside the controlled and relatively safe environment of a datacenter, is therefore one of the major considerations for modern IoT systems. In this paper, we evaluate the core concepts around this thesis and propose an architectural and conceptual approach to improve the monitoring, scalability, and orchestration of IoT systems. We leverage and integrate different solutions inspired by modern IoT practices and the cloud ecosystem to optimize both software and hardware aspects. The solution revolves around an Edge Computing approach, Event-driven communication (MQTT) in the Edge, the orchestration of containerized services using Kubernetes and KubeEdge, and Device Twins for the management of physical components. Through development, experiment, and evaluation, we propose an architecture and two complementary fault-tolerance strategies to address synchronization between cloud and edge components and improve the overall resilience of the system.
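The device-twin idea can be sketched as a reconciliation step: the cloud holds a desired state, the edge reports the actual state, and a reconciler computes the updates needed to converge. The field names below are illustrative and do not reflect KubeEdge's actual twin schema.

```python
def reconcile(desired, reported):
    """Return the property updates needed to converge the device to desired state."""
    return {key: value for key, value in desired.items()
            if reported.get(key) != value}

# only the drifted property ("power") needs a command sent to the device
commands = reconcile({"power": "on", "sample_rate_hz": 10},
                     {"power": "off", "sample_rate_hz": 10})
```

Because reconciliation is driven by state rather than by individual commands, it tolerates lost messages and intermittent cloud-edge connectivity: whenever the edge reconnects, the next reconciliation pass converges the device again.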
{"title":"Event-Driven Approach for Monitoring and Orchestration of Cloud and Edge-Enabled IoT Systems","authors":"Mohamed Mouine, M. Saied","doi":"10.1109/CLOUD55607.2022.00049","DOIUrl":"https://doi.org/10.1109/CLOUD55607.2022.00049","url":null,"abstract":"The Internet of Things (IoT) has greatly benefited the technological advances of a variety of fields, such as manufacturing and medicine, to name a few. The context surrounding these use cases is, however, often widely different from conventional Cloud Computing and web applications. Cyberphysical environments present us with major concerns and constraints surrounding the resilience of systems, which often rely on critical infrastructure and important workloads to prevent major losses for businesses or even the endangerment of individuals. The supervision of these infrastructures, outside the controlled and relatively safe environment of a datacenter, is therefore one of the major considerations for modern IoT systems. In this paper, we evaluate the core concepts around this thesis and propose an architectural and conceptual approach to improve the monitoring, scalability, and orchestration of IoT systems. We leverage and integrate different solutions inspired by modern IoT practices and the cloud ecosystem to optimize both software and hardware aspects. The solution revolves around an Edge Computing approach, Event-driven communication (MQTT) in the Edge, the orchestration of containerized services using Ku-bernetes and KubeEdge, and Device Twins for the management of physical components. 
Through development, experiment, and evaluation, we propose an architecture and two complementary fault-tolerance strategies to address synchronization between cloud and edge components and improve the overall resilience of the system.","PeriodicalId":54281,"journal":{"name":"IEEE Cloud Computing","volume":"5 1","pages":"273-282"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72857767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-07-01
DOI: 10.1109/CLOUD55607.2022.00033
Sunyanan Choochotkaew, Tatsuhiro Chiba, Scott Trent, Marcelo Amaral
Containerization on the cloud offers several crucial benefits. However, these benefits are negated by the effects of the virtual network stack and address encapsulation, especially for workloads that require intense communication. Socket replacement is a promising approach to breach this wall without changing the underlay infrastructure, by replacing a nested network stack with a simple host network stack. Current state-of-the-art approaches perform this replacement by preloading the overridden socket library in a containerized process. However, the preloading approach requires user effort to modify deployment manifests and a compromised security policy configuration of privileged containers to access the host namespace. This paper introduces a new replacement framework where a secured control plane agent performs the replacement by utilizing low-overhead BPF kernel tracing technology. As a result, containers can obtain host-native network performance, and neither modification nor escalated privileges are required for user containers. Experiments on multiple benchmarks, including iPerf, MPI, memslap, and GROMACS, confirm its efficacy.
{"title":"Bypass Container Overlay Networks with Transparent BPF-driven Socket Replacement","authors":"Sunyanan Choochotkaew, Tatsuhiro Chiba, Scott Trent, Marcelo Amaral","doi":"10.1109/CLOUD55607.2022.00033","DOIUrl":"https://doi.org/10.1109/CLOUD55607.2022.00033","url":null,"abstract":"Containerization on the cloud offers several crucial benefits. However, these benefits are negated by the effects of virtual network stack and address encapsulation, especially for workloads that require intense communication. Socket replacement is a promising approach to breach this wall without changing the underlay infrastructure by replacing a nested network stack with a simple host network stack. Current state-of-the-art approaches perform this replacement by preloading the overridden socket library in a containerized process. However, the preloading approach requires user effort to modify the deploying manifests and a compromised security policy configuration of privileged containers to access the host namespace. This paper introduces a new replacement framework where a secured control plane agent performs the replacement by utilizing low-overhead BPF kernel tracing technology. As a result, containers can obtain host-native network performance and neither modification nor escalated privileges are required for user containers. 
Experiments on multiple benchmarks including iPerf, MPI, memslap, and GROMACS have been conducted to confirm efficacy.","PeriodicalId":54281,"journal":{"name":"IEEE Cloud Computing","volume":"228 1","pages":"134-143"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72751651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-07-01
DOI: 10.1109/CLOUD55607.2022.00063
Kavya Govindarajan, Chander Govindarajan, Mudit Verma
In recent years, with the maturation of container orchestration platforms like Kubernetes, containers are becoming the default way to deploy cloud-native applications, designed as microservices, on public and private clouds. These trends have also spread to the field of Telecommunications, boosted by the onset of 5G. Network functions processing millions of packets per second, earlier run as proprietary physical boxes, are now being realized as disaggregated container-based microservices (CNFs) running on commodity clusters managed by orchestrators, like Kubernetes, on Telco clouds. While container orchestrators have evolved to meet the needs of enterprise applications, Telco workloads still remain second-class citizens, as the orchestrator is presently unaware of the networking needs of CNFs and cannot guarantee QoS of network-intensive functions. In this work, we examine the orchestration of network-sensitive functions and identify the key networking requirements of containerized Telco workloads from the orchestration platform. We design and propose NACO (Network Aware Container Orchestration), a minimal, cloud-native, and scalable extension to the Kubernetes platform to address these requirements and provide first-class lifecycle management of the CNFs used in Telco workloads. We implement a prototype of the system and demonstrate that we can achieve network-aware container orchestration with minimal operation times.
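Network-aware placement in this spirit can be sketched as a filter-and-score step: nodes that cannot satisfy a CNF's declared bandwidth need are filtered out, and the remainder are scored by headroom. The data model and selection rule are illustrative assumptions, not NACO's actual extension.

```python
def place(pod_bandwidth_mbps, nodes):
    """Pick the feasible node with the most spare network capacity, else None."""
    feasible = [n for n in nodes if n["free_mbps"] >= pod_bandwidth_mbps]
    if not feasible:
        return None  # no node can satisfy the declared bandwidth need
    return max(feasible, key=lambda n: n["free_mbps"])["name"]

# a CNF requesting 400 Mbps lands on the node with the most headroom
choice = place(400, [{"name": "n1", "free_mbps": 300},
                     {"name": "n2", "free_mbps": 900}])
```

A default orchestrator, which sees only CPU and memory requests, could place the same CNF on `n1` and silently violate its QoS; making the bandwidth requirement explicit is what allows the scheduler to reject infeasible nodes.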
{"title":"Network Aware Container Orchestration for Telco Workloads","authors":"Kavya Govindarajan, Chander Govindarajan, Mudit Verma","doi":"10.1109/CLOUD55607.2022.00063","DOIUrl":"https://doi.org/10.1109/CLOUD55607.2022.00063","url":null,"abstract":"In recent years, with the maturation of container orchestration platforms like Kubernetes, containers are now becoming the default way to deploy cloud-native applications, designed as microservices, on public and private clouds. These trends have also spread to the field of Telecommunications, boosted by the onset of 5G. Network functions processing millions of packets per second, earlier run as proprietary physical boxes, are now being realized as disaggregated container based microservices (CNFs) running on commodity clusters managed by orchestrators, like Kubernetes, on Telco clouds. While container orchestrators have evolved to meet the needs of enterprise applications, Telco workloads still remain a second class citizen, as the orchestrator is presently unaware of the networking needs of CNFs and cannot guarantee QoS of network intensive functions. In this work, we examine orchestration of network sensitive functions and identify the key networking requirements of containerized Telco workloads from the orchestration platform. We design and propose NACO - Network Aware Container Orchestration, a minimal, cloud-native and scalable extension to the Kubernetes platform to address these requirements and provide first class lifecycle management of CNFs used in Telco workloads. 
We implement a prototype of the system and demonstrate that we can achieve network aware container orchestration with minimal operation times.","PeriodicalId":54281,"journal":{"name":"IEEE Cloud Computing","volume":"2012 1","pages":"397-406"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87713690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-07-01
DOI: 10.1109/CLOUD55607.2022.00078
Rasmus Vestergaard, Elena Pagnin, Rohon Kundu, D. Lucani
This work proposes a novel design for secure cloud storage systems using a third party to meet three seemingly opposing demands: reduce storage requirements on the cloud, protect against erasures (data loss), and maintain confidentiality of the data. More specifically, we achieve storage cost reductions using data deduplication without requiring system users to trust that the cloud operates honestly. We analyze the security of our scheme against honest-but-curious and covert adversaries that may collude with multiple parties and show that no novel sensitive information can be inferred, assuming random oracles and a high min-entropy data source. We also provide a mathematical analysis to characterize its potential for compression given the popularity of individual chunks of data and its overall erasure protection capabilities. In fact, we show that the storage cost of our scheme for a chunk with r replicas is O(log(r)/r), while deduplication without security or reliability considerations is O(1/r), i.e., our added cost for providing reliability and security is only O(log(r)). We provide a proof of concept implementation to simulate performance and verify our analytical results.
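The claimed asymptotics are easy to check numerically: with constants taken as 1, the per-chunk cost ratio between the secure scheme and plain deduplication is exactly log(r), so the overhead of adding security and reliability grows only logarithmically in the number of replicas.

```python
import math

def secure_cost(r):
    """Secure, erasure-protected dedup cost per chunk (up to constants)."""
    return math.log(r) / r

def plain_cost(r):
    """Plain deduplication cost per chunk (up to constants)."""
    return 1 / r

# the multiplicative overhead of security + reliability is just log(r)
overhead = [secure_cost(r) / plain_cost(r) for r in (2, 16, 256)]
```

Both costs vanish as popularity r grows, so popular chunks stay cheap to store; the ratio log(r) quantifies exactly what confidentiality and erasure protection add on top.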
{"title":"Secure Cloud Storage with Joint Deduplication and Erasure Protection","authors":"Rasmus Vestergaard, Elena Pagnin, Rohon Kundu, D. Lucani","doi":"10.1109/CLOUD55607.2022.00078","DOIUrl":"https://doi.org/10.1109/CLOUD55607.2022.00078","url":null,"abstract":"This work proposes a novel design for secure cloud storage systems using a third party to meet three seemingly opposing demands: reduce storage requirements on the cloud, protect against erasures (data loss), and maintain confidentiality of the data. More specifically, we achieve storage cost reductions using data deduplication without requiring system users to trust that the cloud operates honestly. We analyze the security of our scheme against honest-but-curious and covert adversaries that may collude with multiple parties and show that no novel sensitive information can be inferred, assuming random oracles and a high min-entropy data source. We also provide a mathematical analysis to characterize its potential for compression given the popularity of individual chunks of data and its overall erasure protection capabilities. In fact, we show that the storage cost of our scheme for a chunk with r replicas is O(log(r)/r), while deduplication without security or reliability considerations is O(1/r), i.e., our added cost for providing reliability and security is only O(log(r)). 
We provide a proof of concept implementation to simulate performance and verify our analytical results.","PeriodicalId":54281,"journal":{"name":"IEEE Cloud Computing","volume":"125 1","pages":"554-563"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79425400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-07-01
DOI: 10.1109/CLOUD55607.2022.00044
D. Venkatesh, Shivali Agarwal
The choice of data access pattern for database tables is critical for a microservice to maximize the benefits of a distributed architecture. Traditionally, microservices have been designed using a shared-table access pattern, commonly referred to as the CRUD pattern. More recently, there has been growing interest in applying other patterns such as CQRS. In this work, we propose a system that recommends the most suitable pattern for a microservice, based on the separation of read and write operations in the transactions performed by the service.
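The recommendation rule can be sketched as a simple classifier over observed transactions: services whose transactions separate cleanly into reads and writes suit CQRS, while heavily mixed read-write transactions favor CRUD. The threshold and decision rule below are illustrative assumptions, not the paper's actual criterion.

```python
def recommend(transactions, mixed_threshold=0.2):
    """transactions: list of operation sets, each containing 'r' and/or 'w'."""
    if not transactions:
        return "CQRS"  # nothing observed; assume reads and writes can be split
    mixed = sum(1 for ops in transactions if "r" in ops and "w" in ops)
    return "CRUD" if mixed / len(transactions) > mixed_threshold else "CQRS"

pattern = recommend([{"r"}, {"r"}, {"w"}, {"r", "w"}])  # 25% mixed transactions
```

The intuition: CQRS pays off when read and write models can evolve independently, which mixed transactions prevent because each one must see its own writes within a single consistent model.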
{"title":"Data Access Pattern Recommendations for Microservices Architecture","authors":"D. Venkatesh, Shivali Agarwal","doi":"10.1109/CLOUD55607.2022.00044","DOIUrl":"https://doi.org/10.1109/CLOUD55607.2022.00044","url":null,"abstract":"The choice of pattern of data access from database tables is critical for a microservice to maximize benefits of distributed architecture. Traditionally, microservices have been designed using shared table access pattern commonly referred to as CRUD pattern. More recently, there has been a growing interest in applying other patterns like CQRS. In this work, we propose a system that recommends the most suitable pattern for a microservice as per the separation in read and write operations in the transactions performed by the service.","PeriodicalId":54281,"journal":{"name":"IEEE Cloud Computing","volume":"29 1","pages":"241-243"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81988191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}