Weili Chen, Mingjie Ma, Yongjian Ye, Zibin Zheng, Yuren Zhou
With the advancements in Internet technologies and Wireless Sensor Networks (WSN), a new era of the Internet of Things (IoT) is being realized. IoT produces a large amount of information that can be used to improve the efficiency of our daily lives and to provide advanced services in a wide range of application domains. However, privacy and data fusion remain major challenges, mainly due to the massive scale and distributed nature of IoT networks and the exponential growth of the data they collect. Thus, a privacy-protected, inter-cloud data fusion platform is needed to meet the demand for data mining and analytic activities in IoT. In this paper, we propose such a platform based on the JointCloud Blockchain and study a novel smart-traveling use case built on the proposed platform.
{"title":"IoT Service Based on JointCloud Blockchain: The Case Study of Smart Traveling","authors":"Weili Chen, Mingjie Ma, Yongjian Ye, Zibin Zheng, Yuren Zhou","doi":"10.1109/SOSE.2018.00036","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00036","url":null,"abstract":"With the advancements in Internet technologies and Wireless Sensor Networks (WSN), a new era of the Internet of Things (IoT) is being realized. IoT produces a lot of information which can be used to improve the efficiency of our daily lives and provides advanced services in a wide range of application domains. However, the privacy and the data fusing problems remain major challenges, mainly due to the massive scale and distributed nature of IoT networks and the amount of data collected from IoT increasing at an exponential rate. Thus, a privacy-protected and inter-cloud data fusing platform is needed to the demand for data mining and analytic activities in IoT. In this paper, we propose such a platform based on JointCloud Blockchain and study a novel case of smart traveling based on the proposed platform.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123078646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The concept of microservices has gained increasing popularity since 2014. Over almost the same period, container technology has kept developing and is considered an excellent way to build microservices-based applications. Mainstream public cloud vendors such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform all provide users with container-based solutions for implementing microservices. Workspace as a Service (WaaS), proposed by An et al., is another approach that uses containers to serve users. Both container-based microservices and WaaS aim to utilize cluster resources effectively by maintaining a number of containers. In this paper, we compare the design ideas and supporting platforms of these two approaches, giving cluster administrators and users a perspective on the scenarios in which to use them and how to make an appropriate choice to meet their needs. We find that container-based microservices are more suitable for professional IT companies, while WaaS fits education and research institutions better.
{"title":"Comparing Container-Based Microservices and Workspace as a Service: Which One to Choose?","authors":"Junming Ma, Bo An, Donggang Cao, Xiangqun Chen","doi":"10.1109/SOSE.2018.00040","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00040","url":null,"abstract":"The concept of microservices has gained increasing popularity since 2014. Almost during the same period, container technology keeps developing and is considered as an excellent way to build microservices-based applications. Mainstream public cloud vendors such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform all provide users with container-based solutions to implementing microservices. Workspace as a Service (WaaS) proposed by An et al. is another approach which uses containers to serve users. Both container-based microservices and WaaS are used to effectively utilize cluster resources via maintaining a number of containers. In this paper, we compare the designing ideas and supporting platforms of these two approaches, which provides a perspective for cluster administrators and users to understand the scenarios where to use them and how to make an appropriate choice to meet their needs. We find that container-based microservices are more suitable for professional IT companies while WaaS fits education and research institutions better.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125099664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
I. Yen, F. Bastani, Wei Zhu, Hessam Moeini, San-Yih Hwang, Yuqun Zhang
Service technologies have been widely applied in many application domains to facilitate rapid system composition and deployment. However, existing service models need to be enhanced in order to be used in the Internet of Things (IoT). Also, due to the massive scale, IoT service discovery and composition cannot be centralized, and existing discovery routing protocols for peer-to-peer systems have shortcomings that need to be addressed. In this paper, we analyze the differences between IoT services and software services and identify the requirements for designing IoT service models beyond those of software service models. We then discuss a service ontology model for the specification of IoT services. For IoT service discovery, we survey existing discovery routing approaches, including those for conventional peer-to-peer networks and for IoT systems, and discuss the potential problems when they are used in IoT networks. Finally, we discuss our approach, summarization and ontology coding, which greatly reduces the memory requirements of the routing protocols in IoT networks.
{"title":"Service-Oriented IoT Modeling and Its Deviation from Software Services","authors":"I. Yen, F. Bastani, Wei Zhu, Hessam Moeini, San-Yih Hwang, Yuqun Zhang","doi":"10.1109/SOSE.2018.00014","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00014","url":null,"abstract":"Service technologies have been widely applied to many application domains to facilitate rapid system composition and deployment. However, existing service models need to be enhanced in order to be used in Internet-of-Things (IoT). Also, due to the massive-scale, IoT service discovery and composition cannot be centralized. Existing discovery routing protocols for peer-to-peer systems have their shortcomings and need to be improved. In this paper, we analyze the differences between IoT services and software services and identify the requirements for designing IoT service models that are additional to software service models. We then discuss a service ontology model for the specification of IoT services. For IoT service discovery, we survey existing discovery routing approaches, including those for conventional peer-to-peer networks and for IoT systems and discuss the potential problems when used in IoT networks. Then, we discuss our approach, summarization and ontology coding, which greatly reduce the memory requirements of the routing protocols, for the IoT networks.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"158 S326","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132905026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Renyu Yang, Ouyang Xue, Yaofeng Chen, P. Townend, Jie Xu
Resource scheduling in a computing system addresses the problem of packing tasks with multi-dimensional resource requirements and non-functional constraints. The heterogeneity of workload and server characteristics exhibited in cloud-scale or Internet-scale systems adds further complexity and new challenges to the problem. Compared with existing solutions based on ad-hoc heuristics, Machine Learning (ML) has the potential to further improve the efficiency of resource management in large-scale systems. In this paper we describe and discuss how ML could be used to automatically understand both workloads and environments, and to help cope with scheduling-related challenges such as consolidating co-located workloads, handling resource requests, guaranteeing application QoS, and mitigating tail stragglers. We introduce a generalized ML-based solution to large-scale resource scheduling and demonstrate its effectiveness through a case study that deals with performance-centric node classification and straggler mitigation. We believe that an ML-based method will help to achieve architectural optimization and efficiency improvement.
{"title":"Intelligent Resource Scheduling at Scale: A Machine Learning Perspective","authors":"Renyu Yang, Ouyang Xue, Yaofeng Chen, P. Townend, Jie Xu","doi":"10.1109/SOSE.2018.00025","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00025","url":null,"abstract":"Resource scheduling in a computing system addresses the problem of packing tasks with multi-dimensional resource requirements and non-functional constraints. The exhibited heterogeneity of workload and server characteristics in Cloud-scale or Internet-scale systems is adding further complexity and new challenges to the problem. Compared with,,,, existing solutions based on ad-hoc heuristics, Machine Learning (ML) has the potential to improve further the efficiency of resource management in large-scale systems. In this paper we,,,, will describe and discuss how ML could be used to understand automatically both workloads and environments, and to help to cope with scheduling-related challenges such as consolidating co-located workloads, handling resource requests, guaranteeing application’s QoSs, and mitigating tailed stragglers. We will introduce a generalized ML-based solution to large-scale resource scheduling and demonstrate its effectiveness through a case study that deals with performance-centric node classification and straggler mitigation. We believe that an MLbased method will help to achieve architectural optimization and efficiency improvement.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133638362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the growing number of cloud providers and the expanding cloud computing market, it is increasingly necessary to realize the cooperation and aggregation of clouds. JointCloud is a framework aiming to facilitate the collaboration of the various clouds on the market, and the JointCloud Collaboration Environment (JCCE) is the ideal environment for global cloud providers. To realize cloud cooperation, migration is one of the most important issues that must be considered. Live migration has always been one of the major primitive operations of virtualization and has long been discussed. Traditional work handles migration mainly with a host-driven method, the majority of which is dominated by the hypervisor. However, as the age of cloud aggregation arrives, traditional methods show their defects: in a cloud aggregation environment, cloud providers may refuse to supply the migration service in order to retain their customers, or the hypervisors on the two sides of the migration may be heterogeneous. These problems pose challenges to traditional host-driven methods. In this paper, we propose Cuckoo Migration, a new self-migration method that uses a new Intel hardware feature. We leverage a special processor function, VMFUNC, to create a two-EPT architecture for the guest VM, so that the guest has a mirror memory space that can be used as a duplicate of its memory. This paper mainly introduces how we build the two-EPT architecture for the guest and discusses how we leverage it to perform self-migration.
{"title":"Cuckoo Migration: Self Migration on JointCloud Using New Hardware Features","authors":"Ruifeng Liu, Zeyu Mi","doi":"10.1109/SOSE.2018.00033","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00033","url":null,"abstract":"Abstract—With the growing number of cloud providers and the increasing market of cloud computing, it’s more and more necessary to realize the cooperation and aggregation of clouds. JointCloud is a framework aiming at facilitating the consociation of various clouds on the market. And the JointCloud Collaboration Environment (JCCE) is the ideal environment for global cloud providers. To realize the clouds cooperation, migration is one of the most important issues that have to be considered. Live migration has always been one of the major primitive operations of virtualization and has been discussed for long. Traditional works deal the migration mainly by using a host-driven migration method, the majority work of which is dominated by the hypervisor. However, as the age of cloud aggregation comes, traditional methods show their defects. In cloud aggregation environment, cloud providers may refuse to supply the migration service to grasp their customers, or the hypervisors are heterogeneous on the two sides of migration. Those problems raise challenges to traditional host-driven methods. In this paper, we propose Cuckoo Migration, a new self- migration method using Intel new hardware feature. We leverage a special processor function, VMFUNC, to create a two-EPT architecture for the guest VM, so that the guest has a mirror memory space, which can be used as the duplication of memory. This paper mainly introduces how we build a two-EPT architecture for the guest and discusses how we leverage such architecture to do our self-migration.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116177208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The field of process model similarity matching is well examined for imperative process models such as BPMN models, Petri nets, or EPCs. For the more recently emerging declarative process models, which generally provide more flexibility than imperative models, there is, however, a lack of comparison methods. Along with the advantage of greater flexibility, declarative process models have the disadvantage of being harder to comprehend, especially with respect to the models' behavior. To overcome this problem, a comparison of imperative and declarative models is reasonable, to check whether the declarative model represents a desired behavior that is easier to express and validate in an imperative notation. The work at hand provides a method based on flow dependencies, abstracting from the modeling type, for comparing two process models. It uses not only information about control flow, but also data-based dependencies between process activities.
{"title":"Comparing Imperative and Declarative Process Models with Flow Dependencies","authors":"M. Baumann","doi":"10.1109/SOSE.2018.00017","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00017","url":null,"abstract":"The field of process model similarity matching is well examined for imperative process models like BPMN models, Petri nets, or EPCs. For the recently upcoming declarative process models, generally providing more flexibility than imperative models, however, there is a lack of comparison methods. Along with their advantage of providing more flexibility, declarative process models have a disadvantage in comprehending the models, especially the models' behavior. To overcome this problem, a comparison of imperative and declarative models is reasonable to check whether the declarative model represents a desired behavior which is easier to express and validate in an imperative notation. The work at hand provides a method based on flow dependencies, abstracting from the modeling type, for comparing two process models. It uses not only information about control-flow, but also data-based dependencies between process activities.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121691313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Big data platforms frequently suffer from performance problems due to internal impairments (e.g., software bugs) and external impairments (e.g., resource hogs), and the situation is exacerbated by the velocity, variety, and volume (3Vs) of big data. To recover the system from a performance anomaly, the first step is to find the root causes. In this paper, we propose a novel signature-based performance diagnosis approach to rapidly pinpoint the root causes of performance problems in big data platforms. Performance diagnosis is formalized as a pattern recognition problem. We leverage the Maximum Information Criterion (MIC) to express the invariant relationships among performance metrics in the normal state. Each performance problem that occurs in the big data platform is signified by a unique binary vector named a signature, which consists of a set of violations of MIC invariants. The signatures of multiple performance problems form a signature database. If the Key Performance Indicator (KPI) of the big data application exhibits model drift, our approach can identify the real culprits by retrieving the stored root causes whose signatures are similar to that of the current performance problem. Moreover, considering the diversity of big data applications, we establish an ensemble approach that treats each application separately. Experimental evaluations on a controlled big data platform show that our approach can pinpoint the real culprits of performance problems with an average 84% precision and 87% recall when a single fault occurs, which is better than several state-of-the-art approaches.
{"title":"An Ensemble Signature-Based Approach for Performance Diagnosis in Big Data Platform","authors":"H. Kou, Pengfei Chen","doi":"10.1109/SOSE.2018.00022","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00022","url":null,"abstract":"The big data platform always suffers from performance problems due to internal impairments (e.g. software bugs) and external impairments (e.g. resource hog). And the situation is exacerbated by the properties of velocity, variety and volume (3Vs) of big data. To recovery the system from performance anomaly, the first step is to find the root causes. In this paper, we propose a novel signature-based performance diagnosis approach to rapidly pinpoint the root causes of performance problems in big data platforms. The performance diagnosis is formalized as a pattern recognition problem. We leverage Maximum Information Criterion (MIC) to express the invariant relationships amongst the performance metrics in the normal state. Each performance problem occurred in the big data platform is signified by a unique binary vector named signature, which consists of a set of violations of MIC invariants. The signatures of multiple performance problems form a signature database. If the Key Performance Indicator (KPI) of the big data application exhibits model drift, our approach can identify the real culprits by retrieving the root causes which have similar signatures to the current performance problem. Moreover, considering the diversity of big data applications, we establish an ensemble approach to treat each application separately. The experiment evaluations in a controlled big data platform show that our approach can pinpoint the real culprits of performance problems in an average 84% precision and 87% recall when one fault occurs, which is better than several state-of-the-art approaches.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124371670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zhen Tang, Heng Wu, Lei Sun, Zhongshan Ren, Wei Wang, Wei Zhou, Liang Yang
Flash-based Solid State Disks (SSDs) are widely used in the Internet-based virtual computing environment, usually as a cache for hard-disk-drive-based virtual machine (VM) storage. Existing SSD caching schemes mainly treat VMs as independent units and focus on critical performance metrics of a single VM, such as IO latency, throughput, or cache miss rate. However, in the Internet-based virtual computing environment, one transactional application usually consists of multiple VMs on different hypervisors, so transaction-aware SSD caching schemes may better improve the end-user-perceived quality of service. The key insight is to utilize the relationships among the VMs inside a transactional application to better guide the allocation of the SSD cache, which helps learn the pattern of workload changes and build adaptive SSD caching schemes. To this end, we propose Transaction-Aware SSD caching (TA-SSD), which takes the characteristics of transactions into consideration, uses closed-loop adaptation to react to changing workloads, and introduces a genetic algorithm to enable nearly optimal planning. The evaluation shows that, compared to an equally partitioned cache, the allocation produced by TA-SSD can boost performance by up to 40% under dynamic changes in the intensity and type of the workload.
{"title":"Transaction-aware SSD Cache Allocation for the Virtualization Environment","authors":"Zhen Tang, Heng Wu, Lei Sun, Zhongshan Ren, Wei Wang, Wei Zhou, Liang Yang","doi":"10.1109/SOSE.2018.00029","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00029","url":null,"abstract":"Flash-based Solid State Disk (SSD) is widely used in the Internet-based virtual computing environment, usually as cache of the hard disk drive-based virtual machine (VM) storage. Existing SSD caching schemes mainly treat the VMs as independent units and focus on critical performance metrics concerning one single VM, such as the IO latency, throughput, or the cache miss rate. However, in the Internet-based virtual computing environment, one transactional application usually consists of multiple VMs on different hypervisors. Transaction-aware SSD caching schemes may potentially better improve the end user-perceived quality of service. The key insight here is to utilize the relationships among VMs inside the transactional application to better guide the allocation of the SSD cache, so as to help learn the pattern of workload changes and build adaptive SSD caching schemes. To this end, we propose the Transaction-Aware SSD caching (TA-SSD), which takes the characteristics of transactions into consideration, uses closed loop adaptation to react to changing workload, and introduces the genetic algorithm to enable nearly optimal planning. The evaluation shows that comparing to the equally partitioned cache, the allocation produced by the TA-SSD can boost the performance by up to 40%, with dynamic changes in the intensity and the type of the workload.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133861627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emerging cloud computing gives rise to the need for large-scale data processing, which in turn drives vigorous development of big data platforms running on the Java Virtual Machine (JVM), such as Hadoop, Spark, and Flink. Storing a large amount of data in memory allows those platforms to benefit from satisfactory performance and from Java's powerful memory management and garbage collection services. Non-volatile memory (NVM) provides non-volatility, byte addressability, and fast access, and thus becomes a superior alternative to volatile memory in future cloud systems and the Java world. This paper presents a recoverable garbage collector named DwarfGC to manage Java objects in NVM so as to ensure crash consistency and durability. DwarfGC persists heap-related metadata into NVM at the beginning of GC and relies on it for recovery. The metadata is stored in a space-efficient fashion while incurring little time overhead.
{"title":"DwarfGC: A Space-Efficient and Crash-Consistent Garbage Collector in NVM for Cloud Computing","authors":"Heting Li, Mingyu Wu","doi":"10.1109/SOSE.2018.00032","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00032","url":null,"abstract":"Emerging cloud computing arouses need for large-scale data processing which in turn promises vigorous developments on big data platforms running on Java Virtual Machine (JVM), such as Hadoop, Spark and Flink. Storing a large amount of data in memory allows those platforms to benefit from satisfying performance and powerful memory management and garbage collection service in Java. Non-volatile memory (NVM) provides nonvolatility, byte-addressable and fast access speed characteristics and thus becomes a superior alternative for volatile memory utilizing in future cloud system and Java world. This paper presents a recoverable garbage collector named DwarfGC to manage Java objects in NVM so as to ensure crash consistency and durability. DwarfGC persists heap-related metadata into NVM at the beginning of GC and relies on it for recovery. The metadata is stored in a space-efficient fashion but incurring little time overhead.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133767698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The microservice architectural style is an emerging trend in software engineering that allows building highly scalable and flexible systems. However, the current state of the art provides only limited insight into the particular security concerns of microservice systems. With this paper, we seek to unravel some of the mysteries surrounding microservice security by providing a taxonomy of microservice security, assessing the security implications of the microservice architecture, and surveying related contemporary solutions, among them Docker Swarm and Netflix's security decisions. We offer two important insights. On the one hand, microservice security is a multi-faceted problem that requires a layered security solution which is not available out of the box at the moment. On the other hand, if these security challenges are solved, microservice architectures can improve security: their inherent properties of loose coupling, isolation, diversity, and failing fast all contribute to the increased robustness of a system. To address the lack of security guidelines, this paper describes the design and implementation of a simple security framework for microservices that can be leveraged by practitioners. Proof-of-concept evaluation results show that the performance overhead of the security mechanisms is around 11%.
{"title":"Overcoming Security Challenges in Microservice Architectures","authors":"T. Yarygina, A. H. Bagge","doi":"10.1109/SOSE.2018.00011","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00011","url":null,"abstract":"The microservice architectural style is an emerging trend in software engineering that allows building highly scalable and flexible systems. However, current state of the art provides only limited insight into the particular security concerns of microservice system. With this paper, we seek to unravel some of the mysteries surrounding microservice security by: providing a taxonomy of microservices security; assessing the security implications of the microservice architecture; and surveying related contemporary solutions, among others Docker Swarm and Netflix security decisions. We offer two important insights. On one hand, microservice security is a multi-faceted problem that requires a layered security solution that is not available out of the box at the moment. On the other hand, if these security challenges are solved, microservice architectures can improve security; their inherent properties of loose coupling, isolation, diversity, and fail fast all contribute to the increased robustness of a system. To address the lack of security guidelines this paper describes the design and implementation of a simple security framework for microservices that can be leveraged by practitioners. Proof-of-concept evaluation results show that the performance overhead of the security mechanisms is around 11%.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115320554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}