In an early phase of the software development process (requirement analysis), functional and non-functional requirements are gathered. While a lot of research has been done on how to bring functional requirements into the software, non-functional requirements remain challenging. One reason is that non-functional requirements are often hard to measure and hard to test. Unfortunately, security, privacy, and data protection are such non-functional requirements. To make things even more complicated, software engineering is a social process: multiple parties (i.e., security experts, software architects, and programmers) have to work together, which unavoidably results in misunderstandings and misinterpretations. Therefore, it is often not clear whether security concerns are implemented correctly, or at least formalized correctly during requirement analysis for later implementation. This paper is a discussion starter on how to overcome communication-based problems, ensure that security concerns are implemented correctly, and avoid software erosion that later breaks security concerns. To this end, we discuss strategies that combine security concepts with software engineering methods through the intensive use of models. Such models are already used in academia and even in industry. We recommend using models more often, more intensively, and for more concerns.
"Enforcing Security and Privacy via a Cooperation of Security Experts and Software Engineers: A Model-Based Vision." Marcus Hilbrich, Markus Frank. In 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2), Nov. 2017. DOI: 10.1109/SC2.2017.43.
Shih-Chun Huang, Yu-Cing Luo, Bing-Liang Chen, Yeh-Ching Chung, J. Chou
With the development of network technology, billions of devices access resources and services on the cloud through mobile telecommunication networks. A great number of connections and data packets must be handled by the mobile network. This not only consumes the limited spectrum resources and network bandwidth, but also reduces the service quality of applications. To alleviate the problem, the concept of Mobile Edge Computing (MEC) was proposed by the European Telecommunications Standards Institute (ETSI) in 2014. MEC provides IT and cloud computing capabilities at the network edge to offer low-latency and high-bandwidth services. The architecture and benefits of MEC have been discussed in much recent literature, but the implementation of the underlying network is rarely discussed or evaluated in practice. In this paper, we present our prototype implementation of a MEC platform, developing an application-aware traffic redirection mechanism at the edge network to reduce service latency and network bandwidth consumption. Our implementation is based on OAI, an open-source 5G SoftRAN cellular system project. To the best of our knowledge, it is also one of the few MEC solutions that have been built for 5G networks in practice.
"Application-Aware Traffic Redirection: A Mobile Edge Computing Implementation Toward Future 5G Networks." In 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2), Nov. 2017. DOI: 10.1109/SC2.2017.11.
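As a rough, hypothetical sketch of the application-aware redirection idea described above (the service names and addresses are invented for illustration and are not taken from the paper):

```python
# Hypothetical sketch of application-aware traffic redirection at an
# edge gateway: flows belonging to applications served locally are
# directed to an edge server; all other traffic continues to the core.
EDGE_SERVICES = {"video-cache": "10.0.0.2", "iot-gateway": "10.0.0.3"}

def redirect_target(app_name, core_gateway="192.168.1.1"):
    """Return the next-hop address for a flow of the given application."""
    return EDGE_SERVICES.get(app_name, core_gateway)
```

In a real deployment the lookup key would come from packet classification in the radio access network rather than an explicit application name.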
Cloud computing provides diverse services for users accessing big data through high-data-rate cellular networks, e.g., LTE-A, IEEE 802.11ac, etc. Although LTE-A supports very high data rates, multi-hop relaying, and cooperative transmission, it suffers from high interference, path loss, high mobility, etc. Additionally, access to cloud computing services requires transport layer protocols (e.g., TCP, UDP, and streaming) to achieve end-to-end transmission. Clearly, transmission QoS is significantly degraded when big data transmissions are carried over TCP in a high-interference LTE-A environment. Thus, this paper proposes a cross-layer-based adaptive TCP algorithm that gathers the LTE-A network states (e.g., AMC, CQI, relay link state, available bandwidth, etc.) and feeds the state information back to the TCP sender for accurately executing TCP's network congestion control. As a result, by using an accurate TCP congestion window (cwnd) under high-interference LTE-A, the number of timeouts and packet losses is significantly decreased. Numerical results demonstrate that the proposed approach outperforms the compared approaches in goodput and fairness, especially in high-interference environments. In particular, the goodput of the proposed approach is 139.42% higher than that of NewReno. These results justify the claims of the proposed approach.
"Dynamic Flow Control for Big Data Transmissions toward 5G Multi-hop Relaying Mobile Networks." Ben-Jye Chang, Yihu Li, Shin-Pin Chen, Ying-Hsin Liang. In 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2), Nov. 2017. DOI: 10.1109/SC2.2017.19.
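The abstract does not give the paper's exact congestion-control rules; as a minimal sketch of how cross-layer bandwidth feedback could bound the congestion window, under our own simplifying assumptions (the formula and parameter values are illustrative, not the authors'):

```python
def adjust_cwnd(cwnd, available_bw, rtt, mss=1460):
    """Adjust a TCP congestion window (in segments) using cross-layer
    feedback: clamp cwnd to the bandwidth-delay product reported by the
    lower layers, and grow additively while below that capacity.

    available_bw is in bit/s, rtt in seconds, mss in bytes.
    """
    bdp = available_bw * rtt / 8 / mss  # bandwidth-delay product, in segments
    if cwnd >= bdp:
        return max(1.0, bdp)            # back off to the measured capacity
    return cwnd + 1                     # additive increase below capacity
```

The point of the cross-layer design is that the sender backs off to a measured capacity instead of waiting for a timeout, which is what drives down packet losses in a high-interference link.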
Bing-Liang Chen, Shih-Chun Huang, Yu-Cing Luo, Yeh-Ching Chung, J. Chou
IoT applications are built on top of M2M platforms, which provide the communication infrastructure among devices and to the clouds. Because of increasing M2M communication traffic and limited edge network bandwidth, preventing network congestion and service delay has become a crucial problem for M2M platforms. A general approach is to deploy IoT service modules in the M2M platform so that data can be pre-processed and reduced before being transmitted over the network. Moreover, the service modules often need to be deployed dynamically at various locations of the M2M platform to accommodate the mobility of devices moving across access networks and the on-demand service requirements of users. However, existing M2M platforms have limited support for dynamic and automatic deployment. Therefore, the objective of our work is to build a dynamic module deployment framework for M2M platforms that manages and optimizes module deployment automatically according to user service requirements. We achieved this goal by implementing a solution that integrates an OSGi-based application framework (Kura) with an M2M platform (OM2M). By exploiting the resource reuse method in the OSGi specification, we were able to reduce the module deployment time by 50-52%. Finally, a computationally efficient and near-optimal algorithm is proposed to optimize the module placement decision in our framework.
"A Dynamic Module Deployment Framework for M2M Platforms." In 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2), Nov. 2017. DOI: 10.1109/SC2.2017.37.
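The near-optimal placement algorithm itself is not detailed in the abstract; a simple greedy heuristic of the same flavor, placing the most demanding modules first on the nodes with the most spare capacity, might look like this (module demands and node capacities are invented units, not the paper's model):

```python
def place_modules(modules, nodes):
    """Greedy module placement: sort modules by resource demand
    (largest first) and put each on the feasible node with the most
    remaining capacity.

    modules: dict of module name -> resource demand
    nodes:   dict of node name   -> resource capacity
    """
    free = dict(nodes)  # remaining capacity per node
    placement = {}
    for name, demand in sorted(modules.items(), key=lambda m: -m[1]):
        best = max((n for n in free if free[n] >= demand),
                   key=lambda n: free[n], default=None)
        if best is None:
            raise ValueError(f"no node can host module {name!r}")
        free[best] -= demand
        placement[name] = best
    return placement
```

Greedy placement of this kind is cheap to compute and typically close to optimal when demands are small relative to node capacities, which matches the "computationally efficient and near-optimal" goal stated above.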
The cloud provides resources to users based on their requirements by using several resource allocation schemes. Reliable resource allocation is one of the major issues in cloud computing. The objective of this paper is to provide a reliable resource allocation approach for cloud computing while minimizing cost. Existing research on resource allocation in the cloud mostly addresses cost and resource utilization, whereas we address the most crucial feature: cloud reliability. The main novelty of our work is that we consider not only reliability but also cost while allocating appropriate resources to users. The aim of our proposed approach is to maximize reliability while minimizing cost. To this end, we propose a heuristic for resource allocation in the cloud. We provide several performance analyses to validate our approach, and the simulation results show that our approach provides increased reliability while allocating resources to users.
"A Reliability-Based Resource Allocation Approach for Cloud Computing." A. B. Alam, Mohammad Zulkernine, A. Haque. In 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2), Nov. 2017. DOI: 10.1109/SC2.2017.46.
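The abstract does not spell out the heuristic; one hypothetical scoring rule that trades reliability against cost, purely to illustrate the kind of decision involved, could be sketched as:

```python
def allocate(requested_capacity, resources):
    """Pick the resource that satisfies the requested capacity with the
    best reliability-to-cost ratio (a hypothetical scoring rule, not the
    paper's actual heuristic).

    resources: list of dicts with keys "name", "capacity",
    "reliability" (0..1), and "cost".
    """
    feasible = [r for r in resources if r["capacity"] >= requested_capacity]
    if not feasible:
        return None  # no resource can serve this request
    return max(feasible, key=lambda r: r["reliability"] / r["cost"])
```

A real scheme would also need failure-probability models per resource and would weigh reliability and cost according to the user's SLA rather than a fixed ratio.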
Fast regular expression matching (REM) is the core issue in deep packet inspection (DPI). Traditional REM mainly relies on deterministic finite automata (DFA) to achieve fast matching. However, state explosion usually makes the DFA infeasible in practice. We propose the Offset-FA to solve the state explosion problem in REM. State explosion is mainly caused by large-character-set features with closures or counting repetitions. We extract these features from the original patterns and represent them as an offset relation table and a reset table to preserve semantic equivalence, while the remaining fragments are compiled into a DFA called the fragment-DFA. The fragment-DFA, together with the offset relation table and the reset table, composes our Offset-FA. Experiments show that the Offset-FA supports large rule sets and outperforms state-of-the-art solutions in space cost and matching speed.
"Offset-FA: Detach the Closures and Countings for Efficient Regular Expression Matching." Chengcheng Xu, Jinshu Su, Shuhui Chen, Biao Han. In 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2), Nov. 2017. DOI: 10.1109/SC2.2017.50.
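A heavily simplified illustration of the underlying idea, matching literal pattern fragments in order instead of expanding `.*` gaps into DFA states (this omits the offset relation and reset tables of the real Offset-FA and handles only unbounded gaps):

```python
def match_fragments(fragments, text):
    """Match the literal fragments of a pattern like ab.*cd in order,
    e.g. fragments=["ab", "cd"]. The `.*` gap between fragments is
    handled by plain string search rather than by DFA states, which is
    the intuition behind detaching closures from the automaton.
    """
    pos = 0
    for frag in fragments:
        idx = text.find(frag, pos)
        if idx < 0:
            return False
        pos = idx + len(frag)
    return True
```

The actual Offset-FA generalizes this with offset constraints so that bounded repetitions such as `a{10,20}` can also be checked without enumerating every counter state in the DFA.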
In SOA (Service-Oriented Architecture), the service matchmaking process is conducted in the registry. When the number of requests increases, the communication load between the registry and requesters also increases, which may overload the registry and result in longer response times. This can be worse when service matchmaking is based on semantics rather than syntax. To address this issue, we propose in this paper an extension of SOA that reduces response time and registry load by reallocating the major service matchmaking tasks to service providers. An experiment is also conducted to compare the performance of the proposed architecture with the original SOA to illustrate the feasibility of the proposed approach.
"An Experiment on the Load Shifting from Service Registry to Service Providers in SOA." Kuo-Hsun Hsu, Kuan-Chou Lai, Li-Yung Huang, Hsuan-Fu Yang, Wei-Shan Tsai. In 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2), Nov. 2017. DOI: 10.1109/SC2.2017.48.
Clickstream data analysis involves collecting, analyzing, and aggregating data for business analytics. Key business indicators, such as user experience, product checkout flows, and failed customer interactions, are computed based on this data. A/B testing [18] and other data experimentation use the clickstream to compute business lifts or capture user feedback to new changes on the site. Handling such data at scale is extremely challenging, especially designing a system that ensures little to no data loss, bot filtering, event ordering, aggregation, and sessionization of user visits. The entire operation must be near real-time so that the computed results can be fed back into services that enable targeted personalization and a better user experience. Sessions capture groups of user interactions within a stipulated time frame, and business metrics are often computed on these user sessions. User sessions are therefore critical for business analytics, as they represent true user behavior. We describe the process of creating a highly available data pipeline and computational model for user sessions at scale.
"Near Real-Time Tracking at Scale." D. Vasthimal, Sudeep Kumar, Mahesh Somani. In 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2), Nov. 2017. DOI: 10.1109/SC2.2017.44.
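Sessionization as described above can be sketched with a standard inactivity-timeout rule (the 30-minute default is a common industry convention, not necessarily the authors' choice, and a production pipeline would do this over a distributed stream rather than an in-memory list):

```python
def sessionize(events, timeout=1800):
    """Group one user's clickstream events, given as (timestamp, page)
    pairs, into sessions. A new session starts whenever the gap since
    the previous event exceeds `timeout` seconds (default 30 minutes).
    """
    sessions, last_ts = [], None
    for ts, page in sorted(events):  # event ordering matters at scale
        if last_ts is None or ts - last_ts > timeout:
            sessions.append([])      # inactivity gap: open a new session
        sessions[-1].append((ts, page))
        last_ts = ts
    return sessions
```

The explicit sort stands in for the event-ordering guarantee that the abstract lists as one of the hard problems in a streaming setting.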
Jie Tang, Shaoshan Liu, Chen Liu, C. Eisenbeis, J. Gaudiot
Communication latency problems are universal and have become a major performance bottleneck as we scale up big data infrastructure and many-core architectures. Specifically, research institutes around the world have built specialized supercomputers with powerful computation units in order to accelerate scientific computation. However, the problem often lies on the communication side instead of the computation side. In this paper, we first demonstrate the severity of communication latency problems. Then we use Lattice Quantum Chromodynamics (LQCD) simulations as a case study to show how value prediction techniques can reduce communication overheads, leading to higher performance without adding more expensive hardware. In detail, we first implement a software value predictor on LQCD simulations: our results indicate that 22.15% of the predictions result in performance gain and only 2.65% of the predictions lead to rollbacks. Next, we explore a hardware value predictor design, which results in a 20-fold reduction of the prediction latency. In addition, based on the observation that the full range of floating-point accuracy may not always be needed, we propose and implement an initial design of a tolerance value predictor: as the tolerance range increases, the prediction accuracy also increases dramatically.
"Accelerating Lattice Quantum Chromodynamics Simulations with Value Prediction." In 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2), Nov. 2017. DOI: 10.1109/SC2.2017.39.
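A toy sketch of the tolerance idea mentioned above, using a last-value predictor (the prediction scheme here is our own simplification for illustration, not the paper's predictor):

```python
def predict_and_check(history, actual, tolerance=0.0):
    """Predict the next communicated value from its history (here:
    simply the last observed value) and decide whether speculative
    computation based on it must roll back. With tolerance > 0, a
    prediction within `tolerance` of the value that eventually arrives
    counts as correct, exploiting the fact that full floating-point
    accuracy is not always needed.
    """
    prediction = history[-1] if history else 0.0
    needs_rollback = abs(prediction - actual) > tolerance
    return prediction, needs_rollback
```

The trade-off reported in the abstract follows directly: widening the tolerance band turns more near-miss predictions into hits, raising prediction accuracy at the price of slightly less precise inputs.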
Ching-Hsiang Su, Wei-Chih Huang, Van-Dai Ta, Chuan-Ming Liu, Sheng-Lung Peng
Big data have recently become crucially important for data computing and analytics. Traditional computing paradigms are inefficient for such computation because of its complexity and computational cost. Cloud computing is a modern computing paradigm in which typically real-time, scalable resources such as files, data, programs, hardware, and third-party services are accessible to users from a web browser via the Internet. It is the new trend for big data analytics, providing highly reliable, available, and scalable services. This paper proposes an automated cloud analysis framework and management system based on OpenStack and other open-source projects such as Apache Spark, Sparkler, RESTful APIs, and the JBoss web server. The automated cloud provides a cluster of virtual machines that utilizes storage and memory to support multiple data analyses. In addition, OpenStack provides services for authentication and user account management in the cloud environment, which enhance cloud security. REST, in turn, provides a set of architectural constraints that, when applied as a whole, emphasizes scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce interaction latency, enforce security, and encapsulate legacy systems. A RESTful API is the essential implementation of the REST web architecture for web services; it allows data and services to be shared on the cloud through a uniform interface. Finally, data analysis is performed effectively using a parallel computing model with real-time data processing in Apache Spark and Sparkler.
"Exploiting a Cloud Framework for Automatically and Effectively Providing Data Analyzers." In 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2), Nov. 2017. DOI: 10.1109/SC2.2017.42.