
2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2): Latest Publications

Enforcing Security and Privacy via a Cooperation of Security Experts and Software Engineers: A Model-Based Vision
Marcus Hilbrich, Markus Frank
In an early phase of the software development process (requirement analysis), functional and non-functional requirements are gathered. While much research has addressed how to bring functional requirements into the software, non-functional requirements remain challenging. One reason is that non-functional requirements are often hard to measure and hard to test. Unfortunately, security, privacy, and data protection are exactly such non-functional requirements. To make things even more complicated, software engineering is a social process: multiple parties (i.e., security experts, software architects, and programmers) have to work together, which unavoidably results in misunderstandings and misinterpretations. It is therefore often unclear whether security concerns are implemented correctly, or at least formalized correctly during requirement analysis for later implementation. This paper is a discussion starter on how to overcome such communication problems, ensure that security concerns are implemented correctly, and avoid software erosion that later breaks security properties. To this end, we discuss strategies that combine security concepts with software engineering methods through the intensive use of models. Such models are already used in academia and even in industry; we recommend using them more often, more intensively, and for more concerns.
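As a rough illustration of what "enforcing a security concern through a model" could look like (this sketch is not from the paper; the component names, the `handles_personal_data` flag, and the rule itself are invented for the example), a shared machine-checkable model lets a security expert state a rule once and lets engineers validate their architecture against it before implementation:

```python
# Minimal sketch: a privacy rule checked against a shared architecture model.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    handles_personal_data: bool = False        # classification supplied by the security expert
    mechanisms: set = field(default_factory=set)   # e.g. {"encryption_at_rest", "access_control"}

# Example rule (an assumption for illustration): components that handle personal
# data must declare both encryption at rest and access control.
REQUIRED_FOR_PERSONAL_DATA = {"encryption_at_rest", "access_control"}

def check_privacy_concern(model):
    """Return the names of components that violate the example privacy rule."""
    return [c.name for c in model
            if c.handles_personal_data and not REQUIRED_FOR_PERSONAL_DATA <= c.mechanisms]

model = [
    Component("billing", handles_personal_data=True,
              mechanisms={"encryption_at_rest", "access_control"}),
    Component("analytics", handles_personal_data=True, mechanisms={"access_control"}),
]
print(check_privacy_concern(model))   # ['analytics'] -> flagged before any code is written
```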
{"title":"Enforcing Security and Privacy via a Cooperation of Security Experts and Software Engineers: A Model-Based Vision","authors":"Marcus Hilbrich, Markus Frank","doi":"10.1109/SC2.2017.43","DOIUrl":"https://doi.org/10.1109/SC2.2017.43","url":null,"abstract":"In an early phase of a software development process (requirement analysis), functional and non-function requirements are gathered. While a lot of research has been done on how to bring functional requirements into the software, non-functional requirements are still challenging. One of the reasons is that non-functional requirements are often hard to measure and hard to test. Unfortunately, security, privacy, and data protections are such non-functional requirements. To make things even more complicate, software engineering is a social process. This means multiple parties (i.e., security experts, software architects, and programmers) have to work together, which will result unavoidable in misunderstandings and misinterpretation. Therefore, it is often not clear if security concerns are implemented correctly, or have been at least formalized correctly for later implementation during the requirement analysis. This paper is a discussion starter, on how to overcome communication-based problems, ensure that security concerns are implemented correctly, and how to avoid software erosion that later on breaks security concerns. Therefore, we discuss strategies which combine security concepts with software engineering methods by the intensive use of models. Such models are already used in academia and even in industry. We recommend to use models more often, more intensive, and for more concerns.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130710447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Application-Aware Traffic Redirection: A Mobile Edge Computing Implementation Toward Future 5G Networks
Shih-Chun Huang, Yu-Cing Luo, Bing-Liang Chen, Yeh-Ching Chung, J. Chou
With the development of network technology, billions of devices access resources and services on the cloud through mobile telecommunication networks. The mobile network must handle a great number of connections and data packets, which not only consumes the limited spectrum resources and network bandwidth but also reduces the service quality of applications. To alleviate this problem, the concept of Mobile Edge Computing (MEC) was proposed by the European Telecommunications Standards Institute (ETSI) in 2014. MEC provides IT and cloud computing capabilities at the network edge to offer low-latency, high-bandwidth services. The architecture and benefits of MEC have been discussed in much of the recent literature, but the implementation of the underlying network is rarely discussed or evaluated in practice. In this paper, we present a prototype implementation of a MEC platform that uses an application-aware traffic redirection mechanism at the edge network to reduce service latency and network bandwidth consumption. Our implementation is based on OAI, an open-source 5G SoftRAN cellular system. To the best of our knowledge, it is also one of the few MEC solutions that have been built for 5G networks in practice.
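To make the redirection idea concrete, the toy sketch below shows the core decision such a mechanism has to make: given the application and the original cloud destination of a flow, either rewrite it to a locally deployed edge service or forward it unchanged. The table entries, host names, and ports are hypothetical; the paper's actual mechanism is implemented inside the OAI network stack, not as a Python lookup.

```python
# Illustrative sketch of an application-aware redirection decision at an edge gateway.

EDGE_SERVICES = {
    # (application protocol, original cloud destination) -> locally deployed edge instance
    ("http", "video.cloud.example.com"): ("10.0.0.5", 8080),
    ("mqtt", "broker.cloud.example.com"): ("10.0.0.6", 1883),
}

def redirect(app, dst_host, dst_port):
    """Return the edge endpoint for a flow if one is deployed, else the original destination."""
    return EDGE_SERVICES.get((app, dst_host), (dst_host, dst_port))

print(redirect("http", "video.cloud.example.com", 80))   # served from the edge
print(redirect("http", "other.cloud.example.com", 80))   # forwarded to the cloud unchanged
```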
{"title":"Application-Aware Traffic Redirection: A Mobile Edge Computing Implementation Toward Future 5G Networks","authors":"Shih-Chun Huang, Yu-Cing Luo, Bing-Liang Chen, Yeh-Ching Chung, J. Chou","doi":"10.1109/SC2.2017.11","DOIUrl":"https://doi.org/10.1109/SC2.2017.11","url":null,"abstract":"With the development of network technology, there are billions of devices accessing resources and services on the cloud through mobile telecommunication network. A great number of connections and data packets must be handled by the mobile network. It not only consumes the limited spectrum resources and network bandwidth, but also reduces the service quality of applications. To alleviate the problem, the concept of Mobile Edge Computing (MEC) has been proposed by European Telecommunications Standard Institute (ETSI) in 2014. MEC suggests to provide IT and cloud computing capabilities at the network edge to offer low-latency and high-bandwidth service. The architecture and the benefits of MEC have been discussed in many recent literature. But the implementation of underlying network is rarely discussed or evaluated in practice. In this paper, we present our prototype implementation of a MEC platform by developing an application-aware traffic redirection mechanism at edge network to reduce service latency and network bandwidth consumption. Our implementation is based on OAI, an open source project of 5G SoftRAN cellular system. To the best of our knowledge, it is also one of the few MEC solutions that have been built for 5G networks in practice.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116263740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
Dynamic Flow Control for Big Data Transmissions toward 5G Multi-hop Relaying Mobile Networks
Ben-Jye Chang, Yihu Li, Shin-Pin Chen, Ying-Hsin Liang
Cloud computing provides diverse services to users who access big data through high-data-rate cellular networks such as LTE-A and IEEE 802.11ac. Although LTE-A supports very high data rates, multi-hop relaying, and cooperative transmission, it suffers from high interference, path loss, and high mobility. In addition, access to cloud computing services relies on transport-layer protocols (e.g., TCP, UDP, and streaming) for end-to-end transmission. Transmission QoS degrades significantly when big data are transferred over TCP in a high-interference LTE-A environment. This paper therefore proposes a cross-layer adaptive TCP algorithm that gathers LTE-A network state (e.g., AMC, CQI, relay link state, and available bandwidth) and feeds this information back to the TCP sender so that network congestion control can be executed accurately. By sizing the TCP congestion window (cwnd) accurately under high-interference LTE-A conditions, the number of timeouts and packet losses decreases significantly. Numerical results demonstrate that the proposed approach outperforms the compared approaches in goodput and fairness, especially in high-interference environments; in particular, its goodput is 139.42% higher than that of NewReno. The results justify the claims of the proposed approach.
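The cross-layer idea can be sketched as follows: instead of reacting only to losses and timeouts, the sender derives its congestion window from radio-layer feedback. This is a hedged illustration, not the paper's algorithm; the bandwidth-delay-product formula and the CQI scaling factor below are assumptions made for the example.

```python
# Sketch: size cwnd from LTE-A feedback (available bandwidth, RTT, channel quality).

MSS = 1460  # bytes per segment

def cwnd_from_feedback(avail_bw_bps, rtt_s, cqi, cqi_max=15):
    """Congestion window (in segments), bounded by the reported bandwidth-delay
    product and scaled down when the channel quality indicator (CQI) is poor."""
    bdp_bytes = avail_bw_bps / 8.0 * rtt_s          # bytes the relay path can hold
    quality = max(cqi, 1) / float(cqi_max)          # crude penalty for a noisy channel
    return max(1, int(bdp_bytes * quality / MSS))

# Example: 20 Mbit/s reported on the relay path, 60 ms RTT, mid-range CQI.
print(cwnd_from_feedback(20e6, 0.060, cqi=9))
```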
{"title":"Dynamic Flow Control for Big Data Transmissions toward 5G Multi-hop Relaying Mobile Networks","authors":"Ben-Jye Chang, Yihu Li, Shin-Pin Chen, Ying-Hsin Liang","doi":"10.1109/SC2.2017.19","DOIUrl":"https://doi.org/10.1109/SC2.2017.19","url":null,"abstract":"Cloud computing provides various diverse services for users accessing big data through high data rate cellular networks, e.g., LTE-A, IEEE 802.11ac, etc. Although LTE-A supports very high data rate, multi-hop relaying, and cooperative transmission, LTE-A suffers from high interference, path loss, high mobility, etc. Additionally, the accesses of cloud computing services need the transport layer protocols (e.g., TCP, UDP, and streaming) for achieving end-to-end transmissions. Clearly, the transmission QoS is significantly degraded when the big data transmissions are done through the TCP protocol over a high interference LTE-A environment. Thus, this paper proposes a cross-layer-based adaptive TCP algorithm to gather the LTE-A network states (e.g., AMC, CQI, relay link state, available bandwidth, etc.), and then feeds the state information back to the TCP sender for accurately executing the network congestion control of TCP. As a result, by using the accurate TCP congestion window (cwnd) under a high interference LTE-A, the number of timeouts and packet losses are significantly decreased. Numerical results demonstrate that the proposed approach outperforms the compared approaches in goodput and fairness, especially in high interference environment. Especially, the goodput of the proposed approach is 139.42% higher than that of NewReno The results can justify the claims of the proposed approach.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126100209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Dynamic Module Deployment Framework for M2M Platforms
Bing-Liang Chen, Shih-Chun Huang, Yu-Cing Luo, Yeh-Ching Chung, J. Chou
IoT applications are built on top of M2M platforms, which provide the communication infrastructure among devices and to the clouds. Because of increasing M2M communication traffic and limited edge network bandwidth, preventing network congestion and service delay has become a crucial problem for M2M platforms. A common approach is to deploy IoT service modules in the M2M platform so that data can be pre-processed and reduced before being transmitted over the network. Moreover, these service modules often need to be deployed dynamically at various locations of the M2M platform to accommodate devices moving across access networks and on-demand service requests from users. However, existing M2M platforms offer limited support for dynamic and automatic deployment. The objective of our work is therefore to build a dynamic module deployment framework for M2M platforms that manages and optimizes module deployment automatically according to user service requirements. We achieved this by integrating an OSGi-based application framework (Kura) with an M2M platform (OM2M). By exploiting the resource reuse method of the OSGi specification, we reduced module deployment time by 50-52%. Finally, we propose a computationally efficient, near-optimal algorithm to optimize the module placement decision in our framework.
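A minimal greedy sketch of the placement problem is shown below; it is not the paper's algorithm, and the node names, latencies, capacities, and the reuse tie-breaker are invented for illustration. The key ideas it mimics are placing each module on a feasible node with low latency and preferring nodes where the module is already deployed (resource reuse).

```python
# Sketch: greedy module placement with a preference for reusing already deployed modules.

def place_modules(modules, nodes, latency, capacity):
    """Place each module on the feasible node with the lowest latency, preferring
    nodes where the module is already deployed (zero extra deployment cost)."""
    placement = {}
    load = {n: 0 for n in nodes}
    deployed = {n: set() for n in nodes}
    for m, demand in sorted(modules.items(), key=lambda kv: -kv[1]):  # big modules first
        candidates = [n for n in nodes if load[n] + demand <= capacity[n]]
        if not candidates:
            raise RuntimeError(f"no capacity left for module {m}")
        best = min(candidates, key=lambda n: (m not in deployed[n], latency[n]))
        placement[m] = best
        load[best] += demand
        deployed[best].add(m)
    return placement

modules = {"filter": 2, "aggregate": 1}                  # module -> resource demand
nodes = ["gateway-A", "gateway-B"]
latency = {"gateway-A": 5, "gateway-B": 12}              # ms to the devices
capacity = {"gateway-A": 2, "gateway-B": 4}
print(place_modules(modules, nodes, latency, capacity))  # {'filter': 'gateway-A', 'aggregate': 'gateway-B'}
```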
{"title":"A Dynamic Module Deployment Framework for M2M Platforms","authors":"Bing-Liang Chen, Shih-Chun Huang, Yu-Cing Luo, Yeh-Ching Chung, J. Chou","doi":"10.1109/SC2.2017.37","DOIUrl":"https://doi.org/10.1109/SC2.2017.37","url":null,"abstract":"IoT applications are built on top of M2M platforms which facilitate the communication infrastructure among devices and to the clouds. Because of increasing M2M communication traffic and limited edge network bandwidth, it has become a crucial problem of M2M platform to prevent network congestion and service delay. A general approach is to deploy IoT service modules in M2M platform, so that data can be pre-processed and reduced before transmitting over the networks. Moreover, the service modules often need to be deployed dynamically at various locations of M2M platform to accommodate the mobility of devices moving across access networks, and the on-demand service requirement from users. However, existing M2M platforms have limited support to deployment dynamically and automatically. Therefore, the objective of our work is to build a dynamic module deployment framework in M2M platform to manage and optimize module deployment automatically according to user service requirements. We achieved the goal by implementing a solution that integrates a OSGi-based Application Framework(Kura), with a M2M platform(OM2M). By exploiting the resource reuse method in OSGi specification, we were able to reduce the module deployment time by 50~52%. Finally, a computation efficient and near-optimal algorithm was proposed to optimize the the module placement decision in our framework.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122371853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
A Reliability-Based Resource Allocation Approach for Cloud Computing
A. B. Alam, Mohammad Zulkernine, A. Haque
The cloud provides resources to users based on their requirements using several resource allocation schemes, and reliable resource allocation is one of the major issues in cloud computing. The objective of this paper is to provide a reliable resource allocation approach for cloud computing while minimizing cost. Existing research on cloud resource allocation mostly addresses cost and resource utilization, whereas we address a crucial additional property: cloud reliability. The main novelty of our work is that we consider not only reliability but also cost when allocating appropriate resources to users; the aim is to maximize reliability while minimizing cost. To this end, we propose a heuristic for resource allocation in the cloud. We provide several performance analyses to validate the approach, and the simulation results show that it provides increased reliability when allocating resources to users.
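A heuristic of this kind can be illustrated by a simple reliability-to-cost scoring rule; the sketch below is an assumption-laden stand-in (host data, the ratio score, and the capacity model are invented), not the authors' actual heuristic, but it shows how reliability and cost can be traded off in one allocation decision.

```python
# Sketch: allocate a request to the feasible host with the best reliability/cost ratio.

hosts = [
    {"name": "h1", "reliability": 0.999, "cost": 1.2, "free_cores": 8},
    {"name": "h2", "reliability": 0.990, "cost": 0.8, "free_cores": 16},
    {"name": "h3", "reliability": 0.970, "cost": 0.5, "free_cores": 4},
]

def allocate(request_cores, hosts):
    """Pick the feasible host that maximizes reliability per unit cost."""
    feasible = [h for h in hosts if h["free_cores"] >= request_cores]
    if not feasible:
        return None
    best = max(feasible, key=lambda h: h["reliability"] / h["cost"])
    best["free_cores"] -= request_cores
    return best["name"]

print(allocate(8, hosts))   # 'h2': cheaper than h1 while still highly reliable
```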
{"title":"A Reliability-Based Resource Allocation Approach for Cloud Computing","authors":"A. B. Alam, Mohammad Zulkernine, A. Haque","doi":"10.1109/SC2.2017.46","DOIUrl":"https://doi.org/10.1109/SC2.2017.46","url":null,"abstract":"Cloud provides resources to the users based on their requirements by using several resource allocation schemes. Reliable resource allocation is one of the major issues of cloud computing. The objective of this paper is to provide a reliable resource allocation approach for cloud computing while minimizing the cost. The existing research works on resource allocation in cloud mostly address cost and resource utilization whereas we address the most crucial feature, which is cloud reliability. The main novelty of our work is that we consider not only reliability but also cost while allocating appropriate resources to the users. The aim of our proposed approach is to maximize reliability while minimizing the cost. In this regard, we propose a heuristic for resource allocation in cloud. We provide several performance analyses to validate our approach and the simulation results show that our approach provides increased reliability while allocating resources to the users.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129550599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Offset-FA: Detach the Closures and Countings for Efficient Regular Expression Matching
Chengcheng Xu, Jinshu Su, Shuhui Chen, Biao Han
Fast regular expression matching (REM) is the core issue in deep packet inspection (DPI). Traditional REM mainly relies on deterministic finite automata (DFA) to achieve fast matching; however, state explosion often makes the DFA infeasible in practice. We propose the Offset-FA to solve the state explosion problem in REM. State explosion is mainly caused by large-character-set closures and counting repetitions. We extract these features from the original patterns and represent them as an offset relation table and a reset table to preserve semantic equivalence, while the remaining fragments are compiled into a DFA called the fragment-DFA. The fragment-DFA, together with the offset relation table and the reset table, composes our Offset-FA. Experiments show that the Offset-FA supports large rule sets and outperforms state-of-the-art solutions in space cost and matching speed.
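The decomposition can be illustrated with a toy example. Here Python's `re` module stands in for the fragment-DFA, and a single offset-relation entry replaces the counting repetition; this is a simplification for illustration only, not the paper's construction (the reset table and multi-rule handling are omitted).

```python
# Toy Offset-FA illustration for the rule "GET.{0,20}HTTP":
# the closure/counting part is detached into an offset constraint, and only the
# literal fragments are matched by the (simulated) fragment-DFA.
import re

FRAGMENTS = ["GET", "HTTP"]
# Fragment 1 ("HTTP") must start 3..23 bytes after fragment 0 ("GET") starts,
# i.e. len("GET") + {0,20} repetition bound.
OFFSET_RELATION = {(0, 1): (3, 23)}

def offset_fa_match(data):
    hits = {}
    for idx, frag in enumerate(FRAGMENTS):            # fragment matching pass
        m = re.search(re.escape(frag), data)
        if not m:
            return False
        hits[idx] = m.start()
    for (a, b), (lo, hi) in OFFSET_RELATION.items():  # offset-table verification pass
        if not (lo <= hits[b] - hits[a] <= hi):
            return False
    return True

print(offset_fa_match("GET /index.html HTTP/1.1"))    # True
print(offset_fa_match("GET " + "A" * 40 + " HTTP"))   # False: repetition bound exceeded
```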
{"title":"Offset-FA: Detach the Closures and Countings for Efficient Regular Expression Matching","authors":"Chengcheng Xu, Jinshu Su, Shuhui Chen, Biao Han","doi":"10.1109/SC2.2017.50","DOIUrl":"https://doi.org/10.1109/SC2.2017.50","url":null,"abstract":"Fast regular expression matching (REM) is the core issue in deep packet inspection (DPI). Traditional REM mainly relies on deterministic finite automaton (DFA) to achieve fast matching. However, state explosion usually makes the DFA infeasible in practice. We propose the offset-FA to solve the state explosion problem in REM. The state explosion is mainly caused by the features of the large character set with closures or counting repetitions. We extract these features from original patterns, and represent them as an offset relation table and a reset table to keep semantic equivalence, and the rest fragments are compiled to a DFA called fragment-DFA. The fragment-DFA along with the offset relation table and reset table compose our Offset-FA. Experiments show that the offset-FA supports large rule sets and outperforms state-of-the-art solutions in space cost and matching speed.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"495 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123067449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
An Experiment on the Load Shifting from Service Registry to Service Providers in SOA
Kuo-Hsun Hsu, Kuan-Chou Lai, Li-Yung Huang, Hsuan-Fu Yang, Wei-Shan Tsai
In SOA (Service-Oriented Architecture), the service match-making process is conducted in the registry. When the number of requests increases, the communication load between the registry and requesters also increases, which may overload the registry and lengthen response times. This is even worse when service match-making is based on semantics rather than syntax. To address this issue, we propose an extension of SOA that reduces response time and registry load by reallocating the major service match-making tasks to the service providers. We also conduct an experiment comparing the performance of the proposed architecture with the original SOA to illustrate the feasibility of the approach.
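The division of labor can be pictured roughly as follows: the registry keeps only a cheap syntactic index, and each provider evaluates the expensive semantic match itself. The interfaces and data below are invented for illustration and are not the paper's architecture or implementation.

```python
# Sketch: registry does the syntactic lookup, providers do the semantic match-making.

class Provider:
    def __init__(self, name, capabilities):
        self.name, self.capabilities = name, capabilities

    def semantic_match(self, request):
        # stand-in for ontology-based reasoning performed on the provider side
        return request["concepts"] <= self.capabilities

REGISTRY = {  # category -> registered providers (the now lightweight registry index)
    "payment": [Provider("PayA", {"credit-card", "refund"}),
                Provider("PayB", {"credit-card"})],
}

def discover(request):
    candidates = REGISTRY.get(request["category"], [])                  # registry step
    return [p.name for p in candidates if p.semantic_match(request)]    # provider step

print(discover({"category": "payment", "concepts": {"credit-card", "refund"}}))  # ['PayA']
```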
{"title":"An Experiment on the Load Shifting from Service Registry to Service Providers in SOA","authors":"Kuo-Hsun Hsu, Kuan-Chou Lai, Li-Yung Huang, Hsuan-Fu Yang, Wei-Shan Tsai","doi":"10.1109/SC2.2017.48","DOIUrl":"https://doi.org/10.1109/SC2.2017.48","url":null,"abstract":"In SOA (Service-Oriented Architecture), the service match making process is conducted in the registry. When the amount of requests increases, the communication load between registry and requesters could also increase, which may overload the registry and result in a longer response time. This could be worse when the service match making is based on semantic rather than syntax. To address this issues, we proposed, in this paper, an extension of the SOA to reduce the response time and registry loading by reallocating the major service match making tasks to service providers. An experiment is also conducted to compare the performance of the proposed architecture with original SOA to illustrate the feasibility of the proposed approach.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123165310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Near Real-Time Tracking at Scale
D. Vasthimal, Sudeep Kumar, Mahesh Somani
Clickstream data analysis involves collecting, analyzing, and aggregating data for business analytics. Key business indicators such as user experience, product checkout flows, and failed customer interactions are computed from this data. A/B testing [18] and other data experimentation use the clickstream to compute business lift or to capture user feedback on new changes to the site. Handling such data at scale is extremely challenging, especially designing a system that ensures little to no data loss, bot filtering, event ordering, aggregation, and sessionization of user visits. The entire operation must be near real-time so that the computed results can be fed back into services that enable targeted personalization and a better user experience. Sessions capture a group of user interactions within a stipulated time frame, and business metrics are often computed on these sessions; they are therefore critical for business analytics because they represent true user behavior. We describe the process of creating a highly available data pipeline and computational model for user sessions at scale.
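Sessionization itself is simple to state: group a user's events into sessions separated by a period of inactivity. The sketch below shows that core step only; the 30-minute timeout and the event shape are common conventions assumed for the example, not details taken from the paper's production pipeline.

```python
# Sketch: inactivity-based sessionization of a clickstream.
from collections import defaultdict

SESSION_TIMEOUT = 30 * 60  # seconds of inactivity that closes a session (assumed value)

def sessionize(events):
    """events: iterable of (user_id, timestamp_seconds) pairs."""
    sessions = defaultdict(list)   # user -> list of sessions (each a list of timestamps)
    last_seen = {}
    for user, ts in sorted(events, key=lambda e: e[1]):
        if user not in last_seen or ts - last_seen[user] > SESSION_TIMEOUT:
            sessions[user].append([])          # inactivity gap: start a new session
        sessions[user][-1].append(ts)
        last_seen[user] = ts
    return sessions

events = [("u1", 0), ("u1", 120), ("u1", 5000), ("u2", 10)]
print({u: len(s) for u, s in sessionize(events).items()})   # {'u1': 2, 'u2': 1}
```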
{"title":"Near Real-Time Tracking at Scale","authors":"D. Vasthimal, Sudeep Kumar, Mahesh Somani","doi":"10.1109/SC2.2017.44","DOIUrl":"https://doi.org/10.1109/SC2.2017.44","url":null,"abstract":"Clickstream data analysis involves collecting, analyzing and aggregating data for business analytics. Key business indicators such as user experience, product checkout flows, failed customer interactions are computed based on this data. A/B testing [18] or any data experimentation use clickstream data stream to compute business lifts or capture user feedback to new changes on the site. Handling such data at scale is extremely challenging, especially to design a system ensuring little to no data loss, bot filtering, event ordering, aggregation and sessionization of user visit. The entire operation must be near real-time so that computations performed can be fed back into services which can help in targeted personalization and better user experience. Sessions capture group of user interactions within stipulated time frame. Business metrics often computed on these user sessions. User sessions are therefore critical for business analytics as they represent true user behavior. We describe the process of creating a highly available data pipeline and computational model for user sessions at scale.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121908904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Accelerating Lattice Quantum Chromodynamics Simulations with Value Prediction
Jie Tang, Shaoshan Liu, Chen Liu, C. Eisenbeis, J. Gaudiot
Communication latency problems are universal and have become a major performance bottleneck as big data infrastructure and many-core architectures scale. Research institutes around the world have built specialized supercomputers with powerful computation units to accelerate scientific computation, yet the bottleneck often lies on the communication side rather than the computation side. In this paper we first demonstrate the severity of communication latency problems. We then use Lattice Quantum Chromodynamics (LQCD) simulations as a case study to show how value prediction techniques can reduce communication overheads and thus deliver higher performance without adding more expensive hardware. In detail, we first implement a software value predictor for LQCD simulations: our results indicate that 22.15% of the predictions result in performance gain and only 2.65% lead to rollbacks. Next we explore a hardware value predictor design, which reduces prediction latency twenty-fold. In addition, based on the observation that the full range of floating-point accuracy may not always be needed, we propose and implement an initial design of a tolerance value predictor: as the tolerance range increases, the prediction accuracy also increases dramatically.
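The general mechanism (predict the value of a pending remote message, compute speculatively, and roll back on a misprediction) can be sketched with a simple stride predictor. This is a generic illustration, not the paper's LQCD-specific predictor; the value sequence and the exact-match check standing in for the tolerance test are assumptions.

```python
# Sketch: value prediction hides communication latency; mispredictions force recomputation.

class StridePredictor:
    def __init__(self):
        self.last, self.stride = None, 0.0

    def predict(self):
        return None if self.last is None else self.last + self.stride

    def update(self, actual):
        if self.last is not None:
            self.stride = actual - self.last
        self.last = actual

def compute(remote_value):            # the work that depends on the remote value
    return remote_value * 2.0

pred, rollbacks = StridePredictor(), 0
for actual in [1.0, 2.0, 3.0, 4.0, 6.0]:      # values arriving from a neighbouring node
    guess = pred.predict()
    if guess is not None:
        speculative = compute(guess)           # start work before the message arrives
        if abs(guess - actual) > 1e-9:         # misprediction: discard and redo
            rollbacks += 1
            speculative = compute(actual)
    pred.update(actual)
print("rollbacks:", rollbacks)                 # 2 of the 4 predictions roll back here
```

A tolerance predictor in this spirit would simply widen the `abs(guess - actual)` threshold, accepting slightly inaccurate values when full floating-point accuracy is not required.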
{"title":"Accelerating Lattice Quantum Chromodynamics Simulations with Value Prediction","authors":"Jie Tang, Shaoshan Liu, Chen Liu, C. Eisenbeis, J. Gaudiot","doi":"10.1109/SC2.2017.39","DOIUrl":"https://doi.org/10.1109/SC2.2017.39","url":null,"abstract":"Abstract. Communication latency problems are universal and have become a major performance bottleneck as we scale in big data infrastructure and many-core architectures. Specifically, research institutes around the world have built specialized supercomputers with powerful computation units in order to accelerate scientific computation. However, the problem often comes from the communication side instead of the computation side. In this paper we first demonstrate the severity of communication latency problems. Then we use Lattice Quantum Chromo Dynamic (LQCD) simulations as a case study to show how value prediction techniques can reduce the communication overheads, thus leading to higher performance without adding more expensive hardware. In detail, we first implement a software value predictor on LQCD simulations: our results indicate that 22.15% of the predictions result in performance gain and only 2.65% of the predictions lead to rollbacks. Next we explore the hardware value predictor design, which results in a 20-fold reduction of the prediction latency. In addition, based on the observation that the full range of floating point accuracy may not be always needed, we propose and implement an initial design of the tolerance value predictor: as the tolerance range increases, the prediction accuracy also increases dramatically.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121372128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Exploiting a Cloud Framework for Automatically and Effectively Providing Data Analyzers
Ching-Hsiang Su, Wei-Chih Huang, Van-Dai Ta, Chuan-Ming Liu, Sheng-Lung Peng
Big data have become crucially important for data computing and analytics, and the traditional computing paradigm handles them inefficiently because of their complexity and computational cost. Cloud computing is the modern computing paradigm in which scalable, often real-time resources such as files, data, programs, hardware, and third-party services are accessible to users from a web browser via the Internet; it is the current trend for big data analytics, offering high reliability, availability, and scalability. This paper proposes an automated cloud analysis framework and management system based on OpenStack and other open-source projects such as Apache Spark, Sparkler, RESTful APIs, and the JBoss web server. The automated cloud provides a cluster of virtual machines whose storage and memory support multiple data analyses. OpenStack also provides authentication and user account management services for the cloud environment, which enhances cloud security. REST, in turn, defines a set of architectural constraints that, applied as a whole, emphasize scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components that reduce interaction latency, enforce security, and encapsulate legacy systems; a RESTful API is the essential implementation of the REST web architecture for web services, allowing data and services to be shared on the cloud through a uniform interface. Finally, data analysis is performed effectively using a parallel computing model with real-time data processing in Apache Spark and Sparkler.
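As a hedged illustration of driving such a framework through a uniform REST interface, the sketch below provisions an analysis cluster and submits a Spark job. The base URL, endpoint paths, payload fields, and token header are hypothetical placeholders invented for the example; they are not the actual API of the described framework or of OpenStack.

```python
# Sketch: automating "provision cluster, then submit analysis job" over a RESTful API.
import json
import urllib.request

BASE = "http://analysis-cloud.example.org/api/v1"   # placeholder address

def post(path, payload, token):
    """Send a JSON POST to the (hypothetical) framework API and return the JSON reply."""
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "X-Auth-Token": token},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:        # uniform interface: plain HTTP + JSON
        return json.load(resp)

def run_analysis(token):
    cluster = post("/clusters", {"workers": 4, "flavor": "m1.medium"}, token)
    job = post(f"/clusters/{cluster['id']}/jobs",
               {"engine": "spark", "script": "s3://bucket/wordcount.py"}, token)
    return job["status"]
```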
{"title":"Exploiting a Cloud Framework for Automatically and Effectively Providing Data Analyzers","authors":"Ching-Hsiang Su, Wei-Chih Huang, Van-Dai Ta, Chuan-Ming Liu, Sheng-Lung Peng","doi":"10.1109/SC2.2017.42","DOIUrl":"https://doi.org/10.1109/SC2.2017.42","url":null,"abstract":"Recently big data are crucial important for data computing and analytics. Traditional computing paradigm is inefficient for computing by the complexity and computational cost. Cloud computing is a modern trend of computing paradigm in which typically real-time scalable resources such as files, data, programs, hardware, and third party services can be accessible from a web browser via the Internet to users. It is the new trend for big data analytics that provides high reliability, availability, and scalability services. This paper proposed an automated cloud analysis framework and management system based on OpenStack and other open-source projects such as Apache Spark, Sparkler, RESTful API, and JBoss web server. The automated cloud provides a cluster of virtual machines which utilizes the storage and memory in order to support multiple data analysis. In addition, OpenStack also provide services for authenticating and user account management on cloud environment which enhance the cloud security. In addition, REST provides a set of architectural constraints that, when applied as a whole, emphasizes scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce interaction latency, enforce security, and encapsulate legacy systems. RESTful API is the essential implementation of REST web architecture for web services. It provide data and services are shared on cloud through uniform interface. Finally, data analysis works effectively by using parallel computing model with realtime data processing in Apache Spark and Sparkler.","PeriodicalId":188326,"journal":{"name":"2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2)","volume":"285 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131986810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1