
2012 IEEE 32nd International Conference on Distributed Computing Systems: Latest Publications

Clustering Streaming Graphs
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.20
A. Eldawy, R. Khandekar, Kun-Lung Wu
In this paper, we propose techniques for clustering large-scale "streaming" graphs, where the updates to a graph are given in the form of a stream of vertex or edge additions and deletions. Our algorithm handles such updates in an online and incremental manner and can be easily parallelized. Several previous graph clustering algorithms fall short of handling massive and streaming graphs because they are centralized, they need to know the entire graph beforehand and are not incremental, or they incur an excessive computational overhead. Our algorithm's fundamental building block is called graph reservoir sampling. We maintain a reservoir sample of the edges as the graph changes while satisfying certain desired properties, such as bounding the number of clusters or the cluster sizes. We then declare the connected components of the sampled subgraph to be clusters of the original graph. Our experiments on real graphs show that our approach not only yields clusterings of very good quality, but also obtains orders of magnitude higher throughput when compared to offline algorithms.
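The core idea can be illustrated with a small sketch: keep a uniform reservoir sample of the edge stream and report the connected components of the sample as clusters. This is a simplified illustration, not the authors' algorithm; the names are made up, deletions are not handled, and the constraints on cluster count and size mentioned above are omitted.

```python
import random

class StreamingGraphClusterer:
    """Keep a uniform reservoir sample of the edge stream and report the
    connected components of the sample as clusters (toy illustration)."""

    def __init__(self, reservoir_size):
        self.k = reservoir_size
        self.reservoir = []   # sampled edges (u, v)
        self.seen = 0         # number of edge additions observed so far

    def add_edge(self, u, v):
        self.seen += 1
        if len(self.reservoir) < self.k:
            self.reservoir.append((u, v))
        else:
            # classic reservoir sampling: keep the new edge with prob. k/seen
            j = random.randrange(self.seen)
            if j < self.k:
                self.reservoir[j] = (u, v)

    def clusters(self):
        # union-find over the sampled edges only
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        for u, v in self.reservoir:
            parent[find(u)] = find(v)
        groups = {}
        for node in parent:
            groups.setdefault(find(node), set()).add(node)
        return list(groups.values())

clusterer = StreamingGraphClusterer(reservoir_size=1000)
clusterer.add_edge("a", "b"); clusterer.add_edge("b", "c"); clusterer.add_edge("x", "y")
print(clusterer.clusters())   # e.g. [{'a', 'b', 'c'}, {'x', 'y'}]
```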
Citations: 13
Explaining BGP Slow Table Transfers
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.14
Pei-chun Cheng, Jong Han Park, K. Patel, S. Amante, Lixia Zhang
Although there have been a plethora of studies on TCP performance in support of various applications, relatively little is known about the interaction between TCP and BGP, a specific application running on top of TCP. This paper investigates BGP's slow route propagation by analyzing packet traces collected from a large ISP and the Route Views Oregon collector. In particular, we focus on the prolonged periods of BGP routing table transfers and examine in detail the interplay between TCP and BGP. In addition to the problems reported in previous literature, this study reveals a number of new TCP transport problems that collectively induce significant delays. Furthermore, we develop a tool, named T-DAT, that can be deployed together with BGP data collectors to infer various factors behind the observed delay, including BGP's sending and receiving behavior, TCP's parameter settings, TCP's flow and congestion control, and network path limitations. Identifying these delay-contributing factors is an important step toward helping ISPs and router vendors diagnose and improve BGP performance.
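As a rough illustration of the kind of trace analysis described (not the actual T-DAT tool), the following sketch groups timestamped BGP updates from one peer into bursts and flags unusually long bursts as candidate slow table transfers; the thresholds are illustrative assumptions, not measured values.

```python
from datetime import datetime, timedelta

def find_slow_transfers(update_times, idle_gap=timedelta(seconds=30),
                        min_duration=timedelta(minutes=2)):
    """Group timestamped BGP updates from one peer into bursts separated by
    idle gaps, and return the bursts long enough to look like a prolonged
    table transfer."""
    times = sorted(update_times)
    bursts, start, prev = [], None, None
    for t in times:
        if start is None:
            start = prev = t
        elif t - prev > idle_gap:       # the previous burst ended
            bursts.append((start, prev))
            start = prev = t
        else:
            prev = t
    if start is not None:
        bursts.append((start, prev))
    return [(s, e) for s, e in bursts if e - s >= min_duration]

# usage: timestamps parsed from a BGP update trace
trace = [datetime(2012, 6, 18, 10, 0, 0) + timedelta(seconds=s) for s in range(0, 300, 2)]
print(find_slow_transfers(trace))       # one ~5-minute burst is reported
```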
Citations: 4
Dynamic Activation Policies for Event Capture with Rechargeable Sensors
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.70
Zhu Ren, Peng Cheng, Jiming Chen, David K. Y. Yau, Youxian Sun
We consider the problem of event capture by a rechargeable sensor network. We assume that the events of interest follow a renewal process whose event inter-arrival times are drawn from a general probability distribution, and that a stochastic recharge process is used to provide energy for the sensors' operation. Dynamics of the event and recharge processes make the optimal sensor activation problem highly challenging. In this paper we first consider the single-sensor problem. Using dynamic control theory, we consider a full-information model in which, independent of its activation schedule, the sensor will know whether an event has occurred in the last time slot or not. In this case, the problem is framed as a Markov decision process (MDP), and we develop a simple and optimal policy for the solution. We then further consider a partial-information model where the sensor knows about the occurrence of an event only when it is active. This problem falls into the class of partially observable Markov decision processes (POMDP). Since the POMDP's optimal policy has exponential computational complexity and is intrinsically hard to solve, we propose an efficient heuristic clustering policy and evaluate its performance. Finally, our solutions are extended to handle a network setting in which multiple sensors collaborate to capture the events. We provide extensive simulation results to evaluate the performance of our solutions.
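For intuition about the full-information case, here is a toy value-iteration sketch of a single-sensor activation MDP: the state is the battery level, the recharge is Bernoulli, and all parameter names are hypothetical simplifications rather than the paper's model.

```python
import numpy as np

def optimal_activation_policy(E_max, p_event, p_charge, gamma=0.95, iters=500):
    """Toy value iteration for one rechargeable sensor: state = energy level,
    action = activate (spend 1 unit, reward 1 if an event occurs) or sleep,
    and one energy unit arrives with probability p_charge per slot."""
    V = np.zeros(E_max + 1)

    def expected_next_V(e):
        # expectation over the Bernoulli recharge applied to the next state
        return p_charge * V[min(e + 1, E_max)] + (1 - p_charge) * V[e]

    def q_values(e):
        q_sleep = gamma * expected_next_V(e)
        q_activate = (p_event + gamma * expected_next_V(e - 1)) if e >= 1 else float("-inf")
        return q_sleep, q_activate

    for _ in range(iters):
        V = np.array([max(q_values(e)) for e in range(E_max + 1)])

    policy = ["activate" if q_values(e)[1] >= q_values(e)[0] else "sleep"
              for e in range(E_max + 1)]
    return V, policy

V, policy = optimal_activation_policy(E_max=5, p_event=0.3, p_charge=0.5)
print(policy)   # e.g. ['sleep', 'activate', 'activate', 'activate', 'activate', 'activate']
```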
Citations: 15
Scalable Name Lookup in NDN Using Effective Name Component Encoding
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.35
Yi Wang, Keqiang He, Huichen Dai, Wei Meng, Junchen Jiang, B. Liu, Yan Chen
Name-based route lookup is a key function for Named Data Networking (NDN). NDN names are hierarchical and have variable, unbounded lengths, much longer than IPv4/IPv6 addresses, which makes fast name lookup a challenging issue. In this paper, we propose an effective Name Component Encoding (NCE) solution with the following two techniques: (1) a code allocation mechanism is developed to achieve memory-efficient encoding of name components; (2) we apply improved State Transition Arrays to accelerate longest name prefix matching and design a fast, incremental update mechanism that satisfies the special requirements of the NDN forwarding process, namely inserting, modifying, and deleting name prefixes frequently. Furthermore, we analyze the memory consumption and time complexity of NCE. Experimental results on a name set containing 3,000,000 names demonstrate that, compared with a character trie, NCE reduces overall memory by 30%. Besides, NCE performs a few million lookups per second (on an Intel 2.8 GHz CPU), a speedup of over 7 times compared with the character trie. Our evaluation results also show that NCE can scale up to accommodate the potential future growth of the name sets.
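A minimal sketch of the component-encoding idea (hypothetical code, not the paper's NCE data structures): each distinct name component is mapped to a small integer code, and longest-prefix matching then runs over a trie of codes instead of raw strings.

```python
class NameComponentTable:
    """Toy NDN-style FIB: components are encoded as integers and routes are
    stored in a trie over the codes."""

    FACE = "_face"                         # marker key for a stored next hop

    def __init__(self):
        self.codes = {}                    # component string -> int code
        self.trie = {}                     # nested dict trie over codes

    def _encode(self, name):
        comps = [c for c in name.split("/") if c]
        return [self.codes.setdefault(c, len(self.codes)) for c in comps]

    def insert(self, prefix, face):
        node = self.trie
        for code in self._encode(prefix):
            node = node.setdefault(code, {})
        node[self.FACE] = face

    def longest_prefix_match(self, name):
        node, best = self.trie, None
        for code in self._encode(name):
            if code not in node:
                break
            node = node[code]
            best = node.get(self.FACE, best)
        return best

fib = NameComponentTable()
fib.insert("/com/parc/videos", face=1)
fib.insert("/com/parc", face=2)
print(fib.longest_prefix_match("/com/parc/videos/intro.mp4"))  # -> 1
```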
Citations: 137
Growing Secure Distributed Systems from a Spore
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.68
Yunus Basagalar, Vassilios Lekakis, P. Keleher
This paper describes the design and evaluation of Spore, a secure cloud-based file system that minimizes trust and functionality assumptions on underlying servers. Spore differs from other systems in that system relationships are formalized only through signed data objects, rather than in complicated protocols executed between clients and servers. This approach allows Spore to bootstrap a file system from a single object, providing integrity and security guarantees while storing all data as simple, immutable objects on untrusted servers. We use simulation to characterize the performance of this system, focusing primarily on the cost incurred in compensating for the minimal server support. We show that while a naive approach is quite inefficient, a series of simple optimizations can enable the system to perform well in real-world scenarios.
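To illustrate how a file system can be verified starting from a single root object, here is a toy content-addressed store; it shows only hash-based integrity on an untrusted server and omits the signed objects and client protocol that Spore actually relies on.

```python
import hashlib, json

class ObjectStore:
    """Toy content-addressed store kept on an untrusted server: objects are
    immutable and named by the hash of their bytes, so a client that knows a
    single root hash can verify everything it fetches."""

    def __init__(self):
        self.blobs = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self.blobs[key] = data
        return key

    def get(self, key: str) -> bytes:
        data = self.blobs[key]
        if hashlib.sha256(data).hexdigest() != key:   # detect server tampering
            raise ValueError("object %s failed its integrity check" % key)
        return data

# "growing" a tiny file system from one root hash
store = ObjectStore()
file_hash = store.put(b"hello world")
root_hash = store.put(json.dumps({"readme.txt": file_hash}).encode())
directory = json.loads(store.get(root_hash))
print(store.get(directory["readme.txt"]))   # b'hello world'
```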
Citations: 1
ADAPT: Availability-Aware MapReduce Data Placement for Non-dedicated Distributed Computing
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.48
Hui Jin, Xi Yang, Xian-He Sun, I. Raicu
The MapReduce programming paradigm has been gaining popularity due to its merits of ease of programming, data distribution, and fault tolerance. The low barrier to adoption of MapReduce makes it a promising framework for non-dedicated distributed computing environments. However, the variability of host resources and availability can substantially degrade the performance of MapReduce applications. The replication-based fault tolerance mechanism helps to alleviate some problems, at the cost of inefficient storage space utilization. Intelligent solutions that guarantee the performance of MapReduce applications with a low data replication degree are needed to promote the idea of running MapReduce applications in non-dedicated environments at lower cost. In this research, we propose an Availability-aware Data Placement (ADAPT) strategy to improve application performance without extra storage cost. The basic idea of ADAPT is to dispatch data based on the availability of each node, reduce network traffic, improve data locality, and optimize application performance. We implement a prototype of ADAPT within the Hadoop framework, an open-source implementation of MapReduce. The performance of ADAPT is evaluated in an emulated non-dedicated distributed environment. The experimental results show that ADAPT can improve performance by more than 30%. ADAPT achieves high reliability without the need for additional data replication. ADAPT has also been evaluated for large-scale computing environments through simulations, with promising results.
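A minimal sketch of availability-weighted block placement (a hypothetical helper, not the ADAPT implementation): replicas are dispatched to nodes in proportion to their estimated availability, without raising the replication degree.

```python
import random

def place_blocks(blocks, node_availability, replicas=2):
    """Dispatch each block's replicas to nodes in proportion to their
    estimated availability."""
    nodes = list(node_availability)
    weights = [node_availability[n] for n in nodes]
    placement = {}
    for block in blocks:
        chosen = set()
        while len(chosen) < min(replicas, len(nodes)):
            # availability-weighted draw; highly available nodes get more data
            chosen.add(random.choices(nodes, weights=weights, k=1)[0])
        placement[block] = chosen
    return placement

availability = {"node1": 0.95, "node2": 0.60, "node3": 0.30}   # assumed estimates
print(place_blocks(["blk_0", "blk_1", "blk_2"], availability, replicas=2))
```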
Citations: 75
PAAS: A Privacy-Preserving Attribute-Based Authentication System for eHealth Networks
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.45
Linke Guo, Chi Zhang, Jinyuan Sun, Yuguang Fang
Recently, eHealth systems have replaced paper-based medical systems due to their prominent features of convenience and accuracy. Also, since medical data can be stored on any kind of digital device, people can easily obtain medical services at any time and in any place. However, privacy concerns over patient medical data are drawing increasing attention. In current eHealth networks, patients are assigned multiple attributes which directly reflect their symptoms, undergoing treatments, etc. These life-critical attributes need to be verified by authorized medical facilities, such as hospitals and clinics. When there is a need for medical services, patients have to be authenticated by showing their identities and the corresponding attributes in order to take appropriate healthcare actions. However, directly disclosing those attributes for verification may expose real identities. Therefore, existing eHealth systems fail to preserve patients' private attribute information while maintaining the original functionality of medical services. To resolve this dilemma, we propose a framework called PAAS which leverages users' verifiable attributes to authenticate users in eHealth systems while preserving their privacy. In our system, instead of letting centralized infrastructures take care of authentication, our scheme involves only the two end users. We also offer authentication strategies with progressive privacy requirements among patients or between patients and physicians. Based on our security and efficiency analysis, we show that our framework is better than existing eHealth systems in terms of privacy preservation and practicality.
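As a loose illustration of selective attribute disclosure between two end users (emphatically not the paper's cryptographic construction), the sketch below commits to each attribute with a salted hash so a patient can reveal only the attributes a physician needs to verify.

```python
import hashlib, os

def commit_attributes(attributes):
    """Commit to each attribute with a salted hash; the patient keeps the
    salts and values, while only the commitments are published/certified."""
    secrets = {name: (os.urandom(16).hex(), value) for name, value in attributes.items()}
    commitments = {name: hashlib.sha256((salt + value).encode()).hexdigest()
                   for name, (salt, value) in secrets.items()}
    return secrets, commitments

def verify_disclosure(commitments, name, salt, value):
    # a verifier recomputes the salted hash for the one disclosed attribute
    return hashlib.sha256((salt + value).encode()).hexdigest() == commitments[name]

# the patient discloses one attribute and keeps the other private
secrets, public = commit_attributes({"diabetic": "yes", "hiv_status": "negative"})
salt, value = secrets["diabetic"]
print(verify_disclosure(public, "diabetic", salt, value))   # True
```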
Citations: 111
PREPARE: Predictive Performance Anomaly Prevention for Virtualized Cloud Systems
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.65
Yongmin Tan, H. Nguyen, Zhiming Shen, Xiaohui Gu, C. Venkatramani, D. Rajan
Virtualized cloud systems are prone to performance anomalies due to various reasons such as resource contentions, software bugs, and hardware failures. In this paper, we present a novel Predictive Performance Anomaly Prevention (PREPARE) system that provides automatic performance anomaly prevention for virtualized cloud computing infrastructures. PREPARE integrates online anomaly prediction, learning-based cause inference, and predictive prevention actuation to minimize the performance anomaly penalty without human intervention. We have implemented PREPARE on top of the Xen platform and tested it on the NCSU's Virtual Computing Lab using a commercial data stream processing system (IBM System S) and an online auction benchmark (RUBiS). The experimental results show that PREPARE can effectively prevent performance anomalies while imposing low overhead to the cloud infrastructure.
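For intuition only, here is a toy online predictor in the spirit of "predict, then act before the anomaly hits": it linearly extrapolates a sliding window of a VM metric against an SLO bound. All names and thresholds are assumptions, and this is far simpler than PREPARE's prediction and cause-inference pipeline.

```python
from collections import deque

class OnlineAnomalyPredictor:
    """Keep a sliding window of a VM metric, extrapolate the recent trend a
    few steps ahead, and warn before the metric crosses an SLO bound."""

    def __init__(self, window=30, lookahead=5, slo_limit=0.9):
        self.samples = deque(maxlen=window)
        self.lookahead = lookahead
        self.slo_limit = slo_limit

    def observe(self, value):
        self.samples.append(value)
        if len(self.samples) < 2:
            return False
        # linear extrapolation from the average per-step change in the window
        slope = (self.samples[-1] - self.samples[0]) / (len(self.samples) - 1)
        predicted = self.samples[-1] + slope * self.lookahead
        return predicted >= self.slo_limit   # True -> trigger a prevention action

predictor = OnlineAnomalyPredictor()
for cpu in [0.5, 0.55, 0.62, 0.7, 0.78]:
    print(predictor.observe(cpu))            # flips to True as the trend rises
```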
Citations: 160
Byte Caching in Wireless Networks
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.39
Franck Le, M. Srivatsa, A. Iyengar
The explosion of data consumption has led to a renewed interest in byte caching. With studies showing potential reductions in network traffic of 50%, this fine-grained caching technique looks like a very attractive solution for mobile wireless operators. However, properties of wireless networks present new challenges. We first show that a single packet loss, re-ordering, or corruption -- all common conditions over the air interface -- can result in circular dependencies and cause existing byte caching algorithms to loop endlessly. To remedy the problem, we then explore a new set of encoding algorithms. Third, we assess the impact of packet losses on byte caching performance, both in terms of byte savings and delay reduction. We found that a mere 1% packet loss can already nullify any delay reduction and instead cause significant increases that users may not be willing to tolerate. Finally, we share several insights, including interactions between transport-layer mechanisms (e.g., TCP congestion windows) and byte caching operations that can cause sophisticated encoding algorithms to perform poorly. We believe that these insights are important for designing more efficient and robust byte caching encoding algorithms.
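A small sketch of the byte-caching idea (simplified, with hypothetical chunking parameters): payloads are cut into content-defined chunks, and chunks already cached by both ends are replaced with short references.

```python
import hashlib

def chunk(data, window=16, divisor=64):
    """Very small content-defined chunker: cut whenever the hash of the last
    `window` bytes is 0 mod `divisor` (a stand-in for the fingerprinting used
    by byte-caching middleboxes)."""
    chunks, start = [], 0
    for i in range(window, len(data)):
        if int.from_bytes(hashlib.sha1(data[i - window:i]).digest()[:4], "big") % divisor == 0:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks

def encode(data, cache):
    """Replace chunks already seen by both ends with short references."""
    out = []
    for c in chunk(data):
        key = hashlib.sha1(c).hexdigest()
        out.append(("ref", key) if key in cache else ("raw", c))
        cache[key] = c
    return out

cache = {}
first = encode(b"GET /index.html HTTP/1.1 ..." * 10, cache)
second = encode(b"GET /index.html HTTP/1.1 ..." * 10, cache)
print(sum(1 for kind, _ in second if kind == "ref"), "chunks served from cache")
```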
Citations: 14
When Scalability Meets Consistency: Genuine Multiversion Update-Serializable Partial Data Replication
Pub Date : 2012-06-18 DOI: 10.1109/ICDCS.2012.55
Sebastiano Peluso, P. Ruivo, P. Romano, F. Quaglia, L. Rodrigues
In this article we introduce GMU, a genuine partial replication protocol for transactional systems, which exploits an innovative, highly scalable, distributed multiversioning scheme. Unlike existing multiversion-based solutions, GMU does not rely on a global logical clock, which represents a contention point and can limit system scalability. Also, GMU never aborts read-only transactions and spares them from distributed validation schemes. This makes GMU particularly efficient in presence of read-intensive workloads, as typical of a wide range of real-world applications. GMU guarantees the Extended Update Serializability (EUS) isolation level. This consistency criterion is particularly attractive as it is sufficiently strong to ensure correctness even for very demanding applications (such as TPC-C), but is also weak enough to allow efficient and scalable implementations, such as GMU. Further, unlike several relaxed consistency models proposed in literature, EUS has simple and intuitive semantics, thus being an attractive, scalable consistency model for ordinary programmers. We integrated the GMU protocol in a popular open source in-memory transactional data grid, namely Infinispan. On the basis of a large scale experimental study performed on heterogeneous experimental platforms and using industry standard benchmarks (namely TPC-C and YCSB), we show that GMU achieves linear scalability and that it introduces negligible overheads (less than 10%), with respect to solutions ensuring non-serializable semantics, in a wide range of workloads.
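To illustrate why read-only transactions never abort under multiversioning, here is a toy single-node multiversion store with a scalar clock; GMU's distributed, vector-clock-based scheme and its partial replication are well beyond this sketch.

```python
class MultiVersionStore:
    """Toy per-node multiversion store: writes append (version, value) pairs,
    and a read-only transaction fixes a snapshot version once and reads below
    it, so it never needs to abort."""

    def __init__(self):
        self.clock = 0
        self.versions = {}        # key -> list of (version, value), ascending

    def write(self, key, value):
        self.clock += 1
        self.versions.setdefault(key, []).append((self.clock, value))
        return self.clock

    def snapshot(self):
        return self.clock

    def read(self, key, snapshot):
        for version, value in reversed(self.versions.get(key, [])):
            if version <= snapshot:
                return value
        return None

store = MultiVersionStore()
store.write("x", 1)
snap = store.snapshot()      # a read-only transaction starts here
store.write("x", 2)          # concurrent update
print(store.read("x", snap)) # -> 1, the transaction still sees its snapshot
```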
Citations: 104