
2016 IEEE/ACM 24th International Symposium on Quality of Service (IWQoS): Latest Publications

Adaptive rate control over mobile data networks with heuristic rate compensations
Pub Date : 2016-06-20 DOI: 10.1109/IWQoS.2016.7590420
Ke Liu, Zhuang Wang, Jack Y. B. Lee, Mingyu Chen, Lixin Zhang
Mobile data networks exhibit highly variable data rates and stochastic non-congestion-related packet loss. These challenges result in key performance bottlenecks in current Transmission Control Protocol (TCP) implementations: bandwidth inefficiency and large end-to-end delay. This work addresses these challenges by first developing a Sliding Interval based Rate Adaptation (SIRA) that tracks bandwidth over a fixed time interval and periodically applies it to the transmission rate. Extensive experiments confirmed that SIRA achieves 96.3% bandwidth utilization and reduces the average queueing delay by a factor of 1.37, compared to TCP CUBIC, the preferred variant for Internet servers. However, the resultant end-to-end delay is still much larger than interactive applications require, so we complement SIRA with two heuristic rate compensation algorithms (SIRA-H), exploiting the fact that the bandwidth does not vary significantly over long time scales. Specifically, SIRA-H first reduces the transmission rate of SIRA if the estimated RTT is above a preconfigured threshold. Meanwhile, it computes the amount of unsent data that would have been transmitted if plain SIRA were used, and compensates for the rate reduction with those unsent data, as if their ACKs had been received, once the queue is detected to be empty. We evaluated SIRA-H through a combination of trace-driven emulations and real-world experiments, and showed that it reduces the 95th percentile queueing delay by a factor of over 3.9 while maintaining a similar throughput compared to the original SIRA. In comparison to state-of-the-art protocols such as Sprout and Verus, SIRA-H also reduces the 95th percentile queueing delay by a factor of over 0.8.
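The compensation logic described in the abstract lends itself to a compact control loop. The sketch below is an illustrative reconstruction, not the authors' code: the interval length, the RTT threshold, the reduction factor, and the backlog bookkeeping are assumptions made only for this example.

```python
class SIRAHRateController:
    """Illustrative sketch of SIRA with heuristic rate compensation (SIRA-H).

    Assumptions (not from the paper): the field names, the exact shape of the
    rate reduction, and how the unsent backlog is accounted for.
    """

    def __init__(self, interval_s=0.5, rtt_threshold_s=0.15, reduction_factor=0.7):
        self.interval_s = interval_s              # sliding-interval length for bandwidth tracking
        self.rtt_threshold_s = rtt_threshold_s    # RTT above this triggers rate reduction
        self.reduction_factor = reduction_factor  # fraction of the SIRA rate used when RTT is high
        self.unsent_backlog_bytes = 0.0           # data SIRA would have sent but SIRA-H withheld

    def sira_rate(self, bytes_acked_in_interval):
        # Plain SIRA: the bandwidth estimated over the last interval becomes the sending rate.
        return bytes_acked_in_interval / self.interval_s

    def next_rate(self, bytes_acked_in_interval, est_rtt_s, queue_empty):
        rate = self.sira_rate(bytes_acked_in_interval)
        if est_rtt_s > self.rtt_threshold_s:
            # Heuristic 1: back off when the estimated RTT exceeds the threshold,
            # and remember how much data plain SIRA would have sent in the meantime.
            reduced = rate * self.reduction_factor
            self.unsent_backlog_bytes += (rate - reduced) * self.interval_s
            return reduced
        if queue_empty and self.unsent_backlog_bytes > 0:
            # Heuristic 2: the bottleneck queue drained, so compensate by treating the
            # withheld data as if its ACKs had arrived, sending it on top of the SIRA rate.
            bonus = self.unsent_backlog_bytes / self.interval_s
            self.unsent_backlog_bytes = 0.0
            return rate + bonus
        return rate
```

A sender would call `next_rate` once per interval, feeding in the bytes acknowledged over the last interval, the latest RTT estimate, and whether the bottleneck queue was observed to be empty.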
Citations: 1
On mobile instant video clip sharing with screen scrolling
Pub Date : 2016-06-20 DOI: 10.1109/IWQoS.2016.7590431
Lei Zhang, Feng Wang, Jiangchuan Liu, Xiaoqiang Ma
Technology advances in wireless networking and mobile devices have made anytime, anywhere data access readily available. This also enables crowdsourced content capturing and sharing, especially for multimedia data such as video. One example is Twitter's Vine, which mainly targets mobile devices, allowing users to create ultra-short video clips and instantly share them with their followers. In this paper, we present an initial study of this new generation of mobile instant video clip sharing services and explore the potential for further enhancement. We closely investigate its unique mobile interface and the characteristic user behavior of screen scrolling, revealing the key differences between Vine-enabled anytime, anywhere data access patterns and those of traditional counterparts. We then examine the scheduling policy that maximizes the user watching experience as well as the cost efficiency. We show that the generic scheduling problem involves two subproblems, namely pre-fetching scheduling and watch-time download scheduling, and develop effective solutions to both. The superiority of our solution is demonstrated by extensive trace-driven simulations. To the best of our knowledge, this is the first work on modeling and optimizing the viewing experience of an instant video clip sharing service on mobile devices.
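The two-level scheduling idea, pre-fetching a small head of the clips the user may scroll to and then downloading the clip actually being watched, can be pictured with a toy greedy sketch. Everything below (the viewing-probability weights, the head size, the buffering lead) is a hypothetical illustration rather than the paper's algorithm.

```python
def prefetch_schedule(clips, bandwidth_budget_bytes, head_bytes=256 * 1024):
    """Toy pre-fetching scheduler: spend a byte budget on the first chunk of the
    clips the user is most likely to scroll to next.

    clips: list of dicts with 'id', 'view_prob' (an assumed estimate) and 'size_bytes'.
    Returns a list of (clip_id, bytes_to_prefetch).
    """
    plan = []
    remaining = bandwidth_budget_bytes
    # Prefer clips with the highest probability of being watched.
    for clip in sorted(clips, key=lambda c: c["view_prob"], reverse=True):
        if remaining <= 0:
            break
        take = min(head_bytes, clip["size_bytes"], remaining)
        plan.append((clip["id"], take))
        remaining -= take
    return plan


def watch_time_schedule(clip_size_bytes, already_have_bytes, playback_rate_bps, lead_s=2.0):
    """Toy watch-time scheduler: keep a lead of `lead_s` seconds of playback buffered
    (an assumed safety margin, not the paper's policy). Returns bytes to request now."""
    target_bytes = min(clip_size_bytes,
                       already_have_bytes + int(playback_rate_bps / 8 * lead_s))
    return max(0, target_bytes - already_have_bytes)
```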
Citations: 3
Toward online virtual network function placement in Software Defined Networks
Pub Date : 2016-06-20 DOI: 10.1109/IWQoS.2016.7590425
Bowu Zhang, Jinho Hwang, Timothy Wood
Network function virtualization (NFV) and Software Defined Networks (SDN) separate and abstract network functions from the underlying hardware, creating a flexible virtual networking environment that reduces cost and allows policy-based decisions. One of the biggest challenges in NFV-SDN is to map the required virtual network functions (VNFs) onto the underlying hardware of substrate networks in a timely manner. In this paper, we formulate the VNF placement problem via Graph Pattern Matching, with an objective function that can easily be adapted to fit various applications. Previous work only considers off-line VNF placement, as it is time-consuming to find an appropriate mapping path while respecting all software and hardware constraints. To reduce this time, we investigate the feasibility and effectiveness of path precomputation, where paths are calculated prior to placement. Our approach enables online VNF placement in SDNs, allowing VNF requests to be processed as they arrive. An online placement approach (OPA) is proposed to place VNF requests on substrate networks. To the best of our knowledge, this is the first work in the literature that considers online placement of chained VNFs in SDNs. In addition, we present an application of OPA to cost minimization. Simulation results demonstrate that our online approach provides competitive performance compared with off-line algorithms.
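The benefit of precomputing paths before requests arrive can be shown with a small sketch: all-pairs shortest paths are computed once over the substrate graph, and each arriving VNF chain is then mapped greedily onto nodes with spare capacity. The greedy node choice and the CPU-only capacity model are assumptions for illustration; the paper's OPA formulation via graph pattern matching is more general.

```python
from collections import deque

def precompute_paths(adj):
    """BFS shortest paths from every node of the substrate graph (unweighted).
    adj: dict node -> iterable of neighbour nodes. Returns paths[src][dst] = [src, ..., dst]."""
    paths = {}
    for src in adj:
        prev = {src: None}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in prev:
                    prev[v] = u
                    queue.append(v)
        paths[src] = {}
        for dst in prev:
            hop, p = dst, []
            while hop is not None:
                p.append(hop)
                hop = prev[hop]
            paths[src][dst] = list(reversed(p))
    return paths


def place_chain_online(chain_cpu_demands, node_cpu_free, paths, ingress):
    """Greedy online placement sketch: walk the VNF chain and put each function on the
    closest node (by precomputed hop count) that still has enough CPU. Returns a list
    of (vnf_index, node), or None if the request must be rejected."""
    placement, current = [], ingress
    for i, demand in enumerate(chain_cpu_demands):
        candidates = [n for n in node_cpu_free
                      if node_cpu_free[n] >= demand and n in paths[current]]
        if not candidates:
            return None  # reject: no feasible node reachable from the current position
        best = min(candidates, key=lambda n: len(paths[current][n]))
        node_cpu_free[best] -= demand
        placement.append((i, best))
        current = best
    return placement
```

Because the BFS table is built once, each arriving request only pays for the greedy scan, which is what makes the online setting tractable.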
Citations: 19
Low delay streaming of DASH content with WebRTC data channel
Pub Date : 2016-06-20 DOI: 10.1109/IWQoS.2016.7590414
Shuai Zhao, Zhu Li, D. Medhi
Instantaneous low-delay on-demand video streaming with very low start-up and channel-switching delay is highly desirable for users. The dominant over-the-top (OTT) solutions such as DASH, which are based on HTTP and/or WebSocket, suffer from the slow start of the underlying TCP transport, while the typical coding structure of DASH content introduces additional delays. In this work, we address these issues with a new low-delay transport based on a WebRTC data channel, along with a new content-side packetization scheme based on a QoE-metric-driven DASH sub-representation, to facilitate a fast start with agile and fine-granular rate adaptation. Simulations on both NS-3 and the GENI testbed demonstrate the effectiveness of this approach. This can serve as a basis for a further peer-assisted low-delay over-the-top (OTT) solution for instantaneous live video streaming.
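The content-side part of this design can be pictured with a minimal sketch: a DASH segment is split into small messages suitable for a message-oriented data channel, and a sub-representation is chosen against the measured throughput. The 16 KB chunk size, the header layout, and the 20% safety margin are assumptions for the example, not the paper's packetization format or its QoE metric.

```python
def packetize_segment(segment_bytes, segment_id, chunk_size=16 * 1024):
    """Schematic packetizer: split a DASH segment into small messages sized for a
    message-oriented data channel. Chunk size and header layout are hypothetical."""
    chunks = []
    total = (len(segment_bytes) + chunk_size - 1) // chunk_size
    for i in range(total):
        payload = segment_bytes[i * chunk_size:(i + 1) * chunk_size]
        header = f"{segment_id}:{i}/{total}".encode()  # hypothetical header format
        chunks.append(header + b"|" + payload)
    return chunks


def pick_sub_representation(throughput_bps, sub_reps):
    """Pick the highest-bitrate sub-representation that fits the measured throughput,
    with an assumed 20% safety margin. sub_reps: list of (name, bitrate_bps)."""
    usable = 0.8 * throughput_bps
    feasible = [r for r in sub_reps if r[1] <= usable]
    return max(feasible, key=lambda r: r[1]) if feasible else min(sub_reps, key=lambda r: r[1])
```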
Citations: 3
Tunneling on demand: A lightweight approach for IP fast rerouting against multi-link failures
Pub Date : 2016-06-20 DOI: 10.1109/IWQoS.2016.7590416
Yuan Yang, Mingwei Xu, Qi Li
Multi-link failures in the Internet may incur heavy packet loss and degrade network performance. Existing approaches address this issue by enabling routing protection. However, the effectiveness and efficiency of these approaches are not well addressed. In particular, it remains open whether label-free routing can provide full protection against arbitrary multi-link failures in arbitrary networks. We propose a model for interface-specific routing (ISR), which can be seen as a general form of label-free routing. We show that there exist networks in which no ISR can be constructed to protect the routing against any k-link failures (k ≥ 2). To improve the protection effectiveness with little overhead in such cases, we propose a tunneling on demand (TOD) approach in this paper. With our approach, most failures can be covered by ISR, and tunneling is activated only when failures cannot be detoured by ISR. We develop algorithms to compute ISR properly so as to minimize the number of activated tunnels, and to compute the protection tunnels when necessary. We prove that TOD can protect routing against any single-link and dual-link failures. We evaluate TOD by simulations with real-world topologies. The results show that TOD can achieve a protection ratio higher than 98% with small tunneling overhead for multi-link failures, better than the existing tunnel-free approach, whose protection ratio is 85% to 95%.
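The on-demand flavour of TOD, detour without labels when possible and tunnel only otherwise, can be sketched for a single failed link. The loop-free-alternate style condition and the tunnel-endpoint choice below are assumptions made for the sketch; the paper's ISR construction and tunnel computation are more involved.

```python
from collections import deque

def hop_dist(adj, src):
    """BFS hop distances from src; adj maps a node to the set of its neighbours."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist


def repair_action(adj, src, dst, failed_next_hop):
    """Decide how node `src` repairs traffic to `dst` when its link to `failed_next_hop`
    fails: prefer a label-free detour via a loop-free neighbour, and fall back to a
    tunnel only when no such neighbour exists."""
    dist = {n: hop_dist(adj, n) for n in adj}
    inf = float("inf")
    for nbr in adj[src]:
        if nbr == failed_next_hop:
            continue
        # Loop-free condition: the neighbour's own path to dst does not return through src.
        if dist[nbr].get(dst, inf) < dist[nbr].get(src, inf) + dist[src].get(dst, inf):
            return ("detour", nbr)
    # No loop-free neighbour exists: activate a tunnel on the post-failure topology.
    pruned = {n: {v for v in adj[n] if {n, v} != {src, failed_next_hop}} for n in adj}
    if dst not in hop_dist(pruned, src):
        return ("unreachable", None)
    # Hypothetical endpoint choice for the sketch: tunnel all the way to dst.
    return ("tunnel", dst)
```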
Citations: 4
Tradeoff between executing time and revenue for runtime service composition
Pub Date : 2016-06-20 DOI: 10.1109/IWQoS.2016.7590408
Jun-Na Zhang, Shangguang Wang, Qibo Sun, Fangchun Yang
Given a service composition, runtime adaptation is challenging but important, due to the complicated execution environment and the evolving nature of Web services. In this paper, we present a runtime adaptive service composition approach that takes both execution time minimization and revenue maximization into consideration. Based on dynamic programming, we deduce the optimal policy. Through this policy, the orchestrator selects one concrete service per task at runtime. The experimental results show that the proposed approach outperforms the previous approach.
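One common way to trade execution time against revenue with dynamic programming is a knapsack-style recursion over the task chain. The deadline formulation, integer time units, and candidate tuples below are assumptions for illustration; the paper derives its own optimal policy from a different dynamic program.

```python
def best_composition(tasks, deadline):
    """Knapsack-style DP sketch: pick one concrete service per task so that total
    revenue is maximised while total execution time stays within `deadline`.

    tasks: list of lists of (service_name, exec_time, revenue) candidates.
    Returns (best_revenue, [chosen service names]) or (None, []) if infeasible.
    """
    # dp[t] = best (revenue, choices) achievable with total execution time exactly t.
    dp = {0: (0.0, [])}
    for candidates in tasks:
        nxt = {}
        for used, (rev, chosen) in dp.items():
            for name, t, r in candidates:
                total = used + t
                if total > deadline:
                    continue  # this concrete service would blow the time budget
                if total not in nxt or nxt[total][0] < rev + r:
                    nxt[total] = (rev + r, chosen + [name])
        dp = nxt
        if not dp:
            return None, []
    return max(dp.values(), key=lambda x: x[0])


# Example: two tasks, two candidate services each, a deadline of 10 time units.
tasks = [
    [("s1a", 4, 5.0), ("s1b", 2, 3.0)],
    [("s2a", 7, 9.0), ("s2b", 5, 6.0)],
]
print(best_composition(tasks, deadline=10))   # -> (12.0, ['s1b', 's2a'])
```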
Citations: 0
Differentially private density estimation via Gaussian mixtures model
Pub Date : 2016-06-20 DOI: 10.1109/IWQoS.2016.7590445
Yuncheng Wu, Yao Wu, Hui Peng, Juru Zeng, Hong Chen, Cuiping Li
Density estimation constructs an estimate of the probability density function from observed data. However, such a function may compromise the privacy of individuals. A notable paradigm for offering strong privacy guarantees in data analysis is differential privacy. In this paper, we propose DPGMM, a parametric density estimation algorithm using the Gaussian mixture model (GMM) under differential privacy. GMM is a well-known model that can approximate any distribution and can be fitted via the Expectation-Maximization (EM) algorithm. The main idea of DPGMM is to add two extra steps after obtaining the estimated parameters in the M-step of each iteration. The first is the noise-adding step, which injects calibrated noise into the estimated parameters according to their L1-sensitivities and privacy budgets. The second is the post-processing step, which repairs noisy parameters that might otherwise break their intrinsic characteristics. Extensive experiments using both real and synthetic datasets evaluate the performance of DPGMM, and demonstrate that the proposed method outperforms a state-of-the-art approach.
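The two extra per-iteration steps can be illustrated on the M-step output of a one-dimensional GMM: Laplace noise calibrated to an L1-sensitivity and a privacy budget, followed by post-processing that keeps the weights a valid probability vector. The budget split, the sensitivity values, and the clipping floor below are assumptions for the example, not the quantities derived in the paper.

```python
import numpy as np

def privatize_gmm_params(weights, means, eps_weights, eps_means,
                         sens_weights, sens_means, rng=None):
    """Sketch of DPGMM's two extra steps applied to M-step estimates of a 1-D GMM:
    (1) Laplace noise scaled by L1-sensitivity / epsilon, (2) post-processing so the
    noisy parameters remain valid mixture parameters."""
    rng = rng or np.random.default_rng()
    # Step 1: the Laplace mechanism, one scale per parameter group.
    noisy_w = weights + rng.laplace(scale=sens_weights / eps_weights, size=len(weights))
    noisy_m = means + rng.laplace(scale=sens_means / eps_means, size=len(means))
    # Step 2: post-processing (consumes no extra privacy budget): clip the weights to a
    # small positive floor and renormalise so they again sum to one.
    noisy_w = np.clip(noisy_w, 1e-6, None)
    noisy_w = noisy_w / noisy_w.sum()
    return noisy_w, noisy_m


# Example use inside an EM loop, with hypothetical budgets and sensitivities.
w, m = np.array([0.6, 0.4]), np.array([-1.0, 2.0])
print(privatize_gmm_params(w, m, eps_weights=0.5, eps_means=0.5,
                           sens_weights=0.1, sens_means=0.2))
```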
Citations: 8
FlowShadow: Keeping update consistency in software-based OpenFlow switches
Pub Date : 2016-06-20 DOI: 10.1109/IWQoS.2016.7590393
Yi Wang, Dongzhe Tai, Ting Zhang, Bin Liu
The fast path, serving as a cache of exact-match rules derived from the slow path, is used in software-based OpenFlow switches to improve forwarding performance. A microflow in the fast path is a specification of its corresponding rules in the slow path, i.e., every field is explicit in a microflow. A rule can generate multiple microflows in the fast path, and a microflow can be generated from multiple rules, since there are multiple flow tables in an OpenFlow switch. Due to this many-to-many mapping between microflows and rules, update consistency between the slow path and the fast path becomes a big challenge in software switches such as Open vSwitch (OVS). In this paper, we propose a cache-based scheme (named FlowShadow) that achieves high update performance while keeping update consistency in OVS. To examine the reliability, validity, utility and scalability of FlowShadow, we implement it on OVS and conduct numerous experiments with different settings to measure its performance. The experimental results demonstrate that FlowShadow achieves a lookup speed of 75 million packets per second on a commodity PC under real backbone traces; the system with FlowShadow is 3.4× faster than the original OVS; and FlowShadow also shows high update performance and good scalability at different update speeds and with different numbers of flow tables.
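The consistency problem itself, one rule update must invalidate every cached microflow derived from it, can be captured with a small reverse index. The data layout below is an illustrative reconstruction, not OVS's megaflow cache or FlowShadow's actual structures.

```python
from collections import defaultdict

class MicroflowCache:
    """Sketch of the slow-path/fast-path mapping: each exact-match microflow records
    which (table, rule) pairs it was derived from, so a rule update can evict exactly
    the dependent cache entries."""

    def __init__(self):
        self.microflows = {}                   # match tuple -> actions
        self.rule_to_flows = defaultdict(set)  # (table_id, rule_id) -> set of match tuples

    def install_microflow(self, match, actions, derived_from_rules):
        """derived_from_rules: iterable of (table_id, rule_id), one per traversed flow table."""
        self.microflows[match] = actions
        for rule in derived_from_rules:
            self.rule_to_flows[rule].add(match)

    def on_rule_update(self, table_id, rule_id):
        """A slow-path rule changed: evict every cached microflow that depends on it, so the
        next packet of those flows is re-classified against the updated flow tables."""
        for match in self.rule_to_flows.pop((table_id, rule_id), set()):
            self.microflows.pop(match, None)

    def lookup(self, match):
        return self.microflows.get(match)
```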
Citations: 5
Diving into cloud-based file synchronization with user collaboration
Pub Date : 2016-06-20 DOI: 10.1109/IWQoS.2016.7590396
Haiyang Wang, Xiaoqiang Ma, Feng Wang, Jiangchuan Liu, Bharath Kumar Bommana, Xin Liu
In this paper, we take a close look at cloud-based file synchronization and collaboration systems. Using the popular Dropbox as a case study, our measurement reveals cascaded computation and communication operations that are far more complicated than those in conventional file hosting. We show that this serial design is necessary for cloud deployment, as it effectively avoids possible task interference inside the computation cloud; yet it also leads to higher service variance across users. Even worse, in a collaborative file editing session, users' updates can be discarded without any warning. The drop rate is unfortunately tied to the slowest collaborator, which severely hinders system scalability and user satisfaction. We further investigate the root causes of this phenomenon as well as other performance bottlenecks, and offer hints for practical improvement.
Citations: 0
Network Codes-based Multi-Source Transmission Control Protocol for Content-centric Networks
Pub Date : 2016-06-20 DOI: 10.1109/IWQoS.2016.7590427
Dongliang Xie, Xin Wang, Qingtao Wang
With the rapid shift from end-to-end communications to content-based data retrieval, there is increasing interest in exploiting Content-centric Networks (CCN) to deliver data. Owing to the special characteristics of CCN, in-network caching and naming-based routing make traditional TCP-like transmission control protocols unsuitable. Although there are existing efforts on improving congestion control in CCN, the big issue of redundant transmissions caused by multiple sources has received little attention. To eliminate the redundancy and speed up transmission, we propose a complete Network Codes-based Multi-Source Transmission Control Protocol (MSTCP), which provides an efficient and controllable multi-source content retrieval service over CCN. MSTCP takes advantage of random network coding to make full use of the coded data returned by different sources, speeding up decoding and data reception on the requesting side. Moreover, we design a scheduling algorithm based on a simple Expected Reception Deadline (ERD) to efficiently control the number of coded packets sent by each source. This not only effectively eliminates redundant transmissions in CCN, but also significantly speeds up information retrieval. Extensive simulations show that our mechanism greatly reduces redundancy while speeding up content retrieval for network users.
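The coding side of this idea, coded packets from any source are interchangeable and redundant ones can be discarded, can be sketched with random linear coding over GF(2), a simplification of whatever field MSTCP would actually use; the ERD-based sender scheduling is omitted here. The generation handling and packet layout are assumptions for the example.

```python
import random

def encode_gf2(packets, rng=random):
    """Random linear coding over GF(2): a coded packet is the XOR of a random non-empty
    subset of the generation's original packets, shipped with its coefficient bit-vector."""
    n = len(packets)
    coeffs = 0
    while coeffs == 0:
        coeffs = rng.getrandbits(n)
    size = max(len(p) for p in packets)
    payload = bytearray(size)
    for i in range(n):
        if coeffs >> i & 1:
            p = packets[i].ljust(size, b"\x00")
            for j in range(size):
                payload[j] ^= p[j]
    return coeffs, bytes(payload)


class GF2Decoder:
    """Keeps only innovative coded packets (those that raise the rank of the coefficient
    matrix), which is how redundant packets arriving from multiple sources get dropped.
    Full back-substitution to recover the original packets is omitted for brevity."""

    def __init__(self, generation_size):
        self.n = generation_size
        self.pivots = {}               # pivot bit -> (reduced coefficient mask, payload)

    def add(self, coeffs, payload):
        payload = bytearray(payload)
        while coeffs:
            pivot = coeffs & -coeffs   # lowest set coefficient bit
            if pivot not in self.pivots:
                self.pivots[pivot] = (coeffs, payload)
                return True            # innovative: the rank increased
            row_coeffs, row_payload = self.pivots[pivot]
            coeffs ^= row_coeffs
            for j in range(len(payload)):
                payload[j] ^= row_payload[j]
        return False                   # redundant: a combination of packets already held

    def complete(self):
        return len(self.pivots) == self.n
```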
Citations: 2