Pseudo-Separation for Assessment of Structural Vulnerability of a Network
Alan Kuhnle, Tianyi Pan, Victoria G. Crawford, M. A. Alim, M. Thai
Based upon the idea that network functionality is impaired if two nodes in a network are sufficiently separated in terms of a given metric, we introduce two combinatorial pseudocut problems generalizing the classical min-cut and multi-cut problems. We expect the pseudocut problems will find broad relevance to the study of network reliability. We comprehensively analyze the computational complexity of the pseudocut problems and provide three approximation algorithms for these problems. Motivated by applications in communication networks with strict Quality-of-Service (QoS) requirements, we demonstrate the utility of the pseudocut problems by proposing a targeted vulnerability assessment for the structure of communication networks using QoS metrics; we perform experimental evaluations of our proposed approximation algorithms in this context.
{"title":"Pseudo-Separation for Assessment of Structural Vulnerability of a Network","authors":"Alan Kuhnle, Tianyi Pan, Victoria G. Crawford, M. A. Alim, M. Thai","doi":"10.1145/3078505.3078538","DOIUrl":"https://doi.org/10.1145/3078505.3078538","url":null,"abstract":"Based upon the idea that network functionality is impaired if two nodes in a network are sufficiently separated in terms of a given metric, we introduce two combinatorial pseudocut problems generalizing the classical min-cut and multi-cut problems. We expect the pseudocut problems will find broad relevance to the study of network reliability. We comprehensively analyze the computational complexity of the pseudocut problems and provide three approximation algorithms for these problems. Motivated by applications in communication networks with strict Quality-of-Service (QoS) requirements, we demonstrate the utility of the pseudocut problems by proposing a targeted vulnerability assessment for the structure of communication networks using QoS metrics; we perform experimental evaluations of our proposed approximation algorithms in this context.","PeriodicalId":133673,"journal":{"name":"Proceedings of the 2017 ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117308271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Optimal Two-Sided Pricing of Congested Networks
Xin Wang, Richard T. B. Ma, Yinlong Xu
Internet Access Providers (APs) have built massive network platforms by which end-users and Content Providers (CPs) can connect and transmit data to each other. Traditionally, APs adopt one-sided pricing schemes and obtain revenues mainly from end-users. With the fast development of data-intensive services, e.g., online video streaming and cloud-based applications, Internet traffic has been growing rapidly. To sustain the traffic growth and enhance user experiences, APs have to upgrade network infrastructures and expand capacities; however, they feel that the revenues from end-users are insufficient to recoup the corresponding costs. Consequently, some APs, e.g., Comcast and AT&T, have recently shifted towards two-sided pricing schemes, i.e., they impose termination fees on CPs' data traffic in addition to charging end-users. Although previous work has studied the economics of two-sided pricing in network markets, network congestion and its impact on the utilities of different parties were often overlooked. However, the explosive traffic growth has caused severe congestion in many regional and global networks, especially during peak hours, which degrades end-users' experiences and reduces their data demand. This strongly affects the profits of APs and the utilities of end-users and CPs. To optimize individual and social utilities, APs and regulators need to revisit the design of pricing strategies and regulatory policies accordingly. So far, little is known about 1) the optimal two-sided pricing structure in a congested network and how it changes under varying network environments, e.g., the capacities of APs and the congestion sensitivities of users, and 2) potential regulations on two-sided pricing for protecting social welfare from monopolistic providers. To address these questions, one challenge is to accurately capture endogenous congestion in networks: the level of congestion is influenced by network throughput, but users' traffic demand and the resulting throughput are in turn influenced by congestion. Capturing this endogenous congestion is crucial for faithfully characterizing the impacts of two-sided pricing in congested networks. In this work, we propose a novel model of a two-sided congested network built by an AP. We model network congestion as a function of the AP's capacity and network throughput, which is itself a function of the congestion level. We use different forms of these functions to capture the congestion metric under different service models, e.g., an M/M/1 queue or capacity sharing, and user traffic of different data types, e.g., online video or text. We characterize users' population and traffic demand under given pricing and congestion parameters and derive the endogenous system congestion at equilibrium. Based on the equilibrium model, we explore the structures of two-sided pricing that optimize the AP's profit and social welfare. We analyze the sensitivities of the optimal pricing under varying model parameters, e.g., the AP's capacity and users' congestion sensitivity. By comparing the two types of optimal pricing, we derive regulatory implications from a social welfare perspective. Furthermore, we evaluate the incentives of APs and regulators to adopt two-sided pricing rather than the traditional one-sided pricing that charges only the user side.
{"title":"On Optimal Two-Sided Pricing of Congested Networks","authors":"Xin Wang, Richard T. B. Ma, Yinlong Xu","doi":"10.1145/3078505.3078588","DOIUrl":"https://doi.org/10.1145/3078505.3078588","url":null,"abstract":"Internet Access Providers (APs) have built massive network platforms by which end-users and Content Providers (CPs) can connect and transmit data to each other. Traditionally, APs adopt one-sided pricing schemes and obtain revenues mainly from end-users. With the fast development of data-intensive services, e.g., online video streaming and cloud-based applications, Internet traffic has been growing rapidly. To sustain the traffic growth and enhance user experiences, APs have to upgrade network infrastructures and expand capacities; however, they feel that the revenues from end-users are insufficient to recoup the corresponding costs. Consequently, some APs, e.g., Comcast and AT&T, have recently shifted towards two-sided pricing schemes, i.e., they start to impose termination fees on CPs' data traffic in addition to charging end-users. Although some previous work has studied the economics of two-sided pricing in network markets, network congestion and its impacts on the utilities of different parties were often overlooked. However, the explosive traffic growth has caused severe congestion in many regional and global networks, especially during peak hours, which degrades end-users' experiences and reduces their data demand. This will strongly affect the profits of APs and the utilities of end-users and CPs. For optimizing individual and social utilities, APs and regulators need to reflect the design of pricing strategies and regulatory policies accordingly. So far, little is known about 1) the optimal two-sided pricing structure in a congested network and its changes under varying network environments, e.g., capacities of APs and congestion sensitivities of users, and 2) potential regulations on two-sided pricing for protecting social welfare from monopolistic providers. To address these questions, one challenge is to accurately capture endogenous congestion in networks. Although the level of congestion is influenced by network throughput, the users' traffic demand and throughput are also influenced by network congestion. It is crucial to capture this endogenous congestion so as to faithfully characterize the impacts of two-sided pricing in congested networks. In this work, we propose a novel model of a two-sided congested network built by an AP. We model network congestion as a function of AP's capacity and network throughput, which is also a function of the congestion level. We use different forms of the functions to capture congestion metric based on different service models, e.g., M/M/1 queue or capacity sharing, and user traffic based on different data types, e.g., online video or text. We characterize users' population and traffic demand under pricing and congestion parameters and derive an endogenous system congestion under an equilibrium. Based on the equilibrium model, we explore the structures of two-sided pricing which optimize the AP's profit and social welfare. 
We analyze the sensitivities of the optimal pricing under varying model par","PeriodicalId":133673,"journal":{"name":"Proceedings of the 2017 ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128935698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
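The endogenous congestion described above can be illustrated as a fixed point. The sketch below assumes an M/M/1-style congestion metric k = 1/(mu - lam) and a hypothetical demand curve lam = A * exp(-(a*p + b*k)) that shrinks with the user-side price p and congestion k; the functional forms and constants are illustrative assumptions, not the paper's calibrated model.

```python
# Damped fixed-point iteration for equilibrium throughput lam = D(p, k(lam)).
# All functional forms and constants are illustrative assumptions.
import math

def equilibrium_throughput(p, mu=10.0, A=8.0, a=0.5, b=0.2,
                           iters=200, damping=0.5):
    lam = 0.5 * mu                        # initial throughput guess
    for _ in range(iters):
        k = 1.0 / (mu - lam)              # M/M/1 delay as the congestion metric
        demand = A * math.exp(-(a * p + b * k))
        demand = min(demand, 0.99 * mu)   # keep the queue stable
        lam = (1 - damping) * lam + damping * demand
    return lam

for price in (0.5, 1.0, 2.0):
    print(price, round(equilibrium_throughput(price), 3))
```

The printed equilibria fall as the price rises: higher user-side prices suppress demand, which lowers congestion, which in turn feeds back into demand, exactly the loop that makes congestion endogenous.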
Security Game with Non-additive Utilities and Multiple Attacker Resources
Sinong Wang, N. Shroff
There has been significant interest in studying security games to model the interplay of attacks and defenses on various systems involving critical infrastructure, financial system security, political campaigns, and civil safeguarding. However, existing security game models typically assume either additive utility functions or that the attacker can attack only one target. Such assumptions lead to tractable analysis but miss key dependencies that exist among different targets in complex networks. In this paper, we generalize the classical security game models to allow for non-additive utility functions, and we allow the attacker to attack multiple targets. We examine such a general security game from a theoretical perspective and provide a unified view. In particular, we show that each security game is equivalent to a combinatorial optimization problem over a set system ℰ consisting of the defender's pure strategy space. The key techniques we use are a transformation, projection of a polytope, and the ellipsoid method. This work settles several open questions in the security game domain and extends the state of the art for both the polynomially solvable and the NP-hard classes of security games.
{"title":"Security Game with Non-additive Utilities and Multiple Attacker Resources","authors":"Sinong Wang, N. Shroff","doi":"10.1145/3078505.3078519","DOIUrl":"https://doi.org/10.1145/3078505.3078519","url":null,"abstract":"There has been significant interest in studying security games for modeling the interplay of attacks and defenses on various systems involving critical infrastructure, financial system security, political campaigns, and civil safeguarding. However, existing security game models typically either assume additive utility functions, or that the attacker can attack only one target. Such assumptions lead to tractable analysis, but miss key inherent dependencies that exist among different targets in current complex networks. In this paper, we generalize the classical security game models to allow for non-additive utility functions. We also allow attackers to be able to attack multiple targets. We examine such a general security game from a theoretical perspective and provide a unified view. In particular, we show that each security game is equivalent to a combinatorial optimization problem over a set system ε, which consists of defender's pure strategy space. The key technique we use is based on the transformation, projection of a polytope, and the ellipsoid method. This work settles several open questions in security game domain and extends the state-of-the-art of both the polynomial solvable and NP-hard class of the security game.","PeriodicalId":133673,"journal":{"name":"Proceedings of the 2017 ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128751788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Overcommitment in Cloud Services: Bin Packing with Chance Constraints
Maxime C. Cohen, Philipp W. Keller, V. Mirrokni, Morteza Zadimoghaddam
This paper considers a traditional resource allocation problem: scheduling jobs on machines. A recent application is cloud computing, where jobs arrive in an online fashion with capacity requirements and must be immediately scheduled on physical machines in data centers. It is often observed that the requested capacities are not fully utilized, offering an opportunity to employ an overcommitment policy, i.e., selling resources beyond capacity. Setting the right overcommitment level can yield a significant cost reduction for the cloud provider while incurring only a very low risk of violating capacity constraints. We introduce and study a model that quantifies the value of overcommitment by formulating the problem as bin packing with chance constraints. We then propose an alternative formulation that transforms each chance constraint into a submodular function. We show that our model captures the risk-pooling effect and can guide scheduling and overcommitment decisions. We also develop a family of online algorithms that are intuitive, easy to implement, and provide a constant-factor guarantee relative to the optimum. Finally, we calibrate our model using realistic workload data and test our approach in a practical setting. Our analysis and experiments illustrate the benefit of overcommitment in cloud services and suggest a cost reduction of 1.5% to 17%, depending on the provider's risk tolerance.
{"title":"Overcommitment in Cloud Services Bin packing with Chance Constraints","authors":"Maxime C. Cohen, Philipp W. Keller, V. Mirrokni, Morteza Zadimoghaddam","doi":"10.2139/ssrn.2822188","DOIUrl":"https://doi.org/10.2139/ssrn.2822188","url":null,"abstract":"This paper considers a traditional problem of resource allocation, scheduling jobs on machines. One such recent application is cloud computing, where jobs arrive in an online fashion with capacity requirements and need to be immediately scheduled on physical machines in data centers. It is often observed that the requested capacities are not fully utilized, hence offering an opportunity to employ an overcommitment policy, i.e., selling resources beyond capacity. Setting the right overcommitment level can induce a significant cost reduction for the cloud provider, while only inducing a very low risk of violating capacity constraints. We introduce and study a model that quantifies the value of overcommitment by modeling the problem as a bin packing with chance constraints. We then propose an alternative formulation that transforms each chance constraint into a submodular function. We show that our model captures the risk pooling effect and can guide scheduling and overcommitment decisions. We also develop a family of online algorithms that are intuitive, easy to implement and provide a constant factor guarantee from optimal. Finally, we calibrate our model using realistic workload data, and test our approach in a practical setting. Our analysis and experiments illustrate the benefit of overcommitment in cloud services, and suggest a cost reduction of 1.5% to 17% depending on the provider's risk tolerance.","PeriodicalId":133673,"journal":{"name":"Proceedings of the 2017 ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128766913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quality and Cost of Deterministic Network Calculus: Design and Evaluation of an Accurate and Fast Analysis
Steffen Bondorf, Paul Nikolaus, J. Schmitt
Networks are integral parts of modern safety-critical systems, and certification demands the provision of guarantees for data transmissions. Deterministic Network Calculus (DNC) can compute a worst-case bound on a data flow's end-to-end delay. The accuracy of DNC results has been improved steadily, resulting in two DNC branches: the classical algebraic analysis (algDNC) and the more recent optimization-based analysis (optDNC). The optimization-based branch provides a theoretical solution for tight bounds. Its computational cost, however, grows (possibly super-)exponentially with the network size. Consequently, a heuristic optimization formulation trading accuracy against computational cost was proposed. In this paper, we challenge optimization-based DNC with a novel algebraic DNC algorithm. We show that (1) no current optimization formulation scales well with the network size, and (2) algebraic DNC can be considerably improved in both respects, accuracy and computational cost. To that end, we contribute a novel DNC algorithm that transfers the optimization's search for the best attainable delay bounds to algebraic DNC. It achieves a high degree of accuracy, and our efficiency improvements reduce the cost of the analysis dramatically. In extensive numerical experiments, we observe that our delay bounds deviate from the optimization-based ones by only 1.142% on average, while computation times simultaneously decrease by several orders of magnitude.
{"title":"Quality and Cost of Deterministic Network Calculus: Design and Evaluation of an Accurate and Fast Analysis","authors":"Steffen Bondorf, Paul Nikolaus, J. Schmitt","doi":"10.1145/3078505.3078594","DOIUrl":"https://doi.org/10.1145/3078505.3078594","url":null,"abstract":"Networks are integral parts of modern safety-critical systems and certification demands the provision of guarantees for data transmissions. Deterministic Network Calculus (DNC) can compute a worst-case bound on a data flow's end-to-end delay. Accuracy of DNC results has been improved steadily, resulting in two DNC branches: the classical algebraic analysis (algDNC) and the more recent optimization-based analysis (optDNC). The optimization-based branch provides a theoretical solution for tight bounds. Its computational cost grows, however, (possibly super-)exponentially with the network size. Consequently, a heuristic optimization formulation trading accuracy against computational costs was proposed. In this paper, we challenge optimization-based DNC with a novel algebraic DNC algorithm. We show that: (1) no current optimization formulation scales well with the network size and (2) algebraic DNC can be considerably improved in both aspects, accuracy and computational cost. To that end, we contribute a novel DNC algorithm that transfers the optimization's search for best attainable delay bounds to algebraic DNC. It achieves a high degree of accuracy and our novel efficiency improvements reduce the cost of the analysis dramatically. In extensive numerical experiments, we observe that our delay bounds deviate from the optimization-based ones by only 1.142% on average while computation times simultaneously decrease by several orders of magnitude.","PeriodicalId":133673,"journal":{"name":"Proceedings of the 2017 ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124553811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proceedings of the 2017 ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems
B. Hajek, Sewoong Oh, A. Chaintreau, L. Golubchik, Zhi-Li Zhang
Welcome to SIGMETRICS 2007! This year's annual ACM SIGMETRICS conference is being held in conjunction with the Federated Computing Research Conference (FCRC). The scope of SIGMETRICS encompasses the development and application of state-of-the-art, broadly applicable analytic, simulation, and measurement-based performance evaluation techniques. In soliciting papers for this year's program, and in selecting members of the Technical Program Committee (TPC), we made a special effort to maintain a breadth of topic coverage. Application areas represented include distributed systems, networking, multimedia, storage systems, operating systems, web services, supercomputing, compilers, architecture, and more. Analytical techniques represented include scheduling theory, queueing theory, stochastic modeling, high-dimensional geometry, matrix-analytic methods, transient time-varying behaviors, competitive-ratio analysis, and more.

The conference received 179 submissions, of which we accepted 29 full papers and 19 poster papers. We followed a double-blind reviewing process, and each paper was reviewed by at least three members of the TPC. In some cases, additional reviews were sought from specialists outside the TPC. The TPC consisted of 54 members from 8 countries. After extensive email discussions among the whole TPC, the final paper selection was made during a two-day meeting held at Columbia University on January 26-27, 2007, attended by 35 of the TPC members, the two program co-chairs, and the general chair. The quality of submissions was quite high and, as a result, the selected papers make up a very strong program, which we hope you will enjoy.

After the TPC meeting, a subcommittee selected the best paper award winners. The Best Paper award goes to "Modeling the relative fitness of storage" by Michael Mesnier, Matthew Wachs, Raja R. Sambasivan, Alice Zheng, and Gregory R. Ganger. The Kenneth C. Sevcik Outstanding Student Paper award goes to "An Analysis of Latent Sector Errors in Disk Drives" by Lakshmi Bairavasundaram, Garth Goodson, Shankar Pasupathy, and Jiri Schindler.
{"title":"Proceedings of the 2017 ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems","authors":"B. Hajek, Sewoong Oh, A. Chaintreau, L. Golubchik, Zhi-Li Zhang","doi":"10.1145/3078505","DOIUrl":"https://doi.org/10.1145/3078505","url":null,"abstract":"Welcome to SIGMETRICS 2007! This year's annual ACM SIGMETRICS conference is being held this year in conjunction with the Federated Computing Research Conference (FCRC). The scope of SIGMETRICS encompasses the development and application of state-of-the-art, broadly-applicable analytic, simulation, and measurement-based performance evaluation techniques. In soliciting papers for this year's program, and in selecting members of the Technical Program Committee (TPC), we made a special effort to maintain a breadth of topic coverage. Application areas represented include: distributed systems, networking, multimedia, storage systems, operating system, web services, supercomputing, compilers, architecture, and more. Analytical techniques represented include: scheduling theory, queueing theory, stochastic modeling, high-dimensional geometry, matrix analytic methods, transient time-varying behaviors, competitive ratio analysis, and more. \u0000 \u0000The conference received 179 submissions. We accepted 29 full papers and 19 poster papers. We followed a double-blind reviewing process, and each paper was reviewed by at least three members of the TPC. In some cases additional reviews were sought from specialists outside the TPC. The TPC consisted of 54 members from 8 countries. After extensive email discussions among the whole TPC, the final paper selection was made during a 2-day meeting held at Columbia University on January 26-27, 2007, which was attended by 35 of the PC members, the two program co-chairs, and the general chair. The quality of submissions was quite high and, as a result, the selected papers make up a very strong program, which we hope you will enjoy. \u0000 \u0000After the TPC meeting, a subcommittee selected the best paper award winners. The Best Paper award goes to, \"Modeling the relative fitness of storage,\" by Michael Mesnier, Matthew Wachs, Raja R. Sambasivan, Alice Zheng, and Gregory R. Ganger. The Kenneth C. Sevcik Outstanding Student Paper award goes to, \"An Analysis of Latent Sector Errors in Disk Drives,\" by Lakshmi Bairavasundaram, Garth Goodson, Shankar Pasupathy, and Jiri Schindler.","PeriodicalId":133673,"journal":{"name":"Proceedings of the 2017 ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122078406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}